The potential of a personal assistant gets exciting when it has access to personal data and the real world via the Internet of Things. New possibilities arise — from requesting your assistant turn on your lights to asking it how well you slept. We will be connecting our Api.ai assistant to the Jawbone Up API as an example of this.
What You'll Need
This article builds upon a variety of concepts we've already covered in previous articles here at SitePoint. In order to follow along with this tutorial comfortably, you'll need the following.
- An Api.ai agent connected to a simple HTML web app - See this article if you'd like to understand this process; otherwise, you can download the code from this guide and use that instead.
- An agent that has been taught the entity of "sleep" - We created this in Empowering Your Api.ai Assistant with Entities. It should understand concepts like "how much sleep did I have last night?" and "how much REM sleep did I get?". If you're looking to adapt this to your own IoT device, you'll need to have created your own custom entity that understands your IoT functionality.
- A general knowledge of Node.js and running a Node server - Without it, you won't be able to get the server running!
- Knowledge of how to use the Jawbone UP API (or another API you intend to use) - We've covered the Jawbone Up API previously in Connecting to the Jawbone Up API with Node.js and will be referring to sections from that article throughout.
- An SSL certificate to run your site on HTTPS - You'll need this if working with the Jawbone Up API. We cover how to set up a self-signed certificate in the Jawbone Up API article.
The Code
All code for this demo is available for you to download and use however you please! You can find it all on GitHub.
How This Works
Our Api.ai assistant is already connected to a simple web app that accepts statements via the HTML5 Speech Recognition API. From here, we need to add a new bit of functionality that listens for a specific action from our Api.ai agent. In our case, this is the action of "sleepHours".
Whenever our JavaScript detects this action, it triggers a separate call to our Node.js app to ask the Jawbone API for that data. Once the web app receives this data, our web app turns it into a nice sentence and reads it out — giving our assistant a whole new range of intelligence!
Our Project Structure
I've adjusted the app from the initial HTML-only structure to one which uses EJS views so that we can switch pages in our web app when logging into the Jawbone Up API via OAuth. In reality, we only really have one page but this method allows us to add more in future if needed for other IoT devices. This single view is at /views/index.ejs. We then have our Node server in the root folder as server.js and certificate files in root too. To keep things relatively simple and contained, all front-end JavaScript and CSS is inline. Feel free to move these into CSS and JS files as you prefer, minify them and make them pretty.
Responding to Api.ai Actions in JavaScript
As you might remember from our previous article, when Api.ai returns a response, it provides a JSON object that looks like so:
[code language="js"]
{
  "id": "6b42eb42-0ad2-4bab-b7ea-853773b90219",
  "timestamp": "2016-02-12T01:25:09.173Z",
  "result": {
    "source": "agent",
    "resolvedQuery": "how did I sleep last night",
    "speech": "I'll retrieve your sleep stats for you now, one moment!",
    "action": "sleepHours",
    "parameters": {
      "sleep": "sleep"
    },
    "metadata": {
      "intentId": "25d04dfc-c90c-4f55-a7bd-6681e83b45ec",
      "inputContexts": [],
      "outputContexts": [],
      "contexts": [],
      "intentName": "How many hours of @sleep:sleep did I get last night?"
    }
  },
  "status": {
    "code": 200,
    "errorType": "success"
  }
}
[/code]
Within that JSON object there are two bits of data we need to use — action and parameters.sleep:
[code language="js"]
"action": "sleepHours",
"parameters": {
  "sleep": "sleep"
},
[/code]
action is the name we gave to the Api.ai action that the user triggered. In our sleep example, we named it "sleepHours". parameters contains the variables in the sentence that can change the details of the request. For sleep, the parameter tells us which type of sleep was asked about: "sleep", "deep sleep", "light sleep" or "REM sleep" (or just "REM").
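To make that concrete, here's a small sketch of pulling those two fields out of the parsed response object. The helper name `extractSleepRequest` is hypothetical and not part of the article's code; it just shows which properties matter:

```javascript
// Given a parsed Api.ai response like the JSON above, return the sleep
// type the user asked about, or null if this isn't a sleep question.
function extractSleepRequest(apiaiResponse) {
  var result = apiaiResponse.result;
  if (result.action !== "sleepHours") {
    return null; // not our action; let the agent's own speech stand
  }
  // One of "sleep", "deep sleep", "light sleep" or "REM sleep"
  return result.parameters.sleep;
}
```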
In our earlier article on Api.ai, our prepareResponse() function took the JSON response from Api.ai, put the whole thing into the debug text field on the bottom right and extracted Api.ai's verbal response to display in the web app. We relied entirely on what our Api.ai agent said, without adding any functionality of our own:
[code language="js"]
function prepareResponse(val) {
  var debugJSON = JSON.stringify(val, undefined, 2),
      spokenResponse = val.result.speech;
  respond(spokenResponse);
  debugRespond(debugJSON);
}
[/code]
This time around, we keep an eye out for the action field and run our own function, requestSleepData(), if the action is "sleepHours". Within this function, we pass in the sleep parameter so we know what type of sleep is being requested:
[code language="js"]
function prepareResponse(val) {
  var debugJSON = JSON.stringify(val, undefined, 2),
      spokenResponse = val.result.speech;
  if (val.result.action === "sleepHours") {
    requestSleepData(val.result.parameters.sleep);
  } else {
    respond(spokenResponse);
  }
  debugRespond(debugJSON);
}
[/code]
Within requestSleepData(), we request all sleep data from our Node.js server and then filter it by looking at the very first value in the returned array of data (data.items[0].details) — this would be last night's sleep. Within these details, we have data.items[0].details.rem with our REM sleep, data.items[0].details.sound with our deep sleep, data.items[0].details.light with our light sleep and data.items[0].details.duration with the combined amount of sleep recorded:
[code language="js"]
function requestSleepData(type) {
  $.ajax({
    type: "GET",
    url: "/sleep_data/",
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    success: function(data) {
      console.log("Sleep data!", data);
      if (data.error) {
        respond(data.error);
        window.location.replace("/login/jawbone");
        return; // don't fall through to the switch with no sleep data
      }
      switch (type) {
        case "REM sleep":
          respond("You had " + toHours(data.items[0].details.rem) + " of REM sleep.");
          break;
        case "deep sleep":
          respond("You had " + toHours(data.items[0].details.sound) + " of deep sleep.");
          break;
        case "light sleep":
          respond("You had " + toHours(data.items[0].details.light) + " of light sleep.");
          break;
        case "sleep":
          respond("You had " + toHours(data.items[0].details.duration) + " of sleep last night. That includes " + toHours(data.items[0].details.rem) + " of REM sleep, " + toHours(data.items[0].details.sound) + " of deep sleep and " + toHours(data.items[0].details.light) + " of light sleep.");
          break;
      }
    },
    error: function() {
      respond(messageInternalError);
    }
  });
}
[/code]
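The code above calls a toHours() helper that isn't listed in this excerpt. The Jawbone UP API reports these sleep durations in seconds, so a minimal version might look like the sketch below (the exact name and output format are assumptions about the full project's code):

```javascript
// Convert a duration in seconds (as returned by the Jawbone UP API)
// into a spoken-friendly string like "7 hours and 30 minutes".
function toHours(seconds) {
  var hours = Math.floor(seconds / 3600),
      minutes = Math.round((seconds % 3600) / 60);
  return hours + " hours and " + minutes + " minutes";
}
```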
Continue reading How to Connect Your Api.ai Assistant to the IoT on SitePoint.
by Patrick Catanzariti via SitePoint