Recently, we gave our website (www.hatch.be) a major facelift. After all the effort that went into it, we naturally wanted to see how the redesign impacted our traffic. Enter Google Analytics! However, there's one major drawback: in my opinion, Google Analytics has a terrible user interface, and I wasn’t looking forward to navigating its dashboard daily or weekly to pull data.
So, we decided to make things easier by building an Alexa skill to handle the task. The idea is simple: just ask Alexa for high-level statistics, and she’ll deliver. By connecting to the Google Analytics API, we can pull the data, pass it to Alexa, and—voilà! Sounds great, right?
In this post, I’ll show you how to create this setup. While I won’t dive into the details of using the Google Analytics API, I’ll focus on building the Alexa skill.
Prerequisites
Amazon Developer Account (developer.amazon.com)
AWS Account (aws.amazon.com)
AWS CLI (installed and configured)
Node.js v12.x
TypeScript
npm package manager
Your favorite IDE (I personally use WebStorm)
Initial setup
Start by creating a new directory and navigating into it.
mkdir hatch-statistics
cd hatch-statistics
We’ll also need to install the Serverless Framework (shorthand: sls), which will make our life much easier!
npm install -g serverless
There are some templates available, so I’ll use one to get a head start. For this guide, I will be using the aws-alexa-typescript template and name my service hatch-statistics.
sls create --template aws-alexa-typescript --name hatch-statistics
After running this command, we already get all the required files, but of course we’ll need to edit some of them. If you open the handler.ts file, you should see an error on the ask-sdk import. That’s because the dependencies listed in package.json haven’t been installed yet, so let’s go ahead and run the install command:
npm install
Create an Alexa Skill
Now everything is installed and we can look at the different steps to create the skill. Let’s have a look at the serverless.yml file. As you can see, there are already a lot of comments and a few placeholders to help us here.
Step 1: Run sls alexa auth to authenticate
This will allow us to create and build an Alexa skill with the Alexa Skills Kit CLI, which is exactly what we need. Note that you need your Amazon Developer Account for this step. A window should open in your browser, where you need to allow access.
sls alexa auth
The next step is to create a new skill.
Step 2: Run sls alexa create --name "Serverless Alexa Typescript" --locale en-GB --type custom to create a new skill
Obviously you can change the name to whatever you like; I’ll name mine "Hatch Statistics". You can also change the locale, but be careful: the locale needs to match the language of your Alexa device, and if you change it here, you’ll also have to update it later! The type will be ‘custom’ and cannot be changed.
sls alexa create --name "Hatch Statistics" --locale en-GB --type custom
When you execute the command, you should get a skill ID in return. We need this in the next step:
Step 3: Paste the skill id returned by the create command here:
So go ahead and replace the placeholder for the id with your own skill ID. You can now also go to the Alexa developer console and see your newly created skill (this is also where you can find your skill ID).
Click on your newly created skill. You’ll be asked to choose a template. We are going to start from scratch.
In the next screen, on the left-hand side (Interaction Model > Intents), you can see there are already 5 built-in intents. These will be important later in this guide, but for now, switch back to the serverless.yml file. If you decided to change the skill name (with the alexa create command), you should also change the name in the publishingInformation section, a few lines below the comment of step 3.
For step 4, we’re going to deploy the Serverless stack. This requires you to have an AWS account and to have configured the AWS CLI.
Step 4: Do your first deploy of your Serverless stack
By default, it will use the us-east-1 region (N. Virginia). If you want to deploy the stack in another region, you have to specify it in the serverless.yml file. You have to add a line for the region (under provider). I’m going to change mine to eu-west-1 (Ireland).
serverless.yml
provider:
name: aws
runtime: nodejs12.x
region: eu-west-1
Once you are ready, you can deploy the Serverless stack. It will create the stack in CloudFormation, upload the code to an S3 bucket, and create the required Lambda function. The command we need here is:
sls deploy
This can take a few minutes. Once it is ready, you can log in to the AWS console, navigate to AWS Lambda, and you should find a new function there (make sure you are checking the correct region). We need to copy the ARN of this Lambda for step 5.
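If you prefer the terminal, the AWS CLI can print the ARN as well. This assumes Serverless named the function hatch-statistics-dev-alexa (the default service-stage-function pattern); adjust the name and region to match your own deploy output:
aws lambda get-function --function-name hatch-statistics-dev-alexa --region eu-west-1 --query 'Configuration.FunctionArn' --output text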
Step 5: Paste the ARN of your lambda here:
Step 6: Run sls alexa update to deploy the skill manifest
In the next step, we need to update the Skill Manifest. This is the JSON representation of your skill, and it provides Alexa with metadata. In this case, it will update the skill with the ARN we just pasted.
sls alexa update
Step 7: Run sls alexa build to build the skill interaction model
Now we are going to build the Alexa skill interaction model. You can find this interaction model a few lines below.
serverless.yml
models:
en-GB:
interactionModel:
languageModel:
invocationName: serverless typescript
intents:
- name: HelloIntent
samples:
- 'hello'
The invocation name is used to begin an interaction with a skill.
Alexa, ask <invocation name> for <utterance>
Alexa, open <invocation name>
By default, this is set to ‘serverless typescript’. I want to use this skill to consult some statistics about the Hatch website, so I’m going to change the invocation name to ‘hatch statistics’. Note that the invocation name should be at least two lowercase words.
Next is the intent. The name of the intent is HelloIntent, and we see one sample utterance, ‘hello’. An utterance is something a user might say or ask to get a reply from Alexa. We’ll change this in a minute. For now, we’ll try to build the model:
sls alexa build
Unfortunately, there will be an error about a missing intent.
StatusCodeError: 400 - {"message":"Interaction model is not valid.","violations":[{"message":"MissingIntent: AMAZON.StopIntent is required for a custom skill."}]}
Remember the built-in intents we could see in the Alexa console? The StopIntent is actually required, but we did not add it to our model. All we have to do now is add it, and Alexa will know how to handle it. The updated model will look like this:
serverless.yml
models:
en-GB:
interactionModel:
languageModel:
invocationName: hatch statistics
intents:
- name: AMAZON.StopIntent
samples: []
- name: HelloIntent
samples:
- 'hello'
Run the build command again. This time it will not give an error. Once it is finished, have a look at the Alexa console.
Here you can now see the HelloIntent, the StopIntent, and an additional NavigateHomeIntent. We did not define this last one, but it is added automatically to the interaction model during the build. The NavigateHomeIntent is needed on devices with screens to return to the home screen. We do not need it, so we’ll ignore it.
The HelloIntent is more interesting. This is the predefined custom intent in our model and we are going to update it later, but let’s try it out for now. We had one sample utterance in our model: ‘hello’. We can now also find this in the Alexa console.
At the top of the page, navigate to Test. Enable it for development. Now we can use the invocation name and the utterance to see if we get a response from Alexa.
ask hatch statistics for hello
or
open hatch statistics
In the left panel, you can have a conversation and try out the invocation name and utterances. Alexa will try to respond.
In the right panel, you can see the JSON request and response. When you scroll to the bottom of the request, you can see that there was an IntentRequest with HelloIntent as the intent name. Alexa responded with ‘Hello world!’. Everything seems to be working as intended.
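For reference, the interesting part of that request JSON looks roughly like this (trimmed to the fields that matter here):
"request": {
  "type": "IntentRequest",
  "intent": {
    "name": "HelloIntent"
  }
}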
In case you try ‘open hatch statistics’, you’ll see that you get the same response, but that the request was a LaunchRequest. You can also try any other utterance, like ‘ask hatch statistics for the weather’. Now it is a HelloIntent again, and the response remains the same. We’ll fix this behavior in the next section.
Customizing the Alexa Skill
So far, so good. We can get a response from Alexa. Now we’ll go and update the Hello World! example. First I want to update the intent name and utterances. I’ll change the name to StatisticsIntent, and instead of saying ‘hello’, I want to say ‘what is the amount of visitors’, plus some variations of that question. This has to be updated in the serverless.yml file.
serverless.yml
models:
en-GB:
interactionModel:
languageModel:
invocationName: hatch statistics
intents:
- name: AMAZON.StopIntent
samples: []
- name: StatisticsIntent
samples:
- 'what is the amount of visitors'
- 'what is the number of visitors'
- 'how many visitors were on the website'
Next, I want to change the response. This is done in the handler.ts file. Here is the original content of this file:
handler.ts
import * as Ask from 'ask-sdk';
import 'source-map-support/register';
export const alexa = Ask.SkillBuilders.custom()
.addRequestHandlers({
canHandle: handlerInput => true,
handle: handlerInput =>
handlerInput.responseBuilder.speak('Hello world!').getResponse()
})
.lambda();
We can see addRequestHandlers; this function takes a list of handlers. A handler should have two functions, canHandle and handle, and both take a handlerInput parameter. If canHandle returns true, then the handle function will be called.
In this case, there is one handler (for the HelloIntent), which always returns true for canHandle. The handle returns a response with a spoken message saying ‘Hello world!’. This explains why we always got the same response, even when we used the LaunchRequest or other utterances. Let’s fix this with some additional handlers.
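One quick aside before we do: since we’re writing TypeScript anyway, you can let the compiler check the handler shape. A minimal sketch, assuming the type exports of ask-sdk-core (which ask-sdk builds on) and ask-sdk-model:
import { HandlerInput, RequestHandler } from 'ask-sdk-core';
import { Response } from 'ask-sdk-model';

// The compiler now verifies the canHandle/handle signatures for us.
const TypedHello_Handler: RequestHandler = {
  canHandle(handlerInput: HandlerInput): boolean {
    return true;
  },
  handle(handlerInput: HandlerInput): Response {
    return handlerInput.responseBuilder.speak('Hello world!').getResponse();
  },
};
I’ll keep the snippets below untyped to stay close to the template, but the types are worth it in a bigger code base.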
handler.ts
import * as Ask from 'ask-sdk';
import 'source-map-support/register';
// Our own wrapper around the Google Analytics API (not covered in this post);
// adjust the import path to wherever you keep yours.
import { GoogleAnalyticsApi } from './google-analytics-api';

// Handles 'what is the amount of visitors' and its variations.
const StatisticsIntent_Handler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest' &&
      request.intent.name === 'StatisticsIntent';
  },
  async handle(handlerInput) {
    const visitors = await GoogleAnalyticsApi.getAnalytics();
    return handlerInput.responseBuilder
      .speak(`There were ${visitors} visitors on the website yesterday`)
      .withShouldEndSession(true)
      .getResponse();
  },
};

// Handles 'open hatch statistics': greet the user and keep the session open.
const LaunchRequest_Handler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'LaunchRequest';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Hello! Ready for some statistics?')
      .withShouldEndSession(false)
      .getResponse();
  },
};

// Handles the built-in stop intent ('stop', 'shut up', ...) and ends the session.
const AmazonStopIntent_Handler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest' &&
      request.intent.name === 'AMAZON.StopIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Okay, talk to you later!')
      .withShouldEndSession(true)
      .getResponse();
  },
};

export const alexa = Ask.SkillBuilders.custom()
  .addRequestHandlers(
    StatisticsIntent_Handler,
    LaunchRequest_Handler,
    AmazonStopIntent_Handler
  )
  .lambda();
Now you can see I have added three handlers. The first handler is for the statistics (‘what is the amount of visitors’). Its canHandle only returns true if the request is an IntentRequest and if the intent name equals StatisticsIntent (so it matches the name in the serverless.yml). The handle gets the number of visitors from my GoogleAnalyticsApi and adds it to the response. After this request, we will not continue the conversation, so we also add .withShouldEndSession(true).
The second handler is for the LaunchRequest (‘open hatch statistics’), for when we start a conversation. In this case, we still have to send a second request to get the number of visitors (with the StatisticsIntent), so the session should remain open and .withShouldEndSession gets false as an argument.
The third and last handler is for the built-in StopIntent. If we make a LaunchRequest and the session has started, but instead of asking for the number of visitors we finish the conversation with one of the stop words (stop, shut up, ...), then Alexa will reply with ‘Okay, talk to you later!’ and the session will end.
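On a related note, ask-sdk also lets you register a catch-all error handler on the same builder, via .addErrorHandlers(). A minimal sketch:
// A catch-all error handler; register it with .addErrorHandlers(Error_Handler)
// on the same SkillBuilders.custom() chain as the request handlers.
const Error_Handler = {
  canHandle(handlerInput, error) {
    return true; // handle every error thrown by the request handlers
  },
  handle(handlerInput, error) {
    console.error(`Error handled: ${error.message}`);
    return handlerInput.responseBuilder
      .speak('Sorry, something went wrong. Please try again.')
      .getResponse();
  },
};
This way, a thrown error results in a polite apology instead of silence.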
We have now updated both the conversation model in the serverless.yml file and the code in handler.ts. This means we have to deploy to AWS again and build the interaction model again:
sls deploy
sls alexa build
Once it’s deployed and the build has finished, we can test again. We can now start the conversation and ask for the number of visitors. Or we can start the conversation and just end the conversation immediately.
Connecting to your Alexa Echo
The Alexa simulator works great and is easy to use. But of course, we want to deploy the skill to an actual Alexa device, like the Echo. This is very easy: all you have to do is sign in on your Echo with the same account you used for the Alexa console, and make sure the skill is enabled. That’s it!
Next Steps
I’ll finish the guide here, but not without offering a few next steps you can (and maybe should) take to improve your Alexa skill.
Most likely, you should optimize the utterances. As in the example above, you can have a short conversation with Alexa to ask her for the number of visitors. But maybe you’d like to shorten the command to just one sentence: ‘Alexa, ask Hatch statistics for the number of visitors’. This sentence sounds a bit weird, but it gets the job done. In my opinion, the advantage of adding more utterance variations is that the conversation feels more natural.
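Adding variations just means extending the samples list of the StatisticsIntent in serverless.yml; for example (the extra phrasings here are only suggestions):
samples:
  - 'what is the amount of visitors'
  - 'what is the number of visitors'
  - 'how many visitors were on the website'
  - 'how many people visited the website'
  - 'how many visitors did we have yesterday'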
Next, you can look into continuing the conversation. Do we only want to know the number of visitors? Maybe there are more statistics we’d like to consult. This would require more intents and handlers, and you would have to keep in mind when the session ends (withShouldEndSession).
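If you go down that road, handlerInput.attributesManager is worth knowing about: it lets you store session attributes between turns, so a follow-up handler knows what happened earlier in the conversation. A rough sketch inside a handle function (the lastIntent attribute name is just an example):
handle(handlerInput) {
  // Remember what the user was doing, for the next turn in this session.
  const attributes = handlerInput.attributesManager.getSessionAttributes();
  attributes.lastIntent = 'StatisticsIntent';
  handlerInput.attributesManager.setSessionAttributes(attributes);

  return handlerInput.responseBuilder
    .speak('Anything else?')
    .withShouldEndSession(false) // keep the session open for a follow-up
    .getResponse();
}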
Also, you should have a handler that intercepts utterances that do not match. What happens when I ask hatch statistics for the time? Right now, Alexa will try to match it with the existing utterances, and since the only two intents are the built-in StopIntent and the custom StatisticsIntent, she will pick one of these two to handle the request. This doesn’t make any sense. Luckily, there is also a built-in FallbackIntent; make sure to set this up before deploying anything to a live environment.
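Setting it up means adding AMAZON.FallbackIntent to the model in serverless.yml (with samples: [], just like the StopIntent) and giving it a handler; something along these lines (the reply text is of course up to you):
const AmazonFallbackIntent_Handler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest' &&
      request.intent.name === 'AMAZON.FallbackIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Sorry, I can only tell you about website statistics.')
      .withShouldEndSession(false)
      .getResponse();
  },
};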
What happens when Alexa waits for a response but the user does not reply and the session ends unexpectedly? We probably need a handler for this as well. No worries: just leverage the SessionEndedRequest to handle this correctly.
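A SessionEndedRequest handler cannot speak back to the user (the session is already gone), but it is the right place to log the reason and clean up; a minimal sketch:
const SessionEndedRequest_Handler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'SessionEndedRequest';
  },
  handle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    // No speech allowed here; just log why the session ended.
    console.log(`Session ended: ${request.reason}`);
    return handlerInput.responseBuilder.getResponse();
  },
};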
Finally, you should also consider exploring more of the other built-in intents. There is a CancelIntent and a HelpIntent, which can greatly improve the user experience. A more complete list of built-in intents can be found in the Alexa documentation.
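Both follow the same pattern as the StopIntent. A HelpIntent handler could look roughly like this (remember to add AMAZON.HelpIntent to the model as well; the help text is just an example):
const AmazonHelpIntent_Handler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest' &&
      request.intent.name === 'AMAZON.HelpIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('You can ask me how many visitors were on the website.')
      .reprompt('So, what would you like to know?')
      .getResponse();
  },
};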
Final Words
While building an Alexa skill, there are a lot of things to keep in mind: utterances, intents, handlers, ... In this example, I only built one custom intent with three utterances, which is not nearly enough to support a smooth user experience. A real-world use case would obviously require a lot more consideration (see the Next Steps section above).
At the time of writing, we can ask Alexa for our website’s daily visitor count. Maybe in the future we will expand this to ask her for a full status report on all our web applications and code pipelines.
That’s all folks. I hope this was interesting to you, talk to you soon!