This post takes some of the material from the talk (slides, notes) I gave at the New York Purescript Meetup about my Alexa skill "Secret Word". In this post I will:
- Proselytize Purescript
- Give a step-by-step walkthrough of creating a brand new Alexa skill in Purescript, "Known Word".
Purescript is my favorite among the family of pure functional languages that compile to JavaScript. I like these languages because they encourage a very disciplined style of programming. Record types and ADTs permit you to explicitly model the space of valid inputs to your program, and the compiler will force you to handle all possible cases unless you explicitly opt out. The path of least resistance is to be exhaustive, whereas in a more forgiving language the path of least resistance is often to focus primarily on the "happy path" and neglect proper handling of "corner cases".
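To make that concrete, here is a small generic sketch (mine, not code from the skill) of how an ADT plus pattern matching makes exhaustiveness the default:

```purescript
module Command where

import Prelude

-- Every valid command is a constructor; no other values exist.
data Command
  = Guess String
  | RepeatLastGuess
  | GiveUp

describe :: Command -> String
describe = case _ of
  Guess word      -> "You guessed " <> word
  RepeatLastGuess -> "Repeating the last guess"
  GiveUp          -> "Giving up"
```

Delete any branch of `describe` and the compiler reports that the case expression may not cover all inputs; silencing it requires an explicit wildcard like `_ -> ...`.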
In writing an Alexa skill, or more generally in developing for a voice platform, being exhaustive is particularly important.
In a [recent episode of the Alexa Dev Chat podcast](https://soundcloud.com/user-652822799/episode-019-looking-back-at-the-year-in-voice-and-whats-next), Dave Isbitski and Paul Cutsinger briefly discussed how voice or 'conversational' UX shifts the discovery of intent from the user to the application. In a graphical UX, the burden is typically on the user to discover how the developer intends the interface to be used, and the burden on the developer is to make the interface as unsurprising and unambiguous as possible. Voice UX is different. The user may choose to interact with a voice UX in a variety of ways, and it is up to the application to determine the user's intent and apply it appropriately. An interface that does not permit users to express their intent in a variety of ways will seem rigid and unnatural.
I reason that this means the input to a voice application is a lot less predictable than the input to other sorts of applications. A voice application has many "happy paths", and ostensible "corner cases" will venture well beyond the confines of the corner. I believe that in such an environment, discipline and exhaustiveness of the sort encouraged by pure functional languages are doubly important. Thus Purescript is an excellent choice for Alexa skill developers seriously interested in creating robust voice experiences.
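To illustrate (a hypothetical sketch; these constructor names are mine, not Secret Word's), modeling every kind of request a skill can receive as one sum type means a handler written as a total function over it cannot quietly skip the awkward cases:

```purescript
module SkillRequest where

-- One constructor per way the conversation can reach the skill.
-- A total handler over this type must say something for Cancel and
-- SessionEnded too, not just the happy paths.
data SkillRequest
  = Launch
  | GuessWord String
  | RepeatGuess
  | Help
  | Cancel
  | SessionEnded
```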
Install purescript, bower, pulp, and the Alexa Skills Kit CLI.
npm install -g purescript bower pulp ask-cli
Initialize ask-cli with your AWS credentials.
ask init
Now clone the purescript-alexa starter template, renaming the directory after your skill:
git clone https://github.com/twitchard/purescript-alexa-template known-word
Install the prerequisites for the project.
bower install
Now open `.ask/config` and change the inner `uri` property to an appropriate name for your skill.
vim .ask/config
cat .ask/config
# {
#   "deploy_settings": {
#     "default": {
#       "skill_id": "",
#       "was_cloned": false,
#       "merge": {
#         "skillManifest": {
#           "apis": {
#             "custom": {
#               "endpoint": {
#                 "uri": "known-word"
#               }
#             }
#           }
#         }
#       }
#     }
#   }
# }
Let's go ahead and build and deploy the template skill as it is to Amazon, just to kick the tires. First, let's compile it:
npm run build-all
Then deploy it:
npm run deploy-all
These commands generated and deployed three entities:
- An AWS Lambda function named 'known-word', using the JavaScript created by the Purescript compiler and put into the `output` directory. The entry point is the `handler` function defined in `src/Main.purs` (see the sketch after this list).
- An Alexa skill named "purescript template", with metadata defined in `skill.json`, which is generated from `src/Skill.purs`.
- An American English "language model" defined in `models/en-US.json`, which is generated from `src/Model.purs`.
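For orientation, here is a minimal sketch of the shape of a Node-style Lambda entry point in Purescript, using the `Effect.Uncurried` machinery. The template's actual `handler` has richer types; the `Callback` alias and the use of `Foreign` here are my assumptions, not code from the template:

```purescript
module Main where

import Prelude
import Data.Nullable (Nullable)
import Effect.Uncurried (EffectFn2, EffectFn3, mkEffectFn3)
import Foreign (Foreign)

-- Node's Lambda callback: (error, result).
type Callback = EffectFn2 (Nullable Foreign) Foreign Unit

-- A Lambda entry point receives (event, context, callback).
handler :: EffectFn3 Foreign Foreign Callback Unit
handler = mkEffectFn3 \_event _context _callback ->
  -- parse the request and dispatch on the intent here
  pure unit
```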
There are a couple of things we still need to do before we test it out:
- Navigate to the AWS Lambda console, click on your newly-created Lambda function, change the value of the 'Handler' field to `Main/index.handler`, and hit 'Save'. By default, AWS Lambda expects a file called `index.js` in the root of the output folder to contain the entry point, but the Purescript compiler puts each module's output in a subdirectory named after the module, in this case `Main/` (see the layout sketched after this list).
- Navigate to the Alexa dashboard, click 'edit' on your newly created skill, navigate to the 'Test' tab, and make sure the 'Test is enabled for this skill' switch is toggled on. While you're here, you can go ahead and test out the skill by entering something into the box.
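For reference, the compiled output directory looks roughly like this (one subdirectory per module; only `Main` shown):

```
output/
  Main/
    index.js    # exports the compiled `handler`
  ...
```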
First, let me explain the idea for the skill. I've already made a skill, "Secret Word", where:
- Alexa picks a 5-letter word.
- You guess 5-letter words.
- Alexa tells you how many letters are shared between your guess and the secret word.
- You try and use the information to figure out the word, but eventually just give up because it's too hard.
"Known word" will be the inverse of this:
- You pick a 5-letter word.
- Alexa guesses 5-letter words.
- You tell Alexa how many letters are shared between her guess and your secret word (this scoring rule is sketched below).
- Alexa guesses your word.
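Both skills revolve around the same scoring rule. Here is one plausible way to write it in Purescript, counting shared letters with multiplicity; the name `sharedLetters` and the exact counting rule are my assumptions, not code from either skill:

```purescript
module SharedLetters where

import Prelude
import Data.Array (delete)
import Data.Foldable (elem, foldl)
import Data.String.CodeUnits (toCharArray)

-- Count letters common to the guess and the secret, with multiplicity:
-- each letter of the secret can be matched by at most one guess letter.
sharedLetters :: String -> String -> Int
sharedLetters guess secret =
  (foldl step { remaining: toCharArray secret, count: 0 } (toCharArray guess)).count
  where
  step acc c
    | c `elem` acc.remaining = { remaining: delete c acc.remaining, count: acc.count + 1 }
    | otherwise = acc
```

For example, `sharedLetters "crane" "slate"` is `2` (the shared "a" and "e").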
Let's start with the fun part: