Amazon rolls out developer tools to improve Alexa voice apps

Amazon’s adding a trio of new tools to the Alexa Skills Kit, a suite of self-service APIs and resources for conversational app development, designed to improve the quality of experiences built for Alexa. The first two, which are now generally available — the Natural Language Understanding (NLU) Evaluation Tool and Utterance Conflict Detection — enhance overall voice model accuracy, while the Get Metrics API (currently in beta) supports the analysis of app usage metrics in first- and third-party analytics platforms.

“These tools help complete the suite of Alexa skill testing and analytics tools that aid in creating and validating your voice model prior to publishing your skill, detect possible issues when your skill is live, and help you refine your skill over time,” wrote Amazon product marketing manager Leo Ohannesian. “[We hope these] three new tools [help] to create … optimal customer experience[s].”

The NLU Evaluation Tool can test batches of utterances and compare how a voice app’s natural language processing (NLP) model interprets them against expectations. (As Ohannesian notes, overtraining an NLU model with too many sample utterances can actually reduce its accuracy.) Rather than adding more sample utterances to an interaction model, developers can run evaluations against the commands users are expected to say, surfacing problematic utterances that resolve to the wrong intent and thereby isolating new training data.

The NLU Evaluation Tool additionally supports regression testing, allowing developers to create and run evaluations after adding new features to voice apps. And it can perform measurements with anonymized, frequently heard live utterances surfaced from production data, which helps gauge the accuracy impact of any changes made to the voice model.
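The core idea — run a batch of (utterance, expected intent) pairs through a model and flag mismatches — can be sketched in a few lines. This is a conceptual illustration only, not Amazon's actual API; the model here is any callable that maps an utterance to an intent name, and intent names like `AMAZON.StopIntent` follow Alexa's built-in naming convention purely for flavor.

```python
def evaluate_utterances(model, test_set):
    """Run a batch of (utterance, expected_intent) pairs through a model.

    `model` is any callable mapping an utterance string to an intent name.
    Returns (pass_rate, failures), where failures lists every utterance
    that resolved to the wrong intent -- candidates for new training data.
    """
    failures = []
    for utterance, expected in test_set:
        predicted = model(utterance)
        if predicted != expected:
            failures.append((utterance, expected, predicted))
    pass_rate = (1.0 - len(failures) / len(test_set)) if test_set else 1.0
    return pass_rate, failures


# Usage with a stub "model" backed by a lookup table (illustrative only):
stub_model = {
    "order a pizza": "OrderPizzaIntent",
    "stop": "AMAZON.StopIntent",
}.get

rate, fails = evaluate_utterances(stub_model, [
    ("order a pizza", "OrderPizzaIntent"),   # resolves correctly
    ("stop", "AMAZON.CancelIntent"),         # resolves to the wrong intent
])
```

Re-running the same test set after each model change gives the regression-testing workflow the article describes: a drop in the pass rate pinpoints exactly which utterances a change broke.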

As for Utterance Conflict Detection, it’s intended to detect utterances that are accidentally mapped to multiple intents, another factor that can reduce NLP model accuracy. It’s automatically run on each model build and can be used prior to publishing the first version of the app or as intents are added over time.
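The check itself is conceptually simple: scan every intent's sample utterances and flag any utterance that appears under more than one intent. A minimal sketch, assuming an interaction model represented as a plain intent-to-utterances mapping (this is an illustration of the idea, not Amazon's implementation):

```python
from collections import defaultdict


def find_utterance_conflicts(interaction_model):
    """Given a mapping of intent name -> list of sample utterances,
    return the utterances that are mapped to more than one intent.

    Utterances are normalized (trimmed, lowercased) before comparison,
    since "Play Music" and "play music" would collide at runtime.
    """
    seen = defaultdict(set)
    for intent, samples in interaction_model.items():
        for utterance in samples:
            seen[utterance.strip().lower()].add(intent)
    return {u: sorted(intents) for u, intents in seen.items() if len(intents) > 1}


# "resume" below accidentally appears under two intents -- the kind of
# conflict that degrades NLP model accuracy:
conflicts = find_utterance_conflicts({
    "PlayMusicIntent": ["play music", "resume"],
    "ResumeAudiobookIntent": ["resume", "continue my book"],
})
```

Running a check like this on every model build, as the tool does, catches conflicts both before an app's first release and as intents accumulate over time.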

Lastly, there’s the Get Metrics API (beta), which lets Alexa developers more easily analyze metrics like unique customers in environments such as Amazon Web Services’ CloudWatch. Plus, it supports the creation of monitors, alarms, and dashboards that spotlight changes that could impact customer engagement.

Amazon says the Get Metrics API is available in all locales and currently supports the Custom skill model, the prebuilt Flash Briefing model, and the Smart Home Skill API.
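As an example of the alarm workflow the article mentions: once skill metrics are republished as a custom CloudWatch metric, a standard CloudWatch alarm can flag a drop in engagement. The sketch below builds the keyword arguments for boto3's real `put_metric_alarm` call; the namespace, metric name, and dimension are my own illustrative assumptions, not values defined by Amazon's Get Metrics API.

```python
def engagement_alarm_config(skill_id, threshold):
    """Build kwargs for CloudWatch's put_metric_alarm that fire when the
    daily unique-customer count falls below `threshold`.

    Namespace/MetricName/Dimensions are assumed custom values -- they
    would be whatever you used when republishing the skill's metrics.
    """
    return {
        "AlarmName": f"{skill_id}-unique-customers-low",
        "Namespace": "AlexaSkill/Engagement",   # assumed custom namespace
        "MetricName": "UniqueCustomers",        # assumed custom metric
        "Dimensions": [{"Name": "SkillId", "Value": skill_id}],
        "Statistic": "Sum",
        "Period": 86400,                        # one day, in seconds
        "EvaluationPeriods": 1,
        "Threshold": float(threshold),
        "ComparisonOperator": "LessThanThreshold",
    }


# With AWS credentials configured, this would create the alarm:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(
#       **engagement_alarm_config("my-skill-id", 50))
```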

The rollout of the NLU Evaluation Tool, Utterance Conflict Detection, and the Get Metrics API follows last month’s general availability launch of Alexa Presentation Language, a toolset designed to make it easier for developers to create “visually rich” skills for Alexa devices with screens. That launch arrived alongside skill personalization, which enables developers to create personalized skill experiences using voice profiles captured by the Alexa app, and the Alexa Web API for Games, which Amazon describes as a collection of tech and tools for creating visually rich and interactive voice-controlled game experiences.