Amazon Lex adds Assisted NLU to improve bot accuracy, AWS says

AWS says Amazon Lex’s Assisted NLU feature is designed to improve bot accuracy by better handling natural, unscripted customer language. The company says the feature uses large language models to support intent classification and slot resolution.

What Assisted NLU does

According to AWS, Assisted NLU helps Amazon Lex process typos, ambiguous requests, multi-slot utterances, and other natural language variations without requiring developers to manually configure every utterance pattern.

The post says the feature can use the names and descriptions of intents and slots to interpret user input. AWS says Assisted NLU achieves 92 percent intent classification accuracy and 84 percent slot resolution accuracy on average.

Modes, pricing and rollout

AWS says Assisted NLU operates in two modes:

  • Primary mode: uses the LLM for every user input.
  • Fallback mode: uses traditional NLU first and invokes the LLM only when confidence is low or the request would route to FallbackIntent.
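The post says the feature is enabled per locale. As a rough illustration only, the sketch below builds a request payload for the Lex V2 UpdateBotLocale operation; the exact placement and shape of the NluImprovementSpecification field are assumptions based on the post's mention of that API, so check the Lex V2 API reference before relying on them.

```python
# Sketch: constructing an UpdateBotLocale request payload that enables
# Assisted NLU. Field placement under generativeAISettings is an
# ASSUMPTION, not confirmed by the post.

def build_assisted_nlu_payload(bot_id, bot_version, locale_id, enable=True):
    """Return a request dict for the lexv2-models UpdateBotLocale call
    (hypothetical field layout; verify against the API reference)."""
    return {
        "botId": bot_id,
        "botVersion": bot_version,
        "localeId": locale_id,
        "nluIntentConfidenceThreshold": 0.4,
        # Assumed location of the Assisted NLU toggle:
        "generativeAISettings": {
            "runtimeSettings": {
                "nluImprovementSpecification": {"enabled": enable},
            }
        },
    }

payload = build_assisted_nlu_payload("BOTID12345", "DRAFT", "en_US")
# A real call would then be something like:
#   boto3.client("lexv2-models").update_bot_locale(**payload)
print(payload["localeId"])
```

Building the payload separately from the client call makes it easy to version the settings alongside the bot definition and to unit-test them without touching AWS.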

The company says Assisted NLU, including Primary mode, Fallback mode and intent disambiguation, is included at no additional cost with standard Amazon Lex pricing.

AWS also says hundreds of active customers have been onboarded to Assisted NLU. In the post, the company says customers have reported 11–15 percent higher intent classification accuracy, 23.5 percent fewer fallback responses and 30 percent better handling of noisy inputs.

Implementation guidance

The post lays out best practices for using the feature effectively, including writing intent descriptions in a consistent format, giving slot descriptions clear context and testing ambiguous utterances with Amazon Lex Test Workbench.

AWS says intent descriptions should act as prompts for the LLM, while slot descriptions should clarify what each slot captures and any relevant constraints. The company also recommends using versioning and aliases to test changes safely before pushing them into production.
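To make the guidance concrete, here is a minimal sketch of intent and slot descriptions written in a consistent, prompt-like format, along with a trivial lint check. The intent names, slot names and wording are illustrative examples, not taken from the post.

```python
# Sketch: consistent, prompt-style descriptions for intents and slots,
# following the post's guidance. All names here are hypothetical.

INTENT_DESCRIPTIONS = {
    "BookFlight": "Use when the customer wants to reserve a flight, "
                  "including one-way, round-trip, or multi-city travel.",
    "CancelFlight": "Use when the customer wants to cancel an existing "
                    "flight reservation, even if phrased informally.",
}

SLOT_DESCRIPTIONS = {
    "DepartureCity": "The city the customer is flying from; accept "
                     "airport codes, city names, and common nicknames.",
    "TravelDate": "The date of travel; accept relative phrases such as "
                  "'next Friday' or 'the day after tomorrow'.",
}

def check_consistent_format(descriptions):
    """Simple lint: every intent description opens with the same cue
    phrase, keeping the format uniform across intents."""
    return all(text.startswith("Use when") for text in descriptions.values())

print(check_consistent_format(INTENT_DESCRIPTIONS))  # True
```

A lint like this can run in CI so that new intents keep the same description format as existing ones before a bot version is promoted.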

What to test before deploying

For validation, AWS says teams should focus on edge cases such as typos, colloquial phrases, incomplete utterances, ambiguous requests and slot variations. The company also says developers should verify that adversarial inputs route predictably to FallbackIntent.
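The edge-case categories above can be organized into a small test suite. The sketch below only structures the inputs; the example utterances and expected intents are invented for illustration, and in practice each text would be sent to the bot (for example via the Lex V2 runtime's RecognizeText operation) and the returned intent compared.

```python
# Sketch: a minimal edge-case suite covering the categories the post
# recommends testing. Utterances and expected intents are illustrative.

EDGE_CASES = [
    {"category": "typo", "text": "I wnat to book a flihgt",
     "expected_intent": "BookFlight"},
    {"category": "colloquial", "text": "gotta get outta town friday",
     "expected_intent": "BookFlight"},
    {"category": "incomplete", "text": "cancel",
     "expected_intent": "CancelFlight"},
    {"category": "ambiguous", "text": "change my trip",
     "expected_intent": None},  # may trigger intent disambiguation
    {"category": "adversarial", "text": "asdf qwerty 123",
     "expected_intent": "FallbackIntent"},
]

def cases_by_category(cases):
    """Group test utterances by category for reporting."""
    grouped = {}
    for case in cases:
        grouped.setdefault(case["category"], []).append(case["text"])
    return grouped

print(sorted(cases_by_category(EDGE_CASES)))
```

Keeping adversarial inputs as a first-class category makes it easy to verify the post's point that such inputs should route predictably to FallbackIntent.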

The post closes by recommending that teams compare failed utterances, refine descriptions and rerun tests until the bot performs as expected. AWS says readers can enable Assisted NLU from the Amazon Lex console or programmatically through the NluImprovementSpecification API, for which it links the API reference.

Source: AWS Machine Learning Blog
