The client is a popular US-based fast-casual food chain, specializing in made-to-order burritos and tacos. The chain boasts thousands of locations, a net income in the hundreds of millions, and tens of thousands of employees.
Customers already have the option of placing orders through the restaurant’s website for quick and easy in-store pickup. Not only is this option convenient, but it also allowed for social distancing during COVID-19 surges, cutting down on long indoor lines. The restaurant chain decided to take the experience further by letting customers order through a voice-enabled device, eliminating the time it takes to type an order into an app. The chain partnered with Concentrix Catalyst to build its first Amazon Alexa voice ordering capability. The company envisioned customers telling Alexa to place an order from their list of favorite menu items and receiving a notification when the order is ready for pickup.
Catalyst enabled customers to reorder from past orders in just a few words, avoiding a long and convoluted web flow. This next-gen conversational AI experience allows customers to order items from their “favorites” menu, to be prepared right away at their local franchise.
Enabling a pioneering voice-enabled experience, such as reordering a meal, started with a human-centric design approach and an outside-in perspective: putting the customer at the center of the journey and visualizing the intent behind requests. Designing this type of experience takes sophisticated strategic design thinking. Imagine that you are hungry for lunch, and you say to Alexa: “deliver me a burrito to my office.” As a human being, you understand the intent immediately. A machine does not, because there is an almost infinite number of ways to phrase the same request. Understanding intent relies on Natural Language Processing (NLP) and requires relevant context to provide accurate results or services. A simple phrase such as “hold the beans” could mean many things without context.
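To make the role of context concrete, the following is a minimal, hypothetical sketch of intent and entity resolution. The intent names (`OrderItem`, `ModifyOrder`) and the keyword matching are illustrative assumptions, not the chain’s actual voice-skill schema; a production NLU service would use trained models rather than string checks.

```python
# Hypothetical sketch: mapping a customer utterance to an intent and entities.
# Intent and entity names are illustrative, not the real skill's schema.

def interpret(utterance: str, context: dict) -> dict:
    """Very simplified intent/entity resolution using conversational context."""
    text = utterance.lower()
    if "burrito" in text or "taco" in text:
        item = "burrito" if "burrito" in text else "taco"
        return {"intent": "OrderItem", "entities": {"item": item}}
    if text.startswith("hold the"):
        # "hold the beans" is only meaningful if an order is in progress:
        # context tells us which item the modification applies to.
        if context.get("active_item"):
            return {"intent": "ModifyOrder",
                    "entities": {"remove": text.removeprefix("hold the ").strip(),
                                 "item": context["active_item"]}}
        return {"intent": "Clarify", "entities": {}}
    return {"intent": "Unknown", "entities": {}}

print(interpret("Deliver me a burrito to my office", {}))
# With context, "hold the beans" modifies the burrito already being ordered;
# without it, the assistant must ask a clarifying question.
print(interpret("hold the beans", {"active_item": "burrito"}))
```

The same phrase thus resolves to different intents depending on the state of the conversation, which is why context tracking is central to the design.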
To build this kind of experience, the Catalyst team leveraged the technology’s capability to learn from past interactions through machine learning, distinguishing between the many ways of asking the same question, such as “what’s the weather?” versus “could you tell me the weather for today?” This intelligence had to be built by mapping all the intents and entities required to facilitate the voice-enabled skill.
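The idea of mapping many phrasings to one intent can be sketched with a toy classifier. This is an assumption-laden illustration: real NLU services (such as LUIS) train statistical models on labeled example utterances, whereas this sketch just picks the intent whose examples share the most words with the input. The intent names and training utterances are invented for the example.

```python
# Hypothetical sketch: classifying differently-phrased utterances into one
# intent from labeled examples, loosely mimicking how an NLU service is
# trained. Intent names and sample utterances are illustrative assumptions.

TRAINING = {
    "GetWeather": ["what's the weather", "could you tell me the weather for today"],
    "ReorderFavorite": ["order my usual", "reorder my favorite burrito"],
}

def classify(utterance: str) -> str:
    """Pick the intent whose example utterances overlap most with the input."""
    words = set(utterance.lower().replace("?", "").split())
    best_intent, best_score = "None", 0
    for intent, examples in TRAINING.items():
        for example in examples:
            score = len(words & set(example.split()))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

# Two different phrasings resolve to the same intent.
print(classify("What's the weather?"))
print(classify("Could you tell me the weather for today?"))
```

A production model generalizes far beyond word overlap, but the training shape is the same: many example utterances labeled with one intent.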
Catalyst took a differentiated approach to drive the key outcomes for this project by emphasizing four components that paved the path to voice maturity:
- Exploring CX goals: We looked at different avenues with specific business and consumer goals in mind to please the people who matter the most—customers.
- Creating a vision: We visualized how customers would use the voice functionality and benefit from the usage prior to initiating the journey.
- Starting small and failing fast: Rather than experimenting across every audience interaction channel, we tested bots with a single audience segment or interaction type. We then analyzed, trained, and expanded interaction types from there.
- Continuous feedback: To ensure processes and practices were working, we gathered and analyzed metrics at regular intervals, striving towards continuous improvement.
This solution required strategy work beyond simply building a point solution around a technology preference. Engaging with the client and understanding the customer’s problem before jumping into solution design led to a user-friendly, human-centered solution. Once implemented, the Alexa voice skill increased operational efficiency by 40% for the restaurant chain by reducing lines at individual locations. It also increased profits by 10% by allowing stores to serve more customers in a day without significantly burdening employees.
The solution was built leveraging Azure Bot Service, Azure Cognitive Services (LUIS), and the Azure Bot Framework SDKs, and the skill was published through the Amazon Alexa Skills Kit. It has the potential to expand options for customers, including refining the intelligence to allow ordering items beyond past orders, and connecting to delivery services such as GrubHub or DoorDash. There is also the opportunity to integrate the solution directly with the restaurant’s website so that using the voice skill does not depend on customers already having the restaurant’s app on their devices.
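To illustrate the request flow, here is a minimal, hypothetical sketch of how an Alexa IntentRequest could be handled and answered. The intent name (`ReorderFavoriteIntent`), slot name, and stand-in order logic are assumptions, not the production skill’s interaction model; the response envelope follows the standard Alexa skill JSON format.

```python
import json

# Hypothetical sketch of the skill's request flow: an Alexa IntentRequest is
# parsed, the intent is routed to order logic (standing in for the Azure bot),
# and an Alexa-format speech response is returned. Intent and slot names are
# illustrative, not the production skill's interaction model.

def handle_alexa_request(event: dict) -> dict:
    request = event["request"]
    if request["type"] == "IntentRequest" and \
            request["intent"]["name"] == "ReorderFavoriteIntent":
        item = request["intent"]["slots"]["item"]["value"]
        speech = f"Your {item} order has been placed for pickup."
    else:
        speech = "Sorry, I didn't catch that."
    # Minimal Alexa skill response envelope.
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText", "text": speech},
                         "shouldEndSession": True}}

event = {"request": {"type": "IntentRequest",
                     "intent": {"name": "ReorderFavoriteIntent",
                                "slots": {"item": {"value": "burrito bowl"}}}}}
print(json.dumps(handle_alexa_request(event), indent=2))
```

In the deployed architecture, the intent routing and order logic would live behind the Azure bot rather than inline, with the Alexa skill acting as the voice front end.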