Cocktail-matching application with Google Assistant at
the Google Experience Centers, Mountain View
16 weeks, 2018
Google Home and Philips Hue
Designed use cases and the overall user experience that could be implemented at the Experience Centers
Designed the table displays and the assistant's integration with Philips Hue products
Created user personas, organized testing, defined success metrics, and conducted 40+ user interviews
Contributed to design documentation for the client
A 60-second cocktail-matching experience with a voice assistant during cocktail hour (5:30–9 pm) in the Experience Center's Lounge. The experience consists of four narrative-driven personality quizzes that match the guest with a delicious cocktail, made by a human bartender.
The faculty wanted us to experiment with Machine Learning and deliver a more Disney-like experience
The client only wanted a quick, fun use case that feels "Googley" and can be installed at their centers
 Defined project constraints
This project posed an interesting challenge: it was not just about designing a voice-assistant application, but about delivering one that could be deployed in a public space rather than someone's home.
The Experience Centers are located around the world, where Google hosts meetings with executive guests and diplomats and demonstrates its Machine Learning products through non-traditional use cases. After learning about the centers' typical one-day program for VIP guests, I mapped out several key moments in the guest journey where guests can step away from meetings and engage with Google products.
A non-traditional use case of Google Assistant
Can be implemented at the Experience Centers
Visual accessibility for all guests to participate
 Defined design pillars, final direction, & guest personas
Relevance: fits in the guest's program at the centers
Time: 1-2 minutes (since guests should be interacting with each other as well)
Result: surprises and delights (the outcome of guest interactions should be positive and engaging)
Permission: guests know where the app is and whether they can interact with the assistant
The final product not only serves guests a surprising and delightful drink, but also acts as a conversation starter between guests. Guests can be directed to the assistant by a Google employee or the human bartender, or they can discover the assistant on one of the bar-height tables through its integration with Philips Hue lights.
I interviewed a sample of guests to understand how drinks are typically ordered at the lounge.
 Implemented physical design for a theatrical experience
To attract guests to the voice assistant, I brought her to life by channeling her interactions through the table design and colorful lighting effects.
I created color-coded table cards with simple "wake words" to activate the assistant, and associated each of the questions with a distinct light color. I also matched the final light colors to the colors of the drinks about to be served; these design decisions spark more conversation between guests.
Noise interference was another design challenge I had to tackle to ensure the product could be deployed successfully. I ran a decibel test to determine a safe distance between tables so that multiple Google Assistants could be placed in the Lounge.
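The project relied on an empirical decibel test, but the same spacing question can be estimated on paper with the free-field attenuation law (roughly 6 dB quieter per doubling of distance). The sketch below is a back-of-envelope model, not the project's measurements; all numbers in the example are hypothetical.

```python
import math

def spl_at_distance(spl_at_1m: float, distance_m: float) -> float:
    """Free-field point-source approximation: sound pressure level
    falls by 20*log10(d) dB relative to the level at 1 m."""
    return spl_at_1m - 20 * math.log10(distance_m)

def min_table_spacing(spl_at_1m: float, max_interference_db: float) -> float:
    """Smallest spacing at which a neighboring table's speaker
    drops below the interference ceiling at this table's mic."""
    return 10 ** ((spl_at_1m - max_interference_db) / 20)

# Hypothetical numbers: a speaker measured at 70 dB SPL at 1 m,
# with a 55 dB ceiling tolerated at the neighboring table.
spacing = min_table_spacing(70, 55)  # ≈ 5.6 m between tables
```

In practice a lounge is far from a free field (reflections, crowd noise), which is exactly why an on-site decibel test beats the formula; the model is only useful for a first guess at layout.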
"Breathing" light pattern attracts guests to the table
Light progression shows the assistant is "listening" for the guest's responses to the 4 questions that determine their drink.
Assistant in action: the light show during the drink reveal also masks the assistant's delay in responding.
 Conducted user testing and documented voice user interface lessons
Early playtest with the narrative design
Final playtest with the whole experience
Don’t try too hard to act human - Guests had little to no tolerance for the assistant's mistakes when she acted disingenuously human. When the assistant used humor to acknowledge that she is a robot, guests became more curious and forgiving of flaws in the conversation.
Personalized but not creepy - Perceived privacy is a fine line. Machine Learning-driven conversations should feel smart and personalized, but not too personal.
Signaling turn-taking - The assistant cannot listen while she speaks. Use audio and visual cues (physical and digital) to let guests know when they should speak, when the assistant is done talking, and when she is listening.
Confirmation reassures - Repeating what guests just said makes them feel heard and builds a positive relationship between the guest and the assistant.
Visual supports audio - Since humans can read and skim faster than they can listen, visual elements should appear gradually, following the assistant’s speech, so as not to break the immersion of the experience. If a touch screen is used, follow 2D interface guidelines to ensure elements behave as expected (touchable, slidable). Google's live visual transcription also serves as an accessibility tool here.
Error handling - Find creative ways to handle the assistant’s errors. When a word like “know” is heard as “no,” the assistant simply carries on the interaction as designed. Guests who like to challenge the experience will try to trick the assistant with wordplay; use humor to reconfirm: “Do you mean [this intent]?”
Delay in guest’s response causes exit - Have visual cues (e.g., a timer) on hold screens. Tune the number of incorrect inputs the assistant can receive before exiting the program; this was a useful adjustment for a loud-room setting.
Delay in assistant’s response causes frustration - Have audio and visual cues to keep guests engaged while they wait.
Have short, clear “wake words” and “exit words” - for ease of entering and exiting the app.
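The error-handling and timeout lessons above can be sketched as a single dialog-turn handler. This is a minimal illustration, not the project's actual implementation: the retry threshold, the homophone table, and every name here are hypothetical, and a real Assistant action would express the same logic through its dialog platform rather than a hand-rolled loop.

```python
from typing import Optional, Tuple

MAX_RETRIES = 2  # hypothetical: silent/incorrect turns tolerated before exiting
# Hypothetical homophone table for the "know" heard as "no" case
CONFUSABLE = {"know": "no", "yea": "yes"}

def handle_turn(raw_input: Optional[str],
                retries_left: int = MAX_RETRIES) -> Tuple[str, object]:
    """One dialog turn. Returns (action, payload), where action is
    'answer', 'reprompt', 'confirm', or 'exit'."""
    if raw_input is None:                        # guest stayed silent past the timer
        if retries_left == 0:
            return ("exit", "Catch you later!")  # leave gracefully in a loud room
        return ("reprompt", retries_left - 1)    # count down remaining retries
    word = raw_input.strip().lower()
    if word in CONFUSABLE:                       # likely misheard homophone:
        return ("confirm", f"Do you mean '{CONFUSABLE[word]}'?")  # reconfirm with humor
    return ("answer", word)                      # clean input: proceed with the quiz
```

For example, `handle_turn(None, 0)` exits, `handle_turn("Know")` triggers a reconfirmation, and `handle_turn("red")` is accepted as an answer. Keeping the retry limit a single constant is what made the loud-room tuning described above cheap to change.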