
Notes from Voice Summit 2019

I spent some of this past week at the massive Voice Summit 2019. Thousands attended to learn and share their experience in the voice space, and attendees and speakers came from a broad set of backgrounds. Many of the Summit's talks and workshops were given by the companies that provide smart speakers and digital assistants, such as Amazon with Alexa and Google with Assistant. In addition to helping market the event, the Amazon Alexa team staffed many workshops and breakout sessions, most of which were aimed at helping developers hone their skills in the voice space. Never used AWS services to extend a skill's feature set? They held sessions walking attendees through exactly what to do. Samsung's Bixby and Microsoft's Cortana also made appearances at the summit, hosting workshops and announcing new feature releases on the big stage.

The major voice platforms are going about their devices differently, but one big takeaway from the event is the area where they all agree: these devices require serious computing services. Between storage, compute power, and scalability, the main platforms concur that you need a strong back end to make a powerful voice application. Amazon through AWS, Google through Google Cloud, and Microsoft through Azure all offer options to support an amazing voice experience. This became a major theme of the Summit. Voice works because of a combination of technologies: solid 4G or WiFi lets voice devices communicate with servers in the cloud to quickly give the user helpful information. It is the interplay of all these pieces that makes the most compelling voice applications. That was one side of the conference; the other was all about crafting great dialog.

The consensus seemed to be that a compelling voice application needs both great computing and great dialog. Voice applications are easy enough to build and deploy; they are hard to make helpful and sticky. On Alexa alone, the vast majority of skills (over 90%) are used once by a user, who never returns. Computing was emphasized because a compelling skill needs useful information and tasks, but beyond that, you need to engage the user in a compelling way. The voice interaction needs to be quick and to the point.

Alexa and other platforms allow you to replace the native assistant's voice with actual human voice-overs. The number of voice actors at the Summit was quite astonishing; I counted at least five sessions focused on voice acting for the voice assistant age. Speakers described how the use of human voices differentiated their voice applications and made them feel more natural.

Many other speakers talked about how they use context to speed up their voice experiences. By remembering users, skills can quickly repeat actions without requiring data to be re-entered. Every time I check the weather, I shouldn't need to tell Alexa where I am. Voice applications that take advantage of this have succeeded. Voice applications even allow for third-party integration: want to know the balance of your bank account? Enter your username and password once, then get the answer quickly every time you ask. As with other applications driven by user experience, the voice applications that reduce customer friction have found the most success to date. Just because it's called conversational UI doesn't mean you have to architect a half-hour gab session just to get some basic information from your device. Interactions designed for efficiency are getting the most utilization.
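The context idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not tied to any real voice SDK: the handler name, the key-value `store`, and the prompts are all assumptions made for the example. The point is simply that remembering one piece of data per user (here, a location) turns a multi-turn exchange into a one-shot request.

```python
def handle_weather_request(user_id, store, spoken_location=None):
    """Return a response for a weather request, remembering location per user.

    store is any dict-like key-value store persisted between sessions
    (illustrative stand-in for a real skill's persistence layer).
    """
    if spoken_location:
        store[user_id] = spoken_location  # save the location for next time
    location = store.get(user_id)
    if location is None:
        # No remembered context yet: re-prompt the user once.
        return "Which city would you like the weather for?"
    # Context found: answer immediately, no extra dialog turn needed.
    return f"Getting the weather for {location}."
```

The first request re-prompts for a city; after the user answers once, every later "check the weather" request skips straight to the answer, which is exactly the friction reduction the speakers described.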

Find out more about how voice could help you by reaching out to our team of Alexa Subject Matter Experts, VUI designers, and developers:

https://www.bluefintechnologypartners.com/voice-interface-development