In the past 18 months, commercial speech recognition technologies have seen a dramatic 30 percent improvement, a bigger performance gain than in the previous 15 years combined. These improvements are driven in part by deep learning approaches combined with massive data sets.
In this presentation, Tim Tuttle will address the trend of combining voice, data, and AI, and the role these will play in next-generation IoT devices and real-time contextual applications.
He will demonstrate how voice-search engines, in conjunction with AI computing, continuously analyze and understand contextual signals from users, making it easier for them to find the right information at the right time across a variety of connected devices. As more voice usage data becomes available, speech recognition accuracy will continue to improve. Tim will then explain how this approach offers a set of characteristics not found in conventional search platforms: it is contextual, intelligent, continuous, and anticipatory.
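The idea of blending query relevance with contextual signals can be illustrated with a minimal re-ranking sketch. Everything here (the `Result` class, `score_with_context`, the tag-overlap scoring) is a hypothetical illustration of the general technique, not any vendor's actual ranking model:

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    title: str
    base_relevance: float           # relevance from the query text alone
    tags: set = field(default_factory=set)

def score_with_context(result, context_tags, context_weight=0.5):
    """Boost a result's score by its overlap with the user's context
    signals (e.g. location, time of day, recent topics)."""
    overlap = len(result.tags & context_tags)
    return result.base_relevance + context_weight * overlap

def rank(results, context_tags):
    """Order results by query relevance plus contextual boost."""
    return sorted(results,
                  key=lambda r: score_with_context(r, context_tags),
                  reverse=True)

results = [
    Result("Coffee shops nearby", 0.6, {"local", "morning"}),
    Result("History of coffee", 0.7, {"reference"}),
]

# A morning query from a mobile device favors the local result,
# even though its text-only relevance is lower.
ranked = rank(results, {"local", "morning"})
print(ranked[0].title)  # Coffee shops nearby
```

The same query produces a different ordering for a different context, which is what makes the results "contextual" and "anticipatory" rather than static.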
To date, only companies like Google, Apple, and Microsoft have been able to deploy this technology. Now, companies such as Wit.ai (recently acquired by Facebook), Speaktoit, and Expect Labs offer cloud-based, voice-enabled backends that many companies and developers can use to create great voice interfaces for their applications.
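A typical client of such a cloud backend sends a recognized utterance and gets back structured intent data. The sketch below shows that round trip in outline; the payload fields, response shape, and session handling are assumptions for illustration, since each service (Wit.ai, Speaktoit, Expect Labs) defines its own API:

```python
import json

def build_intent_request(transcript, session_id):
    """Package a recognized utterance for a hosted language-understanding
    backend (assumed to be a JSON-over-HTTPS service)."""
    return {
        "text": transcript,
        "session": session_id,  # lets the backend keep conversational context
    }

def parse_intent_response(body):
    """Extract intent and entities from an assumed response shape."""
    data = json.loads(body)
    return data["intent"], data.get("entities", {})

req = build_intent_request("play some jazz in the kitchen", "session-42")

# Example of the kind of JSON such a backend might return:
sample = ('{"intent": "play_music",'
          ' "entities": {"genre": "jazz", "room": "kitchen"}}')
intent, entities = parse_intent_response(sample)
print(intent, entities["room"])  # play_music kitchen
```

The application never needs its own speech or language models; it only constructs requests and acts on the parsed intent, which is what puts this technology within reach of ordinary development teams.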
Attendees will walk away from this session with a better understanding of:
- How advanced speech recognition and AI-driven computing are driving the next generation of intelligent applications
- How to integrate contextual, intelligent voice search into next-generation apps and personal assistants
- How to improve search relevance with user context and then display contextually driven search results and recommendations