Conversational Technologies

Conversational Technologies offers consulting services in natural language processing and speech technology. For example:
1. Educate
2. Promote Standards

We can help with requirements definition, discovery, and technical direction in a vendor-independent way. Some of the kinds of things we do:

Catalyze innovative applications by enabling entrepreneurs to apply speech and language technologies. For example:
1. Adding speech recognition to MossRehab's MossTalkWords software for aphasia rehabilitation.
2. Helping researchers at Columbia College, Chicago integrate …

Conversational Interaction Conference 27/11/2018

I'm very excited to be speaking at the Conversational Interaction conference in March in San Jose (http://www.conversationalinteraction.com/). I'll be talking about "Resources and tools for natural language design: intents and entities". Although there are some great natural language application development toolkits around, they assume that you pretty much know how your application will be structured -- that is, what intents and entities you want to use. I don't think that's typically the case, and making a mistake on the application structure at the beginning can lead to a lot of rework later on. I first talked about how to find the right intents and entities at the Voice Summit last July in Newark (https://www.voicesummit.ai/); here's my presentation (http://www.conversational-technologies.com/presentations/voice2018.pdf). At Conversational Interaction I'll continue with this topic in much more detail.
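To make the idea concrete, here is a small, purely hypothetical sketch (in TypeScript) of what writing down intents and entities before committing to a toolkit might look like. The coffee-ordering domain and every name below are invented for illustration; they are not taken from the presentation or from any particular toolkit.

```typescript
// Hypothetical sketch: a toolkit-neutral way to write down intents and entities
// before building anything. The domain and all names are invented for illustration.

interface EntitySlot {
  name: string;       // the piece of information the intent needs, e.g. "size"
  values: string[];   // the closed set of values we expect to recognize
}

interface Intent {
  name: string;                 // what the user wants to do
  sampleUtterances: string[];   // examples of how users might phrase it
  slots: EntitySlot[];          // the entities this intent depends on
}

const orderDrink: Intent = {
  name: "OrderDrink",
  sampleUtterances: ["I'd like a {size} {drink}", "can I get a {drink} please"],
  slots: [
    { name: "size", values: ["small", "medium", "large"] },
    { name: "drink", values: ["coffee", "latte", "tea"] },
  ],
};

// Writing the whole model out like this, before touching a toolkit, makes
// structural problems visible early: overlapping intents, slots that really
// belong to several intents, utterances that don't fit any intent.
const appModel: Intent[] = [orderDrink];
console.log(`Model defines ${appModel.length} intent(s)`);
```

A paper model like this is cheap to revise; restructuring intents after utterances, dialogs, and fulfillment code already depend on them is where the rework comes from.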


VOICE 04/06/2018

I'm excited to be speaking at VOICE, July 24-26 at the New Jersey Institute of Technology in Newark, NJ. Join me and the community to explore the voice ecosystem and how it's impacting your industry. Use code TalkVoice2Me for 10% off your ticket: http://bit.ly/2wLGKQA


Integrating Speech with Intelligent Services 02/05/2018

What if you combined speech, natural language, and other intelligent cloud services in your applications? Come to my free online talk on June 5 to find out. Register here: https://www.eventbrite.com/e/integrating-speech-with-intelligent-services-tickets-45351046200


08/12/2016

My book on standards for multimodal interaction is finally out! It includes 19 chapters on such topics as the W3C Multimodal Architecture, voice standards, and EmotionML, written by experts in their areas. More information at http://www.springer.com/us/book/9783319428147. I hope it inspires developers who want to work with multimodal technologies like speech, natural language, gesture, and vision to take a standards-based approach.


Speech and Language Technology for Language Disorders (Speech Technology and Text Mining in Medicine and Health Care) 04/01/2016

"Speech and Language Technologies for Language Disorders" (co-authored with Katharine Beals, Marcia Linebarger and Ruth Fink) is finally out!
http://www.amazon.com/dp/1614517584/ref=tsm_1_fb_lk. I hope it will be useful for people with family members who have a language disorder as well as for speech and language pathology professionals.


02/12/2015

Finally finishing up "Speech and Language Technology for Language Disorders"! It should be available in December: http://www.degruyter.com/view/product/284563?rskey=oEVrTQ&result=1


SpeechTEK 2015 - The Smart Customer Interactions Event 19/08/2015

I'm having a great time at SpeechTEK (www.speechtek.com) in New York. Looking forward to today's track on the Internet of Things and my talk on Natural Language and the Internet of Things.


W3C Multimodal Interaction Working Group 09/07/2015

There's a new charter for the W3C Multimodal Interaction Working Group (http://www.w3.org/2002/mmi/). There's exciting new work on component discovery for dynamic systems (very important for the Web of Things) and a new version of EMMA that addresses system output. See the announcement at https://lists.w3.org/Archives/Public/www-multimodal/2015Jul/0002.html


Standards for Multimodal Interaction 17/06/2015

I'm very excited to be starting a new book project, "Multimodal Interaction with W3C Standards: Toward Natural User Interfaces to Everything". It's going to be an edited book with contributed chapters about standards for interacting with technology using speech, handwriting, and other forms of natural interaction. See the book website, www.mmi-standards.com, for more information.


Speech Recognition in the Browser 11/06/2015

Here's a video demo I made of multilingual speech recognition in the browser using the Web Speech API (https://youtu.be/S-6fIA2U2R4), showing some basic speech recognition concepts like n-best lists and confidence scores. Try it yourself at www.proloquia.com/speechDemo.html!
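For readers who want to experiment in their own page, here is a minimal sketch of the kind of Web Speech API call a demo like this is built around. It is not the demo's actual code: it assumes a Chromium-based browser where the recognizer is exposed as webkitSpeechRecognition, and the language code and alternative count are just illustrative choices.

```typescript
// Minimal sketch (not the demo's actual code): browser speech recognition with
// the Web Speech API, showing the n-best list and confidence scores.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "de-DE";       // recognize German; change the code to try other languages
recognition.maxAlternatives = 5;  // request an n-best list, not just the top hypothesis

recognition.onresult = (event: any) => {
  const alternatives = event.results[0];
  // Each alternative carries a transcript and a confidence score.
  for (let i = 0; i < alternatives.length; i++) {
    console.log(
      `${alternatives[i].transcript} (confidence ${alternatives[i].confidence.toFixed(2)})`
    );
  }
};

recognition.onerror = (event: any) => console.error("Recognition error:", event.error);

// Most browsers only allow recognition to start from a user gesture, e.g. a button click.
recognition.start();
```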


Team Watt-OurApp 12/05/2015

I recently helped some EE students with ideas for using speech in electrical engineering applications that they were doing for a class project. Here's the hands-free, voice-enabled electrical data recorder that they built https://www.youtube.com/watch?v=8jpuOPidJiw


Web accessibility for people with cognitive disabilities 28/04/2015

Here are some slides from a recent talk I gave on Web accessibility for people with cognitive disabilities.


Mobile Voice Conference - 2015 : Home 26/03/2015

I'm looking forward to this year's Mobile Voice Conference (http://mobilevoiceconference.com/) in San Jose, April 20-21. I'll be speaking on "Natural Language Interaction with the Web of Things". Have you ever wondered how people are supposed to interact with all those cool new connected devices like light bulbs, appliances, dog collars, medical sensors and, well, everything? Apps? Fine if there are only a few devices, but what happens when the Internet of Things takes off and these things are everywhere? Will you have thousands of apps? That can't work!


evoHaX - Hack to Change Lives! 24/03/2015

On April 17, I'll be speaking at evoHaX (http://www.evohax.com/), an accessibility hackathon in Philadelphia, on "Web Accessibility for People with Cognitive Disabilities". I'll talk about cognitive accessibility in general, as well as the recent activities of the W3C Task Force on Cognitive Accessibility (http://www.w3.org/WAI/PF/cognitive-a11y-tf/).


16/09/2014

On September 24, I'll be speaking in a Forrester Research webinar on the New Engagement Workplace, talking about how W3C standards can simplify pervasive mobile collaboration within the enterprise https://event.on24.com/eventRegistration/EventLobbyServlet?target=reg20.jsp&eventid=850031&sessionid=1&key=338BCB27C01D9F348791C9FD761F2A9F&sourcepage=register
Please join me if you're interested in this topic!


SpeechTEK 2014 - Presentations 27/08/2014

SpeechTEK 2014 in New York was stimulating and exciting, with a lot of innovative technologies being presented. Some of the hot topics were wearables, robots, virtual assistants and the connected home. Many of the presentations are now online at http://www.speechtek.com/2014/Presentations.aspx, including my presentations, "Use Speech and Language Technologies to Overcome Language Disorders" and "Develop Multimodal Applications With Free and Open Source Tools".


13/08/2014

I'm very honored to have received a Speech Luminary Award from Speech Technology Magazine this year, for my work on speech standards, especially for helping move Emotion Markup Language (http://www.w3.org/TR/emotionml/) through the standardization process. I hope this helps bring Emotion Markup Language and other speech and multimodal standards to more people's attention. The article is at http://www.speechtechmag.com/Articles/Editorial/Cover-Story/2014-Speech-Industry-Awards-98321.aspx.


SIGDIAL 2014 20/06/2014

I've been at the SIGDIAL conference (Special Interest Group on Discourse and Dialogue, http://www.sigdial.org/workshops/conference15/) for the past couple of days. Some interesting themes have been open domains, unsupervised learning, affective systems, and a common evaluation task on dialog state tracking. Also, great networking and a fantastic Chinese banquet at Joy Tsin Lau in Philadelphia.

