Apple is training Siri to better understand those with atypical speech

Siri does an okay job of understanding those who query the digital assistant, but there is still room for improvement, especially for those with atypical speech. Apple, along with other tech companies, is working on that.

According to a report today from The Wall Street Journal, which outlines how major tech companies, including Apple, are training their digital assistants, Apple is doing just that. Specifically, the company is working to improve how Siri understands people who speak with a stutter.

Podcasts are helping in this effort, according to the report. Apple has built a bank of 28,000 audio clips from podcasts featuring individuals who stutter. That data can then be used to train Siri to better understand users with atypical speech. An Apple spokesperson confirmed that the data aims to help improve voice-recognition systems for people with atypical speech patterns.


Google and Amazon are likewise working to improve their own digital assistants, Google Assistant and Alexa.

Per the report:

The company is now researching how to automatically detect if someone speaks with a stutter, and has built a bank of 28,000 audio clips from podcasts featuring stuttering to help do so, according to a research paper due to be published by Apple employees this week that was seen by The Wall Street Journal.

The data aims to help improve voice-recognition systems for people with atypical speech patterns, an Apple spokesman said. He declined to comment on how Apple may use findings from the data in detail.

Apple also introduced the Type to Siri feature with the release of iOS 11. That feature makes it possible to interact with the digital assistant without using a voice query at all.

Apple's research paper outlining this work is due to be published this week.