Just how exactly does Siri learn a new language? In today’s interview with Reuters, Apple’s speech team head Alex Acero offered a behind-the-scenes look at how Siri is taught new languages, a process that involves script-writing, capturing voices in multiple accents and dialects, and using machine learning and artificial intelligence to build and evolve new language models over time. The process starts with a team of people reading passages aloud; those recordings are then transcribed by hand.
Before actually updating Siri, Apple first rolls out Dictation support for a new language.
Siri currently speaks 21 languages in 36 countries. By comparison, Microsoft’s Cortana supports 8 languages tailored for 13 countries, Google Assistant speaks 4 languages, while Amazon’s Alexa works only in English and German.
Call me crazy or call me what you will, but when I saw Android Wear 2.0 was bringing support for third-party keyboards, I immediately started imagining how useful that would be on my Apple Watch.
Of course, the screen is too small to accommodate a keyboard. Heck, it’s already too small to punch in your passcode without missing a tap target. Still, not only do I think there may be a need for one, but I also believe the technology to do this right is now available.
tvOS 9.2, a new update for the operating system which powers the fourth-generation Apple TV, is now available for public consumption. The new firmware, released alongside iOS 9.3, OS X El Capitan 10.11.4 and watchOS 2.2, is a very interesting update for the cool new features it brings to the table.
tvOS 9.2 enables several features missing from the initial tvOS release, including long-awaited support for wireless keyboards, dictation, Siri support for App Store searches, app folders on the Home screen, a revamped app switcher, Siri Remote improvements, support for Live Photos and iCloud Photo Library and more.
Your Mac comes with the ability to speak selected text. This comes in handy when you can’t see the text very well and would rather have it read aloud. In this tutorial, we’ll show you how you can make your Mac speak a selected body of text with ease.
It seems like every time there’s a new tvOS beta, an interesting new feature is included for us to talk about. With tvOS 9.2 beta 3, that new feature is the ability to use Dictation on text input fields. It also includes the ability to dictate usernames and passwords character by character.
Along with the new Dictation feature comes support for searching the App Store using voice input from the Siri Remote. Have a look at our video preview that showcases each new feature in action.
Both iOS and tvOS give you all the controls you need to prevent profanities from showing up when you use speech-to-text or Siri. In this post, you’ll learn how to disable explicit language for Siri and Dictation on your iPhone, iPod touch or iPad and filter out explicit language for Siri on your Apple TV.
OS X includes a nifty Dictation feature which allows you to control your Mac and apps with your voice. You can use “speakable items”, basically a set of spoken commands, to open apps, choose menu items, email contacts and convert whole spoken sentences to text, wherever you can type text.
This is much like iOS’s Dictation feature, as both iOS and OS X use the same Nuance-powered technology that turns speech into text. iOS devices have limited computing power, so the Dictation feature on the iPhone, iPod touch and iPad requires network connectivity in iOS 7 (iOS 8 supports streaming voice recognition and 22 new languages).
On the Mac, computing resources like CPU power, battery life and RAM are not as constrained as they are on mobile devices. Therefore, OS X Mavericks provides a new Enhanced Dictation feature which converts your words to text without utilizing Apple’s servers.
In other words, Enhanced Dictation lets you dictate without an active Internet connection. Because voice recognition processing runs locally on your Mac, text appears instantly as you speak. That is: continuous, streaming dictation with live feedback is made possible.
In this tutorial, I’m going to show you how to turn on Enhanced Dictation in OS X and take advantage of speech-to-text, even when you’re off the grid…
Among the headline new Mac features set to debut when OS X Mavericks gets released this fall is a small but important enhancement to Dictation.
Hawk-eyed readers will recall that Dictation was brought to the Mac as part of OS X Mountain Lion last summer. Based on Siri’s speech-to-text component, Dictation on the Mac requires a broadband Internet connection.
In Mavericks, Apple will let you optionally download a roughly 1GB package to power offline Dictation.
But Apple’s engineers aren’t stopping here and are reportedly privately testing offline Dictation for iOS 7. More details right after the break…
I appreciate memorable, creative advertising as much as the next guy, but I just don’t get a new Galaxy S4 commercial Samsung is airing in Iceland.
Instead of focusing on the handset’s features or the usual iPhone bashing, this time around Samsung’s creative agency has gone over the top in depicting a guy trying to make a phone call on a real apple.
Once our hero realizes that swiping across fruit makes no sense, the ad switches to a happy scene where he operates a Galaxy S4. The not-so-subtle jab at Apple is plain weird, to put it mildly. I know ads are supposed to take into account the often vast cultural differences across markets, but I’m not sure Samsung did itself a favor with this particular commercial…
OS X 10.9, the next major revision to Apple’s desktop operating system, will contain Siri and Apple Maps, the two headline capabilities currently exclusive to newer iPhones, iPads and iPod touches. According to a new report this morning, early builds of OS X 10.9 that were previously spotted in web logs include Siri and Maps integration. Both features are purportedly in the early testing stages, so it has yet to be determined whether they will be ready for prime time when OS X 10.9 ships sometime next year…
A university in Taiwan has sued Apple over its use of the dictation feature in Siri and the underlying speech recognition engine, claiming Apple’s implementation violates its patents. National Cheng Kung University filed the lawsuit against the iPhone maker on Monday and is seeking undisclosed damages, though its lawyers noted that any calculation would be based on Apple’s U.S. sales of devices that use Siri, quite possibly amounting to millions of dollars in damages…