When I spoke to Phil Schiller, Apple’s Senior Vice President, earlier this month about the new A13 Bionic chip, we focused on how the A13 will help enhance the performance of the phone and make some applications better. He highlighted text-to-speech capabilities as one such application.
“One of the biggest examples of the benefits of the performance increase this year is the text to speech. We’ve enhanced our iOS 13 text to speech capabilities such that it’s much more natural language processing. That’s all done with machine learning and the neural engine.” Phil Schiller, Apple Senior Vice President of Worldwide Marketing
I decided to put this to the test. For the past four days — I have been slightly under the weather — I have used the microphone button for pretty much everything: replying to texts, leaving comments on Instagram, and even writing a blog post or two. If I keep the sentences short, the dictation works flawlessly. When I ramble on, it is good, but not great. Occasionally it transcribes something incorrectly — though not often enough to bother me. And the more I use it, the better it gets. I am quite satisfied with the speech recognition and will most definitely be using it more often.
How about Siri? Well, it seems to understand me better, even if it has limited capabilities compared to Google Assistant. Nevertheless, I am encountering fewer errors when setting up to-do items, creating calendar entries, and controlling the phone via voice commands. If Apple can graduate Siri from high school to the postgraduate level, all this chip power might be put to even more use.
PS: An exclusive look inside Apple’s A13 Bionic chip. My piece in Wired.