Siri is set to gain contextual learning abilities and deep integration with iMessage and iCloud in iOS 11, according to an unverified rumor shared by Israeli site The Verifier.
Citing information "received directly from the development teams based in Israel and the U.S.", the site says Siri's AI codebase will receive a major update that will enable it to learn a user's usage habits, similar to abilities claimed by Samsung for its new virtual assistant Bixby.
The upgrade is said to extend Siri's capabilities beyond its current limited command pool by stacking multiple queries and offering different actions depending on the context. For example, Siri's integration into iMessage means it will be able to offer suggestions relevant to the ongoing conversation, such as where to dine out, how to get there, and one-step Uber taxi booking.
In addition, the claim is that Siri's integration with iCloud will enable it to identify meaningful connections between the various devices associated with an Apple ID account and offer practical actions that span Mac and iOS systems.
Moreover, Apple will embed Siri deeper into the Apple TV and Apple Watch experience, with significant updates to tvOS and watchOS. Advanced Siri abilities are also said to extend to a "smart clock" feature, although no other details were given.
The Verifier does not have an established track record for accurate rumors, making it unclear how reliable the information is, though the iMessage features described above can already be found in published Apple patents. The site previously claimed that group FaceTime calls will be introduced in iOS 11, but so far we've been unable to corroborate that report.
Details of what's in store for iOS 11 have been scant in general, but the software is expected to be released alongside new iPhones in the fall of 2017, with a preview likely to come at the Worldwide Developers Conference in June.
Top Rated Comments
Apple's staunch stance on privacy means they'll be going the longer and more difficult route, which I maintain will be the better approach in the long term. This is because the core recognition will evolve to recognise any speech and any language without having to reference lots of other data already collected.
Think Star Trek Next Gen/Voyager and see how crew or new visitors interact with the computer using speech; that's what I'm envisioning could still be possible with a committed AI team, whilst still maintaining an obsession for privacy. That's far from an impossible idea.
Yes, Siri does need a lot of improvement; nobody's denying that. But so do the others. With the leaps and bounds that technology makes every year, it's much too early to simply say that the only way forward in AI and speech/context recognition is data mining.
Apple's obsession with privacy will never allow their offering to be as good as Google's; you need to collectively analyze all human speech to actually develop something that will understand all the dialects, intonations, and nuances of a spoken language.
Siri's not particularly smart. Neither is Google's solution, though; nor Amazon's, nor Microsoft's. None are anywhere close to what we've seen in science fiction. Until some serious advancements are made, I'll happily take slightly poorer recognition in exchange for iOS integration and user privacy in the interim.
So yeah, I'll continue to wait. So will you, by the way. It's not like Google's current system rivals the Enterprise computer. :)