With the final version of iOS expected to arrive soon, probably in September, the voice assistant Siri should sound a bit more natural thanks to machine learning technology that Apple is implementing.
The company is implementing deep neural network (DNN) technology, as hinted at a conference held today in which several key Apple executives participated. Tom Gruber, head of advanced development for Siri, said that while Siri's responses are still built from a central database of recordings, machine learning will "smooth" the voice assistant and make Siri's responses sound more "human."
The assistant's robotic voice has often been a point of discussion and criticism. Just recently, singer Barbra Streisand called Apple CEO Tim Cook to complain about how Siri pronounces her name, and in response Cook promised to fix the problem in a future update. More than anything, this seems to be a marketing move meant to tease the announcement of an upcoming update to the voice assistant.
In the interview, in which senior executives such as Eddy Cue and Craig Federighi participated, it was reported that Apple actually moved Siri's voice recognition to a neural network system in July 2014, though the fact was not made public until today. The implementation of this new technology is said to have dramatically improved Siri's ability to understand commands.
Moreover, Apple is known to have a large team working on machine learning technology, which covers not only Siri but other company products as well. Still, we will have to wait a bit, perhaps until September, to see the effects of this technology more directly.