Virginia: How about iPhones recognizing their owner's voice and carrying out tasks based on the user's instructions?
Yes! A new Apple patent application published by the U.S. Patent and Trademark Office describes exactly this concept: a system that would identify an individual user's voice when they speak aloud. The application is titled 'User Profiling for Voice Input Processing' and is credited to Allen Haughay.
The feature would differ from the usual voice commands on a phone: the iPhone would recognize its owner by voice alone, which could remove the need for a password. As with any patent filing, though, the invention would only reach a product if Apple's plans for it pan out.
The main benefit for users would be custom settings that appear on the phone when the owner speaks into it. It could also unlock the owner's personal content.
The patent application includes examples of highly specific voice commands that a complex system might be able to interpret. For example, saying aloud, "call Joseph's cell phone," includes the keyword "call," as well as the variables "Joseph" and "cell phone."
In a more complicated example, the application cites a lengthy command, "Find my most played song with a 4-star rating and create a Genius playlist using it as a seed." Natural language voice input is also included with the command: "Pick a good song to add to a party mix."
Voice input fed into the system could be complex, so the first important step is identifying the individual words of the input and extracting an instruction from them. The device would then execute the corresponding operation.
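The patent describes this flow only in general terms and publishes no source code, but a rough sketch helps show what "keyword plus variables" could mean in practice. The Swift snippet below is purely illustrative; the `VoiceCommand` type, the keyword list, and the `parse` and `execute` functions are invented names, not anything from Apple's filing.

```swift
// Hypothetical sketch of the parse-then-execute pipeline described above.
// None of these types or functions come from Apple's patent; they only
// illustrate splitting input into a keyword plus variables, then dispatching.

struct VoiceCommand {
    let keyword: String        // e.g. "call"
    let variables: [String]    // e.g. ["joseph's", "cell", "phone"]
}

// Steps 1 and 2: identify the individual words and extract an instruction.
func parse(_ spokenText: String) -> VoiceCommand? {
    let words = spokenText.lowercased().split(separator: " ").map(String.init)
    guard let first = words.first else { return nil }

    // Very naive: treat the first word as the keyword, the rest as variables.
    let knownKeywords: Set<String> = ["call", "play", "find"]
    guard knownKeywords.contains(first) else { return nil }
    return VoiceCommand(keyword: first, variables: Array(words.dropFirst()))
}

// Step 3: execute the corresponding device operation.
func execute(_ command: VoiceCommand) {
    switch command.keyword {
    case "call":
        print("Dialing \(command.variables.joined(separator: " "))")
    case "play", "find":
        print("Searching the media library for \(command.variables.joined(separator: " "))")
    default:
        print("Unrecognized command")
    }
}

if let command = parse("Call Joseph's cell phone") {
    execute(command)   // prints: Dialing joseph's cell phone
}
```

A real system would of course need far more sophisticated natural-language handling than matching the first word, but the keyword-plus-variables split mirrors the patent's "call Joseph's cell phone" example.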
To simplify the process, an iPhone would maintain words that relate specifically to the device's user. Certain media information could be tied to a particular user or group, for example, which would allow two people to share an iPhone or iPad while keeping unique personal settings and content, the application reads.
Beyond identifying a user's voice, the system could also adapt dynamically to that user's preferences. For example, a user's musical tastes could be tracked, so that simply asking the system aloud for a song would identify both the speaker and what they are likely to want.
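To make the shared-device and preference ideas concrete, here is a small, purely illustrative Swift sketch. The `UserProfile` type, the profile names, and the `pickSong(for:)` helper are all invented for this example, and speaker identification is stubbed out, since the patent does not describe an actual implementation.

```swift
// Illustrative only: a per-user profile keyed by a recognized voice,
// holding personal settings and tracked listening history.

struct UserProfile {
    let name: String
    var wallpaper: String                 // example of a personal setting
    var playCounts: [String: Int] = [:]   // song title -> times played
}

// Two people sharing one device, each with their own profile.
var profiles: [String: UserProfile] = [
    "alice": UserProfile(name: "Alice", wallpaper: "beach.png",
                         playCounts: ["Song A": 42, "Song B": 3]),
    "bob":   UserProfile(name: "Bob", wallpaper: "mountains.png",
                         playCounts: ["Song C": 17]),
]

// Stand-in for speaker identification: in the patent's scenario this would
// come from analyzing the voice itself, not from an explicit argument.
func identifySpeaker(from audioSample: String) -> String? {
    return profiles.keys.contains(audioSample) ? audioSample : nil
}

// "Pick a good song" resolves against the identified user's own history.
func pickSong(for speakerID: String) -> String? {
    guard let profile = profiles[speakerID] else { return nil }
    return profile.playCounts.max(by: { $0.value < $1.value })?.key
}

if let speaker = identifySpeaker(from: "alice"),
   let song = pickSong(for: speaker) {
    print("Queueing \(song) for \(profiles[speaker]!.name)")  // Queueing Song A for Alice
}
```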
Many devices already offer voice controls in one form or another, but these systems rely on word libraries to interpret commands. The trouble is that such libraries can grow very large, slowing the system down as it processes the voice commands.
Apple aims to overcome this problem by identifying the user's voice directly, avoiding a large word library that would strain a mobile device.
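One way to read that claim is that identifying the speaker lets the device search a small per-user vocabulary instead of scanning one large shared library. The Swift sketch below is an assumption about how that could look, not Apple's actual design; the vocabularies and the `recognizeWords` function are invented for illustration.

```swift
// Illustrative only: restrict recognition to the identified speaker's
// vocabulary rather than scanning one large shared word library.

let sharedLibrary: Set<String> = ["call", "play", "find", "open", "email",
                                  "joseph", "maria", "party", "genius"]
// ...in practice this shared set could grow to many thousands of entries.

let perUserVocabulary: [String: Set<String>] = [
    "alice": ["call", "play", "joseph"],
    "bob":   ["find", "open", "party"],
]

// Match spoken words only against the identified user's smaller set.
func recognizeWords(_ spoken: [String], speakerID: String) -> [String] {
    let vocabulary = perUserVocabulary[speakerID] ?? sharedLibrary
    return spoken.filter { vocabulary.contains($0.lowercased()) }
}

let matched = recognizeWords(["Call", "Joseph"], speakerID: "alice")
print(matched)  // ["Call", "Joseph"]
```

The smaller the per-user set, the less work the device does per command, which is the performance benefit the filing appears to be aiming at.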