US20100145696A1 - Method, system and apparatus for improved voice recognition - Google Patents

Method, system and apparatus for improved voice recognition Download PDF

Info

Publication number
US20100145696A1
Authority
US
United States
Prior art keywords
voice recognition
voice
vkt
recognition system
models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/704,320
Inventor
Yen-Son Paul Huang
Bo-Ren Bai
Zhen Hou
Yaying Liu
Hang Yu
Ming Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fortemedia Inc
Original Assignee
Fortemedia Inc
Application filed by Fortemedia Inc filed Critical Fortemedia Inc
Priority to US12/704,320
Publication of US20100145696A1
Current legal status: Abandoned

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/28: Constructional details of speech recognition systems
    • G10L15/30: Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226: Procedures used during a speech recognition process, e.g. man-machine dialogue, using non-speech characteristics
    • G10L2015/228: Procedures used during a speech recognition process, e.g. man-machine dialogue, using non-speech characteristics of application context

Definitions

  • FIG. 4 is a block diagram of application device 300 according to an embodiment of the present invention.
  • application device 300 comprises a controller 301 , an RF module 310 with an antenna for connecting to a communications network, a control program 302 comprising a dialing module 320 stored in a memory, a speaker 330 and a microphone 340 .
  • An interface 390 is provided for connecting to VR device 200, for example, a wireless interface such as Bluetooth.
  • In operation, a user operates VR device 200 to control application device 300, for example, a mobile telephone. When the user utters a keyword, VRS 202 determines a matching keyword, if any. If there is a keyword match, entry data corresponding to the matched keyword is transmitted from VR device 200 to application device 300 via interfaces 290 and 390. If, for example, the entry data corresponding to RYAN is a telephone number, dialing module 320 receives the telephone number and dials the contact RYAN.
  • the system may also include other conventional functions such as a voice prompt feedback step allowing the user to confirm or reject a keyword match.
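  • The match-and-dispatch flow above can be pictured with a short sketch. This is a hypothetical illustration, not the patent's implementation: the VKT is modeled as a plain dictionary and the recognition engine is stubbed out, whereas a real VRE 220 would compare extracted feature data with the voice models in voice model database 212.

```python
# Hypothetical sketch of the keyword-match-and-dispatch flow (names invented).
VKT = {
    "RYAN": {"type": "text-string", "entry": "555-0142"},      # telephone number
    "SET-UP MENU": {"type": "command", "entry": "ENTER_SETUP_MENU"},
}

def recognize(utterance_features: dict):
    """Stub for VRE 220: return the matched keyword, or None."""
    return utterance_features.get("best_match")

def dispatch(keyword):
    """Forward entry data for a matched keyword to the application device."""
    entry = VKT.get(keyword)
    if entry is None:
        return None                       # no keyword match: nothing to send
    if entry["type"] == "text-string":
        return ("DIAL", entry["entry"])   # e.g. handed to dialing module 320
    return ("COMMAND", entry["entry"])

print(dispatch(recognize({"best_match": "RYAN"})))  # ('DIAL', '555-0142')
```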
  • The user may also operate VR device 200 to control the VR device itself. For example, in response to the command SET-UP MENU, controller 201 may cause the VR device to output a voice-guided set-up menu via speaker 230.
  • FIG. 5 shows the basic process flow of a preferred embodiment of VRS 102 for achieving improved voice recognition of the present invention. Steps 400 - 430 are described in further detail in connection with FIGS. 6-10 .
  • In step 400, VKT 110 is generated on the set-up device 100 and downloaded to the VR device 200, where it is stored in a memory as VKT 210.
  • In step 410, one or both of VRS 102 and VRS 202 are upgraded.
  • In step 420, voice models are modified and downloaded from set-up device 100 to VR device 200.
  • In step 430, a diagnostics routine is performed on VR device 200.
  • In addition, remote customer support is provided. For example, an interface may be provided via display 120 and input 130 allowing a user to link to a knowledgebase or other customer support services. Manual download of updated software and voice models may also be performed through this interface.
  • Remote wireless capable device compatibility support is also provided. An interface is provided on display 120 for the user to link, using input device 130, to a wireless capable device compatibility database hosted on a web server on the network. The database contains specific instructions for pairing VR device 200 with various makes and models of mobile telephones.
  • FIG. 6A shows the steps of generating a VKT according to a preferred embodiment of the present invention.
  • In step 500, keyword data is inputted into the visual form and corresponding entry fields of table 111. For example, data may be extracted from a software application by VKT generation module 150 to populate the visual form and entry data fields of table 111. Manual input or editing of extracted data may also be performed.
  • Visual form, spoken form, and entry data are displayable on display 120 and may be entered or edited in table 111 with input device 130. Where data is extracted from a software application, VKT generation module 150 extracts the relevant data and populates table 111; the table may then be edited by amending, adding, or deleting keywords and entries (for example, names and telephone numbers) according to the user's preference.
  • In step 510, visual form data is transformed into spoken form data.
  • Visual form data corresponds to any visual symbol the user uses to represent a keyword in the VKT.
  • spoken form data corresponds to an actual utterance associated with the keyword.
  • default spoken form data is automatically generated from visual form data by VKT generation module 150 . If the keywords are in a language in which the visual form data can also serve as the basis for word-to-phoneme translation and is easily edited by a user to achieve different pronunciations, the visual form data may simply be copied into the spoken form data. For example, if the keyword is RYAN, the visual form data and the default spoken form data are the same.
  • For other languages, a word-to-pinyin translation or the like may be employed to generate the default spoken form data in pinyin or another alphabet conversion format. For example, for a keyword written as the Chinese character for flower, the visual form data would be that character and the default spoken form data would be its pinyin translation, i.e., "HUA".
  • the user may also add or edit spoken form data by manual entry through input device 130 .
  • For example, in table 111, the default spoken form data for the keywords BRIAN and JOSE is BRIAN and JOSE, but for reasons explained in further detail below, the spoken form data has been edited to BRIAN SMITH and HOSAY.
  • In step 515, spoken form data is mapped to phonetic format data by a word-to-phoneme translation module of VKT generation module 150, utilizing a pronunciation dictionary and pronunciation rules.
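  • The dictionary-plus-rules mapping of step 515 might look like the following sketch. The dictionary entries, phone symbols, and fallback letter rules are all invented for illustration; a production word-to-phoneme module would use a full pronunciation dictionary and language-specific rules.

```python
# Hypothetical word-to-phoneme mapping: dictionary lookup first, with naive
# letter-to-phone rules as a fallback for out-of-dictionary words.
PRONUNCIATION_DICT = {
    "RYAN": "r ay ax n",
    "BRIAN": "b r ay ax n",
    "HOSAY": "hh ow z ey",
}
LETTER_RULES = {"a": "ae", "e": "eh", "i": "ih", "o": "ow", "u": "uw"}

def to_phonetic(spoken_form: str) -> str:
    phones = []
    for word in spoken_form.upper().split():
        if word in PRONUNCIATION_DICT:
            phones.append(PRONUNCIATION_DICT[word])
        else:
            phones.append(" ".join(LETTER_RULES.get(c.lower(), c.lower())
                                   for c in word))
    return " . ".join(phones)   # "." marks a word boundary in this sketch

print(to_phonetic("HOSAY"))     # hh ow z ey
```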
  • In step 520, TTS-guided-pronunciation editing is performed by the TTS-guided-pronunciation editing module 151. This step is shown in further detail in FIG. 6B, in which the following steps are performed.
  • In step 550, the user selects a keyword.
  • In step 560, a TTS-generated voice prompt is generated by VKT generation module 150 according to the phonetic format data currently stored for the selected keyword and TTS-generated voice prompt database 113. If the user is satisfied with the output, the routine ends and, at the user's option, another keyword may be selected.
  • the voice prompt is preferably outputted by speaker 230 of VR device 200 if VR device 200 is connected to set-up device 100 . Alternately, a speaker or other audio output device of set-up device 100 (not shown) may be used.
  • If the user is not satisfied with the output, the user may, in step 570, edit the spoken form data in table 111. The edited spoken form data is in turn mapped to phonetic format in step 580, and the routine returns to step 560 to determine whether the user is satisfied with the modification, or whether further editing of the spoken form data is required to bring the pronunciation generated by the TTS-generated voice prompt closer to the desired pronunciation.
  • For example, for the keyword JOSE, the default spoken form data is JOSE, and the phonetic format data mapped from it sounds like JOE-SEE when the voice prompt is generated. If this pronunciation is unsatisfactory to the user, the user may edit the spoken form data to HOSAY, for which the mapped phonetic format data is ho'zei. The voice prompt generated from this phonetic format data sounds like the Spanish-language pronunciation of the name Jose.
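  • Steps 550-580 form an edit loop: play the current pronunciation, ask the user, re-map any edited spoken form, and repeat. The sketch below assumes the to_phonetic() helper above and a tts() stub standing in for module 151 and speaker 230.

```python
# Hypothetical sketch of the TTS-guided-pronunciation editing loop.
def tts(phonetic: str) -> None:
    print(f"[voice prompt] {phonetic}")   # played via speaker 230 in practice

def edit_pronunciation(table: dict, keyword: str, to_phonetic) -> None:
    while True:
        phonetic = to_phonetic(table[keyword]["spoken"])
        tts(phonetic)                               # step 560: play it back
        if input("Satisfied with the pronunciation? (y/n) ") == "y":
            table[keyword]["phonetic"] = phonetic
            return
        # Step 570: user edits the spoken form; step 580 re-maps it,
        # and the loop returns to step 560.
        table[keyword]["spoken"] = input("New spoken form: ")

# Example session: table = {"JOSE": {"spoken": "JOSE", "phonetic": ""}}
# edit_pronunciation(table, "JOSE", to_phonetic)   # user edits JOSE -> HOSAY
```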
  • Next, a confusion test is performed on VKT 110 by confusion test module 152, in which phonetic format data corresponding to keywords is analyzed so that confusable keywords are identified as members of a confusion set and can be distinguished.
  • phonetic format data from table 111 , corresponding voice models from voice model database 112 , and a confusion table are used to generate a confusion matrix to check and predict the recognition performance for the keywords and provide guidance to the user for improving performance.
  • To eliminate keywords from the confusion set, the spoken form data may be changed to obtain a different pronunciation, a prefix or suffix may be added to the keyword, or adaptation may be performed on the confusable words.
  • the user may elect to edit spoken form data for one or more of the confused terms, thereby returning the routine to step 510 .
  • For example, if the keywords are BRIAN and RYAN, the phonetic format data mapped from the default spoken form data (BRIAN and RYAN) may place the two keywords in a confusion set. In this case, the user may elect to edit the spoken form data for BRIAN to BRIAN SMITH. New phonetic format data is then mapped from the edited spoken form data in step 515.
  • The same set of phonetic format data is thus shared between TTS-guided-pronunciation editing and voice recognition. The user edits the pronunciation of a keyword, guided by TTS-guided-pronunciation editing, to be close to his or her own accent, and the phonetic format data mapped from the resulting spoken form data is used in the generation of the voice models stored in voice model databases 112/212. The voice models therefore correspond more closely to the specific pronunciation of the user, and the recognition performance of VRS 202 can be improved.
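  • The patent does not spell out the confusion-table algorithm, but the idea of flagging acoustically close keywords can be illustrated with plain edit distance over phone sequences. A real confusion test would weight phone pairs by acoustic similarity using the voice models; the threshold here is arbitrary.

```python
# Hypothetical confusion test: keywords whose phone sequences are within a
# small edit distance of each other are flagged as a confusion set.
from itertools import combinations

def edit_distance(a: list, b: list) -> int:
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

def confusion_sets(phonetics: dict, threshold: int = 2) -> list:
    """phonetics maps keyword -> phone string, e.g. 'b r ay ax n'."""
    return [(k1, k2)
            for (k1, p1), (k2, p2) in combinations(phonetics.items(), 2)
            if edit_distance(p1.split(), p2.split()) <= threshold]

print(confusion_sets({"BRIAN": "b r ay ax n", "RYAN": "r ay ax n",
                      "HOME": "hh ow m"}))    # [('BRIAN', 'RYAN')]
```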
  • FIG. 7A is a flow diagram of a preferred method of upgrading VRS 102 .
  • In step 600, the system upgrade module 155 accesses a remote server via a network to determine whether an updated version of VRS 102 is available.
  • In step 610, if an updated version of VRS 102 is available, the user is prompted regarding the availability of the upgrade.
  • In step 620, the updated version of VRS 102 is downloaded to the set-up device 100 via the network and stored in storage 180.
  • In step 640, the updated version of VRS 102 is installed on set-up device 100.
  • FIGS. 7B and 7C show flow diagrams of a preferred method of upgrading VRS 202 .
  • In step 650, the system upgrade module 155 accesses a remote server via a network to determine whether an updated version of VRS 202 is available.
  • In step 660, if an updated version of VRS 202 is available, the user is prompted regarding the availability of the upgrade.
  • In step 670, the updated version of VRS 202 is downloaded to the set-up device 100 via the network and stored in storage 180.
  • In step 700, the VR device 200 is connected with the set-up device 100, and system upgrade module 155 checks the version of VRS 202 installed on VR device 200. If the version stored on the set-up device is more recent, then in step 730 the updated version of VRS 202 is downloaded to the VR device 200 and installed.
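  • The upgrade flow of FIGS. 7A-7C reduces to a version comparison at each hop. In the sketch below the server, storage 180, and the VR device are modeled as dictionaries; the version strings and field names are invented for illustration.

```python
# Hypothetical sketch of the VRS 202 upgrade flow (steps 650-730).
def newer(a: str, b: str) -> bool:
    """True if version string a is newer than b, e.g. '2.1.0' > '2.0.3'."""
    return tuple(map(int, a.split("."))) > tuple(map(int, b.split(".")))

def upgrade_vrs202(server: dict, storage_180: dict, vr_device: dict) -> str:
    latest = server["latest_version"]              # step 650: query the server
    if newer(latest, storage_180.get("version", "0.0.0")):
        storage_180["version"] = latest            # steps 660-670: download to
        storage_180["image"] = server["image"]     # the set-up device
    if newer(storage_180["version"], vr_device["version"]):
        vr_device["version"] = storage_180["version"]
        vr_device["image"] = storage_180["image"]  # step 730: install on device
        return "upgraded"
    return "up to date"

server = {"latest_version": "2.1.0", "image": b"..."}
print(upgrade_vrs202(server, {}, {"version": "2.0.3"}))   # upgraded
```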
  • voice models are modified and downloaded to VR device 200 in two different ways: user-initiated and new-model-availability-initiated.
  • FIG. 8 is a flow diagram of a method of performing user-initiated adaptation of voice models on VR device 200 according to an embodiment of the present invention.
  • In step 801, the user profile is obtained by voice model update module 160. The user then selects a keyword category; the categories may include pre-defined keywords, digits, or user-defined keywords.
  • Pre-defined keywords are defined by the system, such as HOME corresponding to a text-string or SET-UP MENU corresponding to a command. User-defined keywords are those extracted during creation of the VKT 110 or entered by other means. Digits are the numerals 0 through 9.
  • In step 804, the user is prompted to select a mode. For example, the user may choose to adapt all keywords, only new keywords, or manually selected keywords.
  • Next, adaptation module 161 in voice model update module 160 performs an adaptation using accumulated personal acoustic data corresponding to the user profile (if any), the currently existing voice models (for example, the original SI voice models or previously adapted voice models) stored in voice model database 112, and new speech input provided by the user to produce adapted voice models for download.
  • the system is preferably trained with a number of utterances corresponding to keywords in the selected category as determined by the selected mode to improve the recognition performance of the system for a given user. Adaptation techniques are well known in the art and are not discussed in further detail here.
  • Preferably, VR device 200 is connected to set-up device 100 and new speech input is captured via microphone 240. Otherwise, new speech input may be captured by a microphone provided with set-up device 100 (not shown).
  • personal acoustic data is recorded and accumulated in storage 180 in association with the user profile during user-initiated adaptation. For example, if the user provides new speech input for the keyword RYAN, the recorded utterance is stored in storage 180 along with data associating the recorded utterance with the keyword RYAN.
  • adapted voice models are downloaded from set-up device 100 to VR device 200 and stored in voice model database 212 .
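  • The patent leaves the adaptation technique itself to the art. As one hedged illustration, a MAP-style update nudges a keyword model's feature means toward the user's new utterances; the weight tau, the feature shapes, and the fixed frame count are all assumptions of this sketch, not the patent's method.

```python
# Hypothetical MAP-style mean adaptation of a keyword voice model.
import numpy as np

def adapt_means(model_means: np.ndarray, utterances: list,
                tau: float = 10.0) -> np.ndarray:
    """model_means: (frames, dims); utterances: list of (frames, dims) arrays."""
    user_mean = np.mean(np.stack(utterances), axis=0)  # average the new speech
    n = len(utterances)
    # Interpolate: the more utterances collected, the more the user data counts.
    return (tau * model_means + n * user_mean) / (tau + n)

si_model = np.zeros((3, 2))                      # toy SI voice model means
new_speech = [np.ones((3, 2)), np.ones((3, 2))]  # two adaptation utterances
print(adapt_means(si_model, new_speech))         # means pulled toward 1.0
```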
  • FIGS. 9A and 9B illustrate a method of modifying voice models on VR device 200 initiated by the availability of new voice models on a network according to an embodiment of the present invention.
  • As shown in FIG. 9A, new voice models are first downloaded to the set-up device. In step 810, a remote server is accessed via a network to determine whether new voice models are available.
  • New voice models may be, for example, new SI models developed reflecting improvements in the art or directed to a specific speaker group and stored on a remote server.
  • In step 811, if new voice models are available, the user is prompted regarding the availability of the update.
  • In step 812, if the user confirms the update, the new voice models are downloaded to the set-up device 100 via the network and saved in storage 180.
  • FIG. 9B is a flow diagram of a method of new-model-availability-initiated voice model adaptation according to an embodiment of the present invention
  • In step 815, the user profile is obtained.
  • In step 816, the VR device 200 is connected to set-up device 100.
  • Voice model update module 160 then compares the versions of the voice models in voice model database 212 on the VR device 200 with the new voice models stored in storage 180 on set-up device 100. If newer versions are available on the set-up device, the user is prompted regarding the available upgrade.
  • voice model update module 160 checks to determine if accumulated personal acoustic data corresponding to the user profile is available. For example, personal acoustic data accumulated during previous user-initiated adaptation may be stored in storage 180 . Furthermore, personal acoustic data accumulated during normal operation of VR device 200 and stored in storage 280 may be uploaded to storage 180 and associated with the user profile.
  • If accumulated personal acoustic data exists, VKT 210 is uploaded into a memory in set-up device 100, and voice model update module 160 builds keyword models for adaptation. Pre-defined keyword and digit models are built in advance; thus, only user-defined keyword models need to be built in this step.
  • Next, adaptation module 161 performs an adaptation using the built keyword models, the new voice models, and the accumulated personal acoustic data to generate adapted new voice models. The accumulated personal acoustic data is used as speech input by adaptation module 161, which allows adaptation of the new models to occur without the need for new speech input from the user.
  • In step 835, the adapted new voice models are downloaded to VR device 200.
  • If no accumulated personal acoustic data exists, VKT 210 is uploaded into a memory in set-up device 100, and voice model update module 160 builds keyword models using the new voice models. Again, pre-defined keyword and digit models are built in advance, so only user-defined keyword models need to be built in this step.
  • In step 850, the updated new voice models are downloaded to VR device 200.
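  • The two branches of FIG. 9B differ only in whether adaptation runs after the keyword models are rebuilt. A minimal sketch of that branching follows, with the data structures and helper names invented for illustration.

```python
# Hypothetical sketch of new-model-availability-initiated updating (FIG. 9B).
from typing import Optional

def apply_new_models(vkt_210: dict, new_models: dict,
                     accumulated_data: Optional[dict] = None) -> dict:
    # Pre-defined keyword and digit models ship pre-built; only user-defined
    # keyword models are rebuilt here.
    user_keywords = [k for k, v in vkt_210.items() if v.get("user_defined")]
    built = {k: ("model", new_models["version"], k) for k in user_keywords}
    if accumulated_data:
        # Adapt using stored personal acoustic data, so no new speech input
        # from the user is needed; the result is downloaded in step 835.
        return {k: ("adapted", m, accumulated_data.get(k))
                for k, m in built.items()}
    # No personal data: the rebuilt models are downloaded as-is in step 850.
    return built

vkt = {"RYAN": {"user_defined": True}, "HOME": {"user_defined": False}}
print(apply_new_models(vkt, {"version": "si-2024"}))
print(apply_new_models(vkt, {"version": "si-2024"}, {"RYAN": b"wav..."}))
```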
  • FIG. 10 shows an exemplary flow diagram of a method of performing a diagnostic routine according to an embodiment of the present invention.
  • In step 900, the VR device 200 is connected to set-up device 100.
  • diagnostics module 165 checks the connection between the VR device 200 and the set-up device 100 .
  • diagnostics module 165 checks the flash content of memory in which VR system 202 is stored.
  • diagnostics module 165 checks the battery status of battery 250 .
  • diagnostics module 165 checks the functioning of speaker 230 .
  • For example, a test prompt is transmitted to the VR device 200 and output through speaker 230. If the user hears the voice prompt, the user inputs a positive acknowledgement through input 130 of set-up device 100. Otherwise, the user inputs a negative acknowledgement through input 130 and the test fails.
  • diagnostics module 165 checks the functioning of microphone 240 .
  • the user is prompted to speak into microphone 240 .
  • microphone volume is optimized such that the audio input is neither saturated nor too small to be detected.
  • An echo test may be performed by controller 201 to obtain the optimized input volume of microphone 240 and output volume of speaker 230. If no input is detected, the test fails.
  • The user is notified on display 120 of any failed test. Furthermore, where appropriate, suggested fixes are provided to the user.
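  • The diagnostic routine of FIG. 10 is essentially a checklist. Below is a hypothetical runner, with the individual checks stubbed out as lookups on a device-state dictionary; the check names and fix texts are assumptions for illustration.

```python
# Hypothetical diagnostics checklist mirroring the checks described above.
DIAGNOSTICS = [
    ("connection", lambda d: d.get("connected", False),
     "Reconnect the VR device to the set-up device."),
    ("flash content", lambda d: d.get("flash_ok", False),
     "Re-download VRS 202 to the device."),
    ("battery", lambda d: d.get("battery_level", 0) > 10,
     "Charge the device."),
    ("speaker", lambda d: d.get("user_heard_prompt", False),
     "Check speaker volume and connections."),
    ("microphone", lambda d: d.get("mic_input_detected", False),
     "Check microphone placement and input volume."),
]

def run_diagnostics(device: dict) -> None:
    for name, check, fix in DIAGNOSTICS:
        if check(device):
            print(f"PASS: {name}")
        else:
            print(f"FAIL: {name} -- suggested fix: {fix}")  # shown on display 120

run_diagnostics({"connected": True, "flash_ok": True, "battery_level": 80,
                 "user_heard_prompt": True, "mic_input_detected": False})
```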

Abstract

An improved voice recognition system in which a Voice Keyword Table (VKT) is generated and downloaded from a set-up device to a voice recognition device. The VKT includes visual form data, spoken form data, phonetic format data, and an entry corresponding to a keyword, and TTS-generated voice prompts and voice models corresponding to the phonetic format data. A voice recognition system on the voice recognition device is updated by the set-up device. Furthermore, voice models in the voice recognition device are modified by the set-up device.

Description

    BACKGROUND OF THE INVENTION
  • The present invention relates to voice recognition, and more specifically to improving the performance of a voice recognition apparatus.
  • A Voice Keypad (VKP) is a device with the ability to recognize keywords uttered by a user and generate corresponding outputs, for example, commands or text-strings, for use by an application device.
  • One implementation of a VKP is a Bluetooth speakerphone for use with a mobile telephone provided with Bluetooth functionality. In such a device, the VKP speakerphone and mobile telephone are paired. A voice recognition engine on the VKP is implemented to recognize a name uttered by a user with reference to a user-defined name list and output a corresponding telephone number. A dialing function on the mobile telephone then dials the number, and the user is able to carry on a conversation through the mobile telephone via the speakerphone.
  • There are three general classes of voice recognition, namely speaker independent (SI), speaker dependent (SD) and speaker adapted (SA). In the SI system, a voice recognition engine identifies utterances according to universal voice models generated from samples obtained from a large training population. As no individual training by the user is required, such systems are convenient. However, these systems generally have low recognition performance, especially when used by speakers with heavy accents or whose speech patterns otherwise diverge from the training population. On the other hand, SD systems require users to provide samples for every keyword, which can become burdensome and memory intensive for large lists of keywords.
  • Conventional SA systems achieve limited improvement of recognition performance by adapting voice models according to speech input by an individual speaker. However, it is desirable to achieve a still higher recognition rate for keywords on a VKP. Furthermore, the VKP itself may lack the appropriate resources to achieve improved voice recognition.
  • SUMMARY
  • Provided are a method, system, and apparatus for improved voice recognition.
  • In an embodiment of the present invention, a method for improved voice recognition in a system having a set-up device and a voice recognition device is provided. The method comprises the steps of generating a Voice Keyword Table (VKT) and downloading the VKT to the voice recognition device; upgrading a voice recognition system on the voice recognition device; and modifying a voice model in the voice recognition device.
  • The VKT preferably comprises visual form data, spoken form data, phonetic format data, and an entry corresponding to a keyword, and TTS-generated voice prompts and voice models corresponding to the phonetic format data. The step of generating a VKT preferably comprises the steps of inputting visual form data and entry data; transforming visual form data to default spoken form data; mapping spoken form data to phonetic format; and performing TTS-guided-pronunciation editing to modify phonetic format data. In preferred embodiments, an additional step of a confusion test using the phonetic format data, voice models and a confusion table to identify keywords in a confusion set is performed. Furthermore, additional steps may be taken to eliminate keywords from the confusion set.
  • In preferred embodiments, a user-initiated step of modifying a voice model in the voice recognition device comprises the steps of building a keyword model from keywords in the VKT; selecting keywords for adaptation; obtaining new speech input for selected keywords; adapting voice models for selected keywords using existing keyword voice models and new speech input to produce adapted voice models; and downloading the adapted voice models to the voice recognition device.
  • Alternately or in addition thereto, a new-model-availability-initiated step of modifying a voice model in the voice recognition device comprises the steps of downloading a new voice model from a network to the set-up device; if the new voice model is a newer version than the voice model on the voice recognition device, determining if accumulated personal acoustic data exists; if accumulated personal acoustic data exists, uploading the VKT from the voice recognition device to the set-up device, building a keyword model for adaptation from keywords in the uploaded VKT, performing adaptation using the new voice model and accumulated personal acoustic data to produce an adapted new voice model, and downloading the adapted new voice model to the voice recognition device; and if no accumulated personal acoustic data exists, uploading the VKT to the set-up device, building a keyword model for keywords in the uploaded VKT using the new voice model, and downloading the updated new voice model to the voice recognition device. The accumulated personal acoustic data may be, for example, speech input recorded during user-initiated adaptation of voice models and stored on the set-up device, or speech input recorded during use of the voice recognition device to identify keywords and stored on the voice recognition device.
  • In preferred embodiments, the step of upgrading and downloading a voice recognition system to the voice recognition device comprises the steps of downloading an updated voice recognition system to the set-up device via a network; determining if the updated voice recognition system is more recent than a voice recognition system on the voice recognition device; and if the updated voice recognition system is more recent, downloading the updated voice recognition system from the set-up device to the voice recognition device.
  • In preferred embodiments, run-time information is saved in the voice recognition device; saved run-time information is uploaded from the voice recognition device to the set-up device; the uploaded run-time information is processed on the set-up device; and the voice recognition device is updated according to the results of the processing of run-time information on the set-up device to improve voice recognition performance.
  • In addition, the method preferably includes one or more of the steps of initiating a diagnostic test on the voice recognition device by the set-up device, providing customer support over a network, and providing wireless capable device compatibility support comprising instructions for pairing the voice recognition device with a wireless capable application device.
  • In an embodiment of the present invention, a voice recognition system installed on a set-up device for improving voice recognition on a voice recognition device is provided. The voice recognition system comprises a Voice Keyword Table (VKT) generating means for generating a VKT and downloading the VKT to the voice recognition device; and means for updating voice models on the voice recognition device. The VKT preferably comprises visual form data, spoken form data, phonetic format data, and an entry corresponding to a keyword, and TTS-generated voice prompts and voice models corresponding to the phonetic format data.
  • In preferred embodiments, the voice recognition system further comprises means for performing a confusion test using the phonetic format data, voice models and a confusion table to identify keywords in a confusion set, and eliminating keywords from the confusion set. In addition, the voice recognition system further preferably comprises means for updating the voice recognition device according to the results of the processing of run-time information saved on the voice recognition device to improve voice recognition performance.
  • In preferred embodiments, the voice recognition system further comprises means for user-initiated and/or new-model-availability-initiated adaptation of voice models on the voice recognition device. The means for new-model-availability-initiated adaptation preferably uses accumulated personal acoustic data recorded during user-initiated adaptation of voice models on the voice recognition device or recorded during operation of the voice recognition device to identify keywords.
  • In preferred embodiments, the voice recognition system further comprises one or more of: means for upgrading and downloading a voice recognition system to the voice recognition device, means for initiating a diagnostic test on the voice recognition device, means for providing customer support via a network, and means for providing wireless capable device compatibility support comprising instructions for pairing the voice recognition device with a wireless capable application device.
  • In an embodiment of the present invention, an apparatus for improved voice recognition is provided. The apparatus comprises a set-up device comprising a first Voice Keyword Table (VKT) and a first voice recognition system; and a voice recognition device comprising a second VKT corresponding to the first VKT and a second voice recognition system, the voice recognition device connectible to the set-up device through an interface. The VKT comprises visual form data, spoken form data, phonetic format data, and an entry corresponding to a keyword, and TTS-generated voice prompts and voice models corresponding to the phonetic format data. The voice recognition device is preferably a Voice Keypad (VKP) device or a wireless earset. The set-up device is preferably a personal computer (PC).
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
  • FIG. 1 is a block diagram of a voice recognition (VR) apparatus according to an embodiment of the present invention;
  • FIG. 2A is a block diagram of a set-up device according to an embodiment of the present invention;
  • FIG. 2B is a block diagram of a Voice Keyword Table (VKT) on the set-up device according to an embodiment of the present invention;
  • FIG. 3A is a block diagram of a VR device according to an embodiment of the present invention;
  • FIG. 3B is a block diagram of a corresponding VKT on the VR device according to an embodiment of the present invention;
  • FIG. 4 is a block diagram of an application device according to an embodiment of the present invention;
  • FIG. 5 is a flow diagram of a method of improved voice recognition according to an embodiment of the present invention;
  • FIG. 6A is a flow diagram of a method of generating a VKT according to an embodiment of the present invention;
  • FIG. 6B is a flow diagram of a method of performing TTS-guided-pronunciation editing according to an embodiment of the present invention;
  • FIG. 7A is a flow diagram of a method of upgrading the set-up device VR system according to an embodiment of the present invention;
  • FIG. 7B is a flow diagram of downloading an updated version of the VR device system to the set-up device according to an embodiment of the present invention;
  • FIG. 7C is a flow diagram of a method of updating the VR system on the VR device according to an embodiment of the present invention;
  • FIG. 8 is a flow diagram of a method of user-initiated voice model adaptation according to an embodiment of the present invention;
  • FIG. 9A is a flow diagram of a method of downloading new voice models to the set-up device according to an embodiment of the present invention;
  • FIG. 9B is a flow diagram of a method of new-model-availability-initiated voice model adaptation according to an embodiment of the present invention; and
  • FIG. 10 is a flow diagram of a method of performing a diagnostic routine on the VR device according to an embodiment of the present invention.
  • DESCRIPTION
  • FIG. 1 is a block diagram of a voice recognition (VR) apparatus according to an embodiment of the present invention.
  • In a preferred embodiment of the invention, the VR apparatus comprises a set-up device 100, a voice recognition (VR) device 200, and an application device 300. The set-up device may be, for example, a personal computer or personal digital assistant (PDA).
  • The VR device 200 may be, for example, a headset, a speakerphone, an earset or an earset/speakerphone combo with VR functionality. In preferred embodiments, VR device 200 is a Voice Keypad (VKP), namely a device with the ability to recognize keywords uttered by a user and generate corresponding outputs, for example, commands or text-strings, for use by an application device.
  • The application device 300 is a device that performs a function under the control of the VR device 200. The application device 300 may be, for example, a mobile telephone, a PDA, a global positioning device, a home appliance or information appliance, a personal computer, a control system for a DVD/MP3 player, a car radio, or a car function controller.
  • Set-up device 100, VR device 200 and application device 300 are connected by wired or wireless connections. In a preferred embodiment, set-up device 100 is connected to VR device 200 by a USB interface, while VR device 200 is connected to application device 300 by a wireless interface, for example, Bluetooth.
  • In the embodiment described below, the set-up device 100 is a personal computer, the VR device 200 is a wireless earset, and the application device 300 is a mobile telephone. However, it is understood that this embodiment is exemplary in nature and in no way intended to limit the scope of the invention to this particular configuration.
  • In this embodiment, VR device 200 may be used as a VKP for dialing numbers and entering commands on the application device 300. In conjunction therewith, VR device 200 provides conventional wireless earset functionality, namely, audio input/output for conversation and other communication via the mobile telephone. It is understood that when connected to set-up device 100, VR device 200 may also serve as an audio input/output device for the set-up device.
  • In the case where application device 300 is simply a control system, for example, a control system for a DVD/MP3 player, VR device 200 may be used to transmit commands thereto, and no audio input/output functionality via the application device 300 need be provided.
  • FIG. 2A is a block diagram of a set-up device 100 according to an embodiment of the present invention.
  • In a preferred embodiment of the present invention, set-up device 100 is a personal computer comprising controller 101, voice recognition system (VRS) 102, display 120, input 130, storage 180, and interface 190.
  • The controller 101 may be, for example, a microprocessor and related hardware and software for operating the set-up device 100. Display 120 may be, for example, a monitor such as an LCD monitor. Input device 130 may be a keyboard/mouse or other conventional input device or devices. Storage 180 is a memory or memories, for example, a hard drive or flash memory, and is used for storing new voice models and accumulated personal acoustic data, as will be described in further detail below. An interface 190 for connecting to VR device 200 is also provided, for example, a USB interface, a wireless interface such as Bluetooth, or an 802.11 wireless network interface. Furthermore, set-up device 100 is connected to a network, for example, a global network such as the World Wide Web.
  • In a preferred embodiment, VRS 102 comprises a Voice Keyword Table (VKT) 110 and a number of modules implemented in software and/or hardware on set-up device 100. As will be described in further detail in connection with FIGS. 5-10, the modules preferably include Voice Keyword Table (VKT) generation module 150 including a TTS-guided-pronunciation editing module 151 and a confusion test module 152, system upgrade module 155, voice model update module 160 including an adaptation module 161, diagnostics module 165, customer support module 170, and wireless capable device compatibility module 175. In a preferred embodiment, the VKT and, to the extent that they are software, the modules, are stored in a memory or memories of set-up device 100.
  • FIG. 2B is a block diagram of VKT 110 according to an embodiment of the present invention.
  • In a preferred embodiment of the invention, VKT 110 comprises table 111, voice model database 112, and TTS-generated voice prompt database 113. Table 111 stores pre-defined keywords, such as HOME and SET-UP MENU, and user-defined keywords such as BRIAN, RYAN and JOSE, and entry data corresponding to the keywords. Entry data may be text-strings, such as telephone numbers, or commands, such as a command for entering a set-up menu.
  • As will be described in further detail below, in preferred embodiments, table 111 stores visual form data corresponding to any visual symbol the user uses to represent a keyword in the VKT 110, and spoken form data corresponding to an utterance associated with the keyword. In addition, table 111 comprises phonetic format data corresponding to the spoken form data.
  • It is understood that depending on the application device used in conjunction with the VR device, keywords of different categorizations may be employed. Namely, pre-defined and user-defined keywords may include command functions related to the features of any particular application device. For example, if the application device is an MP3 player, the keywords may include pre-defined MP3 player commands such as STOP or RANDOM, user-defined commands, and others. The commands may also be associated with operation of the VR device itself. For example, the command SET-UP MENU may activate a voice prompt interface on the VR device.
  • Furthermore, the entry data is not limited to text-strings and commands. For example, entry data may include images, wave files, and other file formats. It is further contemplated that more than one entry field be associated with a given keyword. It is also contemplated that the VKT may store speaker dependent voice tags and corresponding speaker dependent voice models and entry data.
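  • One way to picture a table 111 record is shown below. The dataclass layout and field types are assumptions for illustration, not the patent's storage format; note that multiple entries of mixed types can hang off one keyword, as contemplated above.

```python
# Hypothetical layout of a single table 111 record.
from dataclasses import dataclass, field

@dataclass
class KeywordRecord:
    visual_form: str        # symbol shown to the user, e.g. "JOSE"
    spoken_form: str        # editable utterance text, e.g. "HOSAY"
    phonetic_format: str    # mapped phones, e.g. "hh ow z ey"
    entries: list = field(default_factory=list)  # text-strings, commands, files

record = KeywordRecord(
    visual_form="JOSE",
    spoken_form="HOSAY",
    phonetic_format="hh ow z ey",
    entries=[("text-string", "555-0199"), ("image", "jose.png")],
)
print(record.visual_form, "->", record.entries[0][1])   # JOSE -> 555-0199
```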
  • Voice model database 112 stores the current set of voice models for the system. In embodiments of the invention, a voice model generating module of VRS 102 generates voice models corresponding to the phonetic format data for keywords in VKT 110 to populate voice model database 112. As will be explained in further detail below, the voice models may comprise universal speaker-independent (SI) voice models and/or speaker-adapted (SA) voice models adapted according to embodiments of the present invention.
• TTS-generated voice prompt database 113 stores data for the generation of text-to-speech (TTS) voice prompts used in embodiments of the present invention. In embodiments of the invention, a TTS module of VRS 102 generates speech wave files corresponding to the phonetic format data for keywords in VKT 110 to populate voice prompt database 113.
  • Additional features of VKT 110 are described in following sections in connection with FIGS. 5-10.
  • FIG. 3A is a block diagram of VR device 200 according to an embodiment of the present invention.
  • In a preferred embodiment of the present invention, VR device 200 comprises controller 201, voice recognition system (VRS) 202 comprising VKT 210 and voice recognition engine (VRE) 220, speaker 230, microphone 240, battery 250, storage 280, and interface 290.
• The controller 201 may be, for example, a microprocessor and related hardware and software for operating the VR device 200 and performing digital signal processing on audio input received by microphone 240. Speaker 230 is a conventional speaker for outputting audio. Microphone 240 may be a single microphone or an array microphone, and is preferably a small array microphone (SAM). Storage 280 is a memory or memories, preferably a flash memory, and is used for storing run-time information and/or personal accumulated acoustic data, as will be described in further detail below. Interface 290 is provided for connecting with set-up device 100 and application device 300. For example, a USB interface may be provided for connecting to set-up device 100, while a wireless interface may be provided for connecting to application device 300. In the case where VR device 200 connects to both devices by a wireless connection, the interface may comprise a single wireless interface (for example, Bluetooth) or multiple wireless interfaces (for example, one Bluetooth and one 802.11 wireless network interface).
  • VKT 210 corresponds to VKT 110, and, as shown in FIG. 3B, comprises corresponding table 211, voice model database 212, and TTS-generated voice prompt database 213.
• In preferred embodiments, VRE 220 receives signals generated by microphone 240 and processed by controller 201, extracts feature data from them, and compares that data with the voice models stored in voice model database 212 to determine whether the utterance matches a keyword in VKT 210. As the features and operation of voice recognition engines are well known in the art, further description is not provided here.
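• As a rough illustration only (the patent leaves the engine's internals unspecified), the matching step can be pictured as scoring the extracted features against each stored voice model and accepting the best score above a threshold; the toy model and threshold below are assumptions, not the engine's actual design.

```python
class ToyVoiceModel:
    """Stand-in for an acoustic model in voice model database 212."""
    def __init__(self, template):
        self.template = template  # reference feature sequence

    def score(self, features):
        # Toy similarity: fraction of positions whose symbols agree
        matches = sum(a == b for a, b in zip(features, self.template))
        return matches / max(len(self.template), 1)

def recognize(features, voice_model_db, threshold=0.5):
    """Return the best-matching keyword, or None if nothing clears
    the (hypothetical) acceptance threshold."""
    best_keyword, best_score = None, threshold
    for keyword, model in voice_model_db.items():
        score = model.score(features)
        if score > best_score:
            best_keyword, best_score = keyword, score
    return best_keyword
```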
  • It is a feature of embodiments of this invention that VKT 110 is mirrored in VKT 210. Namely, data entered into VKT 110 may be synched to VKT 210, and vice versa, when the corresponding devices are connected.
• In embodiments of the present invention, VR device 200 includes functionality to receive data input independently of set-up device 100. For example, VR device 200 may include a voice-prompt-guided interface for adding data to VKT 210. In this case, newly added data in VKT 210 may be synched to VKT 110 when the corresponding devices are connected.
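• The patent does not specify a merge policy for this mirroring; a minimal sketch, assuming each side timestamps its edits and the newer record wins, might look like this.

```python
def sync_vkts(vkt_a, vkt_b):
    """Two-way mirror of VKT dicts of the form {keyword: (timestamp, entry)}.
    For each keyword, the record with the newer timestamp is copied across."""
    for keyword in set(vkt_a) | set(vkt_b):
        rec_a, rec_b = vkt_a.get(keyword), vkt_b.get(keyword)
        if rec_a is None or (rec_b is not None and rec_b[0] > rec_a[0]):
            vkt_a[keyword] = rec_b
        elif rec_b is None or rec_a[0] > rec_b[0]:
            vkt_b[keyword] = rec_a

# Example: "JOSE" added on the VR device flows back to the set-up device
vkt_110 = {"RYAN": (1, "555-0123")}
vkt_210 = {"RYAN": (1, "555-0123"), "JOSE": (2, "555-0199")}
sync_vkts(vkt_110, vkt_210)
assert "JOSE" in vkt_110
```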
• It is a feature of the preferred embodiment of the present invention that run-time information collected in the operation of VR device 200 is stored in storage 280. When VR device 200 is connected to set-up device 100, the run-time information is uploaded from VR device 200 to the set-up device 100 and processed by VRS 102 for the purpose of improving voice recognition performance. The VR device 200 may then be updated according to the results of that processing so as to improve voice recognition performance. An example of the kind of run-time information that may be stored is acoustic data corresponding to successful keyword recognitions and/or data obtained from application device 300.
  • FIG. 4 is a block diagram of application device 300 according to an embodiment of the present invention.
• In a preferred embodiment of the present invention in which application device 300 is a mobile telephone, application device 300 comprises a controller 301, an RF module 310 with an antenna for connecting to a communications network, a control program 302 comprising a dialing module 320 stored in a memory, a speaker 330 and a microphone 340. An interface 390 is provided for connecting to VR device 200, for example, a wireless interface such as Bluetooth. As the features and structure of a mobile telephone are well known in the art, further description is not provided here.
  • In general, a user operates VR device 200 to control application device 300. In the embodiment where application device 300 is a mobile telephone, for example, if a user wishes to dial a contact RYAN, he or she utters the keyword RYAN into microphone 240. After front-end digital signal processing, VRS 202 determines a matching keyword, if any. If there is a keyword match, entry data corresponding to the matched keyword is transmitted from VR device 200 to application device 300 via interfaces 290 and 390. If, for example, the entry data corresponding to RYAN is a telephone number, a dialing module receives the telephone number and dials the contact RYAN. It is understood that the system may also include other conventional functions such as a voice prompt feedback step allowing the user to confirm or reject a keyword match.
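• End to end, a keyword match therefore reduces to a lookup in VKT 210 followed by a transmit over the interface. The sketch below reuses recognize() from the earlier sketch and assumes a hypothetical send_to_application() transport standing in for interfaces 290/390.

```python
def handle_utterance(features, vkt_table, voice_model_db, send_to_application):
    """Match an utterance and forward the corresponding entry data."""
    keyword = recognize(features, voice_model_db)
    if keyword is None:
        return False                   # no match; a voice prompt could ask again
    entry_data = vkt_table[keyword]    # e.g. the telephone number for RYAN
    send_to_application(entry_data)    # the phone's dialing module dials it
    return True
```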
  • It is another feature of preferred embodiments of the present invention that during normal use of the VR device 200, personal acoustic data is recorded and accumulated in storage 280 for later use in adaptation. For example, if the user utters the keyword RYAN and the user confirms the match determined by VRS 202, the recorded utterance is stored in storage 280 along with data associating the recorded utterance with the keyword RYAN. It is further understood that other methodologies may be employed to determine if VRS 202 successfully matched the keyword.
• Furthermore, the user may operate VR device 200 to control the VR device itself. For example, if the user utters SET-UP MENU, controller 201 may cause the VR device to output a voice-guided set-up menu via speaker 230.
  • The operation of the voice recognition apparatus and component parts thereof is described in further detail below.
  • FIG. 5 shows the basic process flow of a preferred embodiment of VRS 102 for achieving improved voice recognition of the present invention. Steps 400-430 are described in further detail in connection with FIGS. 6-10.
  • In step 400, VKT 110 is generated on the set-up device 100 and downloaded to the VR device 200, where it is stored in a memory as VKT 210.
  • In step 410, one or both of VRS 102 and VRS 202 are upgraded.
  • In step 420, voice models are modified and downloaded from set-up device 100 to VR device 200.
  • In step 430, a diagnostics routine is performed on VR device 200.
  • In step 440, remote customer support is provided. In a preferred embodiment, an interface may be provided via display 120 and input 130 allowing a user to link to a knowledgebase or other customer support services. In addition, manual download of updated software and voice models may be performed through this interface.
  • In step 450, remote wireless capable device compatibility support is provided. In a preferred embodiment, an interface is provided on display 120 for the user to link to a wireless capable device compatibility database over a network using input device 130. In a preferred embodiment, the network comprises a web server. For example, in an embodiment of the present invention in which application device 300 is a mobile telephone with Bluetooth functionality, the database contains specific instructions for pairing VR device 200 with various makes and models of mobile telephones.
  • It is understood that the present invention is not intended to be limited to the performance of all of steps 400-450, or performance of the steps in the above-described order, although in a most preferred embodiment each of steps 400-450 is performed.
  • FIG. 6A shows the steps of generating a VKT according to a preferred embodiment of the present invention.
• In step 500, keyword data is input into the visual form and corresponding entry fields of table 111. For example, in a preferred embodiment, data may be extracted from a software application by VKT generation module 150 to populate the visual form and entry data fields of table 111. Manual input or editing of extracted data may also be performed to populate table 111.
  • In a preferred embodiment of the present invention, visual form, spoken form, and entry data is displayable on display 120 and may be entered/edited in table 111 with input device 130.
  • For example, in an embodiment of the present invention where application device 300 is a mobile telephone and set-up device 100 is a personal computer, the user may elect to extract data from an online telephone program account or an email address book located on set-up device 100 or accessed by set-up device 100 via a network to populate the visual form and entry data fields of table 111. In this case, VKT generation module 150 extracts relevant data and populates table 111. The table may then be edited by amending, adding, or deleting keywords and entries (for example, names and telephone numbers) according to the user's preference.
• In step 510, visual form data is transformed into spoken form data. Visual form data corresponds to any visual symbol the user uses to represent a keyword in the VKT. Spoken form data, on the other hand, corresponds to an actual utterance associated with the keyword. In a preferred embodiment, default spoken form data is automatically generated from visual form data by VKT generation module 150. If the keywords are in a language in which the visual form data can also serve as the basis for word-to-phoneme translation and is easily edited by a user to achieve different pronunciations, the visual form data may simply be copied into the spoken form data. For example, if the keyword is RYAN, the visual form data and the default spoken form data are the same. On the other hand, for a language such as Chinese, in which the visual form data cannot serve as the basis for word-to-phoneme translation and is not easily edited to achieve different pronunciations, a word-to-pinyin translation or the like may be employed to generate the default spoken form data in pinyin or another alphabetic conversion format. Thus, if the keyword is the Chinese word for “flower” and word-to-pinyin translation were employed, the visual form data would be the Chinese character 花 and the default spoken form data would be the pinyin translation thereof, i.e., “HUA”.
  • The user may also add or edit spoken form data by manual entry through input device 130. For example, in table 111, the default spoken form data for keywords BRIAN and JOSE is BRIAN and JOSE, but for reasons explained in further detail in the following, the spoken form data has been edited to BRIAN SMITH and HOSAY.
  • In step 515, spoken form data is mapped to phonetic format data by VKT generation module 150 by a word-to-phoneme translation module utilizing a pronunciation dictionary and pronunciation rules.
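• A word-to-phoneme translation of this kind is typically a dictionary lookup with a rule-based fallback for out-of-vocabulary words. The sketch below is a toy version under that assumption; the dictionary entries and the letter-by-letter fallback rule are illustrative only, not the module's actual dictionary or rules.

```python
# Toy pronunciation dictionary; a real module ships a far larger one
PRONUNCIATIONS = {
    "RYAN": "r ay ax n",
    "BRIAN": "b r ay ax n",
    "SMITH": "s m ih th",
}

def spoken_form_to_phonemes(spoken_form):
    """Map each word of the spoken form to phonemes via the dictionary,
    falling back to a naive letter-by-letter rule for unknown words."""
    parts = []
    for word in spoken_form.upper().split():
        parts.append(PRONUNCIATIONS.get(word, " ".join(word.lower())))
    return " ".join(parts)

print(spoken_form_to_phonemes("BRIAN SMITH"))  # b r ay ax n s m ih th
```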
  • In step 520, TTS-guided-pronunciation editing is performed by the TTS-guided-pronunciation editing module 151. This step is shown in further detail in FIG. 6B, in which the following steps are performed.
• In step 550, the user selects a keyword. Subsequently, in step 560, a TTS-generated voice prompt is generated by VKT generation module 150 according to the phonetic format data currently stored for the selected keyword and TTS-generated voice prompt database 113. If the user is satisfied with the output, the routine ends and, at the user's option, another keyword may be selected. The voice prompt is preferably output by speaker 230 of VR device 200 if VR device 200 is connected to set-up device 100. Alternately, a speaker or other audio output device of set-up device 100 (not shown) may be used.
• If the user is not satisfied with the output, the user may in step 570 edit the spoken form data in table 111. The edited spoken form data is in turn mapped to phonetic format data in step 580, and the routine returns to step 560 to determine if the user is satisfied with the modification, or if further editing of the spoken form data is required to bring the pronunciation generated by the TTS-generated voice prompt closer to the desired pronunciation.
• For example, in the case of a keyword JOSE, the default spoken form data is JOSE. However, the mapped phonetic format data for JOSE (rendered in the original as a string of phonetic symbols) sounds like JOE-SEE when the voice prompt is generated. If this pronunciation is unsatisfactory to the user, the user may edit the spoken form data to HOSAY, for which the mapped phonetic format data is ho'zei. The voice prompt generated from this phonetic format data sounds like the Spanish-language pronunciation of the name Jose.
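• In outline, steps 550-580 form a listen-and-edit loop: synthesize, play, and re-edit the spoken form until the user accepts. A schematic version follows, reusing spoken_form_to_phonemes() from the earlier sketch; the four callbacks are placeholders for module 151's real TTS and user-interface hooks, not actual API names.

```python
def tts_guided_edit(spoken_form, synthesize, play, user_satisfied, get_edit):
    """Loop of steps 560-580: replay the TTS output and accept edits to
    the spoken form until the user approves the pronunciation."""
    while True:
        phonetic_format = spoken_form_to_phonemes(spoken_form)  # step 580
        play(synthesize(phonetic_format))                       # step 560
        if user_satisfied():
            return spoken_form, phonetic_format
        spoken_form = get_edit()                                # step 570
```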
• Returning to FIG. 6A, in step 530, in a preferred embodiment of the present invention a confusion test is performed on VKT 110 by confusion test module 152, in which phonetic format data corresponding to keywords is analyzed so that acoustically similar keywords are recognized as members of a confusion set and can be distinguished. Namely, phonetic format data from table 111, corresponding voice models from voice model database 112, and a confusion table are used to generate a confusion matrix to check and predict the recognition performance for the keywords and provide guidance to the user for improving performance. For example, the spoken form data may be changed to obtain a different pronunciation, a prefix or suffix may be added to the keyword, or adaptation may be performed on the confusable words.
• For example, on determination of a confusion set, the user may elect to edit spoken form data for one or more of the confused terms, thereby returning the routine to step 510. In the case where the keywords are BRIAN and RYAN, phonetic format data mapped from the default spoken form data (BRIAN and RYAN) may be identified as a confusion set based on the voice models present in voice model database 112. Once identified to the user as such, the user may elect to edit the spoken form data for BRIAN to BRIAN SMITH. New phonetic format data is then mapped from the edited spoken form data in step 515.
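• One simple stand-in for such a confusion matrix (the patent's actual confusion table and voice-model scoring are not detailed here) is pairwise similarity of the phoneme strings: pairs above a similarity threshold are flagged as a confusion set. The threshold below is an illustrative assumption.

```python
from difflib import SequenceMatcher
from itertools import combinations

def find_confusion_sets(phonetics, min_similarity=0.75):
    """Flag keyword pairs whose phoneme strings are suspiciously alike.
    phonetics: dict mapping keyword -> phoneme string."""
    confusable = []
    for a, b in combinations(phonetics, 2):
        sim = SequenceMatcher(None, phonetics[a], phonetics[b]).ratio()
        if sim >= min_similarity:
            confusable.append((a, b, round(sim, 2)))
    return confusable

print(find_confusion_sets({"BRIAN": "b r ay ax n", "RYAN": "r ay ax n"}))
# [('BRIAN', 'RYAN', 0.9)] -- editing BRIAN to BRIAN SMITH breaks the pair
```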
• It is a feature of embodiments of the present invention that the same set of phonetic format data is shared between TTS-guided-pronunciation editing and voice recognition. Namely, the user, guided by TTS-guided-pronunciation editing, edits the pronunciation of a keyword to be close to his or her own accent. Furthermore, the phonetic format data mapped from spoken form data resulting from the TTS-guided-pronunciation editing process is used in the generation of voice models stored in voice model databases 112/212. Thus, the voice models correspond more closely to the specific pronunciation of the user and the recognition performance of VRS 202 can be improved.
  • FIG. 7A is a flow diagram of a preferred method of upgrading VRS 102.
  • In step 600, the system upgrade module 155 accesses a remote server via a network to determine if an updated version of the VRS 102 is available.
  • In step 610, if an updated version of the VRS 102 is available, the user is prompted regarding the availability of the upgrade.
  • If the user confirms the upgrade in step 610, in step 620 the updated version of VRS 102 is downloaded to the set-up device 100 via the network and stored in storage 180.
  • In step 640, the updated version of VRS 102 is installed on set-up device 100.
  • FIGS. 7B and 7C show flow diagrams of a preferred method of upgrading VRS 202.
  • In step 650, the system upgrade module 155 accesses a remote server via a network to determine if an updated version of the VRS 202 is available.
  • In step 660, if an updated version of the VRS 202 is available, the user is prompted regarding the availability of the upgrade.
  • If the user confirms the upgrade in step 660, in step 670 the updated version of VRS 202 is downloaded to the set-up device 100 via the network and stored in storage 180.
  • Then, with reference to FIG. 7C, in step 700, the VR device 200 is connected with the set-up device 100.
  • In step 710, system upgrade module 155 checks the version of VRS 202 installed on VR device 200.
  • If the updated version of VRS 202 is newer than the version installed on VR device 200, the user is prompted regarding the availability of an upgrade.
  • If the user confirms an upgrade, in step 730, the updated version of VRS 202 is downloaded to the VR device 200 and installed.
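• Both upgrade paths (FIGS. 7A-7C) follow the same compare-prompt-install pattern, sketched generically below. The callback names are hypothetical, and a production version would parse version strings rather than compare them lexically as this sketch does.

```python
def upgrade_if_newer(installed, fetch_latest, confirm, download, install):
    """Check a remote server for a newer version, prompt the user, and
    download/install on confirmation (cf. steps 600-640, 650-670, 700-730)."""
    latest = fetch_latest()          # remote version check
    if latest <= installed:          # naive string compare; see note above
        return installed
    if not confirm(latest):          # user prompt
        return installed
    install(download(latest))        # fetch, then install
    return latest
```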
  • In preferred embodiments of the present invention, voice models are modified and downloaded to VR device 200 in two different ways: user-initiated and new-model-availability-initiated.
  • FIG. 8 is a flow diagram of a method of performing user-initiated adaptation of voice models on VR device 200 according to an embodiment of the present invention.
  • In step 801, the user profile is obtained by voice model update module 160.
• In step 802, keyword models are built for adaptation for the keywords in VKT 110. In preferred embodiments of the present invention, pre-defined keyword and digit models are built in advance, and only user-defined keyword models need to be built for adaptation in this step.
• In step 803, the user is prompted to select a category for adaptation. The categories may include pre-defined keywords, digits, or user-defined keywords. As noted, pre-defined keywords are defined by the system, such as HOME corresponding to a text-string or SET-UP MENU corresponding to a command. User-defined keywords are those extracted during creation of the VKT 110 or entered by other means. Digits are the numerals 0 through 9.
  • In step 804, the user is prompted to select a mode. For example, the user may choose to adapt all keywords, new keywords, or manually select the keywords to adapt.
• In step 805, adaptation module 161 in voice model update module 160 performs an adaptation using accumulated personal acoustic data corresponding to the user profile (if any), the currently existing voice models (for example, the original SI voice models or previously adapted voice models) stored in voice model database 112, and new speech input provided by the user to produce adapted voice models for download. In this step, the system is preferably trained with a number of utterances corresponding to keywords in the selected category, as determined by the selected mode, to improve the recognition performance of the system for a given user. Adaptation techniques are well known in the art and are not discussed in further detail here.
• In a preferred embodiment, VR device 200 is connected to set-up device 100 and new speech input is captured via microphone 240. Otherwise, new speech input may be captured by a microphone provided with set-up device 100 (not shown).
  • It is a feature of preferred embodiments of the present invention that personal acoustic data is recorded and accumulated in storage 180 in association with the user profile during user-initiated adaptation. For example, if the user provides new speech input for the keyword RYAN, the recorded utterance is stored in storage 180 along with data associating the recorded utterance with the keyword RYAN.
  • In step 806, adapted voice models are downloaded from set-up device 100 to VR device 200 and stored in voice model database 212.
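• Schematically, steps 801-806 reduce to the loop below. The adapt() callback stands in for adaptation module 161 (which in practice would apply standard techniques such as MAP or MLLR), and record_utterance() for capture via microphone 240; both names are assumptions for illustration.

```python
def user_initiated_adaptation(profile, keywords, model_db, record_utterance, adapt):
    """Steps 801-806 in outline: capture new speech for the selected
    keywords, log it against the user profile, and adapt each model."""
    acoustic_log = profile.setdefault("acoustic_data", {})  # kept in storage 180
    for keyword in keywords:                                # from steps 803-804
        utterance = record_utterance(keyword)               # new speech input
        acoustic_log.setdefault(keyword, []).append(utterance)
        model_db[keyword] = adapt(model_db[keyword], acoustic_log[keyword])
    return model_db                                         # downloaded in step 806
```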
  • FIGS. 9A and 9B illustrate a method of modifying voice models on VR device 200 initiated by the availability of new voice models on a network according to an embodiment of the present invention.
  • First, as shown in FIG. 9A, new voice models are downloaded to the set-up device.
  • In step 810, a remote server is accessed via a network to determine if new voice models are available. New voice models may be, for example, new SI models developed reflecting improvements in the art or directed to a specific speaker group and stored on a remote server.
  • In step 811, if new voice models are available, the user is prompted regarding the availability of the update.
  • In step 812, if the user confirms the update, the new voice models are downloaded to the set-up device 100 via the network and saved in storage 180.
• FIG. 9B is a flow diagram of a method of new-model-availability-initiated voice model adaptation according to an embodiment of the present invention.
  • In step 815, the user profile is obtained.
  • In step 816, the VR device 200 is connected to set-up device 100.
  • In step 817, voice model update module 160 compares the versions of the voice models in voice model database 212 on the VR device 200 with the new voice models stored in storage 180 on set-up device 100. If there are newer versions available on the set-up device, the user is prompted regarding the available upgrade.
  • If the user confirms the upgrade, in step 818, voice model update module 160 checks to determine if accumulated personal acoustic data corresponding to the user profile is available. For example, personal acoustic data accumulated during previous user-initiated adaptation may be stored in storage 180. Furthermore, personal acoustic data accumulated during normal operation of VR device 200 and stored in storage 280 may be uploaded to storage 180 and associated with the user profile.
  • If so, in step 820, VKT 210 is uploaded into a memory in set-up device 100.
• In step 825, voice model update module 160 builds keyword models for adaptation. In preferred embodiments of the present invention, pre-defined keyword and digit models are built in advance. Thus, only user-defined keyword models need to be built for adaptation in this step.
• In step 830, adaptation module 161 performs an adaptation using the built keyword models, the new voice models, and the accumulated personal acoustic data to generate adapted new voice models. In this step, the accumulated personal acoustic data is used as speech input by adaptation module 161. This allows adaptation of the new models to occur without the need for new speech input by the user.
  • In step 835, adapted new voice models are downloaded to VR device 200.
  • If, on the other hand, no accumulated personal acoustic data exists, in step 840, VKT 210 is uploaded into a memory in set-up device 100.
• In step 845, voice model update module 160 builds keyword models using the new voice models. In preferred embodiments of the present invention, pre-defined keyword and digit models are built in advance. Thus, only user-defined keyword models need to be built in this step.
  • In step 850, updated new voice models are downloaded to VR device 200.
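• The branch structure of FIG. 9B can be summarized as follows (callbacks hypothetical); the essential point is that previously accumulated acoustic data substitutes for fresh speech input when adapting the new models.

```python
def apply_new_models(new_models, vkt_keywords, personal_acoustic_data, adapt):
    """FIG. 9B in outline: adapt new models using stored acoustic data when
    available (steps 818-835); otherwise install them as-is (steps 840-850)."""
    built = {kw: new_models[kw] for kw in vkt_keywords if kw in new_models}
    if not personal_acoustic_data:
        return built                       # no adaptation possible
    return {kw: adapt(model, personal_acoustic_data.get(kw, []))
            for kw, model in built.items()}
```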
  • FIG. 10 shows an exemplary flow diagram of a method of performing a diagnostic routine according to an embodiment of the present invention.
  • In step 900, the VR device 200 is connected to set-up device 100.
  • In step 910, diagnostics module 165 checks the connection between the VR device 200 and the set-up device 100.
• In step 920, diagnostics module 165 checks the flash content of the memory in which VRS 202 is stored.
  • In step 930, diagnostics module 165 checks the battery status of battery 250.
  • In step 940, diagnostics module 165 checks the functioning of speaker 230. In a preferred embodiment of the invention, a test prompt is transmitted to the VR device 200 and output through speaker 230. If the user hears the voice prompt, the user inputs a positive acknowledgement through input 130 of set-up device 100. Otherwise, the user inputs a negative acknowledgement through input 130 and the test is a fail.
• In step 950, diagnostics module 165 checks the functioning of microphone 240. In a preferred embodiment of the invention, the user is prompted to speak into microphone 240. Based on the user's speech input, the microphone volume is optimized such that the audio input is neither saturated nor too small to be detected. In this regard, an echo test may be performed by controller 201 to obtain the optimized input volume of microphone 240 and output volume of speaker 230. If no input is detected, the test is a fail.
• In preferred embodiments of the invention, the user is notified on display 120 of any failed test. Furthermore, where appropriate, suggested fixes are provided to the user.
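• The diagnostic pass of FIG. 10 is naturally expressed as an ordered checklist, with a suggested fix reported for each failure; the check names and fix hints below are illustrative assumptions, not a specified interface.

```python
def run_diagnostics(checks, notify_user):
    """Run each named check in order (cf. steps 910-950) and report any
    failure, together with a suggested fix, on the set-up device display."""
    failures = []
    for name, check, fix_hint in checks:
        if not check():
            failures.append(name)
            notify_user(f"{name} check failed: {fix_hint}")
    return failures

# Example wiring with trivial stand-in checks
run_diagnostics(
    [("connection", lambda: True, "reconnect the cable or re-pair"),
     ("battery", lambda: True, "charge or replace battery 250")],
    print,
)
```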
  • While the invention has been described by way of example and in terms of the preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims (21)

1. A method for improved voice recognition in a system having a set-up device and a voice recognition device, comprising the steps of:
generating a Voice Keyword Table (VKT) and downloading the VKT to the voice recognition device;
upgrading a voice recognition system on the voice recognition device; and
modifying a voice model in the voice recognition device, whereby the voice recognition is improved.
2-11. (canceled)
12. The method of claim 1, wherein the step of upgrading and downloading a voice recognition system to the voice recognition device comprises the steps of:
downloading an updated voice recognition system to the set-up device via a network;
determining if the updated voice recognition system is more recent than a voice recognition system on the voice recognition device; and
if the updated voice recognition system is more recent, downloading the updated voice recognition system from the set-up device to the voice recognition device.
13-16. (canceled)
17. The method of claim 1, further comprising a step of providing customer support over a network.
18. The method of claim 1, further comprising a step of providing wireless capable device compatibility support comprising instructions for pairing the voice recognition device with a wireless capable application device.
19. A voice recognition system installed on a set-up device for improving voice recognition on a voice recognition device comprising:
a Voice Keyword Table (VKT) generating means for generating a VKT and downloading the VKT to the voice recognition device; and
means for updating voice models on the voice recognition device.
20-22. (canceled)
23. The voice recognition system of claim 19, further comprising means for user-initiated adaptation of voice models on the voice recognition device.
24. The voice recognition system of claim 19, further comprising means for new-model availability-initiated adaptation of voice models on the voice recognition device.
25. The voice recognition system of claim 24, wherein the means for new-model availability-initiated adaptation uses accumulated personal acoustic data recorded during user-initiated adaptation of voice models on the voice recognition device.
26. The voice recognition system of claim 24, wherein the means for new-model availability-initiated adaptation uses accumulated personal acoustic data recorded during operation of the voice recognition device to identify keywords.
27. The voice recognition system of claim 19, further including means for upgrading and downloading a voice recognition system to the voice recognition device.
28. (canceled)
29. The voice recognition system of claim 19, further including means for providing customer support via a network.
30. The voice recognition system of claim 19, further including means for providing wireless capable device compatibility support comprising instructions for pairing the voice recognition device with a wireless capable application device.
31. An apparatus for improved voice recognition, comprising:
a set-up device comprising a first Voice Keyword Table (VKT) and a first voice recognition system; and
a voice recognition device comprising a second VKT corresponding to the first VKT and
a second voice recognition system, the voice recognition device connectible to the set-up device through an interface.
32. (canceled)
33. The apparatus of claim 31, wherein the voice recognition device is a Voice Key Pad (VKP) device.
34. The apparatus of claim 31, wherein the voice recognition device is a wireless earset.
35. The apparatus of claim 31, wherein the set-up device is a personal computer (PC).
US12/704,320 2006-09-04 2010-02-11 Method, system and apparatus for improved voice recognition Abandoned US20100145696A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/704,320 US20100145696A1 (en) 2006-09-04 2010-02-11 Method, system and apparatus for improved voice recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/469,893 US7689417B2 (en) 2006-09-04 2006-09-04 Method, system and apparatus for improved voice recognition
US12/704,320 US20100145696A1 (en) 2006-09-04 2010-02-11 Method, system and apparatus for improved voice recognition

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/469,893 Continuation US7689417B2 (en) 2006-09-04 2006-09-04 Method, system and apparatus for improved voice recognition

Publications (1)

Publication Number Publication Date
US20100145696A1 true US20100145696A1 (en) 2010-06-10

Family

ID=39153040

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/469,893 Active 2028-11-17 US7689417B2 (en) 2006-09-04 2006-09-04 Method, system and apparatus for improved voice recognition
US12/704,320 Abandoned US20100145696A1 (en) 2006-09-04 2010-02-11 Method, system and apparatus for improved voice recognition

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/469,893 Active 2028-11-17 US7689417B2 (en) 2006-09-04 2006-09-04 Method, system and apparatus for improved voice recognition

Country Status (3)

Country Link
US (2) US7689417B2 (en)
CN (1) CN101145341B (en)
TW (1) TWI349878B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9959863B2 (en) 2014-09-08 2018-05-01 Qualcomm Incorporated Keyword detection using speaker-independent keyword models for user-designated keywords
DE102013221631B4 (en) 2012-10-31 2022-01-20 GM Global Technology Operations, LLC (n.d. Ges. d. Staates Delaware) System, method and computer program product for implementing a speech recognition functionality in a vehicle by an external device

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7957972B2 (en) * 2006-09-05 2011-06-07 Fortemedia, Inc. Voice recognition system and method thereof
US8056070B2 (en) * 2007-01-10 2011-11-08 Goller Michael D System and method for modifying and updating a speech recognition program
US20100017208A1 (en) * 2008-07-16 2010-01-21 Oki Electric Industry Co., Ltd. Integrated circuit for processing voice
US8077836B2 (en) * 2008-07-30 2011-12-13 At&T Intellectual Property, I, L.P. Transparent voice registration and verification method and system
US8990087B1 (en) * 2008-09-30 2015-03-24 Amazon Technologies, Inc. Providing text to speech from digital content on an electronic device
US9002713B2 (en) * 2009-06-09 2015-04-07 At&T Intellectual Property I, L.P. System and method for speech personalization by need
US9798653B1 (en) * 2010-05-05 2017-10-24 Nuance Communications, Inc. Methods, apparatus and data structure for cross-language speech adaptation
US20120226500A1 (en) * 2011-03-02 2012-09-06 Sony Corporation System and method for content rendering including synthetic narration
US10019983B2 (en) * 2012-08-30 2018-07-10 Aravind Ganapathiraju Method and system for predicting speech recognition performance using accuracy scores
US9786296B2 (en) 2013-07-08 2017-10-10 Qualcomm Incorporated Method and apparatus for assigning keyword model to voice operated function
US9286897B2 (en) * 2013-09-27 2016-03-15 Amazon Technologies, Inc. Speech recognizer with multi-directional decoding
US20150161986A1 (en) * 2013-12-09 2015-06-11 Intel Corporation Device-based personal speech recognition training
US20150310851A1 (en) * 2014-04-24 2015-10-29 Ford Global Technologies, Llc Method and Apparatus for Extra-Vehicular Voice Recognition Training Including Vehicular Updating
US10199034B2 (en) * 2014-08-18 2019-02-05 At&T Intellectual Property I, L.P. System and method for unified normalization in text-to-speech and automatic speech recognition
DE112014007207B4 (en) * 2014-11-25 2019-12-24 Mitsubishi Electric Corporation Information presentation system
CN105825856B (en) * 2016-05-16 2019-11-08 四川长虹电器股份有限公司 The autonomous learning method of vehicle-mounted voice identification module
US10276161B2 (en) * 2016-12-27 2019-04-30 Google Llc Contextual hotwords
TWI697890B (en) * 2018-10-12 2020-07-01 廣達電腦股份有限公司 Speech correction system and speech correction method
US11017771B2 (en) * 2019-01-18 2021-05-25 Adobe Inc. Voice command matching during testing of voice-assisted application prototypes for languages with non-phonetic alphabets
US11282500B2 (en) * 2019-07-19 2022-03-22 Cisco Technology, Inc. Generating and training new wake words
US11607038B2 (en) 2019-10-11 2023-03-21 Ergotron, Inc. Configuration techniques for an appliance with changeable components
CN112216278A (en) * 2020-09-25 2021-01-12 威盛电子股份有限公司 Speech recognition system, instruction generation system and speech recognition method thereof
CN114387947B (en) * 2022-03-23 2022-08-02 北京中科深智科技有限公司 Automatic voice synthesis method suitable for virtual anchor in E-commerce live broadcast

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4989151A (en) * 1988-02-23 1991-01-29 Kabushiki Kaisha Toshiba Navigation apparatus and matching method for navigation
US5299300A (en) * 1990-02-22 1994-03-29 Harris Corporation Interpolation processing of digital map imagery data
US5381338A (en) * 1991-06-21 1995-01-10 Wysocki; David A. Real time three dimensional geo-referenced digital orthophotograph-based positioning, navigation, collision avoidance and decision support system
US5490646A (en) * 1991-06-28 1996-02-13 Conceptual Solutions, Inc. Aircraft maintenance robot
US5689415A (en) * 1992-06-01 1997-11-18 Ducost Engineering Ltd. Control of paint spraying machines and the like
US6018568A (en) * 1996-09-25 2000-01-25 At&T Corp. Voice dialing system
US6240360B1 (en) * 1995-08-16 2001-05-29 Sean Phelan Computer system for indentifying local resources
US20050114120A1 (en) * 2003-11-25 2005-05-26 Jp Mobile Operating, L.P. Communication system and method for compressing information sent by a communication device to a target portable communication device
US20060132136A1 (en) * 2000-09-08 2006-06-22 Morio Mizuno Bore location system
US20090106027A1 (en) * 2005-05-27 2009-04-23 Matsushita Electric Industrial Co., Ltd. Voice edition device, voice edition method, and voice edition program
US20090313008A1 (en) * 2005-06-29 2009-12-17 Reiko Okada Information apparatus for use in mobile unit
US8118192B2 (en) * 2008-09-10 2012-02-21 At&T Intellectual Property I, L. P. Methods, systems, and products for marking concealed objects

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5864810A (en) * 1995-01-20 1999-01-26 Sri International Method and apparatus for speech recognition adapted to an individual speaker
US6078885A (en) * 1998-05-08 2000-06-20 At&T Corp Verbal, fully automatic dictionary updates by end-users of speech synthesis and recognition systems
DE10047718A1 (en) * 2000-09-27 2002-04-18 Philips Corp Intellectual Pty Speech recognition method
DE10122828A1 (en) * 2001-05-11 2002-11-14 Philips Corp Intellectual Pty Procedure for training or adapting a speech recognizer
JP2008529101A (en) * 2005-02-03 2008-07-31 ボイス シグナル テクノロジーズ インコーポレイテッド Method and apparatus for automatically expanding the speech vocabulary of a mobile communication device
US8626506B2 (en) * 2006-01-20 2014-01-07 General Motors Llc Method and system for dynamic nametag scoring


Also Published As

Publication number Publication date
TWI349878B (en) 2011-10-01
US7689417B2 (en) 2010-03-30
CN101145341A (en) 2008-03-19
TW200813812A (en) 2008-03-16
US20080059191A1 (en) 2008-03-06
CN101145341B (en) 2011-12-07

Similar Documents

Publication Publication Date Title
US7689417B2 (en) Method, system and apparatus for improved voice recognition
US7957972B2 (en) Voice recognition system and method thereof
JP5598998B2 (en) Speech translation system, first terminal device, speech recognition server device, translation server device, and speech synthesis server device
US6463413B1 (en) Speech recognition training for small hardware devices
US20030120493A1 (en) Method and system for updating and customizing recognition vocabulary
US8160884B2 (en) Methods and apparatus for automatically extending the voice vocabulary of mobile communications devices
US8032383B1 (en) Speech controlled services and devices using internet
US7826945B2 (en) Automobile speech-recognition interface
US20050273337A1 (en) Apparatus and method for synthesized audible response to an utterance in speaker-independent voice recognition
US8676577B2 (en) Use of metadata to post process speech recognition output
US5732187A (en) Speaker-dependent speech recognition using speaker independent models
US20140365200A1 (en) System and method for automatic speech translation
US20070276651A1 (en) Grammar adaptation through cooperative client and server based speech recognition
KR20050098839A (en) Intermediary for speech processing in network environments
WO2009006081A2 (en) Pronunciation correction of text-to-speech systems between different spoken languages
US20060190260A1 (en) Selecting an order of elements for a speech synthesis
GB2557714A (en) Determining phonetic relationships
EP1899955B1 (en) Speech dialog method and system
AU760377B2 (en) A method and a system for voice dialling
US20070129945A1 (en) Voice quality control for high quality speech reconstruction
KR20010020871A (en) Method and apparatus for voice controlled devices with improved phrase storage, use, conversion, transfer, and recognition
JP2020034832A (en) Dictionary generation device, voice recognition system, and dictionary generation method
Fischer et al. Towards multi-modal interfaces for embedded devices

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION