USRE45096E1 - Voice response system - Google Patents

Voice response system

Info

Publication number
USRE45096E1
Authority
US
United States
Prior art keywords: user, current, dialogue, stage, message
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US12/458,605
Inventor
Michael Andrew Harrison
Paul Ian Popay
Neil Lewis Watton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Telecommunications PLC
Original Assignee
British Telecommunications PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by British Telecommunications PLC
Priority to US12/458,605
Application granted
Publication of USRE45096E1
Adjusted expiration
Legal status: Expired - Lifetime (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M2203/35 Aspects of automatic or semi-automatic exchanges related to information services provided via a voice call
    • H04M2203/355 Interactive dialogue design tools, features or methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/22 Arrangements for supervision, monitoring or testing
    • H04M3/2218 Call detail recording
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/22 Arrangements for supervision, monitoring or testing
    • H04M3/36 Statistical metering, e.g. recording occasions when traffic exceeds capacity of trunks

Abstract

With interactive voice response services, it can be frustrating for a user to become stuck in a dialogue where the same question is asked repetitively. Here the wording of the questions used by the system is varied throughout the dialogue, depending upon how many times a user has visited a particular dialogue state during the call history. Furthermore, the wording of a question is also varied in dependence upon the way in which the question was asked the last time the user was in a particular dialogue state.

Description

This application is the US national phase of international application PCT/GB02/01643, filed 8 April 2002, which designated the U.S.
TECHNICAL FIELD
This invention relates to a voice response apparatus and method, particularly although not exclusively for accessing and updating remotely held data using a telephone.
BACKGROUND TO THE INVENTION AND PRIOR ART
In known voice response systems a dialogue model is used to model a dialogue between a user and the system. Often such a dialogue model comprises states (or nodes) which are notionally connected by edges. Conceptually, a user making a call to the system visits a state, and the system asks the user a question in dependence upon the current state the user is visiting. The user's answer is analysed by the system in order to decide which state the user should visit next, and hence what the next question should be.
However, a problem with such a system is that it is possible for the user to get ‘stuck’ in a particular state, so that the dialogue becomes repetitive. In the worst case the user terminates the call; at the very least the user is discouraged from using the system again, even if they do eventually achieve the task they set out to do.
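The state-and-edge structure just described can be pictured as a small labelled graph. The following is a minimal Python sketch of such a dialogue model; the state names and meanings are invented for illustration, since the patent names no specific states:

```python
# Minimal sketch of a state-based dialogue model: states (nodes) joined by
# edges, where the classified meaning of the user's answer selects the next
# state. State names and meanings are illustrative assumptions.

DIALOGUE_MODEL: dict[str, dict[str, str]] = {
    "main_menu": {"view_diary": "diary", "read_email": "email"},
    "diary":     {"add_entry": "diary_add", "go_back": "main_menu"},
    "email":     {"go_back": "main_menu"},
}

def next_state(current: str, meaning: str) -> str:
    # An unrecognised meaning leaves the caller in the same state, which is
    # exactly the repetitive 'stuck' situation described above.
    return DIALOGUE_MODEL[current].get(meaning, current)
```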
SUMMARY OF THE INVENTION
According to a first aspect of the present invention there is provided a method of operating a current dialogue with a user of an interactive voice response system having a dialogue model comprising
    • a plurality of states and a plurality of interconnecting edges;
    • a current state; and
    • user dialogue data indicating for a user a total number of visits to a state;
      in which a prompt definition, for use by a message generator to generate a message for sending to the user, is selected in dependence upon the current state, upon the number of visits made to the current state during the current dialogue and upon the total number of visits said user has made to the current state during one or more previous dialogues.
Preferably the prompt definition is selected in dependence on further data indicating whether the user has visited the current state during the current dialogue and upon data indicating the prompt which was selected for the most recent visit to the current state.
Moreover, from a second aspect the present invention further provides an interactive voice response system having a dialogue model comprising:
    • a plurality of states and a plurality of interconnecting edges;
    • a current state; and
    • user dialogue data indicating for a user a total number of visits to a state;
    • the system further comprising prompt definition selection means for selecting a prompt definition, for use by a message generator to generate a message for sending to the user, in dependence upon the current state, upon the number of visits made to the current state during the current dialogue and upon the total number of visits said user has made to the current state during one or more previous dialogues.
BRIEF DESCRIPTION OF THE DRAWINGS
An embodiment of the present invention will now be described, presented by way of example only, with reference to the accompanying drawings in which:
FIG. 1 is a schematic representation of a computer loaded with software embodying the present invention;
FIG. 2 shows an architecture of a natural language system embodying the present invention;
FIG. 3 illustrates a grammar data updater according to the present invention; and
FIG. 4 illustrates part of the user dialogue data store of FIG. 1.
DESCRIPTION OF THE PREFERRED EMBODIMENT
FIG. 1 illustrates a conventional computer 101, such as a Personal Computer, generally referred to as a PC, running a conventional operating system 103, such as Windows (a Registered Trade Mark of Microsoft Corporation), having a store 123 and having a number of resident application programs 105 such as an e-mail program, a text-to-speech synthesiser, a speech recogniser, a telephone interface program or a database management program. The computer 101 also has a program 109 which, together with data stored in the store 123 and the resident application programs, provides an interactive voice response system as described below with reference to FIG. 2.
The computer 101 is connected to a conventional disc storage unit 111 for storing data and programs, a keyboard 113 and mouse 115 for allowing user input and a printer 117 and display unit 119 for providing output from the computer 101. The computer 101 also has access to external networks (not shown) via a network connection card 121.
FIG. 2 shows an architecture of an embodiment of the interactive voice response system according to this invention. A user's speech utterance is received by a speech recogniser 10. The received speech utterance is analysed by the recogniser 10 with reference to a user grammar data store 24. The user grammar data store 24 represents sequences of words or sub-words which can be recognised by the recogniser 10 and the probability of these sequences occurring. The recogniser 10 analyses the received speech utterance, with reference to speech units which are held in a speech unit database 16, and provides as an output a representation of the sequences of words or sub-words which most closely resemble the received speech utterance. In this embodiment of the invention the representation comprises the most likely sequence of words or sub-words; in other embodiments the representation could be a graph of the most likely sequences.
Recognition results are expected to be error prone, and certain words or phrases will be much more important to the meaning of the input utterance than others. Thus, confidence values associated with each word in the output representation are also provided. The confidence values give a measure related to the likelihood that the associated word has been correctly recognised by the recogniser 10. The output graph, including the confidence measures, is received by a classifier 6, which classifies the received graph according to a predefined set of meanings, with reference to a semantic model 20 (which is one of a plurality (not shown) of possible semantic models) to form a semantic classification. The semantic classification comprises a vector of likelihoods, each likelihood relating to a particular one of the predefined set of meanings. A dialogue manager 4 operates using a state-based dialogue model 18, as will be described more fully later. The dialogue manager 4 uses the semantic classification vector and information about the current dialogue state, together with information from the dialogue model 18 and user dialogue data 15, to instruct a message generator 8 to generate a message, which is spoken to the user via a speech synthesiser 12. The message generator 8 uses information from a message model 14 to construct appropriate messages. The speech synthesiser uses a speech unit database 16 which contains speech units representing a particular voice. The dialogue manager 4 also instructs the recogniser 10 which user grammar to use from the user grammar data store 24 for recognising a received response to the generated message, and also instructs the classifier 6 as to the semantic model to use for classification of the received response. The dialogue manager 4 interfaces to other systems 2 (for example, a customer records database).
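As an illustration of the classifier's output format, the sketch below builds such a likelihood vector over a set of invented meanings; the meanings, scores and the best_meaning helper are assumptions, not the semantic model of the patent:

```python
from dataclasses import dataclass

@dataclass
class SemanticClassification:
    # One likelihood per predefined meaning, as produced by the classifier 6.
    likelihoods: dict[str, float]

    def best_meaning(self) -> str:
        # The meaning with the highest likelihood.
        return max(self.likelihoods, key=self.likelihoods.get)

# e.g. the utterance 'go to my appointments' might classify as:
classification = SemanticClassification(
    {"view_diary": 0.83, "read_email": 0.05, "go_back": 0.12})
assert classification.best_meaning() == "view_diary"
```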
When a user calls the system the user is asked for a unique user identifier and a personal identification number. If the data entered by the user (which may be spoken or entered using a telephone keypad) matches an entry in a user access database 22 then they are allowed access to the service.
The dialogue model 18 comprises a plurality of states connected together by interconnecting edges. A caller moves to a particular state by speaking one of several words or phrases which are classified by the classifier 6 as having a particular meaning. For example, ‘view my calendar’ and ‘go to my appointments’ may be classified as meaning the same thing as far as the dialogue is concerned, and may take the user to a particular diary access state.
The user dialogue data store 15 stores a count of the number of times a user has visited a particular state in the dialogue model. FIG. 4 shows schematically the contents of the user dialogue data store 15.
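A per-user, per-state record in store 15 can be sketched as below, combining the visit count described in the next paragraph with the per-call flag and last-played message described shortly after; all field names are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StateRecord:
    visit_count: int = 0                 # total visits across all calls
    visited_this_call: bool = False      # reset at the start of each call
    last_message: Optional[str] = None   # wording played on the last visit

# Sketch of store 15, keyed by (user id, state name) for illustration.
user_dialogue_data: dict[tuple[str, str], StateRecord] = {}
```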
Once a user is in a particular state the dialogue manager instructs the message generator to play a message to the caller to guide them as to the actions they may perform. The verbosity of the message depends upon the count of the number of times the user has previously visited that state, which is stored in the user dialogue data store 15. When a new user calls the system, the message used will be verbose, as the count will be equal to 0. The messages become more concise as the stored count for that state increases, i.e. each time an individual user uses the state, whether the use occurs during a single call or during a later call to the system. The count values stored in the store 15 may be updated periodically to reduce the value if a particular user has not used a particular state recently; the messages will therefore become more verbose over time should a user not enter that state in subsequent calls, or if a user has not used the system for some time.
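A minimal sketch of this verbosity rule, assuming each state carries a list of wordings ordered from most verbose to most concise; the prompt texts and the decay schedule are invented for illustration:

```python
PROMPTS = {
    "diary": [
        "You are in your diary. You can say 'add an entry', "
        "'review today's entries', or 'go back' for the main menu.",  # count 0
        "Diary. Say 'add', 'review' or 'go back'.",                   # count 1
        "Diary?",                                                     # count 2+
    ],
}

def select_prompt(state: str, visit_count: int) -> str:
    wordings = PROMPTS[state]
    return wordings[min(visit_count, len(wordings) - 1)]

def decay_count(visit_count: int) -> int:
    # Periodic reduction applied to states not visited recently, so prompts
    # gradually become verbose again (the schedule is an assumption).
    return max(0, visit_count - 1)
```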
The user dialogue data store 15 also stores a Boolean flag indicating whether or not a user has visited a particular state in the dialogue model within a particular call, together with a record of the message which was played to the user the last time that state was visited. When the user visits the same state on more than one occasion during a particular call, messages will be selected by the dialogue manager 4 to ensure that a different message is played from that played the last time the state was visited during the call. This avoids the repetition that human factors analysis shows detrimentally affects the likelihood of a user reusing the system. For any state with potential repetition, there are a plurality of messages stored in the message model store 14, with the next message to be used randomly selected from the set not including the message used previously (which is stored in the user dialogue data store 15).
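The repeat-visit selection rule might be sketched as follows, assuming the alternative wordings for a state are held in the message model store 14:

```python
import random
from typing import Optional

def pick_message(alternatives: list[str], last_message: Optional[str]) -> str:
    # Choose at random from the set excluding the message played on the
    # previous visit (held in store 15); fall back to the full set if the
    # state has only one wording.
    candidates = [m for m in alternatives if m != last_message]
    return random.choice(candidates or alternatives)
```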
In order to tailor the system to a particular user, so that the system becomes easier to use as it is used more, each time a user calls the system data is stored in a speech data store 32. Speech data received from the user is recognised by the recogniser 10 with reference to the user grammar data store 24. Initially, before any calls have been made by a user, the user grammar data is identical to generic grammar data stored in a generic grammar data store 36.
The speech data store 32 stores, for each user, speech data along with the sequences of words or sub-words which were recognised by the recogniser 10. After each call the recognised speech is used by a weighting updater 30 to update weighting values for words which have been recognised in a grammar definition store 40. For the particular user who made the call, the words which have been recognised have their weighting value increased. In other embodiments of the invention, words which have not been used also have their weighting value decreased. Once a day, a compiler 38 is used to update the user grammar data store 42 according to the weighting values stored in the grammar definition store 40. A method of updating a grammar for a speech recogniser according to provided weighting values is described in our co-pending patent application no. EP96904973.3. Together the weighting updater 30, the grammar definition store 40 and the compiler 38 provide the grammar updater 42 of the present invention.
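A sketch of the per-user weighting update, with illustrative increment and decrement sizes; the compilation of the weighted grammar definition into recogniser grammar data is left abstract here, since the patent defers it to the co-pending application:

```python
def update_weightings(weights: dict[str, float],
                      recognised_words: set[str],
                      increment: float = 0.1,
                      decrement: float = 0.01) -> dict[str, float]:
    # weights: the user's entries in the grammar definition store 40.
    for word in weights:
        if word in recognised_words:
            weights[word] += increment   # the user actually said this word
        else:
            # Optional behaviour from the alternative embodiment: decay the
            # weighting of words the user never uses.
            weights[word] = max(0.0, weights[word] - decrement)
    return weights
```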
Recognised speech does not need to be stored in a speech data store; in other embodiments of the invention, recognised speech may be used to update the user grammar data in a single process which may be carried out immediately. Furthermore, it will be understood that the updating process could take place at predetermined time intervals as described above, or could conveniently be done whenever there is spare processing power available, for example when there are no calls in progress.
The result of the use of the compiler 38 is that words or phrases which a particular user uses more frequently are given a higher weighting in the user grammar data store 24 than those which are hardly ever used. It is in fact possible to effectively delete words from a particular user grammar by providing a weighting value of 0. Of course, it may happen that a user starts to use words which have not been used previously. The recogniser 10 may not recognise these words due to the fact that these words have a very low weighting value associated with them for that user in the user grammar data store 42. In order to prevent this problem, the user's speech which has been stored in the speech data store 32 is periodically recognised by the speech recogniser 10 using the generic grammar data 36, and the recognised speech is sent to a grammar data checker 34 which checks whether any words have been recognised which have previously been given a very low weighting. If this is the case then the weighting value for that word will be updated accordingly, and the compiler 38 is used to update the user grammar data store 42 according to the updated weighting values stored in the grammar definition store 40.
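The periodic check might look like the sketch below; recognise_generic stands in for a recogniser run against the generic grammar data 36, and the threshold and restored weighting are illustrative assumptions:

```python
def restore_reused_words(stored_utterances, recognise_generic,
                         weights: dict[str, float],
                         floor: float = 0.0,
                         restored: float = 0.1) -> dict[str, float]:
    # Re-recognise the stored speech with the generic grammar and restore
    # the weighting of any word recognised despite a near-zero weighting.
    for utterance in stored_utterances:
        for word in recognise_generic(utterance):
            if weights.get(word, 0.0) <= floor:
                # The user has started using a word that was effectively
                # deleted from their grammar; give it a usable weighting.
                weights[word] = restored
    return weights
```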
Whilst this invention has been described with reference to stores 32, 40, 42 which store data for each user, it will be understood that this data could be organised in any number of ways; for example, there could be a separate store for each user, or store 42 could be organised as a separate store for each grammar for each user.
As will be understood by those skilled in the art, the interactive voice response program 109 can be contained on various transmission and/or storage mediums such as a floppy disc, CD-ROM, or magnetic tape so that the program can be loaded onto one or more general purpose computers or could be downloaded over a computer network using a suitable transmission medium.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise”, “comprising” and the like are to be construed in an inclusive as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to”.

Claims (17)

The invention claimed is:
1. A method of operating an interactive voice response system to engage in a current dialogue with a user, said system having a dialogue model comprising
a plurality of states and a plurality of interconnecting edges;
a current state;
user dialogue data indicating for a user a total number of visits to a state;
in which the wording of a message for sending to the user at a state repeated in said current dialogue is selected in dependence upon
a) the current state; and
b) the number of times the current state has been repeated during the current dialogue; and
c) the total number of times the current state has been repeated during one or more previous dialogues with said user; and
repeating a) through c) for multiple ones of said plurality of states.
2. A method according to claim 1 in which the wording of said message is selected in dependence on further data indicating whether the user has previously been at the current stage during the current dialogue and upon data indicating the wording of the message which was selected for the most recent visit to the current state.
3. An interactive voice response system having a dialogue model comprising:
a plurality of states and a plurality of interconnecting edges;
a current state;
user dialogue data indicating for a user a total number of visits to a state;
the system further comprising message wording selection means, for selecting the wording of a message for sending to the user at a state repeated in said current dialogue, in dependence upon
a) the current state; and
b) the number of times the current state has been repeated during the current dialogue; and
c) the total number of times the current state has been repeated during one or more previous dialogues; and
repeating a) through c) for multiple ones of said plurality of states.
4. A system according to claim 3, wherein the message wording selection means is further operable to select the message wording in dependence on further data indicating whether the user has visited the current state during the current dialogue and upon data indicating the message wording which was selected for the most recent visit to the current state.
5. A method of operating an interactive voice response system to engage in a current dialogue with a user, said interactive voice response system storing a dialogue model comprising
a plurality of states and a plurality of interconnecting edges,
a current state,
user dialogue data indicating for a user a total number of visits to a state in the current dialogue and a total number of visits in one or more previous dialogues;
wherein each state represents a stage of the dialogue and each interconnecting edge represents a transition between one stage of the dialogue and a subsequent stage,
said method comprising the steps of
finding the number of times the current stage has been repeated in the current and previous dialogues from said user dialogue data; and
selecting the wording to be output by said interactive voice response system at a repeated stage of the current dialogue in dependence upon
a) the current stage in the dialogue; and
b) the number of times the current stage has been repeated; and
c) the total number of times the user has been at this stage during one or more previous dialogues; and
repeating a) through c) for multiple ones of said plurality of states.
6. A method according to claim 1 in which the message wording is selected in dependence on further data indicating whether the user has been at the current stage during the current dialogue and upon data indicating the message wording which was selected the previous time the user was at the current stage.
7. An interactive voice response system having a dialogue model comprising:
a plurality of states and a plurality of interconnecting edges;
a current state;
user dialogue data indicating for a user a total number of visits to a state,
wherein each state represents a stage of the dialogue and each interconnecting edge represents a transition between one stage of the dialogue and a subsequent stage,
the system further comprising:
stage repetition monitoring means arranged in operation to find the number of times the current stage has been repeated in the current and previous dialogues from said user dialogue data,
message wording selection means arranged in operation to select the wording of a message for sending to the user at said current repeated stage, said message wording being selected in dependence upon
a) the current stage in the dialogue; and
b) the number of times the current stage has been repeated; and
c) the total number of times the user has been at this stage during one or more previous dialogues; and
repeating a) through c) for multiple ones of said plurality of states.
8. A system according to claim 3, wherein the message wording selection means is further operable to select the message wording in dependence on further data indicating whether the user has earlier visited the current stage during the current dialogue and upon data indicating the message wording which was selected on the most recent occasion on which the user was at the current stage.
9. A system according to claim 7, wherein the message wording selection means is further operable to select the message wording in dependence on further data indicating whether the user has earlier visited the current stage during the current dialogue and upon data indicating the message wording which was selected on the most recent occasion on which the user was at the current stage.
10. A system as in claim 3, wherein the user enters a unique identifier and a personal identification number which must match an entry in a user access database before the user can access the system.
11. A system as in claim 10, wherein the user enters the unique identifier and personal identification number using a telephone keypad.
12. A system as in claim 10, wherein the user enters the unique identifier and personal identification number by speaking into a telephone.
13. A system as in claim 7, wherein the user enters a unique identifier and a personal identification number which must match an entry in a user access database before the user can access the system.
14. A system as in claim 13, wherein the user enters the unique identifier and personal identification number using a telephone keypad.
15. A system as in claim 13, wherein the user enters the unique identifier and personal identification number by speaking into a telephone.
16. A method as in claim 1, wherein the user enters a unique identifier and a personal identification number which must match an entry in a user access database before the user can access the system.
17. A method as in claim 2, wherein the user enters a unique identifier and a personal identification number which must match an entry in a user access database before the user can access the system.
US12/458,605 2001-04-19 2002-04-08 Voice response system Expired - Lifetime USRE45096E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/458,605 USRE45096E1 (en) 2001-04-19 2002-04-08 Voice response system

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
EP01303600 2001-04-19
EP01303600 2001-04-19
US47490402A 2002-04-08 2002-04-08
US12/458,605 USRE45096E1 (en) 2001-04-19 2002-04-08 Voice response system
PCT/GB2002/001643 WO2002087202A1 (en) 2001-04-19 2002-04-08 Voice response system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US47490402A Reissue 2001-04-19 2002-04-08

Publications (1)

Publication Number Publication Date
USRE45096E1 true USRE45096E1 (en) 2014-08-26

Family

ID=8181905

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/474,904 Ceased US7245706B2 (en) 2001-04-19 2002-04-08 Voice response system
US12/458,605 Expired - Lifetime USRE45096E1 (en) 2001-04-19 2002-04-08 Voice response system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/474,904 Ceased US7245706B2 (en) 2001-04-19 2002-04-08 Voice response system

Country Status (4)

Country Link
US (2) US7245706B2 (en)
EP (1) EP1380154A1 (en)
CA (1) CA2441195C (en)
WO (1) WO2002087202A1 (en)

Families Citing this family (106)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
CA2441195C (en) * 2001-04-19 2008-08-26 British Telecommunications Public Limited Company Voice response system
JP4679254B2 (en) * 2004-10-28 2011-04-27 富士通株式会社 Dialog system, dialog method, and computer program
US9083798B2 (en) * 2004-12-22 2015-07-14 Nuance Communications, Inc. Enabling voice selection of user preferences
US20060287865A1 (en) * 2005-06-16 2006-12-21 Cross Charles W Jr Establishing a multimodal application voice
US7917365B2 (en) * 2005-06-16 2011-03-29 Nuance Communications, Inc. Synchronizing visual and speech events in a multimodal application
US8090584B2 (en) * 2005-06-16 2012-01-03 Nuance Communications, Inc. Modifying a grammar of a hierarchical multimodal menu in dependence upon speech command frequency
US20060287858A1 (en) * 2005-06-16 2006-12-21 Cross Charles W Jr Modifying a grammar of a hierarchical multimodal menu with keywords sold to customers
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US8073700B2 (en) 2005-09-12 2011-12-06 Nuance Communications, Inc. Retrieval and presentation of network service results for mobile device using a multimodal browser
JP4197344B2 (en) * 2006-02-20 2008-12-17 インターナショナル・ビジネス・マシーンズ・コーポレーション Spoken dialogue system
US7848314B2 (en) * 2006-05-10 2010-12-07 Nuance Communications, Inc. VOIP barge-in support for half-duplex DSR client on a full-duplex network
US20070274297A1 (en) * 2006-05-10 2007-11-29 Cross Charles W Jr Streaming audio from a full-duplex network through a half-duplex device
US9208785B2 (en) * 2006-05-10 2015-12-08 Nuance Communications, Inc. Synchronizing distributed speech recognition
US8332218B2 (en) 2006-06-13 2012-12-11 Nuance Communications, Inc. Context-based grammars for automated speech recognition
US7676371B2 (en) * 2006-06-13 2010-03-09 Nuance Communications, Inc. Oral modification of an ASR lexicon of an ASR engine
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8374874B2 (en) * 2006-09-11 2013-02-12 Nuance Communications, Inc. Establishing a multimodal personality for a multimodal application in dependence upon attributes of user interaction
US8145493B2 (en) 2006-09-11 2012-03-27 Nuance Communications, Inc. Establishing a preferred mode of interaction between a user and a multimodal application
US7957976B2 (en) 2006-09-12 2011-06-07 Nuance Communications, Inc. Establishing a multimodal advertising personality for a sponsor of a multimodal application
US8086463B2 (en) 2006-09-12 2011-12-27 Nuance Communications, Inc. Dynamically generating a vocal help prompt in a multimodal application
US8073697B2 (en) 2006-09-12 2011-12-06 International Business Machines Corporation Establishing a multimodal personality for a multimodal application
US7827033B2 (en) 2006-12-06 2010-11-02 Nuance Communications, Inc. Enabling grammars in web page frames
US8069047B2 (en) * 2007-02-12 2011-11-29 Nuance Communications, Inc. Dynamically defining a VoiceXML grammar in an X+V page of a multimodal application
US7801728B2 (en) 2007-02-26 2010-09-21 Nuance Communications, Inc. Document session replay for multimodal applications
US8150698B2 (en) 2007-02-26 2012-04-03 Nuance Communications, Inc. Invoking tapered prompts in a multimodal application
US9208783B2 (en) 2007-02-27 2015-12-08 Nuance Communications, Inc. Altering behavior of a multimodal application based on location
US7809575B2 (en) * 2007-02-27 2010-10-05 Nuance Communications, Inc. Enabling global grammars for a particular multimodal application
US7840409B2 (en) * 2007-02-27 2010-11-23 Nuance Communications, Inc. Ordering recognition results produced by an automatic speech recognition engine for a multimodal application
US20080208589A1 (en) * 2007-02-27 2008-08-28 Cross Charles W Presenting Supplemental Content For Digital Media Using A Multimodal Application
US7822608B2 (en) * 2007-02-27 2010-10-26 Nuance Communications, Inc. Disambiguating a speech recognition grammar in a multimodal application
US8713542B2 (en) * 2007-02-27 2014-04-29 Nuance Communications, Inc. Pausing a VoiceXML dialog of a multimodal application
US20080208586A1 (en) * 2007-02-27 2008-08-28 Soonthorn Ativanichayaphong Enabling Natural Language Understanding In An X+V Page Of A Multimodal Application
US8938392B2 (en) * 2007-02-27 2015-01-20 Nuance Communications, Inc. Configuring a speech engine for a multimodal application based on location
US8843376B2 (en) 2007-03-13 2014-09-23 Nuance Communications, Inc. Speech-enabled web content searching using a multimodal browser
US7945851B2 (en) * 2007-03-14 2011-05-17 Nuance Communications, Inc. Enabling dynamic voiceXML in an X+V page of a multimodal application
US8515757B2 (en) 2007-03-20 2013-08-20 Nuance Communications, Inc. Indexing digitized speech with words represented in the digitized speech
US8670987B2 (en) * 2007-03-20 2014-03-11 Nuance Communications, Inc. Automatic speech recognition with dynamic grammar rules
US8909532B2 (en) * 2007-03-23 2014-12-09 Nuance Communications, Inc. Supporting multi-lingual user interaction with a multimodal application
US20080235029A1 (en) * 2007-03-23 2008-09-25 Cross Charles W Speech-Enabled Predictive Text Selection For A Multimodal Application
US8788620B2 (en) * 2007-04-04 2014-07-22 International Business Machines Corporation Web service support for a multimodal client processing a multimodal application
US8725513B2 (en) * 2007-04-12 2014-05-13 Nuance Communications, Inc. Providing expressive user interaction with a multimodal application
US8862475B2 (en) * 2007-04-12 2014-10-14 Nuance Communications, Inc. Speech-enabled content navigation and control of a distributed multimodal browser
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US8214242B2 (en) * 2008-04-24 2012-07-03 International Business Machines Corporation Signaling correspondence between a meeting agenda and a meeting discussion
US9349367B2 (en) * 2008-04-24 2016-05-24 Nuance Communications, Inc. Records disambiguation in a multimodal application operating on a multimodal device
US8229081B2 (en) * 2008-04-24 2012-07-24 International Business Machines Corporation Dynamically publishing directory information for a plurality of interactive voice response systems
US8082148B2 (en) * 2008-04-24 2011-12-20 Nuance Communications, Inc. Testing a grammar used in speech recognition for reliability in a plurality of operating environments having different background noise
US8121837B2 (en) 2008-04-24 2012-02-21 Nuance Communications, Inc. Adjusting a speech engine for a mobile computing device based on background noise
US8380513B2 (en) * 2009-05-19 2013-02-19 International Business Machines Corporation Improving speech capabilities of a multimodal application
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US8290780B2 (en) 2009-06-24 2012-10-16 International Business Machines Corporation Dynamically extending the speech prompts of a multimodal application
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US8510117B2 (en) * 2009-07-09 2013-08-13 Nuance Communications, Inc. Speech enabled media sharing in a multimodal application
JP2011033680A (en) * 2009-07-30 2011-02-17 Sony Corp Voice processing device and method, and program
US8416714B2 (en) * 2009-08-05 2013-04-09 International Business Machines Corporation Multimodal teleconferencing
US8381107B2 (en) * 2010-01-13 2013-02-19 Apple Inc. Adaptive audio feedback system and method
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
WO2014197336A1 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
KR101959188B1 (en) 2013-06-09 2019-07-02 애플 인크. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179309B1 (en) 2016-06-09 2018-04-23 Apple Inc Intelligent automated assistant in a home environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US11222627B1 (en) * 2017-11-22 2022-01-11 Educational Testing Service Exploring ASR-free end-to-end modeling to improve spoken language understanding in a cloud-based dialog system

Citations (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4932021A (en) * 1989-04-03 1990-06-05 At&T Bell Laboratories Path learning feature for an automated telemarketing system
EP0697780A2 (en) 1994-08-19 1996-02-21 International Business Machines Corporation Voice response system
US5694558A (en) 1994-04-22 1997-12-02 U S West Technologies, Inc. Method and system for interactive object-oriented dialogue management
US5787151A (en) * 1995-05-18 1998-07-28 Northern Telecom Limited Telephony based delivery system of messages containing selected greetings
US5818908A (en) * 1996-11-05 1998-10-06 At&T Corp. Selective voice menu system
US6016336A (en) * 1997-11-18 2000-01-18 At&T Corp Interactive voice response system with call trainable routing
EP0973314A2 (en) 1998-07-17 2000-01-19 Siemens Information and Communication Networks Inc. Apparatus and method for improving the user interface of integrated voice response systems
EP0992980A2 (en) 1998-10-06 2000-04-12 Lucent Technologies Inc. Web-based platform for interactive voice response (IVR)
GB2342530A (en) 1998-10-07 2000-04-12 Vocalis Ltd Gathering user inputs by speech recognition
US6061433A (en) * 1995-10-19 2000-05-09 Intervoice Limited Partnership Dynamically changeable menus based on externally available data
WO2000065814A1 (en) 1999-04-23 2000-11-02 Nuance Communications Object-orientated framework for interactive voice response applications
WO2000078022A1 (en) 1999-06-11 2000-12-21 Telstra New Wave Pty Ltd A method of developing an interactive system
US6167117A (en) * 1996-10-07 2000-12-26 Nortel Networks Limited Voice-dialing system using model of calling behavior
US6349134B1 (en) * 1985-07-10 2002-02-19 Ronald A. Katz Technology Licensing, L.P. Telephonic-interface statistical analysis system
US6370238B1 (en) * 1997-09-19 2002-04-09 Siemens Information And Communication Networks Inc. System and method for improved user interface in prompting systems
US6404874B1 (en) * 1997-03-27 2002-06-11 Cisco Technology, Inc. Telecommute server
US6434223B2 (en) * 1985-07-10 2002-08-13 Ronald A. Katz Technology Licensing, L.P. Telephone interface call processing system with call selectivity
US6449496B1 (en) * 1999-02-08 2002-09-10 Qualcomm Incorporated Voice recognition user interface for telephone handsets
US6456619B1 (en) * 1997-12-04 2002-09-24 Siemens Information And Communication Networks, Inc. Method and system for supporting a decision tree with placeholder capability
US6501832B1 (en) * 1999-08-24 2002-12-31 Microstrategy, Inc. Voice code registration system and method for registering voice codes for voice pages in a voice network access provider system
US6512415B1 (en) * 1985-07-10 2003-01-28 Ronald A. Katz Technology Licensing Lp. Telephonic-interface game control system
US6570967B2 (en) * 1985-07-10 2003-05-27 Ronald A. Katz Technology Licensing, L.P. Voice-data telephonic interface control system
US6584181B1 (en) * 1997-09-19 2003-06-24 Siemens Information & Communication Networks, Inc. System and method for organizing multi-media messages folders from a displayless interface and selectively retrieving information using voice labels
US6678360B1 (en) * 1985-07-10 2004-01-13 Ronald A. Katz Technology Licensing, L.P. Telephonic-interface statistical analysis system
US6707889B1 (en) * 1999-08-24 2004-03-16 Microstrategy Incorporated Multiple voice network access provider system and method
US6757362B1 (en) * 2000-03-06 2004-06-29 Avaya Technology Corp. Personal virtual assistant
US6792086B1 (en) * 1999-08-24 2004-09-14 Microstrategy, Inc. Voice network access provider system and method
US6850949B2 (en) * 2002-06-03 2005-02-01 Right Now Technologies, Inc. System and method for generating a dynamic interface via a communications network
US6885733B2 (en) * 2001-12-03 2005-04-26 At&T Corp. Method of providing a user interface for audio telecommunications systems
US6888929B1 (en) * 1999-08-24 2005-05-03 Microstrategy, Inc. Revenue generation method for use with voice network access provider system and method
US7003079B1 (en) * 2001-03-05 2006-02-21 Bbnt Solutions Llc Apparatus and method for monitoring performance of an automated response system
US7245706B2 (en) * 2001-04-19 2007-07-17 British Telecommunications Public Limited Company Voice response system
US7324942B1 (en) * 2002-01-29 2008-01-29 Microstrategy, Incorporated System and method for interactive voice services using markup language with N-best filter element
US7457397B1 (en) * 1999-08-24 2008-11-25 Microstrategy, Inc. Voice page directory system in a voice page creation and delivery system
US7756258B2 (en) * 2003-12-11 2010-07-13 British Telecommunications Plc Communications system with direct access mailbox

Patent Citations (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6434223B2 (en) * 1985-07-10 2002-08-13 Ronald A. Katz Technology Licensing, L.P. Telephone interface call processing system with call selectivity
US6349134B1 (en) * 1985-07-10 2002-02-19 Ronald A. Katz Technology Licensing, L.P. Telephonic-interface statistical analysis system
US6678360B1 (en) * 1985-07-10 2004-01-13 Ronald A. Katz Technology Licensing, L.P. Telephonic-interface statistical analysis system
US6570967B2 (en) * 1985-07-10 2003-05-27 Ronald A. Katz Technology Licensing, L.P. Voice-data telephonic interface control system
US6512415B1 (en) * 1985-07-10 2003-01-28 Ronald A. Katz Technology Licensing Lp. Telephonic-interface game control system
US4932021A (en) * 1989-04-03 1990-06-05 At&T Bell Laboratories Path learning feature for an automated telemarketing system
US5694558A (en) 1994-04-22 1997-12-02 U S West Technologies, Inc. Method and system for interactive object-oriented dialogue management
EP0697780A2 (en) 1994-08-19 1996-02-21 International Business Machines Corporation Voice response system
US5787151A (en) * 1995-05-18 1998-07-28 Northern Telecom Limited Telephony based delivery system of messages containing selected greetings
US6061433A (en) * 1995-10-19 2000-05-09 Intervoice Limited Partnership Dynamically changeable menus based on externally available data
US6167117A (en) * 1996-10-07 2000-12-26 Nortel Networks Limited Voice-dialing system using model of calling behavior
US5818908A (en) * 1996-11-05 1998-10-06 At&T Corp. Selective voice menu system
US6404874B1 (en) * 1997-03-27 2002-06-11 Cisco Technology, Inc. Telecommute server
US6487277B2 (en) * 1997-09-19 2002-11-26 Siemens Information And Communication Networks, Inc. Apparatus and method for improving the user interface of integrated voice response systems
US6584181B1 (en) * 1997-09-19 2003-06-24 Siemens Information & Communication Networks, Inc. System and method for organizing multi-media messages folders from a displayless interface and selectively retrieving information using voice labels
US6370238B1 (en) * 1997-09-19 2002-04-09 Siemens Information And Communication Networks Inc. System and method for improved user interface in prompting systems
US6016336A (en) * 1997-11-18 2000-01-18 At&T Corp Interactive voice response system with call trainable routing
US6456619B1 (en) * 1997-12-04 2002-09-24 Siemens Information And Communication Networks, Inc. Method and system for supporting a decision tree with placeholder capability
EP0973314A2 (en) 1998-07-17 2000-01-19 Siemens Information and Communication Networks Inc. Apparatus and method for improving the user interface of integrated voice response systems
EP0992980A2 (en) 1998-10-06 2000-04-12 Lucent Technologies Inc. Web-based platform for interactive voice response (IVR)
GB2342530A (en) 1998-10-07 2000-04-12 Vocalis Ltd Gathering user inputs by speech recognition
US6449496B1 (en) * 1999-02-08 2002-09-10 Qualcomm Incorporated Voice recognition user interface for telephone handsets
WO2000065814A1 (en) 1999-04-23 2000-11-02 Nuance Communications Object-orientated framework for interactive voice response applications
WO2000078022A1 (en) 1999-06-11 2000-12-21 Telstra New Wave Pty Ltd A method of developing an interactive system
US6501832B1 (en) * 1999-08-24 2002-12-31 Microstrategy, Inc. Voice code registration system and method for registering voice codes for voice pages in a voice network access provider system
US6792086B1 (en) * 1999-08-24 2004-09-14 Microstrategy, Inc. Voice network access provider system and method
US6888929B1 (en) * 1999-08-24 2005-05-03 Microstrategy, Inc. Revenue generation method for use with voice network access provider system and method
US6895084B1 (en) * 1999-08-24 2005-05-17 Microstrategy, Inc. System and method for generating voice pages with included audio files for use in a voice page delivery system
US6707889B1 (en) * 1999-08-24 2004-03-16 Microstrategy Incorporated Multiple voice network access provider system and method
US7457397B1 (en) * 1999-08-24 2008-11-25 Microstrategy, Inc. Voice page directory system in a voice page creation and delivery system
US7920678B2 (en) * 2000-03-06 2011-04-05 Avaya Inc. Personal virtual assistant
US6757362B1 (en) * 2000-03-06 2004-06-29 Avaya Technology Corp. Personal virtual assistant
US8000453B2 (en) * 2000-03-06 2011-08-16 Avaya Inc. Personal virtual assistant
US7415100B2 (en) * 2000-03-06 2008-08-19 Avaya Technology Corp. Personal virtual assistant
US7003079B1 (en) * 2001-03-05 2006-02-21 Bbnt Solutions Llc Apparatus and method for monitoring performance of an automated response system
US7245706B2 (en) * 2001-04-19 2007-07-17 British Telecommunications Public Limited Company Voice response system
US6885733B2 (en) * 2001-12-03 2005-04-26 At&T Corp. Method of providing a user interface for audio telecommunications systems
US7324942B1 (en) * 2002-01-29 2008-01-29 Microstrategy, Incorporated System and method for interactive voice services using markup language with N-best filter element
US6850949B2 (en) * 2002-06-03 2005-02-01 Right Now Technologies, Inc. System and method for generating a dynamic interface via a communications network
US7756258B2 (en) * 2003-12-11 2010-07-13 British Telecommunications Plc Communications system with direct access mailbox

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Attwater et al, "Large-Vocabulary Data-Centric Dialogues", BT Technology Journal, BT Laboratories, GB, vol. 17, No. 1, Jan. 1999, pp. 149-159, XP000824588, ISSN: 1358-3948.
Attwater et al, "Issues in Large-Vocabulary Interactive Speech Systems", BT Technology Journal, BT Laboratories, GB, vol. 14, No. 1, 1996, pp. 177-186, XP000554647.
Office Action issued in EP Application No. 02720194.6, dated Jun. 21, 2007.
Pawlewski et al, "Advances in Telephony-Based Speech Recognition", BT Technology Journal, BT Laboratories, GB, vol. 14, No. 1, 1996, pp. 127-149, XP000554644, ISSN: 1358-3948.
Power, "The Listening Telephone-Automating Speech Recognition Over the PSTN", BT Technology Journal, BT Labroatories, GB, vol. 14, No. 1, 1996, pp. 112-126, XP000554564, ISSN: 1358-3948.
Whittaker et al, "Interactive Speech Systems for Telecommunications Applications", BT Technology Journal, BT Laboratories, GB, vol. 14, No. 2, Apr. 1, 1996, pp. 11-23, XP000584907, ISSN: 1358-3948.

Also Published As

Publication number Publication date
US7245706B2 (en) 2007-07-17
CA2441195A1 (en) 2002-10-31
US20040120476A1 (en) 2004-06-24
EP1380154A1 (en) 2004-01-14
WO2002087202A1 (en) 2002-10-31
CA2441195C (en) 2008-08-26

Similar Documents

Publication Publication Date Title
USRE45096E1 (en) Voice response system
US20040120472A1 (en) Voice response system
US6839671B2 (en) Learning of dialogue states and language model of spoken information system
US7143040B2 (en) Interactive dialogues
US7606714B2 (en) Natural language classification within an automated response system
US7487095B2 (en) Method and apparatus for managing user conversations
US8046227B2 (en) Development system for a dialog system
US20050033582A1 (en) Spoken language interface
US20110106527A1 (en) Method and Apparatus for Adapting a Voice Extensible Markup Language-enabled Voice System for Natural Speech Recognition and System Response
US8165887B2 (en) Data-driven voice user interface
US8862477B2 (en) Menu hierarchy skipping dialog for directed dialog speech recognition
US7881932B2 (en) VoiceXML language extension for natively supporting voice enrolled grammars
CN114168718A (en) Information processing apparatus, method and information recording medium
EP1761015B1 (en) Self-adaptive user interface for dialogue systems
WO2002089112A1 (en) Adaptive learning of language models for speech recognition
WO2002089113A1 (en) System for generating the grammar of a spoken dialogue system
EP1301921B1 (en) Interactive dialogues
AU2010238568B2 (en) A development system for a dialog system
Higashida et al. A new dialogue control method based on human listening process to construct an interface for ascertaining a user's inputs.
CN111048074A (en) Context information generation method and device for assisting speech recognition
Whittaker et al. Practical issues in the application of speech technology to network and customer service applications

Legal Events

Code Title Description
FPAY: Fee payment (year of fee payment: 8)
MAFP: Maintenance fee payment, free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY (year of fee payment: 12)