US20040204938A1 - System and method for network based transcription - Google Patents
Classifications
- G06Q 10/10: Office automation; Time management
- G06F 40/166: Editing, e.g. inserting or deleting
- G06F 40/186: Templates
- G10L 15/26: Speech to text systems
- This invention relates to a transcription system and, more particularly, to a transcription system over a distributed network.
- speech-to-text programs allow a user to speak into a computer.
- the computer compares the received spoken sounds to previously identified sounds, and therefore can convert the spoken utterances into text.
- a user dictates onto a recordable medium.
- This recordable medium is forwarded to a transcriptionist who listens to the dictation and manually converts it into a document.
- the document can then be returned to the original user for editing, or the like.
- Upon completion of the editing, or alternatively, if the document is in final form, the document is manually forwarded to its destination.
- the systems and methods of this invention receive information, such as a human voice, which is converted into a digital file.
- the digital file is packaged with information. This information can include, for example, information about the file, the speaker or user, formatting options, destination information, template creation information, or the like.
- the digital file and associated information are then transmitted via a distributed network to a transcription system.
- the transcription system converts the digital file into a document, taking account of any supplemental information that may be associated with the digital file.
- the resulting document is associated with the digital file and any associated information, and the updated document is returned to the original creator.
- the original creator has the option of reading, reviewing, approving, and/or revising, the document. If modifications are necessary, the process repeats itself. Otherwise, an approval of the document results in the system forwarding the document, and associated information, to the appropriate destination.
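The dictate-transcribe-review loop described above can be sketched in Python (a hypothetical illustration; the names `Job`, `transcribe` and `review_cycle` are not terms from this patent):

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    """A dictation job moving through the transcription workflow."""
    dictation: str                               # stand-in for the digital audio file
    info: dict = field(default_factory=dict)     # associated routing/formatting information
    document: str = ""                           # document produced by transcription
    approved: bool = False

def transcribe(job: Job) -> Job:
    # Stand-in for the transcription system: convert the dictation into a
    # document, taking account of any supplemental information.
    job.document = job.dictation.capitalize() + "."
    return job

def review_cycle(job: Job, needs_revision) -> Job:
    # The originator reads and reviews the returned document; if
    # modifications are necessary the process repeats, otherwise the job
    # is approved and forwarded per the associated information.
    job = transcribe(job)
    while needs_revision(job):
        job = transcribe(job)
    job.approved = True
    return job

job = review_cycle(
    Job("patient presents with a fever", {"destination": "records@example.com"}),
    needs_revision=lambda j: j.document == "",
)
```

The destination address and the trivial `needs_revision` predicate are placeholders; a real system would route the approved document per its distribution information.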
- the systems and methods of this invention provide a transcription system over a distributed network.
- This invention separately provides systems and methods for assembling a digital file that contains at least dictation information and additional information.
- This invention separately provides systems and methods that allow a user to interface with a dictation and/or transcription tool via a user interface.
- This invention additionally provides systems and methods that allow a user to automatically assemble and subsequently distribute a document over a distributed network.
- This invention additionally provides systems and methods that allow for dynamic development of a document.
- This invention additionally provides systems and methods that allow for dynamic development of a document based on a template.
- the transcription systems and methods of this invention use a combination of accumulated digital information and user interfaces to provide for dictation, transcription and subsequent document delivery services.
- the systems and methods of this invention receive dictation, and additional information, such as routing information and formatting information.
- the dictation and associated information are forwarded to a transcription system that converts the information into a document.
- This document is returned to the originator for review, modification and/or approval. Once approved, the document is routed to the appropriate destination based on the associated routing information.
- FIG. 1 is a functional block diagram illustrating an exemplary transcription system according to this invention.
- FIG. 2 is a functional block diagram illustrating an exemplary dictation station according to this invention.
- FIG. 3 is a functional block diagram of an exemplary transcription station according to this invention.
- FIG. 4 is a screen shot of an exemplary dictation station user interface according to this invention.
- FIG. 5 is a second screen shot of an exemplary dictation station user interface according to this invention.
- FIG. 6 is a third screen shot of an exemplary dictation station user interface according to this invention.
- FIG. 7 is a fourth screen shot of an exemplary dictation station user interface according to this invention.
- FIG. 8 is a fifth screen shot representing an exemplary document library according to this invention.
- FIG. 9 is a screen shot of an exemplary transcription station user interface according to this invention.
- FIG. 10 is a screen shot of an exemplary transcription station user interface according to this invention.
- FIG. 11 is a flow chart outlining one exemplary embodiment of a method for performing transcription according to this invention.
- FIG. 12 is a flow chart outlining an exemplary method for interfacing with the dictation management system according to this invention.
- FIG. 13 is a flow chart outlining in greater detail the initiate job submission control interface of FIG. 12.
- FIG. 14 is a flow chart outlining an exemplary embodiment of a method for interfacing with the transcription station according to this invention.
- the systems and methods of this invention streamline the entire dictation to delivery chain of events. Furthermore, by using a dedicated transcription management system, a document based on a dictation can be returned to an originator with greater efficiency.
- the systems and methods in this invention allow a user to record dictation and supplement this dictation with additional information.
- This additional information can range from routing information, categorical information, formatting options, to template creation instructions, or the like.
- the dictation can be enhanced with any supplemental information and the transcription system is capable of integrating this additional information and returning a document based on this information to the original dictator.
- a completed dictation need not be returned to the original dictator, if, for example, the additional information associated with the dictation specifies an alternate destination.
- a user creates a dictation.
- This dictation is a recording of the user's voice.
- additional information that can be selected from, for example, a template based on established information.
- the template may be used to create a summary of a doctor's examination. Therefore, the template could have selectable portions for default characteristics, such as, runny nose, fever, or the like.
- the template can have predetermined formats, for example, a business letter, with predetermined headings, closing portions, and selectable address and formatting characteristics.
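A template of the kind described above, with selectable portions for default characteristics, can be sketched as follows (a minimal illustration; all field names here are invented for this sketch, not defined by the patent):

```python
# A minimal sketch of a template with selectable default characteristics,
# as in the doctor's-examination example above.
exam_template = {
    "heading": "Summary of Examination",
    "symptoms": ["runny nose", "fever", "cough"],   # selectable default characteristics
    "closing": "Follow up as needed.",
}

def render(template: dict, selected: list) -> str:
    # Populate the template: keep only the characteristics the dictator selected.
    unknown = [s for s in selected if s not in template["symptoms"]]
    if unknown:
        raise ValueError(f"not in template: {unknown}")
    lines = [template["heading"],
             "Findings: " + ", ".join(selected),
             template["closing"]]
    return "\n".join(lines)

doc = render(exam_template, ["fever"])
```

A business-letter template would follow the same pattern, with predetermined headings and closing portions and selectable address fields.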
- the systems and methods of this invention also allow a user to fully manage all dictations within the transcription system.
- the user is provided with a dictation management interface that allows access to, for example, the status of all dictation, account information, destination information, and access to stored documents.
- FIG. 1 illustrates an exemplary embodiment of components of the transcription system 100 .
- the transcription system 100 comprises an I/O interface 110, a controller 120, a memory 130, a speech recognition device 140, a template storage device 150, an input management controller 160, a document management controller 170, a speech recognition support device 180, a document distribution device 190, a document storage device 200 and a direct output device 210, all interconnected by link 5.
- the transcription system 100 is also connected to at least one distributed network 250 which may or may not also be connected to one or more other transcription systems or other distributed networks, as well as one or more input devices 230 and document sinks 240 .
- although FIG. 1 shows the transcription system 100 and associated components collocated, the various components of the transcription system 100 can be located at distant portions of a distributed network, such as a local area network, a wide area network, an intranet and/or the internet, or within a dictation or transcription station.
- the components of the transcription system 100 can be combined into one device or collocated on a particular node of a distributed network.
- the components of the transcription system 100 can be arranged at any location within a distributed network without affecting the operation of the system.
- the links 5 can be a wired or a wireless link or any known or later developed element(s) that is capable of supplying electronic data to and from the connected elements.
- dictation information associated with, for example, routing information, formatting information, categorical information or the like is received from input device 230 .
- the input device 230 can be, for example, a telephone, a personal digital assistant, a cellular phone, a handheld voice recorder, a personal computer, a streaming media digital audio file, digital audio compact disc, or analog magnetic tape or the like.
- FIGS. 1-3 illustrate the transcription system, dictation station and transcription station, respectively.
- dictation and distribution information is received from the input device 230 .
- This dictation and distribution information can include, but is not limited to, the actual speech which is to be converted to text, formatting options for the document, identification of a template, template options, including but not limited to static measurements, optional segments of text, numeric values for variables, or the like, categorical information, including but not limited to topics like legal, medical, or the like, priority information, and routing or distribution information for the completed document.
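One way to picture the dictation and distribution information enumerated above is as a single structured record (a sketch only; the field names are assumptions for illustration, not terms defined by the patent):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DictationPackage:
    """One possible shape for the dictation and distribution information."""
    speech: bytes                                   # the speech to be converted to text
    formatting: dict = field(default_factory=dict)  # formatting options for the document
    template_id: Optional[str] = None               # identification of a template
    template_options: dict = field(default_factory=dict)  # e.g. optional text segments, numeric values
    category: Optional[str] = None                  # e.g. "legal", "medical"
    priority: int = 0
    routing: list = field(default_factory=list)     # distribution info for the completed document

pkg = DictationPackage(
    speech=b"<audio>",
    category="medical",
    priority=1,
    routing=[{"type": "email", "address": "reviewer@example.com"}],
)
```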
- the dictation and distribution information is received in the transcription system 100 via link 5 and the network 250 and, with the aid of the I/O interface 110 , the controller 120 and the memory 130 , stored in the document storage device 200 .
- the transcription system 100 determines if a complete acoustical reference file has either been appended to the distribution and dictation information, or is present in the speech recognition support device 180 .
- the acoustical reference file allows the speech recognition device 140 to perform speech recognition on the received dictation information.
- if an acoustical reference file is present in the speech recognition support device 180, the speech recognition device 140, with the aid of the controller 120, the memory 130 and the acoustical reference file stored in the speech recognition support device 180, converts the dictation information into text. This text is associated with the original dictation and distribution information and forwarded, via link 5 and network 250, to the transcription station 400 for approval, correction and/or modification.
- otherwise, the dictation and distribution information is forwarded, via link 5 and network 250, to the transcription station 400.
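The branch described above — run speech recognition when an acoustical reference file is available, then hand off to the transcription station — can be sketched as follows (function and key names are hypothetical):

```python
def recognize(dictation: str, reference) -> str:
    # Stand-in for the speech recognition device 140; a real system would
    # use the acoustical reference file to convert speech to text.
    return dictation.upper()

def route_dictation(job: dict, stored_references: dict) -> dict:
    """If a complete acoustical reference file was appended to the job,
    or one is already stored for this speaker, run speech recognition
    first; either way the job then goes to the transcription station."""
    ref = job.get("acoustical_reference") or stored_references.get(job["speaker"])
    if ref is not None:
        job["text"] = recognize(job["dictation"], ref)
    job["next_stop"] = "transcription station"
    return job

job = route_dictation({"speaker": "u1", "dictation": "hello"}, {"u1": "ref-model"})
```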
- the transcription system 100 may directly interface with the input device 230 .
- the input device 230 could be a telephone.
- a user would place a call via, for example, the network 250 and link 5 to the transcription system 100 .
- the transcription system 100, with the aid of the input management controller 160, could guide a user, for example, through a set of key-tone selectable options, to create dictation and distribution information which would then be stored in the document storage device 200.
- the input management controller 160 queries the user as to whether a template should be used to create this document.
- a template can be retrieved, for example, with the selection of a predetermined keystroke, from the template storage device 150 . Then, for example, portions of the template can be selected, and optionally populated, with various combinations of keystrokes. Additionally, the transcription system 100 can, at the direction of the input device 230 , record dictation straight from the input device 230 . Then, as previously discussed, the dictation and distribution information is stored in the document storage device 200 .
- after reception of the dictation and distribution information, and after speech-to-text conversion has taken place by the speech recognition device 140, if appropriate, the document storage device 200, in cooperation with the document management controller 170, the I/O interface 110, the controller 120 and the memory 130, forwards the dictation and distribution information, as well as the converted text, to the transcription station 400.
- the transcription station 400 proofs the document and performs any necessary modifications based on information that may be associated with or contained in the dictation and distribution information. Upon completion of any modifications, the transcription station 400 returns to the transcription system 100 , via the network 250 and link 5 , the completed document.
- the document management controller 170 recognizes that the original dictation and distribution information has been supplemented with the text of the dictation and has been reviewed by the transcription station 400 . Thus, the document is forwarded back to the document originator for approval.
- the document can be forwarded back to the input device 230 via link 5 and the network 250 .
- the document can be returned to the originator, or another party, if instructions appended to the original dictation and distribution information so indicate.
- the original dictation and distribution information could have been received over a telephone.
- the user could have specified in the dictation and distribution information that the completed document be returned to a particular e-mail address for review.
- the document management controller 170 would detect the presence of this routing information in the document returned from the transcription station 400 and route it to the user at the appropriate destination.
- the user via the input device 230 , then either approves or performs further edits or modifications to the document.
- the document is either returned to the transcription system 100 for further speech recognition and/or further processing by the transcription station 400 as previously discussed, or alternatively, returned to the transcription system 100 for distribution.
- the transcription system 100 receives, via link 5, the network 250, the I/O interface 110, the controller 120 and the memory 130, the approved document.
- the document management controller 170 determines if the document has been approved and forwards the document to the document distribution device 190 for distribution.
- the document distribution device 190 parses the document to determine the appropriate routing. For example, the document can indicate that its contents are to be forwarded to a first destination via e-mail, a second destination via fax and a third destination by a postal carrier.
- the document distribution device 190 in cooperation with the I/O interface 110 , the controller 120 and the memory 130 , routes the document, via network 250 and link 5 , to the document sink 240 , which in this case is an e-mail address.
- the document distribution device 190 prepares the document for delivery to a facsimile machine.
- the appropriate routing information is recovered, such as a telephone number, from the document.
- the facsimile is transmitted, at the direction of the direct output device 210 , to a document sink 240 , such as a facsimile machine.
- the document distribution device 190 controls the direct output device 210 in order to print a copy of the document.
- the document sink 240 in this instance is a printer.
- for postal delivery, the transcription system 100 would produce a finalized document including, for example, the completed document and an address label. This finalized document could then be delivered to a postal carrier for further delivery.
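The parsing step performed by the document distribution device 190 can be sketched as a dispatch over per-destination routing entries, mirroring the e-mail, fax and postal examples above (the entry keys and handler strings are assumptions for this sketch):

```python
def distribute(document: str, routing: list) -> list:
    """Parse routing entries and record how each copy is delivered."""
    log = []
    for dest in routing:
        if dest["type"] == "email":
            log.append("emailed to " + dest["address"])
        elif dest["type"] == "fax":
            # routing information, such as a telephone number, is
            # recovered from the routing entry
            log.append("faxed to " + dest["number"])
        elif dest["type"] == "post":
            # printed copy plus an address label for the postal carrier
            log.append("printed with label for " + dest["address"])
        else:
            raise ValueError("unknown destination type: " + dest["type"])
    return log

deliveries = distribute("final report", [
    {"type": "email", "address": "a@example.com"},
    {"type": "fax", "number": "555-0100"},
    {"type": "post", "address": "1 Main St"},
])
```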
- FIG. 2 illustrates an exemplary dictation station 300 .
- the dictation station 300 can be used as an exemplary input device 230 .
- the dictation station 300 comprises an I/O interface 310 , a controller 320 , a memory 330 , a user interface controller 340 , a storage device 350 , a dictation management controller 360 and a template storage device 370 , interconnected by link 5 .
- the dictation station 300 is also connected to one or more input devices 380 , display devices 390 and to a wired or wireless network.
- the input device 380 can be, for example, a keyboard, a mouse, a microphone, a personal digital assistant, a handheld analog or digital voice recorder, a digital audio compact disc, an analog magnetic tape, an interactive telephony application, personal computer, an interactive voice response system, or the like.
- the input device 380 can be any device capable of receiving information on behalf of the dictation station 300 .
- the display device 390 can be, for example, a computer monitor, a PDA display, a cellular phone display, voice prompts of a telephony application, the user interface of a personal recorder, or the like.
- the display device 390 can be any device capable of providing information, including both audio and/or visual information, from the dictation station 300 to a user.
- the dictation station 300 receives input from the input device 380 .
- the user interface can be provided to a user for recordation of dictation.
- the input device 380 can be a keyboard and mouse, and the display device 390 a monitor.
- the user is provided with, for example, a graphical user interface that allows for the management and recordation of dictation and distribution information.
- the user interface controller 340 can display on the display device 390 the various controls needed for recording dictation and associating supplemental information with that dictation.
- user interface controller 340 in cooperation with the template storage device 370 , can present to the user, via display device 390 , a template for dictation.
- This template can be dynamic.
- the template can have selectable portions that have predetermined contents for population of the template.
- the template can have various portions into which dictation can be input.
- the dictation and any associated distribution information is stored, with the aid of the I/O interface and controller 320 , in the storage device 350 .
- the dictation and distribution information is forwarded, via link 5 , and with the cooperation of the I/O interface 310 , the controller 320 and memory 330 , over a wired or a wireless network to the transcription system 100 .
- FIG. 3 illustrates an exemplary embodiment of a transcription station 400 according to this invention.
- the transcription station 400 comprises an I/O interface 410 , a controller 420 , memory 430 , a user interface controller 440 , a storage device 450 and a transcription management controller 460 interconnected by link 5 .
- the transcription station 400 is also connected to an input device 470 , a display device 480 and a wired or wireless network.
- the input device 470 can be, for example, a keyboard, a mouse, a microphone, a personal digital assistant, a handheld analog or digital voice recorder, a digital audio compact disc, an analog magnetic tape, an interactive telephony application, a personal computer, or the like.
- the input device 470 can be any device capable of receiving information on behalf of the transcription station 400.
- the display device 480 can be, for example, a computer monitor, a PDA display, a cellular phone display, voice prompts of a telephony application, the user interface of a personal recorder, or the like.
- the display device 480 can be any device capable of providing information from the transcription station 400 to a user.
- the transcription station 400 receives one or more of dictation information, distribution information and text corresponding to speech from the transcription system 100 .
- the transcription station 400 allows a user to transcribe received dictation, modify received text, add or modify formatting, create a template, modify distribution information, or the like.
- the transcription station 400 is connected to one or more input devices 470 and display devices 480 such that a user can interface with the received dictation information.
- the user interface controller 440 in cooperation with the I/O interface, the controller 420 and the memory 430 , provides the necessary interface via the display device 480 for the user to perform the necessary tasks.
- input device 470 can be a keyboard and the display device 480 a monitor.
- a user at transcription station 400 can input modifications to a document via input device 470 and, optionally, modify or supplement distribution information.
- the dictation may include instructions to alter the distribution information to, for example, add or delete recipients of the document.
- a user at the transcription station 400 would be able to modify this distribution information which would then be stored with an updated version of the document in the storage device 450 .
- upon completion of work at the transcription station 400, the transcription management controller 460 forwards, via link 5, and with the cooperation of the I/O interface 410 and the controller 420, the updated document back to the transcription system 100.
- FIGS. 4-10 illustrate exemplary user interfaces that a user may encounter during the dictation or transcription process.
- FIG. 4 illustrates an exemplary login screen that a user may encounter to enable access to the transcription system.
- a user could, for example, enter a user name and password that would allow the user access to the system which could, for example, store preferences, and keep track of previously created documents, templates, and the like.
- FIG. 5 illustrates an exemplary screen shot of a user interface a user may encounter upon logging onto the transcription system.
- the account information user interface 700 could provide a summary of all, or a portion of, activities the user may have with the transcription system 100 .
- the exemplary account information user interface 700 comprises a message center display portion 710, a status display portion 720, an account display portion 730, a document library interface display portion 740, and a job submission control display portion 750.
- any information can be displayed on the account information user interface that may be useful to a particular user.
- the user may, using the personalized content button 800 or the personalized layout button 810, customize the account information user interface to suit their particular environment.
- Exemplary environments include disciplines in the medical field, including radiology, cardiology, orthopedics, primary care, neurosurgery, oncology, and the like, as well as accounting, legal, court reporting, real estate, insurance, law enforcement and general business applications.
- the message center display portion 710 can provide access, via, for example, hyperlinks, to e-mail services and address books. Therefore, for example, if a user desires to view and/or edit an entry in an address book, the user would select, for example with the click of a mouse, the hyperlink 712. Upon selection of the hyperlink 712, an address book or comparable interface could be opened that would allow a user to access and/or modify contacts and their related information, such as destination routing information.
- the status display portion 720 provides the user with a summary of outstanding documents.
- the status display portion 720 includes, for example, three subsections. These subsections include, for example, a reviews pending display portion 722 , a documents pending transcription display portion 724 and a routing information display portion 726 .
- the reviews pending display portion 722 can, for example, summarize the documents that need approval by the user, and have already gone through at least one iteration of the transcription process.
- the documents pending transcription display portion 724 can display, for example, documents which may or may not have been routed through the speech recognition engine, and are awaiting further modifications at a transcription station.
- the routing information display portion 726 can display, for example, the delivery status of documents that have been approved by the user.
- the routing information display portion 726 can contain a list of documents indicating that some were faxed, some were e-mailed, or, for example, some have been printed.
- the account information display portion 730 summarizes the billing information for the particular user.
- the account display portion 730 can include an itemization of particular documents based on, for example, routing information, or other user editable criteria.
- the document library interface display portion 740 provides users with a summary of previously completed or partially completed documents.
- the document library interface display portion 740 can, for example, include a plurality of selectable subdirectories 742 which, in turn, can include a list of documents. By selecting a hyperlink associated with any one of the documents in the document library interface display portion 740 , a user can then at least one of, open or modify the document as desired.
- the job submission control display portion 750 allows a user to create a new document. For example, a user can select a document type in the document type display portion 752 which would provide the user with a template from which the document is generated. Additionally, the user may select routing information from the routing display portion 754 that is to be associated with a particular document. Then, this routing information can be populated with, for example, information from the address book which is selectable via the view/edit address book hyperlink 712 .
- upon selection of one or more of the document type and routing information, a user would select, for example, with the click of a mouse, the create new document button 760. This would provide the user with one or more interfaces specific to the selected document type and routing information.
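The job submission step can be pictured as follows (a sketch; the function name, key names and template text are hypothetical):

```python
def create_new_document(document_type: str, routing: list, templates: dict) -> dict:
    """Start a new job: the selected document type picks the template the
    document is generated from, and the selected routing information is
    attached to the new job."""
    if document_type not in templates:
        raise KeyError("no template for document type: " + document_type)
    return {
        "template": templates[document_type],
        "routing": list(routing),
        "status": "awaiting dictation",
    }

job = create_new_document(
    "business letter",
    ["legal@example.com"],
    {"business letter": "Dear {addressee},\n{body}\n{closing}"},
)
```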
- FIG. 6 illustrates an exemplary dictation user interface 900 .
- the dictation user interface 900 comprises a template selection portion 910 , a document display portion 920 , a multimedia controller 930 and, for example, a completed file display portion 940 . Additionally, the dictation user interface can have one or more buttons which, for example, allow the addition of a signature to the document 950 or allow printing of the document 960 .
- the template selection portion 910 allows a user to select a template onto which the dictation will be merged. Then, using the controls in the multimedia control display portion 930, a user records dictation. Upon completion of the dictation, the document is added to the completed files display portion 940.
- the document display portion 920 will be populated with the already transcribed text. Therefore, for example, if this was the second instance of a letter, the document display portion 920 would contain a first draft of the letter.
- FIG. 7 illustrates an exemplary job submission control user interface 1000 .
- the exemplary job submission control user interface 1000 contains selectable portions that assist a user in populating a template with information.
- the job submission control user interface 1000 comprises a document template type selection portion 1010 , a document distribution preference selection portion 1020 , a multimedia control interface 1030 , the functional control buttons 1040 and a field population display portion 1050 .
- the document template selection portion 1010 allows a user to select and/or modify or change a template. Upon selection of a particular template, the remainder of the job submission control user interface 1000 is populated based on the selected template.
- a document distribution preference selection portion 1020 is displayed which indicates that, for example, two individuals and the document library interface are to receive a copy of the document.
- the multimedia control portion 1030 allows a user to view, for example, the length of the dictation and current position within the dictation.
- the job submission control user interface 1000 contains one or more function buttons 1040 which allow, for example, a user to further customize the template by adding additional distribution locations, continuing dictation, aborting the document, or the like.
- the field selection display portion 1050 allows a user to populate portions of the template based on the document template type selection.
- the field selection display portion 1050 can include selectable menus for formatting or otherwise populating, for example, an address, a heading, a closing and a signature file.
- FIG. 8 illustrates an exemplary document library user interface 1100 .
- the document library user interface 1100 comprises a directory selection portion 1110 , a document display portion 1120 , and associated function buttons 1130 - 1180 .
- the directory display portion 1110 comprises, for example, a list of directories within the document library. Upon selection of a particular directory, the files saved within that directory are displayed in the file display portion 1120 . Then, one or more files, upon selection, can be further accessed, modified, or otherwise operated on in accordance with the function buttons 1130 - 1180 .
- the function button 1130 allows a document to be opened and displayed.
- the function button 1140 allows the document to be downloaded to, for example, a PDA.
- the send button 1150 allows a document to be sent to one or more destinations.
- the delete button 1160 allows deletion of a document.
- the rename button 1170 allows renaming of a document.
- the move button 1180 allows, for example, moving of a document to a different folder.
- FIG. 9 illustrates an exemplary transcription user interface which may be displayed at the transcription station 400 .
- the transcription user interface 1200 comprises a multimedia control portion 1210 , a workspace display portion 1220 , a work queue display portion 1230 , and one or more function buttons 1240 .
- the multimedia control 1210 allows a transcriptionist to play dictation and populate the workspace 1220 with that information.
- the work queue display portion 1230 shows a transcriptionist the jobs waiting for transcription services in a queue.
- the functional buttons 1240 allow a transcriptionist to view, for example, performance statistics, change account information, create or edit templates, or the like.
- FIG. 10 illustrates an exemplary job submission user interface in which the dictation has already been run through a speech recognition device or a transcription station, as appropriate.
- the job submission user interface 1300 comprises a document template type selection portion 1010 , a document distribution preference selection portion 1020 , a multimedia control interface 1030 , the functional control buttons 1040 , a draft document display portion 1310 , a status display portion 1320 and draft management buttons 1330 - 1380 .
- the document template selection portion 1010 allows a user to select and/or modify the draft document shown in the draft document display portion 1310 .
- the draft management buttons 1330 - 1380 allow a user to interact with a draft document and/or any attachments.
- the send button 1330 allows a user to send the draft document to one or more recipients.
- the save draft button 1340 allows a user to save modifications to a draft document for, for example, further editing at another time.
- the spell check button 1350 allows the user to spell check the document.
- the cancel button 1360 cancels the current task.
- the use signature check box 1370 allows a user to append a signature to the document and the edit attachments button 1380 allows a user to edit attachments, if any, associated with the document.
- FIG. 11 illustrates an exemplary embodiment of the operation of the transcription system according to this invention.
- control begins at step S100 and continues to step S110.
- in step S110, the dictation and distribution information is received.
- in step S120, a determination is made whether the user has a complete acoustical reference file. If the user does not have a complete acoustical reference file, control jumps to step S130. Otherwise, control continues to step S150, where speech recognition is performed. Control then continues to step S160.
- in step S130, the dictation and distribution information is forwarded to a transcription station.
- in step S140, the transcribed dictation is forwarded to the speech recognition engine for development of the acoustical reference file. Control then continues to step S160.
- in step S160, a determination is made whether proofing is required. If proofing is required, control jumps to step S170. Otherwise, control continues to step S180.
- in step S170, the document is proofed. Control then continues to step S180.
- in step S180, a determination is made whether additional instructions are present. If additional instructions are present, control jumps to step S190. Otherwise, control continues to step S210.
- in step S190, the document is forwarded to the transcription station.
- in step S200, the additional instructions are implemented.
- these additional instructions can include formatting instructions, routing instructions, template creation instructions, or the like. Control then continues to step S210.
- in step S210, a determination is made whether the document is to be returned to the originator for approval. If the document is to be returned, control jumps to step S220. Otherwise, control continues to step S270.
- in step S220, a determination is made whether the originator approved the document. If the originator approved the document, control jumps to step S230. Otherwise, control continues to step S270.
- in step S230, a determination is made whether edits to the document or associated information are required. If edits are required, control jumps to step S240. Otherwise, control continues to step S250.
- in step S240, the user edits the document and/or the associated information. Control then continues to step S250.
- in step S250, a determination is made whether the routing information is to be modified. If the routing information is to be modified, control jumps to step S260. Otherwise, control continues to step S270.
- in step S260, the routing information is edited. Control then continues to step S270.
- in step S270, the document and the associated information are stored.
- in step S280, the document is distributed in accordance with the distribution information. Control then continues to step S290, where the control sequence ends.
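The control flow of FIG. 11 can be sketched in Python as follows. This is an illustrative sketch only; every function, field and value name is a hypothetical placeholder, not part of the disclosed system.

```python
def process_dictation(job):
    """Illustrative sketch of the FIG. 11 flow (steps S110-S280).
    All names and data shapes are hypothetical, not from the patent."""
    if job["has_reference_file"]:                    # S120
        document = "asr:" + job["audio"]             # S150: speech recognition
    else:
        document = "typed:" + job["audio"]           # S130: manual transcription
        job["reference_file_updated"] = True         # S140: develop reference file
    if job.get("needs_proofing"):                    # S160/S170
        document += " [proofed]"
    for instruction in job.get("instructions", []):  # S180-S200
        document += f" [{instruction}]"
    # S210-S260 (originator approval and edits) omitted for brevity;
    # S270/S280: the document would then be stored and distributed.
    return document

job = {"audio": "exam summary", "has_reference_file": True,
       "needs_proofing": True, "instructions": ["format:letter"]}
print(process_dictation(job))
```

Note how the branch at S120 mirrors the prose above: a user with a complete acoustical reference file goes through automatic recognition, while a new user's manually transcribed job is also used to train the reference file.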
- FIG. 12 is a flowchart outlining one exemplary embodiment of a method for interfacing with the transcription system according to this invention.
- control begins in step S300 and continues to step S310.
- in step S310, a determination is made whether status information is to be reviewed. If status information is to be reviewed, control continues to step S320. Otherwise, control jumps to step S340.
- in step S320, the status information is determined.
- in step S330, the status information is displayed. Control then continues to step S340.
- in step S340, a determination is made whether account information is to be reviewed. If account information is to be reviewed, control continues to step S350. Otherwise, control jumps to step S370. In step S350, the account information is determined. Next, in step S360, the account information is displayed. Control then continues to step S370.
- in step S370, a determination is made whether the document library interface is to be accessed. If the document library interface is to be accessed, control continues to step S380. Otherwise, control jumps to step S390.
- in step S380, the document library interface is initiated.
- the document library interface allows, for example, access to transcribed dictations which have been saved as documents. These documents may be, for example, opened, downloaded, sent, deleted, renamed, or moved to, for example, a different storage location within the document library. Control then continues to step S390.
- in step S390, a determination is made whether the job submission control should be accessed. If the job submission control is to be accessed, control continues to step S400. Otherwise, control jumps to step S410, where the control sequence ends.
- in step S400, the job submission control interface is initiated. Control then continues to step S410, where the control sequence ends.
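The sequence of checks in FIG. 12 amounts to a simple menu dispatcher; a minimal sketch follows, with all selection strings and action labels being hypothetical placeholders:

```python
def management_interface(selections):
    """Sketch of the FIG. 12 interface flow (steps S310-S400).
    Selection names are hypothetical placeholders."""
    actions = []
    if "status" in selections:   # S310-S330: determine and display status
        actions.append("display status")
    if "account" in selections:  # S340-S360: determine and display account
        actions.append("display account")
    if "library" in selections:  # S370/S380: document library interface
        actions.append("initiate document library interface")
    if "submit" in selections:   # S390/S400: job submission control
        actions.append("initiate job submission control")
    return actions

print(management_interface(["status", "submit"]))
```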
- FIG. 13 illustrates in greater detail the operation of the job submission control interface step of FIG. 12.
- control begins in step S500 and continues to step S510.
- in step S510, a user is queried whether a template is to be used. If a template is to be used, control continues to step S520. Otherwise, control jumps to step S550.
- in step S520, a determination is made whether an existing template is to be used for the dictation. If an existing template is to be used for the dictation, control continues to step S530. Otherwise, control continues to step S540.
- in step S530, a template is selected. Control then continues to step S550.
- in step S540, the new dictation is used to create a template. Control then continues to step S550.
- in step S550, routing information is selected.
- in step S560, the user's dictation is recorded.
- in step S570, the dictation and any associated information are stored. Control then continues to step S580.
- in step S580, the dictation and distribution information is forwarded to the transcription system. Control then continues to step S590, where the control sequence ends.
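The job submission branching of FIG. 13 can be sketched as follows; the function and the returned record are hypothetical illustrations, not a format defined by this invention:

```python
def submit_job(use_template, use_existing, dictation):
    """Sketch of the FIG. 13 job submission flow (steps S510-S580).
    Names are hypothetical placeholders."""
    if not use_template:          # S510: no template requested
        template = None
    elif use_existing:            # S520/S530: select an existing template
        template = "existing template"
    else:                         # S540: create a template from the dictation
        template = "template from: " + dictation
    return {                      # S550-S580: route, record, store, forward
        "template": template,
        "routing": "selected routing info",
        "dictation": dictation,
    }

print(submit_job(True, False, "patient notes")["template"])
```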
- FIG. 14 is a flowchart outlining an exemplary embodiment of a method of operation of the transcription station according to this invention. Specifically, control begins in step S600 and continues to step S610.
- in step S610, the dictation and distribution information is received.
- in step S620, the dictation is reviewed.
- in step S630, a determination is made whether modifications to the dictation and/or distribution information are desired. For example, supplemental instructions found in the dictation but not transcribed by the speech recognition engine can be implemented at the transcription station. If modifications are desired, control jumps to step S640. Otherwise, control continues to step S650.
- in step S640, edits to the dictation and/or distribution information are performed. Control then continues to step S650.
- in step S650, the modifications are saved. Control then continues to step S660, where the dictation and distribution information is returned to the transcription system. Control then continues to step S670, where the control sequence ends.
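The review pass of FIG. 14 reduces to applying optional edits and returning the updated pair; a minimal sketch, with hypothetical names:

```python
def transcription_station_pass(dictation, distribution, edits=None):
    """Sketch of the FIG. 14 review flow (steps S610-S660).
    Names are hypothetical placeholders."""
    # S620/S630: review the dictation; apply any desired edits (S640)
    if edits:
        dictation = edits.get("dictation", dictation)
        distribution = edits.get("distribution", distribution)
    # S650/S660: save and return to the transcription system
    return dictation, distribution

result = transcription_station_pass("drft text", "fax:555-0100",
                                    edits={"dictation": "draft text"})
print(result)
```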
- the transcription and associated systems are preferably implemented either on a single programmed general-purpose computer or on separate programmed general-purpose computers.
- the transcription system dictation and transcription stations can also be implemented on a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmable logic device such as a PLD, PLA, FPGA, PAL, or the like.
- any device capable of implementing a finite state machine that is in turn capable of implementing the flowcharts illustrated in FIGS. 11-14 can be used to implement the transcription system according to this invention.
- the disclosed method may be readily implemented in software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation hardware platforms.
- the disclosed transcription system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with this invention depends on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.
- the transcription systems and methods described above can be readily implemented in hardware and/or software using any known or later-developed systems or structures, devices and/or software by those skilled in the applicable art without undue experimentation from the function description provided herein together with a general knowledge of the computer arts.
- the disclosed methods may be readily implemented as software executed on a programmed general purpose computer, a special purpose computer, a microprocessor, or the like.
- the methods and systems of this invention can be implemented as a routine embedded on a personal computer, such as a Java® or CGI script, as a resource residing on a server or graphics workstation, or as a routine embedded in a dedicated transcription system, a web browser, a cellular telephone, a PDA, a dedicated dictation or transcription system, or the like.
- the transcription system can also be implemented by physically incorporating the system and method into a software and/or hardware system, such as the hardware and software systems of a graphics workstation or dedicated transcription system.
Abstract
The systems and methods described herein allow dictation and associated routing and formatting information to be forwarded to a transcription system. The transcription system converts the information into a document. The additional information associated with the dictation is then applied to the document to ensure proper formatting, routing, or the like. The completed document is returned to the original dictator for review and proofing. Upon approval, the document is distributed via the transcription system in accordance with distribution information associated with the document.
Description
- 1. Field of the Invention
- This invention relates to a transcription system. In particular, this invention relates to a transcription system over a distributed network.
- 2. Description of Related Art
- A plethora of systems are available for converting dictation to text. For example, speech-to-text programs allow a user to speak into a computer. The computer compares the received spoken sounds to previously identified sounds, and therefore can convert the spoken utterances into text.
- Alternatively, and more traditionally, a user dictates onto a recordable media. This recordable media is forwarded to a transcriptionist who listens to the dictation and manually converts the dictation into a document. The document can then be returned to the original user for editing, or the like. Upon completion of the editing, or alternatively, if the document is in final form, the document is manually forwarded to its destination.
- While existing dictation and transcription systems work well in particular instances, they are cumbersome and fail to take advantage of current technologies.
- The systems and methods of this invention receive information, such as a human voice, which is converted into a digital file. The digital file is packaged with information. This information can include, for example, information about the file, the speaker or user, formatting options, destination information, template creation information, or the like. The digital file and associated information are then transmitted via a distributed network to a transcription system. The transcription system converts the digital file into a document, taking account of any supplemental information that may be associated with the digital file. The resulting document is associated with the digital file and any associated information, and the updated document is returned to the original creator. The original creator has the option of reading, reviewing, approving, and/or revising the document. If modifications are necessary, the process repeats itself. Otherwise, an approval of the document results in the system forwarding the document, and associated information, to the appropriate destination.
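As a concrete illustration of such a packaged digital file, the record below shows one possible shape. Every field name and value here is a hypothetical example, not a format defined by this invention:

```python
# Hypothetical sketch of a dictation package: the digital voice file
# bundled with the supplemental information described above.
dictation_package = {
    "audio_file": "dictation-0042.wav",            # the recorded voice
    "speaker": "user-17",                          # information about the user
    "formatting": {"style": "business_letter"},    # formatting options
    "template": "examination_summary",             # template (creation) info
    "distribution": ["email:records@example.com",  # destination information
                     "fax:+1-555-0100"],
}
print(sorted(dictation_package))
```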
- The systems and methods of this invention provide a transcription system over a distributed network.
- This invention separately provides systems and methods for assembling a digital file that contains at least dictation information and additional information.
- This invention separately provides systems and methods that allow a user to interface with a dictation and/or transcription tool via a user interface.
- This invention additionally provides systems and methods that allow a user to automatically assemble and subsequently distribute a document over a distributed network.
- This invention additionally provides systems and methods that allow for dynamic development of a document.
- This invention additionally provides systems and methods that allow for dynamic development of a document based on a template.
- The transcription systems and methods of this invention use a combination of accumulated digital information and user interfaces to provide for dictation, transcription and subsequent document delivery services. The systems and methods of this invention receive dictation, and additional information, such as routing information and formatting information. The dictation and associated information are forwarded to a transcription system that converts the information into a document. This document is returned to the originator for review, modification and/or approval. Once approved, the document is routed to the appropriate destination based on the associated routing information.
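Routing an approved document based on its associated routing information can be sketched as follows; the channel prefixes and labels are hypothetical placeholders used only for illustration:

```python
def distribute(destinations):
    """Sketch of routing an approved document per its distribution
    information. Channel prefixes are hypothetical placeholders."""
    dispatched = []
    for destination in destinations:
        channel, _, address = destination.partition(":")
        if channel == "email":
            dispatched.append(("e-mail", address))
        elif channel == "fax":
            dispatched.append(("facsimile", address))
        elif channel == "post":
            dispatched.append(("printed copy", address))  # for a postal carrier
    return dispatched

print(distribute(["email:a@example.com", "fax:555-0100", "post:123 Main St"]))
```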
- These and other features and advantages of this invention are described in or are apparent from the following detailed description of preferred embodiments.
- The preferred embodiments of the invention will be described in detail, with reference to the following figures wherein:
- FIG. 1 is a functional block diagram illustrating an exemplary transcription system according to this invention;
- FIG. 2 is a functional block diagram illustrating an exemplary dictation station according to this invention;
- FIG. 3 is a functional block diagram of an exemplary transcription station according to this invention;
- FIG. 4 is a screen shot of an exemplary dictation station user interface according to this invention;
- FIG. 5 is a second screen shot of an exemplary dictation station user interface according to this invention;
- FIG. 6 is a third screen shot of an exemplary dictation station user interface according to this invention;
- FIG. 7 is a fourth screen shot of an exemplary dictation station user interface according to this invention;
- FIG. 8 is a fifth screen shot representing an exemplary document library according to this invention;
- FIG. 9 is a screen shot of an exemplary transcription station user interface according to this invention;
- FIG. 10 is a screen shot of an exemplary transcription station user interface according to this invention;
- FIG. 11 is a flow chart outlining one exemplary embodiment of a method for performing transcription according to this invention;
- FIG. 12 is a flow chart outlining an exemplary method for interfacing with the dictation management system according to this invention;
- FIG. 13 is a flow chart outlining in greater detail the initiate job submission control interface of FIG. 12;
- FIG. 14 is a flow chart outlining an exemplary embodiment of a method for interfacing with the transcription station according to this invention.
- By combining dynamic routing of information with dictation information, the systems and methods of this invention streamline the entire dictation to delivery chain of events. Furthermore, by using a dedicated transcription management system, a document based on a dictation can be returned to an originator with greater efficiency.
- The systems and methods in this invention allow a user to record dictation and supplement this dictation with additional information. This additional information can range from routing information, categorical information, formatting options, to template creation instructions, or the like. In general, the dictation can be enhanced with any supplemental information and the transcription system is capable of integrating this additional information and returning a document based on this information to the original dictator. However, it is to be appreciated that a completed dictation need not be returned to the original dictator, if, for example, the additional information associated with the dictation specifies an alternate destination.
- A user creates a dictation. This dictation is a recording of the user's voice. Associated with this dictation may be additional information that can be selected from, for example, a template based on established information. For example, the template may be used to create a summary of a doctor's examination. Therefore, the template could have selectable portions for default characteristics, such as, runny nose, fever, or the like. Additionally, the template can have predetermined formats, for example, a business letter, with predetermined headings, closing portions, and selectable address and formatting characteristics.
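A template with selectable default portions, following the doctor's-examination example above, can be sketched as below; all template text and field names are hypothetical:

```python
# Minimal sketch of a dictation template with selectable default portions,
# as in the doctor's-examination example above. All strings are hypothetical.
TEMPLATE = "Patient presents with {symptoms}. {closing}"
DEFAULTS = {"runny_nose": "a runny nose", "fever": "a fever"}

def populate(selected, closing="Follow up as needed."):
    """Populate the template from the user's selected default portions."""
    symptoms = " and ".join(DEFAULTS[key] for key in selected)
    return TEMPLATE.format(symptoms=symptoms, closing=closing)

print(populate(["runny_nose", "fever"]))
```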
- The systems and methods of this invention also allow a user to fully manage all dictations within the transcription system. Specifically, the user is provided with a dictation management interface that allows access to, for example, the status of all dictation, account information, destination information, and access to stored documents.
- FIG. 1 illustrates an exemplary embodiment of components of the
transcription system 100. Thetranscription system 100 comprises an I/O,interface 110, acontroller 120, amemory 130, aspeech recognition device 140, atemplate storage device 150, aninput management controller 160, adocument management controller 170, a speechrecognition support device 180, adocument distribution device 190, adocument storage device 200 and adirect output device 210 all interconnected bylink 5. Thetranscription system 100 is also connected to at least one distributednetwork 250 which may or may not also be connected to one or more other transcription systems or other distributed networks, as well as one ormore input devices 230 and document sinks 240. - While the exemplary embodiment illustrated in FIG. 1 shows the
transcription system 100 and associated components collocated, it is to be appreciated that the various components of thetranscription system 100 can be located at distant portions of a distributed network, such as a local area network, a wide area network, an intranet and/or the internet or within a dictation or transcription station. Thus, it should be appreciated that the components of thetranscription system 100 can be combined into one device or collocated on a particular node of a distributed network. As will be appreciated from the following description, and for reasons of computational efficiency, the components of thetranscription system 100 can be arranged at any location within a distributed network without affecting the operation of the system. - Furthermore, the
links 5 can be a wired or a wireless link or any known or later developed element(s) that is capable of supplying electronic data to and from the connected elements. - In operation, dictation information associated with, for example, routing information, formatting information, categorical information or the like, is received from
input device 230. Theinput device 230 can be, for example, a telephone, a personal digital assistant, a cellular phone, a handheld voice recorder, a personal computer, a streaming media digital audio file, digital audio compact disc, or analog magnetic tape or the like. - FIGS. 1-3 illustrate the transcription system, dictation station and transcription station, respectively. In operation, dictation and distribution information is received from the
input device 230. This dictation and distribution information can include, but is not limited to, the actual speech which is to be converted to text, formatting options for the document, identification of a template, template options, including but not limited to static measurements, optional segments of text, numeric values for variables, or the like, categorical information, including but not limited to topics like legal, medical, or the like, priority information, and routing or distribution information for the completed document. The dictation and distribution information is received in thetranscription system 100 vialink 5 and thenetwork 250 and, with the aid of the I/O interface 110, thecontroller 120 and thememory 130, stored in thedocument storage device 200. Upon receipt, thetranscription system 100 determines if a complete acoustical reference file has either been appended to the distribution and dictation information, or is present in the speechrecognition support device 180. The acoustical reference file allows thespeech recognition device 140 to perform speech recognition on the received dictation information. - If an acoustical reference file is present in the speech
recognition support device 180, thespeech recognition device 140, with the aid ofcontroller 120, thememory 130 and the acoustical reference file stored in the speechrecognition support device 180 converts the dictation information into text. This text is associated with the original dictation and distribution information is forwarded, vialink 5 andnetwork 250 to thetranscription station 400 for approval, correction and/or modification. - Alternatively, if an acoustical reference file is not present in the speech
recognition support device 180 for the particular user, and an acoustical reference file has not been received with the dictation and distribution information, the dictation and distribution information is forwarded, vialink 5 andnetwork 250 to thetranscription station 400. - Alternatively, and depending on the type of
input device 230, thetranscription system 100 may directly interface with theinput device 230. For example, theinput device 230 could be a telephone. In this instance, a user would place a call via, for example, thenetwork 250 andlink 5 to thetranscription system 100. Thetranscription 100, with the aid of theinput management controller 160 could guide a user, for example, through a set of key-tone selectable options, to create dictation and distribution information which would then be stored in thedocument storage device 200. For example, upon thetranscription system 100 receiving a call from theinput device 230, which is a telephone, theinput management controller 160, for example, queries the user as to whether a template should be used to create this document. If a template is to be used, a template can be retrieved, for example, with the selection of a predetermined keystroke, from thetemplate storage device 150. Then, for example, portions of the template can be selected, and optionally populated, with various combinations of keystrokes. Additionally, thetranscription system 100 can, at the direction of theinput device 230, record dictation straight from theinput device 230. Then, as previously discussed, the dictation and distribution information is stored in thedocument storage device 200. - After reception of the dictation and distribution information, and after speech-to-text conversion has taken place by the
speech recognition device 140, if appropriate, thedocument storage device 200, in corporation with thedocument management controller 170, the I/O interface 110, the controller, 120 andmemory 130, forwards the dictation and distribution information, as well as the converted text, to thetranscription station 400. - The
transcription station 400 proofs the document and performs any necessary modifications based on information that may be associated with or contained in the dictation and distribution information. Upon completion of any modifications, thetranscription station 400 returns to thetranscription system 100, via thenetwork 250 andlink 5, the completed document. Thedocument management controller 170 recognizes that the original dictation and distribution information has been supplemented with the text of the dictation and has been reviewed by thetranscription station 400. Thus, the document is forwarded back to the document originator for approval. - For example, the document, with the aid of the I/
O interface 1110 thecontroller 120 and thememory 130, can be forwarded back to theinput device 230 vialink 5 and thenetwork 250. Alternatively, the document can be returned to the originator, or another party, if instructions appended to the original dictation and distribution information so indicate. For example, the original dictation and distribution information could have been received over a telephone. However, the user could have specified in the dictation and distribution information that the completed document be returned to a particular e-mail address for review. Thus, thedocument management controller 170 would detect the presence of this routing information in the document returned from thetranscription station 400 and route it to the user at the appropriate destination. - The user, via the
input device 230, then either approves or performs further edits or modifications to the document. Depending on the extent of these modifications, the document is either returned to thetranscription system 100 for further speech recognition and/or further processing by thetranscription station 400 as previously discussed, or alternatively, returned to thetranscription system 100 for distribution. - If the document is approved, the
transcription system 100 receives, vialink 5 thenetwork 250, the I/O interface 110, thecontroller 120 andmemory 130, the approved document. Thedocument management controller 170 determines if the document has been approved and forwards the document to thedocument distribution device 190 for distribution. Thedocument distribution device 190 parses the document to determine the appropriate routing. For example, the document can indicate that its contents are to be forwarded to a first destination via e-mail, a second destination via fax and a third destination by a postal carrier. - In the first instance, the
document distribution device 190, in cooperation with the I/O interface 110, thecontroller 120 and thememory 130, routes the document, vianetwork 250 andlink 5, to thedocument sink 240, which in this case is an e-mail address. In the second instance, thedocument distribution device 190 prepares the document for delivery to a facsimile machine. Thus, in cooperation with thedirect output device 210, the appropriate routing information is recovered, such as a telephone number, from the document. Thus, and again in cooperation with the I/O interface 110, thecontroller 120, thememory 130 and overnetwork 250 andlink 5, the facsimile is transmitted, at the direction of thedirect output device 210, to adocument sink 240, such as a facsimile machine. - In the third instance, the
document distribution device 190 controls thedirect output device 210 in order to print a copy of the document. Thus, thedocument sink 240 in this instance, is a printer. Then, for example, thetranscription system 100 would produce a document for example, including, the completed document and an address label. This finalized document could then be delivered to a postal carrier for further delivery. - FIG. 2 illustrates an
exemplary dictation station 300. Thedictation station 300 can be used as anexemplary input device 230. Thedictation station 300 comprises an I/O interface 310, a controller 320, a memory 330, auser interface controller 340, astorage device 350, adictation management controller 360 and atemplate storage device 370, interconnected bylink 5. Thedictation station 300 is also connected to one ormore input devices 380,display devices 390 and to a wired or wireless network. - The
input device 380 can be, for example, a keyboard, a mouse, a microphone, a personal digital assistant, a handheld analog or digital voice recorder, a digital audio compact disc, an analog magnetic tape, an interactive telephony application, personal computer, an interactive voice response system, or the like. In general, theinput device 380 can be any device capable of receiving information on behalf of thedictation station 300. - The
display device 390 can be, for example, a computer monitor, a PDA display, a cellular phone display, voice prompts of a telephony application, the user interface of a personal recorder, or the like. In general, thedisplay device 390 can be any device capable of providing information, including both audio and/or visual information, from thedictation station 300 to a user. - In operation, the
dictation station 300 receives input from theinput device 380. For example, the user interface can be provided to a user for recordation of dictation. Thus, for example, theinput device 380 can be a keyboard and mouse, and thedisplay device 390 and a monitor. In this exemplary embodiment, the user is provided with, for example, a graphical user interface that allows for the management and recordation of dictation and distribution information. Thus, upon initialization of thedictation station 300, the user is provided with, for example, a user interface that allows for management and recordation of dictation and distribution information. For example, theuser interface controller 340 can display on thedisplay device 390 the various controls needed for recording dictation and associating supplemental information with that dictation. - Additionally,
user interface controller 340, in cooperation with thetemplate storage device 370, can present to the user, viadisplay device 390, a template for dictation. This template can be dynamic. For example, the template can have selectable portions that have predetermined contents for population of the template. Additionally, for example, the template can have various portions into which dictation can be input. Thus, upon completion of the dictation, the dictation and any associated distribution information is stored, with the aid of the I/O interface and controller 320, in thestorage device 350. Then, at the direction of thedictation management controller 360, the dictation and distribution information is forwarded, vialink 5, and with the cooperation of the I/O interface 310, the controller 320 and memory 330, over a wired or a wireless network to thetranscription system 100. - FIG. 3 illustrates an exemplary embodiment of a
transcription station 400 according to this invention. The transcription station 400 comprises an I/O interface 410, a controller 420, memory 430, a user interface controller 440, a storage device 450 and a transcription management controller 460 interconnected by link 5. The transcription station 400 is also connected to an input device 470, a display device 480 and a wired or wireless network. - The
input device 380 can be, for example, a keyboard, a mouse, a microphone, a personal digital assistant, a handheld analog or digital voice recorder, a digital audio compact disc, an analog magnetic tape, an interactive telephony application, a personal computer, or the like. In general, the input device 380 can be any device capable of receiving information on behalf of the dictation station 300. - The
transcription station 400 receives one or more of dictation information, distribution information and text corresponding to speech from the transcription system 100. The transcription station 400 allows a user to transcribe received dictation, modify received text, add or modify formatting, create a template, modify distribution information, or the like. Thus, similar to the dictation station 300, the transcription station 400 is connected to one or more input devices 470 and display devices 480 such that a user can interface with the received dictation information. Thus, the user interface controller 440, in cooperation with the I/O interface 410, the controller 420 and the memory 430, provides the necessary interface via the display device 480 for the user to perform the necessary tasks. For example, the input device 470 can be a keyboard and the display device 480 a monitor. Thus, upon receipt of dictation and distribution information, a user at transcription station 400 can input modifications to a document via input device 470 and, optionally, modify or supplement distribution information. For example, the dictation may include instructions to alter the distribution information to, for example, add or delete recipients of the document. Thus, a user at the transcription station 400 would be able to modify this distribution information, which would then be stored with an updated version of the document in the storage device 450. - Upon completion of work at the
transcription station 400, the transcription management controller 460 forwards, via link 5, and with the cooperation of the I/O interface 410 and controller 420, the updated document back to the transcription system 100. - This process of receiving dictation, modifying and/or proofing the resulting document, and returning that document to the originator for approval continues until a final document has been approved. Upon approval of the document, and as previously discussed, the
transcription system 100 distributes the document in accordance with the distribution information associated with the document. - FIGS. 4-10 illustrate exemplary user interfaces that a user may encounter during the dictation or transcription process. For example, FIG. 4 illustrates an exemplary login screen that a user may encounter to enable access to the transcription system. In particular, a user could, for example, enter a user name and password that would allow the user access to the system which could, for example, store preferences, and keep track of previously created documents, templates, and the like.
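The receive, proof, and return-for-approval cycle described above can be sketched in a few lines of Python. This is a minimal illustration only; the function names and the round limit are assumptions for the sketch, not part of the disclosed system:

```python
# Sketch of the iterative approval loop: a draft cycles between the
# transcription station and the originator until approved, then is
# distributed per its distribution information.
def approval_loop(draft, revise, approved_by_originator, distribute, max_rounds=10):
    for _ in range(max_rounds):
        if approved_by_originator(draft):
            distribute(draft)          # route per distribution information
            return draft
        draft = revise(draft)          # transcriptionist edits/proofs the draft
    raise RuntimeError("no approval within round limit")

# Example: the originator approves once the correction has been applied.
sent = []
final = approval_loop(
    "draft v1",
    revise=lambda d: d + " corrected",
    approved_by_originator=lambda d: "corrected" in d,
    distribute=sent.append,
)
```

In this toy run, the draft is revised once, approved on the second pass, and handed to the distribution step exactly once.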
- In particular, FIG. 5 illustrates an exemplary screen shot of a user interface a user may encounter upon logging onto the transcription system. Specifically, the account
information user interface 700 could provide a summary of all, or a portion of, the activities the user may have with the transcription system 100. The exemplary account information user interface 700 comprises a message center display portion 710, a status display portion 720, an account display portion 730, a document library interface display portion 740, and a job submission control display portion 750. However, it should be appreciated that, in general, any information that may be useful to a particular user can be displayed on the account information user interface. For example, the user may, using the personalized content button 800 or the personalized layout button 810, customize the account information user interface to suit their particular environment. Exemplary environments include disciplines in the medical field, including radiology, cardiology, orthopedics, primary care, neurosurgery and oncology, as well as accounting, legal, court reporting, real estate, insurance, law enforcement, general business applications, and the like. - The message
center display portion 710 can provide access, via, for example, hyperlinks, to e-mail services and address books. Therefore, for example, if a user desires to view and/or edit an entry in an address book, the user would select, for example with the click of a mouse, the hyperlink 712. Upon selection of the hyperlink 712, an address book or comparable interface could be opened that would allow a user to access and/or modify contacts and their related information, such as destination routing information. - The
status display portion 720 provides the user with a summary of outstanding documents. In particular, the status display portion 720 includes, for example, three subsections. These subsections include, for example, a reviews pending display portion 722, a documents pending transcription display portion 724 and a routing information display portion 726. The reviews pending display portion 722 can, for example, summarize the documents that need approval by the user and have already gone through at least one iteration of the transcription process. The documents pending transcription display portion 724 can display, for example, documents which may or may not have been routed through the speech recognition engine and are awaiting further modifications at a transcription station. The routing information display portion 726 can display, for example, the delivery status of documents that have been approved by the user. For example, the routing information display portion 726 can contain a list of documents indicating that some were faxed, some were e-mailed, or, for example, some have been printed. - The account
information display portion 730 summarizes the billing information for the particular user. For example, the account display portion 730 can include an itemization of particular documents based on, for example, routing information or other user-editable criteria. - The document library
interface display portion 740 provides users with a summary of previously completed or partially completed documents. The document library interface display portion 740 can, for example, include a plurality of selectable subdirectories 742 which, in turn, can include a list of documents. By selecting a hyperlink associated with any one of the documents in the document library interface display portion 740, a user can then open and/or modify the document as desired. - The job submission
control display portion 750 allows a user to create a new document. For example, a user can select a document type in the document type display portion 752, which would provide the user with a template from which the document is generated. Additionally, the user may select routing information from the routing display portion 754 that is to be associated with a particular document. Then, this routing information can be populated with, for example, information from the address book, which is selectable via the view/edit address book hyperlink 712. - Accordingly, upon selection of one or more of the document type and routing information, a user would select, for example, with the click of a mouse, the create
new document button 760. This would provide the user with one or more interfaces specific to the selected document type and routing information. - FIG. 6 illustrates an exemplary
dictation user interface 900. The dictation user interface 900 comprises a template selection portion 910, a document display portion 920, a multimedia controller 930 and, for example, a completed file display portion 940. Additionally, the dictation user interface can have one or more buttons which, for example, allow the addition of a signature to the document 950 or allow printing of the document 960. - The
template display portion 910 allows a user to select a template onto which the dictation will be merged. Then, using the controls in the multimedia control display portion 930, a user records dictation. Upon completion of the dictation, the document is added to the completed files display portion 940. - However, if the document has already been transcribed, and it has been returned to the user for review, the
document display portion 920 will be populated with the already transcribed text. Therefore, for example, if this were the second instance of a letter, the document display portion 920 would contain a first draft of the letter. - FIG. 7 illustrates an exemplary job submission
control user interface 1000. The exemplary job submission control user interface 1000 contains selectable portions that assist a user in populating a template with information. For example, the job submission control user interface 1000 comprises a document template type selection portion 1010, a document distribution preference selection portion 1020, a multimedia control interface 1030, functional control buttons 1040 and a field population display portion 1050. The document template selection portion 1010 allows a user to select and/or modify or change a template. Upon selection of a particular template, the remainder of the job submission control user interface 1000 is populated based on the selected template. For example, in the exemplary job submission control user interface 1000, a document distribution preference selection portion 1020 is displayed which indicates that, for example, two individuals and the document library interface are to receive a copy of the document. Additionally, the multimedia control portion 1030 allows a user to view, for example, the length of the dictation and the current position within the dictation. The job submission control user interface 1000 contains one or more function buttons 1040 which allow, for example, a user to further customize the template by adding additional distribution locations, continuing dictation, aborting the document, or the like. - The field
selection display portion 1050 allows a user to populate portions of the template based on the document template type selection. For example, the field selection display portion 1050 can include selectable menus for formatting or otherwise populating, for example, an address, a heading, a closing and a signature file. - FIG. 8 illustrates an exemplary document
library user interface 1100. The document library user interface 1100 comprises a directory selection portion 1110, a document display portion 1120, and associated function buttons 1130-1180. - The
directory display portion 1110 comprises, for example, a list of directories within the document library. Upon selection of a particular directory, the files saved within that directory are displayed in the file display portion 1120. Then, one or more files, upon selection, can be further accessed, modified, or otherwise operated on in accordance with the function buttons 1130-1180. In particular, the function button 1130 allows a document to be opened and displayed. Alternatively, the function button 1140 allows the document to be downloaded to, for example, a PDA. The send button 1150 allows a document to be sent to one or more destinations. The delete button 1160 allows deletion of a document, the rename button 1170 allows renaming of a document, and the move button 1180 allows, for example, moving of a document to a different folder. - FIG. 9 illustrates an exemplary transcription user interface which may be displayed at the
transcription station 400. The transcription user interface 1200 comprises a multimedia control portion 1210, a workspace display portion 1220, a work queue display portion 1230, and one or more function buttons 1240. - The
multimedia control 1210 allows a transcriptionist to play dictation and populate the workspace 1220 with that information. The work queue display portion 1230 shows a transcriptionist the jobs waiting for transcription services in a queue. The function buttons 1240 allow a transcriptionist to view, for example, performance statistics, change account information, create or edit templates, or the like. - FIG. 10 illustrates an exemplary job submission user interface for which the dictation has already been run through a speech recognition device or the transcription station, as appropriate. The job
submission user interface 1300 comprises a document template type selection portion 1010, a document distribution preference selection portion 1020, a multimedia control interface 1030, functional control buttons 1040, a draft document display portion 1310, a status display portion 1320 and draft management buttons 1330-1380. The document template selection portion 1010 allows a user to select and/or modify the draft document shown in the draft document display portion 1310. - The draft management buttons 1330-1380 allow a user to interact with a draft document and/or any attachments. Specifically, the send button 1330 allows a user to send the draft document to one or more recipients. The save draft button 1340 allows a user to save modifications to a draft document for, for example, further editing at another time. The spell check button 1350 allows the user to spell check the document. The cancel
button 1360 cancels the current task. The use signature check box 1370 allows a user to append a signature to the document, and the edit attachments button 1380 allows a user to edit attachments, if any, associated with the document. - FIG. 11 illustrates an exemplary embodiment of the operation of the transcription system according to this invention. In particular, control begins at step S100 and continues to step S110. In step S110, the dictation and distribution information is received. Next, in step S120, a determination is made whether the user has a complete acoustical reference file. If the user does not have a complete acoustical reference file, control jumps to step S130. Otherwise, control continues to step S150 where speech recognition is performed. Control then continues to step S160.
- In step S130, the dictation and distribution information is forwarded to a transcription station. Next, in step S140, the transcribed dictation is forwarded to the speech recognition engine for development of the acoustical reference file. Control then continues to step S160.
- In step S160, a determination is made whether proofing is required. If proofing is required, control jumps to step S170. Otherwise, control continues to step S180.
- In step S170, the document is proofed. Control then continues to step S180. In step S180, a determination is made whether additional instructions are present. If additional instructions are present, control jumps to step S190. Otherwise, control continues to step S210.
- In step S190, the document is forwarded to the transcription station. Next, in step S200, the additional instructions are implemented. For example, these additional instructions can include formatting instructions, routing instructions, template creation instructions, or the like. Control then continues to step S210.
- In step S210, a determination is made whether the document is to be returned to the originator for approval. If the document is to be returned, control jumps to step S220. Otherwise, control continues to step S270.
- In step S220, a determination is made whether the originator approved the document. If the originator approved the document, control jumps to step S230. Otherwise, control continues to step S270.
- In step S230, a determination is made whether edits to the document or associated information are required. If edits are required, control jumps to step S240; Otherwise, control continues to step S250.
- In step S240, the user edits the document and/or the associated information. Control then continues to step S250.
- In step S250, a determination is made whether the routing information is to be modified. If the routing information is to be modified, control jumps to step S260. Otherwise, control continues to step S270.
- In step S260, the routing information is edited. Control then continues to step S270.
- In step S270, the document and the associated information are stored. Next, in step S280, the document is distributed in accordance with the distribution information. Control then continues to step S290 where the control sequence ends.
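The central branch of FIG. 11 (steps S120 through S160) can be sketched as follows. This is a hedged reading, not the definitive implementation: consistent with claim 5, manual transcription at steps S130-S140 is taken to build the acoustical reference file, while step S150 applies speech recognition once that file is complete; all function names here are assumptions for illustration:

```python
# Sketch of the FIG. 11 dispatch: dictation without a complete acoustical
# reference file is manually transcribed and the transcript trains the
# speech recognition engine; otherwise recognition produces the draft.
def process_dictation(dictation, has_reference_file, recognize, transcribe, train):
    if not has_reference_file:
        draft = transcribe(dictation)   # S130: forward for manual transcription
        train(dictation, draft)         # S140: develop the acoustical reference file
    else:
        draft = recognize(dictation)    # S150: speech recognition
    return draft                        # continue to S160 (proofing decision)

# Example: a new user with no reference file takes the manual path.
trained = []
draft = process_dictation(
    "audio-bytes", has_reference_file=False,
    recognize=lambda d: "auto text",
    transcribe=lambda d: "manual text",
    train=lambda d, t: trained.append((d, t)),
)
```

The same call with `has_reference_file=True` would skip training and return the recognizer's output directly.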
- FIG. 12 is a flowchart outlining one exemplary embodiment of a method for interfacing with the transcription system according to this invention. In particular, control begins in step S300 and continues to step S310. In step S310, a determination is made whether status information is to be reviewed. If status information is to be reviewed, control continues to step S320. In step S320, the status information is determined. Next, in step S330, the status information is displayed. Control then continues to step S340.
- In step S340, a determination is made whether account information is to be reviewed. If account information is to be reviewed, control continues to step S350. In step S350, the account information is determined. Next, in step S360 the account information is displayed. Control then continues to step S370.
- In step S370, a determination is made whether the document library interface is to be accessed. If the document library interface is to be accessed, control continues to step S380. Otherwise, control jumps to step S390. In step S380, the document library interface is initiated. The document library interface allows, for example, access to transcribed dictations which have been saved as documents. These documents may be for example, opened, downloaded, sent, deleted, renamed, or moved to for example, a different storage location within the document library. Control then continues to step S390.
- In step S390, a determination is made whether the job submission control should be accessed. If the job submission control is to be accessed, control continues to step S400. Otherwise, control jumps to step S410 where the control sequence ends.
- In step S400, the job submission control interface is initiated. Control then continues to step S410 where the control sequence ends.
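The FIG. 12 interface method is essentially a sequence of independent checks, each displaying a view when requested. A minimal sketch, assuming hypothetical view names and callables (none of these identifiers come from the disclosure):

```python
# Sketch of the FIG. 12 dispatch: status (S310-S330), account (S340-S360),
# document library (S370-S380) and job submission (S390-S400) are each
# checked in turn, and the requested views are determined and displayed.
def interface_session(requests, views):
    shown = []
    for name in ("status", "account", "document_library", "job_submission"):
        if requests.get(name):
            shown.append(views[name]())  # determine, then display the view
    return shown

# Example: the user asks only for status and the document library.
shown = interface_session(
    {"status": True, "account": False,
     "document_library": True, "job_submission": False},
    {"status": lambda: "status view",
     "account": lambda: "account view",
     "document_library": lambda: "library view",
     "job_submission": lambda: "job submission view"},
)
```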
- FIG. 13 illustrates in greater detail the operation of the job submission control interface step of FIG. 12. In particular, control begins in step S500 and continues to step S510. In step S510, a user is queried whether a template is to be used. If a template is to be used, control continues to step S520. Otherwise, control jumps to step S550.
- In step S520, a determination is made whether an existing template is to be used for the dictation. If an existing template is to be used, control continues to step S530. Otherwise, control continues to step S540.
- In step S530, a template is selected. Control then continues to step S550.
- In step S540, the new dictation is used to create a template. Control then continues to step S550.
- In step S550, routing information is selected. Next, in step S560, the user's dictation is recorded. Then, in step S570, the dictation and any associated information is stored. Control then continues to step S580.
- In step S580, the dictation and distribution information is forwarded to the transcription system. Control then continues to step S590 where the control sequence ends.
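The FIG. 13 job submission sequence can be sketched as a single function. The callables passed in are assumptions standing in for the interface actions the figure describes:

```python
# Sketch of FIG. 13: optionally select or create a template, pick routing
# information, record dictation, store the job, and forward it.
def submit_job(use_template, use_existing, select_template, create_template,
               select_routing, record, store, forward):
    template = None
    if use_template:                       # S510: template to be used?
        if use_existing:                   # S520: existing template?
            template = select_template()   # S530: select a template
        else:
            template = create_template()   # S540: create a template
    routing = select_routing()             # S550: select routing information
    dictation = record()                   # S560: record the dictation
    store(dictation, routing, template)    # S570: store dictation + information
    forward(dictation, routing)            # S580: forward to transcription system

# Example: submit a job using an existing "letter" template.
log = []
submit_job(True, True,
           select_template=lambda: "letter",
           create_template=lambda: "new template",
           select_routing=lambda: ["dr.jones@example.com"],
           record=lambda: "dictation audio",
           store=lambda *a: log.append(("store", a)),
           forward=lambda *a: log.append(("forward", a)))
```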
- FIG. 14 is a flowchart outlining an exemplary embodiment of a method of operation of the transcription station according to this invention. Specifically, control begins in step S600 and continues to step S610.
- In step S610, the dictation and distribution information is received. Next, in step S620, the dictation is reviewed. Then, in step S630, a determination is made whether modifications to the dictation and/or distribution information are desired. For example, supplemental instructions found in the dictation but, for example, not transcribed by the speech recognition engine can then be implemented at the transcription station. If modifications are desired, control jumps to step S640. Otherwise, control continues to step S650.
- In step S640, edits to the dictation and/or distribution information are performed. Control then continues to step S650.
- In step S650, the modifications are saved. Control then continues to step S660 where the dictation and distribution information is returned to the transcription system. Control then continues to step S670 where the control sequence ends.
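One pass of the FIG. 14 transcription station method can be sketched as below; the job structure and callable names are illustrative assumptions, not the disclosed data model:

```python
# Sketch of one FIG. 14 pass: the transcription station reviews the
# received job, optionally applies edits, saves, and returns the job.
def transcription_station_pass(job, needs_edits, edit, save, return_job):
    reviewed = dict(job)               # S620: review the received dictation
    if needs_edits(reviewed):          # S630: modifications desired?
        reviewed = edit(reviewed)      # S640: edit dictation/distribution info
    save(reviewed)                     # S650: save the modifications
    return_job(reviewed)               # S660: return to the transcription system
    return reviewed

# Example: a typo in the draft triggers the edit branch.
saved, returned = [], []
result = transcription_station_pass(
    {"text": "teh findings", "routing": ["fax"]},
    needs_edits=lambda j: "teh" in j["text"],
    edit=lambda j: {**j, "text": j["text"].replace("teh", "the")},
    save=saved.append,
    return_job=returned.append,
)
```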
- As shown in FIGS. 1-3, the transcription and associated systems are preferably implemented either on a single programmed general purpose computer or on separate programmed general purpose computers. However, the transcription system, dictation station and transcription station can also be implemented on a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit elements, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmable logic device such as a PLD, PLA, FPGA, PAL, or the like. In general, any device capable of implementing a finite state machine that is in turn capable of implementing the flowcharts illustrated in FIGS. 11-14 can be used to implement the transcription system according to this invention.
- Furthermore, the disclosed method may be readily implemented in software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation hardware platforms. Alternatively, the disclosed transcription system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with this invention depends on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized. The transcription systems and methods described above, however, can be readily implemented in hardware and/or software using any known or later-developed systems or structures, devices and/or software by those skilled in the applicable art, without undue experimentation, from the functional description provided herein together with a general knowledge of the computer arts.
- Moreover, the disclosed methods may be readily implemented as software executed on a programmed general purpose computer, a special purpose computer, a microprocessor, or the like. In this instance, the methods and systems of this invention can be implemented as a routine embedded on a personal computer, such as a Java® or CGI script, as a resource residing on a server or graphics workstation, or as a routine embedded in a dedicated transcription system, a web browser, a cellular telephone, a PDA, a dedicated dictation system, or the like. The transcription system can also be implemented by physically incorporating the system and method into a software and/or hardware system, such as the hardware and software systems of a graphics workstation or dedicated transcription system.
- It is, therefore, apparent that there has been provided, in accordance with the present invention, systems and methods for transcribing and routing dictation over one or more distributed networks. While this invention has been described in conjunction with preferred embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the applicable arts. Accordingly, Applicants intend to embrace all such alternatives, modifications and variations that fall within the spirit and scope of this invention.
Claims (20)
1. A document processing system comprising:
a transcription system that receives dictation information, wherein the dictation information is capable of including dictation and supplemental information;
a dictation conversion device that converts the dictation information into a document based on a previously generated acoustical reference file and forwards the document and the supplemental information to a transcription station; and
a document management module adapted to display status information regarding one or more documents, the status information including review pending information, documents pending transcription information and routing information.
2. The system of claim 1 , wherein the routing information includes information reflecting a distribution status of one or more documents.
3. The system of claim 1 , wherein the review pending information includes information reflecting documents ready for user review.
4. The system of claim 1 , wherein the documents pending transcription information includes the status of one or more documents awaiting transcription.
5. The system of claim 1 , wherein the previously generated acoustic reference file is constructed from a speech recognition engine comparing manually transcribed dictation information to the dictation information.
6. The system of claim 1 , further comprising a dictation station that interacts with a user to capture the dictation and supplemental information.
7. The system of claim 6 , wherein the dictation station is at least one of a personal digital assistant, a cellular phone, a personal computer, an analog or digital dictation receiving device and a telephone.
8. The system of claim 6 , wherein the interaction includes at least one of job submission control, document library interface control, account information access and status information access.
9. The system of claim 1 , further comprising a transcription station that allows updating of the document.
10. The system of claim 9 , wherein the updating comprises at least one of verification, modification and correction of at least one of the document and supplemental information.
11. The system of claim 1 , wherein distribution information allows the document to be automatically routed to a document sink.
12. The system of claim 11 , wherein the document sink is at least one of a printer, a personal digital assistant, a cellular phone, a personal computer, a laptop computer, an e-mail address and a facsimile machine.
13. The system of claim 1 , wherein the supplemental information includes instructions regarding the document.
14. A method for managing dictation comprising:
receiving dictation information, wherein the dictation information includes dictation and supplemental information;
converting the dictation information into a document, based on a previously generated acoustical reference file;
forwarding the document and the supplemental information to a transcription station; and
displaying status information regarding one or more documents, the status information including review pending information, documents pending transcription information and routing information.
15. The method of claim 14 , wherein the previously generated acoustic reference file is constructed from a speech recognition engine comparing manually transcribed dictation information to the dictation information.
16. The method of claim 14 , further comprising modifying the document based on the dictation information.
17. The method of claim 14 , wherein the document is routed to at least one of a printer, a personal digital assistant, a cellular phone, a personal computer, a laptop computer, an e-mail address and a facsimile machine.
18. The method of claim 14 , wherein the dictation and supplemental information is received from at least one of a personal digital assistant, a cellular phone, a personal computer, an analog or digital dictation receiving device and a telephone.
19. The method of claim 14 , further comprising determining a template based on one or more of the dictation and supplemental information.
20. The method of claim 19 , wherein the template is used to guide a user through inputting of one or more of the dictation and distribution information.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/837,640 US20040204938A1 (en) | 1999-11-01 | 2004-05-04 | System and method for network based transcription |
US11/152,107 US20050234730A1 (en) | 1999-11-01 | 2005-06-15 | System and method for network based transcription |
US11/427,158 US20060256933A1 (en) | 1999-11-01 | 2006-06-28 | System and method for network based transcription |
US11/748,730 US20070225978A1 (en) | 1999-11-01 | 2007-05-15 | System and method for network based transcription |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16296999P | 1999-11-01 | 1999-11-01 | |
US09/699,477 US6789060B1 (en) | 1999-11-01 | 2000-10-31 | Network based speech transcription that maintains dynamic templates |
US10/837,640 US20040204938A1 (en) | 1999-11-01 | 2004-05-04 | System and method for network based transcription |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/699,477 Continuation US6789060B1 (en) | 1999-11-01 | 2000-10-31 | Network based speech transcription that maintains dynamic templates |
US09/699,477 Division US6789060B1 (en) | 1999-11-01 | 2000-10-31 | Network based speech transcription that maintains dynamic templates |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/152,107 Continuation US20050234730A1 (en) | 1999-11-01 | 2005-06-15 | System and method for network based transcription |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040204938A1 true US20040204938A1 (en) | 2004-10-14 |
Family
ID=33134585
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/699,477 Expired - Fee Related US6789060B1 (en) | 1999-11-01 | 2000-10-31 | Network based speech transcription that maintains dynamic templates |
US10/837,640 Abandoned US20040204938A1 (en) | 1999-11-01 | 2004-05-04 | System and method for network based transcription |
US11/152,107 Abandoned US20050234730A1 (en) | 1999-11-01 | 2005-06-15 | System and method for network based transcription |
US11/427,158 Abandoned US20060256933A1 (en) | 1999-11-01 | 2006-06-28 | System and method for network based transcription |
US11/748,730 Abandoned US20070225978A1 (en) | 1999-11-01 | 2007-05-15 | System and method for network based transcription |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/699,477 Expired - Fee Related US6789060B1 (en) | 1999-11-01 | 2000-10-31 | Network based speech transcription that maintains dynamic templates |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/152,107 Abandoned US20050234730A1 (en) | 1999-11-01 | 2005-06-15 | System and method for network based transcription |
US11/427,158 Abandoned US20060256933A1 (en) | 1999-11-01 | 2006-06-28 | System and method for network based transcription |
US11/748,730 Abandoned US20070225978A1 (en) | 1999-11-01 | 2007-05-15 | System and method for network based transcription |
Country Status (1)
Country | Link |
---|---|
US (5) | US6789060B1 (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060041462A1 (en) * | 2002-08-20 | 2006-02-23 | Ulrich Waibel | Method to route jobs |
WO2007041277A2 (en) * | 2005-09-29 | 2007-04-12 | Spryance, Inc. | Transcribing dictation containing private information |
US20070208563A1 (en) * | 2006-03-03 | 2007-09-06 | Rothschild Leigh M | Device, system and method for enabling speech recognition on a portable data device |
US20090070109A1 (en) * | 2007-09-12 | 2009-03-12 | Microsoft Corporation | Speech-to-Text Transcription for Personal Communication Devices |
US7558735B1 (en) * | 2000-12-28 | 2009-07-07 | Vianeta Communication | Transcription application infrastructure and methodology |
US7894807B1 (en) * | 2005-03-30 | 2011-02-22 | Openwave Systems Inc. | System and method for routing a wireless connection in a hybrid network |
US20120310644A1 (en) * | 2006-06-29 | 2012-12-06 | Escription Inc. | Insertion of standard text in transcription |
US9305551B1 (en) * | 2013-08-06 | 2016-04-05 | Timothy A. Johns | Scribe system for transmitting an audio recording from a recording device to a server |
EP3246917A1 (en) * | 2016-05-16 | 2017-11-22 | Olympus Corporation | Voice recording device and voice recording control method |
CN108228699A (en) * | 2016-12-22 | 2018-06-29 | 谷歌有限责任公司 | Collaborative phonetic controller |
US10747947B2 (en) * | 2016-02-25 | 2020-08-18 | Nxgn Management, Llc | Electronic health record compatible distributed dictation transcription system |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6604124B1 (en) * | 1997-03-13 | 2003-08-05 | A:\Scribes Corporation | Systems and methods for automatically managing work flow based on tracking job step completion status |
US6789060B1 (en) * | 1999-11-01 | 2004-09-07 | Gene J. Wolfe | Network based speech transcription that maintains dynamic templates |
US7401023B1 (en) * | 2000-09-06 | 2008-07-15 | Verizon Corporate Services Group Inc. | Systems and methods for providing automated directory assistance using transcripts |
US6915258B2 (en) * | 2001-04-02 | 2005-07-05 | Thanassis Vasilios Kontonassios | Method and apparatus for displaying and manipulating account information using the human voice |
US7225126B2 (en) | 2001-06-12 | 2007-05-29 | At&T Corp. | System and method for processing speech files |
US7958443B2 (en) * | 2003-02-28 | 2011-06-07 | Dictaphone Corporation | System and method for structuring speech recognized text into a pre-selected document format |
US7584101B2 (en) * | 2003-08-22 | 2009-09-01 | Ser Solutions, Inc. | System for and method of automated quality monitoring |
US8200487B2 (en) | 2003-11-21 | 2012-06-12 | Nuance Communications Austria Gmbh | Text segmentation and label assignment with user interaction by means of topic specific language models and topic-specific label statistics |
US8155957B1 (en) * | 2003-11-21 | 2012-04-10 | Takens Luann C | Medical transcription system including automated formatting means and associated method |
JP3829849B2 (en) * | 2004-01-20 | 2006-10-04 | セイコーエプソン株式会社 | Scan data transmission apparatus and scan data transmission system |
JP2005301754A (en) * | 2004-04-13 | 2005-10-27 | Olympus Corp | Transcription device and dictation system |
GB2418280A (en) * | 2004-09-18 | 2006-03-22 | Hewlett Packard Development Co | Document creation system |
US7447636B1 (en) | 2005-05-12 | 2008-11-04 | Verizon Corporate Services Group Inc. | System and methods for using transcripts to train an automated directory assistance service |
US20070011012A1 (en) * | 2005-07-11 | 2007-01-11 | Steve Yurick | Method, system, and apparatus for facilitating captioning of multi-media content |
EP2044804A4 (en) | 2006-07-08 | 2013-12-18 | Personics Holdings Inc | Personal audio assistant device and method |
US9870796B2 (en) | 2007-05-25 | 2018-01-16 | Tigerfish | Editing video using a corresponding synchronized written transcript by selection from a text viewer |
US8306816B2 (en) * | 2007-05-25 | 2012-11-06 | Tigerfish | Rapid transcription by dispersing segments of source material to a plurality of transcribing stations |
US10282391B2 (en) | 2008-07-03 | 2019-05-07 | Ebay Inc. | Position editing tool of collage multi-media |
US11017160B2 (en) | 2008-07-03 | 2021-05-25 | Ebay Inc. | Systems and methods for publishing and/or sharing media presentations over a network |
US8893015B2 (en) | 2008-07-03 | 2014-11-18 | Ebay Inc. | Multi-directional and variable speed navigation of collage multi-media |
US20110046950A1 (en) * | 2009-08-18 | 2011-02-24 | Priyamvada Sinvhal-Sharma | Wireless Dictaphone Features and Interface |
US9111546B2 (en) * | 2013-03-06 | 2015-08-18 | Nuance Communications, Inc. | Speech recognition and interpretation system |
Citations (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3965484A (en) * | 1975-01-02 | 1976-06-22 | Dictaphone Corporation | Central dictation system |
US4315309A (en) * | 1979-06-25 | 1982-02-09 | Coli Robert D | Integrated medical test data storage and retrieval system |
US4975896A (en) * | 1986-08-08 | 1990-12-04 | Agosto Iii Nicholas A D | Communications network and method |
US5050088A (en) * | 1989-03-29 | 1991-09-17 | Eastman Kodak Company | Production control system and method |
US5146439A (en) * | 1989-01-04 | 1992-09-08 | Pitney Bowes Inc. | Records management system having dictation/transcription capability |
US5369704A (en) * | 1993-03-24 | 1994-11-29 | Engate Incorporated | Down-line transcription system for manipulating real-time testimony |
US5649182A (en) * | 1995-03-17 | 1997-07-15 | Reitz; Carl A. | Apparatus and method for organizing timeline data |
US5696906A (en) * | 1995-03-09 | 1997-12-09 | Continental Cablevision, Inc. | Telecommunicaion user account management system and method |
US5721913A (en) * | 1994-05-05 | 1998-02-24 | Lucent Technologies Inc. | Integrated activity management system |
US5724405A (en) * | 1988-10-11 | 1998-03-03 | Ultratec, Inc. | Text enhanced telephony |
US5745901A (en) * | 1994-11-08 | 1998-04-28 | Kodak Limited | Workflow initiated by graphical symbols |
US5752227A (en) * | 1994-05-10 | 1998-05-12 | Telia Ab | Method and arrangement for speech to text conversion |
US5774841A (en) * | 1995-09-20 | 1998-06-30 | The United States Of America As Represented By The Adminstrator Of The National Aeronautics And Space Administration | Real-time reconfigurable adaptive speech recognition command and control apparatus and method |
US5812882A (en) * | 1994-10-18 | 1998-09-22 | Lanier Worldwide, Inc. | Digital dictation system having a central station that includes component cards for interfacing to dictation stations and transcription stations and for processing and storing digitized dictation segments |
US5823948A (en) * | 1996-07-08 | 1998-10-20 | Rlis, Inc. | Medical records, documentation, tracking and order entry system |
US5875436A (en) * | 1996-08-27 | 1999-02-23 | Data Link Systems, Inc. | Virtual transcription system |
US5920835A (en) * | 1993-09-17 | 1999-07-06 | Alcatel N.V. | Method and apparatus for processing and transmitting text documents generated from speech |
US5933805A (en) * | 1996-12-13 | 1999-08-03 | Intel Corporation | Retaining prosody during speech analysis for later playback |
US5950194A (en) * | 1993-03-24 | 1999-09-07 | Engate Incorporated | Down-line transcription system having real-time generation of transcript and searching thereof |
US5960399A (en) * | 1996-12-24 | 1999-09-28 | Gte Internetworking Incorporated | Client/server speech processor/recognizer |
US5963903A (en) * | 1996-06-28 | 1999-10-05 | Microsoft Corporation | Method and system for dynamically adjusted training for speech recognition |
US5978755A (en) * | 1996-03-11 | 1999-11-02 | U.S. Philips Corporation | Dictation device for the storage of speech signals |
US5995948A (en) * | 1997-11-21 | 1999-11-30 | First Usa Bank, N.A. | Correspondence and chargeback workstation |
US6047257A (en) * | 1997-03-01 | 2000-04-04 | Agfa-Gevaert | Identification of medical images through speech recognition |
US6055498A (en) * | 1996-10-02 | 2000-04-25 | Sri International | Method and apparatus for automatic text-independent grading of pronunciation for language instruction |
US6092045A (en) * | 1997-09-19 | 2000-07-18 | Nortel Networks Corporation | Method and apparatus for speech recognition |
US6122614A (en) * | 1998-11-20 | 2000-09-19 | Custom Speech Usa, Inc. | System and method for automating transcription services |
US6122613A (en) * | 1997-01-30 | 2000-09-19 | Dragon Systems, Inc. | Speech recognition using multiple recognizers (selectively) applied to the same input sample |
US6138088A (en) * | 1997-02-19 | 2000-10-24 | International Business Machines Corporation | Method and apparatus for process control by using natural language processing (NLP) technology |
US6173259B1 (en) * | 1997-03-27 | 2001-01-09 | Speech Machines Plc | Speech to text conversion |
US6175822B1 (en) * | 1998-06-05 | 2001-01-16 | Sprint Communications Company, L.P. | Method and system for providing network based transcription services |
US6182043B1 (en) * | 1996-02-12 | 2001-01-30 | U.S. Philips Corporation | Dictation system which compresses a speech signal using a user-selectable compression rate |
US6249765B1 (en) * | 1998-12-22 | 2001-06-19 | Xerox Corporation | System and method for extracting data from audio messages |
US6259657B1 (en) * | 1999-06-28 | 2001-07-10 | Robert S. Swinney | Dictation system capable of processing audio information at a remote location |
US6282154B1 (en) * | 1998-11-02 | 2001-08-28 | Howarlene S. Webb | Portable hands-free digital voice recording and transcription device |
US6298326B1 (en) * | 1999-05-13 | 2001-10-02 | Alan Feller | Off-site data entry system |
US6304844B1 (en) * | 2000-03-30 | 2001-10-16 | Verbaltek, Inc. | Spelling speech recognition apparatus and method for communications |
US6308158B1 (en) * | 1999-06-30 | 2001-10-23 | Dictaphone Corporation | Distributed speech recognition system with multi-user input stations |
US6336108B1 (en) * | 1997-12-04 | 2002-01-01 | Microsoft Corporation | Speech recognition with mixtures of bayesian networks |
US6342903B1 (en) * | 1999-02-25 | 2002-01-29 | International Business Machines Corp. | User selectable input devices for speech applications |
US6358519B1 (en) * | 1998-02-16 | 2002-03-19 | Ruth S. Waterman | Germ-resistant communication and data transfer/entry products |
US6392633B1 (en) * | 1996-07-08 | 2002-05-21 | Thomas Leiper | Apparatus for audio dictation and navigation of electronic images and documents |
US6400806B1 (en) * | 1996-11-14 | 2002-06-04 | Vois Corporation | System and method for providing and using universally accessible voice and speech data files |
US20030097277A1 (en) * | 1998-01-09 | 2003-05-22 | Geoffrey Marc Miller | Computer-based system for automating administrative procedures in a medical office |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6173529B1 (en) * | 1987-03-04 | 2001-01-16 | Malcolm Glen Kertz | Plant growing room |
US5148366A (en) * | 1989-10-16 | 1992-09-15 | Medical Documenting Systems, Inc. | Computer-assisted documentation system for enhancing or replacing the process of dictating and transcribing |
WO1993002412A1 (en) * | 1991-07-16 | 1993-02-04 | The Bcb Technology Group Incorporated | Dos compatible dictation and voice mail system |
US5752277A (en) * | 1994-12-05 | 1998-05-19 | Vanson Leathers, Inc. | Garment with structural vent |
JP3095654B2 (en) * | 1995-02-06 | 2000-10-10 | 三菱重工業株式会社 | Mobile monitoring device |
US5704371A (en) * | 1996-03-06 | 1998-01-06 | Shepard; Franziska | Medical history documentation system and method |
US5746901A (en) * | 1996-04-05 | 1998-05-05 | Regents Of The University Of California | Hybrid slab-microchannel gel electrophoresis system |
US5875448A (en) * | 1996-10-08 | 1999-02-23 | Boys; Donald R. | Data stream editing system including a hand-held voice-editing apparatus having a position-finding enunciator |
US6122604A (en) * | 1996-12-02 | 2000-09-19 | Dynacolor Inc. | Digital protection circuit for CRT based display systems |
DE19745128C2 (en) * | 1997-10-13 | 1999-08-19 | Daimler Chrysler Ag | Method for determining a trigger threshold for an automatic braking process |
US6223213B1 (en) * | 1998-07-31 | 2001-04-24 | Webtv Networks, Inc. | Browser-based email system with user interface for audio/video capture |
US6789060B1 (en) * | 1999-11-01 | 2004-09-07 | Gene J. Wolfe | Network based speech transcription that maintains dynamic templates |
2000
- 2000-10-31 US US09/699,477 patent/US6789060B1/en not_active Expired - Fee Related
2004
- 2004-05-04 US US10/837,640 patent/US20040204938A1/en not_active Abandoned
2005
- 2005-06-15 US US11/152,107 patent/US20050234730A1/en not_active Abandoned
2006
- 2006-06-28 US US11/427,158 patent/US20060256933A1/en not_active Abandoned
2007
- 2007-05-15 US US11/748,730 patent/US20070225978A1/en not_active Abandoned
Patent Citations (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3965484A (en) * | 1975-01-02 | 1976-06-22 | Dictaphone Corporation | Central dictation system |
US4315309A (en) * | 1979-06-25 | 1982-02-09 | Coli Robert D | Integrated medical test data storage and retrieval system |
US4975896A (en) * | 1986-08-08 | 1990-12-04 | Agosto Iii Nicholas A D | Communications network and method |
US5724405A (en) * | 1988-10-11 | 1998-03-03 | Ultratec, Inc. | Text enhanced telephony |
US5146439A (en) * | 1989-01-04 | 1992-09-08 | Pitney Bowes Inc. | Records management system having dictation/transcription capability |
US5050088A (en) * | 1989-03-29 | 1991-09-17 | Eastman Kodak Company | Production control system and method |
US5740245A (en) * | 1993-03-24 | 1998-04-14 | Engate Incorporated | Down-line transcription system for manipulating real-time testimony |
US6282510B1 (en) * | 1993-03-24 | 2001-08-28 | Engate Incorporated | Audio and video transcription system for manipulating real-time testimony |
US5884256A (en) * | 1993-03-24 | 1999-03-16 | Engate Incorporated | Networked stenographic system with real-time speech to text conversion for down-line display and annotation |
US5369704A (en) * | 1993-03-24 | 1994-11-29 | Engate Incorporated | Down-line transcription system for manipulating real-time testimony |
US5926787A (en) * | 1993-03-24 | 1999-07-20 | Engate Incorporated | Computer-aided transcription system using pronounceable substitute text with a common cross-reference library |
US5815639A (en) * | 1993-03-24 | 1998-09-29 | Engate Incorporated | Computer-aided transcription system using pronounceable substitute text with a common cross-reference library |
US5950194A (en) * | 1993-03-24 | 1999-09-07 | Engate Incorporated | Down-line transcription system having real-time generation of transcript and searching thereof |
US5920835A (en) * | 1993-09-17 | 1999-07-06 | Alcatel N.V. | Method and apparatus for processing and transmitting text documents generated from speech |
US5721913A (en) * | 1994-05-05 | 1998-02-24 | Lucent Technologies Inc. | Integrated activity management system |
US5752227A (en) * | 1994-05-10 | 1998-05-12 | Telia Ab | Method and arrangement for speech to text conversion |
US5812882A (en) * | 1994-10-18 | 1998-09-22 | Lanier Worldwide, Inc. | Digital dictation system having a central station that includes component cards for interfacing to dictation stations and transcription stations and for processing and storing digitized dictation segments |
US5845150A (en) * | 1994-10-18 | 1998-12-01 | Lanier Worldwide, Inc. | Modular digital dictation system with plurality of power sources and redundancy circuit which outputs service request signal in a source does not meet predetermined output level |
US5745901A (en) * | 1994-11-08 | 1998-04-28 | Kodak Limited | Workflow initiated by graphical symbols |
US5696906A (en) * | 1995-03-09 | 1997-12-09 | Continental Cablevision, Inc. | Telecommunicaion user account management system and method |
US5649182A (en) * | 1995-03-17 | 1997-07-15 | Reitz; Carl A. | Apparatus and method for organizing timeline data |
US5774841A (en) * | 1995-09-20 | 1998-06-30 | The United States Of America As Represented By The Adminstrator Of The National Aeronautics And Space Administration | Real-time reconfigurable adaptive speech recognition command and control apparatus and method |
US6182043B1 (en) * | 1996-02-12 | 2001-01-30 | U.S. Philips Corporation | Dictation system which compresses a speech signal using a user-selectable compression rate |
US5978755A (en) * | 1996-03-11 | 1999-11-02 | U.S. Philips Corporation | Dictation device for the storage of speech signals |
US5963903A (en) * | 1996-06-28 | 1999-10-05 | Microsoft Corporation | Method and system for dynamically adjusted training for speech recognition |
US6392633B1 (en) * | 1996-07-08 | 2002-05-21 | Thomas Leiper | Apparatus for audio dictation and navigation of electronic images and documents |
US5823948A (en) * | 1996-07-08 | 1998-10-20 | Rlis, Inc. | Medical records, documentation, tracking and order entry system |
US5875436A (en) * | 1996-08-27 | 1999-02-23 | Data Link Systems, Inc. | Virtual transcription system |
US6055498A (en) * | 1996-10-02 | 2000-04-25 | Sri International | Method and apparatus for automatic text-independent grading of pronunciation for language instruction |
US6400806B1 (en) * | 1996-11-14 | 2002-06-04 | Vois Corporation | System and method for providing and using universally accessible voice and speech data files |
US5933805A (en) * | 1996-12-13 | 1999-08-03 | Intel Corporation | Retaining prosody during speech analysis for later playback |
US5960399A (en) * | 1996-12-24 | 1999-09-28 | Gte Internetworking Incorporated | Client/server speech processor/recognizer |
US6122613A (en) * | 1997-01-30 | 2000-09-19 | Dragon Systems, Inc. | Speech recognition using multiple recognizers (selectively) applied to the same input sample |
US6138088A (en) * | 1997-02-19 | 2000-10-24 | International Business Machines Corporation | Method and apparatus for process control by using natural language processing (NLP) technology |
US6047257A (en) * | 1997-03-01 | 2000-04-04 | Agfa-Gevaert | Identification of medical images through speech recognition |
US6173259B1 (en) * | 1997-03-27 | 2001-01-09 | Speech Machines Plc | Speech to text conversion |
US6092045A (en) * | 1997-09-19 | 2000-07-18 | Nortel Networks Corporation | Method and apparatus for speech recognition |
US5995948A (en) * | 1997-11-21 | 1999-11-30 | First Usa Bank, N.A. | Correspondence and chargeback workstation |
US6336108B1 (en) * | 1997-12-04 | 2002-01-01 | Microsoft Corporation | Speech recognition with mixtures of bayesian networks |
US20030097277A1 (en) * | 1998-01-09 | 2003-05-22 | Geoffrey Marc Miller | Computer-based system for automating administrative procedures in a medical office |
US6358519B1 (en) * | 1998-02-16 | 2002-03-19 | Ruth S. Waterman | Germ-resistant communication and data transfer/entry products |
US6175822B1 (en) * | 1998-06-05 | 2001-01-16 | Sprint Communications Company, L.P. | Method and system for providing network based transcription services |
US6282154B1 (en) * | 1998-11-02 | 2001-08-28 | Howarlene S. Webb | Portable hands-free digital voice recording and transcription device |
US6122614A (en) * | 1998-11-20 | 2000-09-19 | Custom Speech Usa, Inc. | System and method for automating transcription services |
US6249765B1 (en) * | 1998-12-22 | 2001-06-19 | Xerox Corporation | System and method for extracting data from audio messages |
US6342903B1 (en) * | 1999-02-25 | 2002-01-29 | International Business Machines Corp. | User selectable input devices for speech applications |
US6298326B1 (en) * | 1999-05-13 | 2001-10-02 | Alan Feller | Off-site data entry system |
US6259657B1 (en) * | 1999-06-28 | 2001-07-10 | Robert S. Swinney | Dictation system capable of processing audio information at a remote location |
US6308158B1 (en) * | 1999-06-30 | 2001-10-23 | Dictaphone Corporation | Distributed speech recognition system with multi-user input stations |
US6304844B1 (en) * | 2000-03-30 | 2001-10-16 | Verbaltek, Inc. | Spelling speech recognition apparatus and method for communications |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7558735B1 (en) * | 2000-12-28 | 2009-07-07 | Vianeta Communication | Transcription application infrastructure and methodology |
US20060041462A1 (en) * | 2002-08-20 | 2006-02-23 | Ulrich Waibel | Method to route jobs |
US7894807B1 (en) * | 2005-03-30 | 2011-02-22 | Openwave Systems Inc. | System and method for routing a wireless connection in a hybrid network |
WO2007041277A2 (en) * | 2005-09-29 | 2007-04-12 | Spryance, Inc. | Transcribing dictation containing private information |
US20070081428A1 (en) * | 2005-09-29 | 2007-04-12 | Spryance, Inc. | Transcribing dictation containing private information |
WO2007041277A3 (en) * | 2005-09-29 | 2008-09-18 | Spryance Inc | Transcribing dictation containing private information |
US20070208563A1 (en) * | 2006-03-03 | 2007-09-06 | Rothschild Leigh M | Device, system and method for enabling speech recognition on a portable data device |
US8370141B2 (en) * | 2006-03-03 | 2013-02-05 | Reagan Inventions, Llc | Device, system and method for enabling speech recognition on a portable data device |
US20120310644A1 (en) * | 2006-06-29 | 2012-12-06 | Escription Inc. | Insertion of standard text in transcription |
US10423721B2 (en) * | 2006-06-29 | 2019-09-24 | Nuance Communications, Inc. | Insertion of standard text in transcription |
US11586808B2 (en) | 2006-06-29 | 2023-02-21 | Deliverhealth Solutions Llc | Insertion of standard text in transcription |
US20090070109A1 (en) * | 2007-09-12 | 2009-03-12 | Microsoft Corporation | Speech-to-Text Transcription for Personal Communication Devices |
US9305551B1 (en) * | 2013-08-06 | 2016-04-05 | Timothy A. Johns | Scribe system for transmitting an audio recording from a recording device to a server |
US10747947B2 (en) * | 2016-02-25 | 2020-08-18 | Nxgn Management, Llc | Electronic health record compatible distributed dictation transcription system |
EP3246917A1 (en) * | 2016-05-16 | 2017-11-22 | Olympus Corporation | Voice recording device and voice recording control method |
US10438585B2 (en) | 2016-05-16 | 2019-10-08 | Olympus Corporation | Voice recording device and voice recording control method |
CN108228699A (en) * | 2016-12-22 | 2018-06-29 | 谷歌有限责任公司 | Collaborative phonetic controller |
US11521618B2 (en) | 2016-12-22 | 2022-12-06 | Google Llc | Collaborative voice controlled devices |
US11893995B2 (en) | 2016-12-22 | 2024-02-06 | Google Llc | Generating additional synthesized voice output based on prior utterance and synthesized voice output provided in response to the prior utterance |
Also Published As
Publication number | Publication date |
---|---|
US20060256933A1 (en) | 2006-11-16 |
US20050234730A1 (en) | 2005-10-20 |
US6789060B1 (en) | 2004-09-07 |
US20070225978A1 (en) | 2007-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6789060B1 (en) | Network based speech transcription that maintains dynamic templates | |
US11594211B2 (en) | Methods and systems for correcting transcribed audio files | |
US7356537B2 (en) | Providing contextually sensitive tools and help content in computer-generated documents | |
US6789064B2 (en) | Message management system | |
US6834264B2 (en) | Method and apparatus for voice dictation and document production | |
US20030046350A1 (en) | System for transcribing dictation | |
US6122614A (en) | System and method for automating transcription services | |
US8108769B2 (en) | Presenting multimodal web page content on sequential multimode devices | |
US20010049725A1 (en) | E-mail processing system, processing method and processing device | |
US20100017694A1 (en) | Apparatus, and associated method, for creating and annotating content | |
JP2006221673A (en) | E-mail reader | |
US20150098555A1 (en) | Voicemail preview and editing system | |
US20030200089A1 (en) | Speech recognition apparatus and method, and program | |
JP2008165308A (en) | Contract information storage system and contract information management system | |
US20060111917A1 (en) | Method and system for transcribing speech on demand using a trascription portlet | |
KR20220046165A (en) | Method, system, and computer readable record medium to write memo for audio file through linkage between app and web | |
US20050234870A1 (en) | Automatic association of audio data file with document data file | |
US20050119888A1 (en) | Information processing apparatus and method, and program | |
US20060041462A1 (en) | Method to route jobs | |
JP2003167768A5 (en) | | |
JP2004118796A (en) | Server computer provided with home page preparation function for portable telephone | |
JP2000331107A (en) | Electronic slip system | |
JPH10293786A (en) | Speech table system | |
JP2004265146A (en) | Electronic dictionary managing device and dictionary data transmitting program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |