US20120033133A1 - Closed captioning language translation - Google Patents

Closed captioning language translation

Info

Publication number
US20120033133A1
US20120033133A1
Authority
US
United States
Prior art keywords
user
requested information
video
signal
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/276,833
Inventor
William Bishop
M. Neil Harrington
Steve J. McKinnon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
RPX Clearinghouse LLC
Original Assignee
Rockstar Bidco LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rockstar Bidco LP filed Critical Rockstar Bidco LP
Priority to US13/276,833 priority Critical patent/US20120033133A1/en
Publication of US20120033133A1 publication Critical patent/US20120033133A1/en
Assigned to ROCKSTAR CONSORTIUM US LP reassignment ROCKSTAR CONSORTIUM US LP ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Rockstar Bidco, LP
Assigned to RPX CLEARINGHOUSE LLC reassignment RPX CLEARINGHOUSE LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOCKSTAR TECHNOLOGIES LLC, CONSTELLATION TECHNOLOGIES LLC, MOBILESTAR TECHNOLOGIES LLC, NETSTAR TECHNOLOGIES LLC, ROCKSTAR CONSORTIUM LLC, ROCKSTAR CONSORTIUM US LP
Assigned to JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT reassignment JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT SECURITY AGREEMENT Assignors: RPX CLEARINGHOUSE LLC, RPX CORPORATION
Assigned to RPX CORPORATION, RPX CLEARINGHOUSE LLC reassignment RPX CORPORATION RELEASE (REEL 038041 / FRAME 0001) Assignors: JPMORGAN CHASE BANK, N.A.
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/58Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2355Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4314Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/08Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division
    • H04N7/087Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only
    • H04N7/088Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only the inserted signal being digital
    • H04N7/0884Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only the inserted signal being digital for the transmission of additional display-information, e.g. menu for programme or channel selection
    • H04N7/0885Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only the inserted signal being digital for the transmission of additional display-information, e.g. menu for programme or channel selection for the transmission of subtitles

Abstract

The present invention provides an architecture for translating closed captioning text originally provided with a video program from one language to another and presenting the translated closed captioning text with the video program to a viewer. As such, the viewers are able to receive the closed captioning text in a language other than that used for the closed captioning originally provided with the video program. The original closed captioning text may be translated from one language to another by a centralized closed captioning processor, such that the customer equipment for various subscribers can take advantage of centralized translation services. Once the original closed captioning text is translated, the translated closed captioning text may be delivered to the customer equipment in different ways.

Description

    RELATED APPLICATIONS
  • The present application is a continuation of co-pending U.S. patent application Ser. No. 11/531,562, filed Sep. 13, 2006, entitled “CLOSED CAPTIONING LANGUAGE TRANSLATION.”
  • FIELD OF THE INVENTION
  • The present invention relates to closed captioning, and in particular to translating closed captioning text provided in a first language into a second language, wherein the translated closed captioning text is presented to a viewer along with the corresponding video.
  • BACKGROUND OF THE INVENTION
  • Closed captioning allows deaf, hard of hearing, and hearing impaired people to read a transcript of an audio portion of a television, video, or film presentation. As the video is presented to the viewer, text captions are displayed identifying who is speaking, transcribing what is being said, and indicating relevant sounds, such as laughing, crying, crashes, explosions, and the like. Closed captioning is also used to assist people who are learning an additional language, learning to read, or for those in a noisy environment.
  • For the present disclosure, television, video, and film presentations are referred to as “video,” and the text captions representing the closed captioning of the video are referred to as “closed captioning text.” Closed captioning text is encoded into the video using any number of closed captioning techniques. In many instances, different types of video programming employ different types of closed captioning encoding and decoding.
  • In North America, National Television Systems Committee (NTSC)-based programming encodes closed captioning text into line 21 of the vertical blanking interval. The vertical blanking interval is a portion of the analog television picture that resides just above the visible portion of the video, and is not seen by the viewer. The viewer's set-top box or television is able to decode the encoded closed captioning text provided in line 21 of the vertical blanking interval and present it to the viewer in association with the video. For digital television, the Advanced Television Systems Committee (ATSC)-based programming encodes three data streams into the video to support closed captioning. One of the streams can support up to 63 unique closed captioning streams, which are encoded in an EIA-708 format as set forth by the Electronic Industries Alliance (EIA). The other two streams are encoded such that when the digital video is converted to analog video, the closed captioning text appears as encoded closed captioning in line 21 of the vertical blanking interval. Outside of North America, Phase Alternation Line (PAL) and Sequential Color with Memory (SECAM) video standards transmit and store closed captioning information in a different manner, but the overall result is the same.
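For illustration only, the following Python sketch shows how a decoder might recover characters from a line-21 (EIA-608-style) caption byte pair. It assumes odd parity and only the basic near-ASCII character range, and it ignores the control codes, caption channels, and extended character sets that a real decoder must handle.

```python
def strip_odd_parity(byte: int) -> int:
    """Drop the parity bit (bit 7) and return the 7-bit data value."""
    return byte & 0x7F


def decode_byte_pair(b1: int, b2: int) -> str:
    """Decode one field's two caption bytes into printable text, if any."""
    text = ""
    for value in (strip_odd_parity(b1), strip_odd_parity(b2)):
        if 0x20 <= value <= 0x7E:  # the basic character set is close to ASCII
            text += chr(value)
        # values below 0x20 are control codes (caption commands), ignored here
    return text


# Example: the byte pair (0xC8, 0xE9) carries "Hi" once parity bits are stripped.
print(decode_byte_pair(0xC8, 0xE9))
```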
  • Regardless of the encoding and delivery technique, closed captioning is extremely beneficial in providing a transcript of an audio portion of a video program. Unfortunately, closed captioning text is generally only available in one language, although most closed captioning standards support different closed captioning streams for different languages. However, the significant effort and expense associated with providing closed captioning generally limits the closed captioning text to the most prevalent language in which the video will be presented. In the United States, for example, closed captioning is generally only provided in English, even though there are significant Hispanic, Asian, and European contingents who would benefit from closed captioning in their native languages.
  • Accordingly, there is a need for a way to efficiently and effectively translate closed captioning text presented in a first language into a second language, and make the translated closed captioning text available to viewers of the associated video.
  • SUMMARY OF THE INVENTION
  • The present invention provides an architecture for translating closed captioning text originally provided with a video program from one language to another and presenting the translated closed captioning text with the video program to a viewer. As such, the viewers are able to receive the closed captioning text in a language other than that used for the closed captioning originally provided with the video program. The original closed captioning text may be translated from one language to another by a centralized closed captioning processor, such that the customer equipment for various subscribers can take advantage of centralized translation services. Once the original closed captioning text is translated, the translated closed captioning text may be delivered to the customer equipment in different ways.
  • In a first embodiment, the video program is sent to the closed captioning processor and the customer equipment at the same time. The closed captioning processor will translate the original closed captioning text from one language to another. After translation, the closed captioning processor will deliver the translated closed captioning text to the customer equipment, which will present the translated closed captioning text with the video program.
  • In another embodiment, the video program is initially sent to the closed captioning processor. The closed captioning processor will translate the original closed captioning text from one language to another. After translation, the closed captioning processor will encode the translated closed captioning text into the video program as closed captioning. The closed captioning processor will then deliver the video program with the new closed captioning content to the customer equipment, which will recover the translated closed captioning text using traditional closed captioning decoding techniques.
  • Those skilled in the art will appreciate the scope of the present invention and realize additional aspects thereof after reading the following detailed description of the preferred embodiments in association with the accompanying drawing figures.
  • BRIEF DESCRIPTION OF THE DRAWING FIGURES
  • The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the invention, and together with the description serve to explain the principles of the invention.
  • FIG. 1 is a block representation of a closed captioning architecture according to a first embodiment of the present invention.
  • FIG. 2 is a block representation of a closed captioning translation process according to the embodiment of FIG. 1.
  • FIG. 3 is a flow diagram illustrating the closed captioning translation process according to the embodiment of FIG. 1.
  • FIG. 4 is a block representation of a closed captioning architecture according to a second embodiment of the present invention.
  • FIG. 5 is a block representation of a closed captioning translation process according to the embodiment of FIG. 4.
  • FIG. 6 is a flow diagram illustrating the closed captioning translation process according to the embodiment of FIG. 4.
  • FIG. 7 is a block representation of a closed captioning processor according to one embodiment of the present invention.
  • FIG. 8 is a block representation of customer equipment according to one embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The embodiments set forth below represent the necessary information to enable those skilled in the art to practice the invention and illustrate the best mode of practicing the invention. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the invention and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims.
  • The present invention provides an architecture for translating closed captioning (CC) text originally provided with a video program from one language to another and presenting the translated closed captioning text with the video program to a viewer. As such, the viewers are able to receive the closed captioning text in a language other than that used for the closed captioning originally provided with the video program. The original closed captioning text may be translated from one language to another by a centralized closed captioning processor, such that the customer equipment for various subscribers can take advantage of centralized translation services. Once the original closed captioning text is translated, the translated closed captioning text may be delivered to the customer equipment in different ways. In a first embodiment, the video program is sent to the closed captioning processor and the customer equipment at the same time. The closed captioning processor will translate the original closed captioning text from one language to another. After translation, the closed captioning processor will deliver the translated closed captioning text to the customer equipment, which will present the translated closed captioning text with the video program.
  • In another embodiment, the video program is initially sent to the closed captioning processor. The closed captioning processor will translate the original closed captioning text from one language to another. After translation, the closed captioning processor will encode the translated closed captioning text into the video program as closed captioning. The closed captioning processor will then deliver the video program with the new closed captioning content to the customer equipment, which will recover the translated closed captioning text using traditional closed captioning decoding techniques.
  • In either embodiment, any number of translations may be provided for original closed captioning text. The translation process and the desired translation or translations may be requested by the customer equipment, the service provider, or another subscriber device. The video program may be provided in an analog or digital format via broadcast or recorded medium.
  • With reference to FIG. 1, a closed captioning translation architecture is illustrated according to a first embodiment of the present invention. The closed captioning translation architecture 10 includes a video source 12 capable of delivering video programs to various customer equipment 14, which may be associated with different subscribers at different locations. The customer equipment 14 may take various forms, including a set-top box (STB) 14A, an associated television or monitor 14B, a personal computer (PC) 14C, as well as any other mobile device, such as a personal digital assistant (PDA) 14D or mobile telephone (not shown). The video programs that require closed captioning translation are also sent to a closed captioning (CC) processor 16 at the same time they are being sent to the customer equipment 14.
  • In FIG. 1, video programs are delivered to the customer equipment 14 and the CC processor 16 via an appropriate video signal 18, which provides closed captioning in a first language. The CC processor 16 will translate the closed captioning text to a second language and will provide corresponding translated CC information 20 to the appropriate customer equipment 14, as indicated above. To reduce the computational requirements of the customer equipment 14, and to minimize delays associated with translating closed captioning text from one language to another, the CC processor 16 is provided in a central network 22. As such, the CC processor 16 is separate from the customer equipment 14. Preferably, the CC processor 16 is able to provide translation services to customer equipment 14 at numerous customer premises 24.
  • FIG. 2 illustrates a simplified block representation of the closed captioning translation architecture of the embodiment illustrated in FIG. 1. As illustrated, a video source 12 will simultaneously provide the video signal 18 for the video program to the CC processor 16 and the customer equipment 14. The video signal 18 will have a video, audio, and closed captioning component having closed captioning text (CC1), regardless of the signal type or closed captioning encoding technique.
  • With reference to FIG. 3, operation of the CC processor 16 is illustrated. The CC processor 16 may receive a CC translation request from the customer equipment 14 to identify a language to which the closed captioning text (CC1) provided in the video signal 18 should be translated (step 100). The CC processor 16 will then receive the video signal 18 with the original closed captioning for the video program (step 102), and extract the original closed captioning text (CC1) from the video signal 18 using the appropriate closed captioning decoding techniques (step 104). Once the original closed captioning text (CC1) is extracted, the CC processor 16 will translate the original closed captioning text from one language to another to create translated closed captioning text (CC2) (step 106). The translated closed captioning text is delivered to the customer equipment 14 in translated closed captioning information 20, which is based on the translated closed captioning text (step 108), as also illustrated in FIG. 2.
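As a rough illustration of steps 106 through 108, the sketch below models the CC processor's handling of a translation request once the original closed captioning text (CC1) has been extracted from the video signal 18. The Caption structure, the dictionary message format, and the pluggable translate function are assumptions made for the example; the patent does not prescribe any particular data structures or translation engine.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Caption:
    start_seconds: float  # marker later used to synchronize CC2 with the video
    text: str


def process_translation_request(
    captions_cc1: List[Caption],           # CC1 already extracted from video signal 18
    target_language: str,                  # step 100: language named in the CC request
    translate: Callable[[str, str], str],  # pluggable translator, not defined by the patent
) -> List[dict]:
    """Steps 106-108: translate CC1 into CC2 and package it as CC information 20."""
    return [
        {"time": c.start_seconds, "lang": target_language,
         "text": translate(c.text, target_language)}
        for c in captions_cc1
    ]


# Example with a trivial stand-in translator:
demo = process_translation_request(
    [Caption(1.0, "Hello"), Caption(2.5, "Good night")],
    "es",
    lambda text, lang: {"Hello": "Hola", "Good night": "Buenas noches"}.get(text, text),
)
print(demo)
```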
  • Translation of the closed captioning text may be provided using different techniques. For example, the translation may be provided on a word-by-word or phrase-by-phrase basis. Alternatively, the closed captioning text over a certain period of time may be accumulated and then translated as an entire segment. The particular type of translation is beyond the scope of the present invention, and those skilled in the art will recognize various translation techniques that will be beneficial in various video delivery environments.
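The two granularities mentioned above can be contrasted with a small sketch: translating each caption line as it arrives versus accumulating captions over a time window and translating the window as one segment. The ten-second window and the (timestamp, text) tuples are arbitrary choices for illustration, not part of the patent.

```python
from typing import Callable, List, Tuple


def translate_per_caption(captions: List[Tuple[float, str]],
                          translate: Callable[[str], str]) -> List[Tuple[float, str]]:
    """Word/phrase style: each caption line is translated as it arrives."""
    return [(t, translate(text)) for t, text in captions]


def translate_by_segment(captions: List[Tuple[float, str]],
                         translate: Callable[[str], str],
                         window_seconds: float = 10.0) -> List[Tuple[float, str]]:
    """Segment style: captions within a time window (assumed sorted by time) are
    joined, translated once, and attached to the window's first timestamp."""
    segments: List[Tuple[float, str]] = []
    current: List[str] = []
    window_start = None
    for t, text in captions:
        if window_start is None:
            window_start = t
        if t - window_start > window_seconds and current:
            segments.append((window_start, translate(" ".join(current))))
            current, window_start = [], t
        current.append(text)
    if current:
        segments.append((window_start, translate(" ".join(current))))
    return segments
```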
  • In one embodiment, the translated closed captioning information is not re-encoded into a closed captioning format, and is sent to the customer equipment 14 without the video or audio components of the video program. The translated closed captioning information 20 is formatted such that the customer equipment 14 can readily recover the translated closed captioning text (CC2) and overlay the text on the video program being presented to the viewer. The overlay procedure may be configured to emulate traditional closed captioning, or may be presented in any desired fashion. Notably, the customer equipment 14 need not have closed captioning decoding capabilities, since the translated closed captioning information is not necessarily provided in a closed captioning format, although it represents a translation of the original closed captioning text (CC1).
  • Since the translated closed captioning text (CC2) is being delivered to the customer equipment 14 in the translated closed captioning information 20 separately from the video signal 18, steps must be taken to synchronize the presentation of the translated closed captioning text (CC2) with the video of the video program. In the translated closed captioning information 20, the CC processor 16 may provide markers or like synchronization information corresponding to the video in the video program, such that the translated closed captioning text is presented at the appropriate time and rate in association with the video of the video program. Alternatively, the customer equipment 14 and the CC processor 16 may periodically or continuously communicate with each other to ensure that the translated closed captioning text (CC2) is presented to the viewer along with the video of the video program in a synchronized fashion.
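One plausible shape for the translated closed captioning information 20, with markers tying each cue to a point in the video, is sketched below as a JSON message. The field names and the use of JSON are assumptions made for illustration; the patent does not define a wire format.

```python
import json

cc_info_message = {
    "program_id": "example-program-123",   # hypothetical identifier, not from the patent
    "language": "es",
    "cues": [
        # "marker" ties each cue to a point in the video (seconds from program start)
        {"marker": 12.0, "duration": 2.5, "text": "Hola, ¿cómo estás?"},
        {"marker": 15.0, "duration": 3.0, "text": "Bien, gracias."},
    ],
}

print(json.dumps(cc_info_message, ensure_ascii=False, indent=2))
```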
  • The CC processor 16 will inevitably inject some delay in presenting the translated closed captioning text (CC2) to the customer equipment 14. The customer equipment 14 may employ at least a video buffer to buffer the video of the video program for a time sufficient to receive the translated closed captioning text (CC2) from the CC processor 16. The synchronization information in the closed captioning text will control the amount of buffering. The customer equipment 14 will then present the translated closed captioning text (CC2) and the video of the video program in a synchronized fashion to the viewer.
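The buffering idea can be sketched as follows: the customer equipment 14 holds incoming video for a short delay so that translated captions (CC2) arriving later from the CC processor 16 can be paired with the right frames. The fixed two-second delay and the frame/caption bookkeeping are illustrative assumptions only.

```python
from collections import deque
from typing import Deque, Dict, List, Optional, Tuple


class BufferedPresenter:
    """Holds video briefly so later-arriving translated captions can be shown in sync."""

    def __init__(self, delay_seconds: float = 2.0):
        self.delay = delay_seconds
        self.video_buffer: Deque[Tuple[float, object]] = deque()  # (timestamp, frame)
        self.captions: Dict[float, str] = {}                      # marker -> CC2 text

    def on_video(self, timestamp: float, frame: object) -> None:
        self.video_buffer.append((timestamp, frame))

    def on_translated_caption(self, marker: float, text: str) -> None:
        self.captions[marker] = text

    def present_due_frames(self, now: float) -> List[Tuple[object, Optional[str]]]:
        """Release frames older than the delay, each paired with any caption due at
        that timestamp (exact-match pairing is a simplification)."""
        due = []
        while self.video_buffer and now - self.video_buffer[0][0] >= self.delay:
            ts, frame = self.video_buffer.popleft()
            due.append((frame, self.captions.pop(ts, None)))
        return due
```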
  • Given the centralized nature of the CC processor 16, any customer equipment 14 may receive translation services. Further, the CC processor 16 may be configured to translate between any number of languages, such that closed captioning text may be translated into any number of languages and may be presented to any amount of customer equipment 14 in an effective and efficient manner. Preferably, the customer equipment 14 is able to request a particular type of translation for a particular video program or for all programming in light of the viewer's needs or desires.
  • With reference to FIG. 4, an alternative closed captioning translation architecture is provided. As illustrated, the video signal 18 is sent to the CC processor 16 from the video source 12. The video signal 18 is not directly delivered to the customer equipment 14 when closed captioning translation services are employed. Instead, the CC processor 16 will extract the closed captioning text from the video signal 18, translate the closed captioning text from one language to another, and encode translated closed captioning text into the video signal 18 as traditional closed captioning. The CC processor 16 will then deliver a video signal 26 having closed captioning with the translated closed captioning text to the customer equipment 14.
  • This process is further illustrated in FIG. 5. Notably, the CC processor 16 presents video, audio, and closed captioning aspects of the video program in the video signal 26 to the customer equipment 14. Notably, the translated closed captioning text (CC2) may replace the closed captioning text (CC1) that was provided in the original video signal 18, or may be added in a separate closed captioning field, as illustrated. Upon receipt of the video signal 26, the customer equipment 14 may use traditional closed captioning decoding techniques to present the translated closed captioning text CC2 or original closed captioning text CC1 to the viewer. Unlike the previous embodiment where the translated closed captioning text (CC2) required a different process to overlay the translated closed captioning text on the video of the video program, this embodiment embeds translated closed captioning text into the video signal 26 prior to the customer equipment 14 receiving the video signal 26. This embodiment is particularly beneficial in environments where the customer equipment 14 is not configured to receive translated closed captioning information via a separate source and overlay the corresponding translated closed captioning text (CC2) over the video of the video signal 18. However, this embodiment requires that the customer equipment 14 have closed captioning decoding capabilities.
  • With reference to FIG. 6, operation of the CC processor 16 is provided according to the second embodiment. Initially, the CC processor 16 may receive closed captioning translation instructions from the customer equipment 14, a service provider, or a subscriber via another piece of customer equipment 14 (step 200). The instructions may identify a desired language for closed captioning, as well as the video program or programs for which closed captioning translation services are desired. The CC processor 16 may then receive a video signal with closed captioning for a video program (step 202). Again, the closed captioning will include closed captioning text (CC1) in a first language. The CC processor 16 will extract the closed captioning text (CC1) (step 204) and translate the closed captioning text (CC1) into translated closed captioning text (CC2) (step 206). The translated closed captioning text is then encoded into the video signal 18 of the video program using appropriate closed captioning encoding techniques (step 208). The CC processor 16 will deliver the resultant video signal 26 to the customer equipment 14 (step 210). Again, the video signal 26 will have closed captioning including the translated closed captioning text (CC2), and perhaps the original closed captioning text (CC1).
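A rough sketch of steps 202 through 210 is given below, with the video signal modeled as a plain dictionary carrying named caption tracks. Re-encoding into an actual closed captioning format (for example EIA-608 or EIA-708 insertion) is outside the scope of this sketch, so the "encoding" step simply adds a new track; the track names and dictionary layout are assumptions for illustration.

```python
from typing import Callable, Dict, List


def add_translated_caption_track(
    video_signal: Dict,                    # video signal 18, modeled with a "captions" track map
    source_track: str,                     # e.g. "en", holding CC1
    target_language: str,                  # requested language for CC2
    translate: Callable[[str, str], str],  # pluggable translator
    keep_original: bool = True,            # CC2 may replace CC1 or sit alongside it
) -> Dict:
    cc1: List[dict] = video_signal["captions"][source_track]          # steps 202-204
    cc2 = [{"time": cue["time"], "text": translate(cue["text"], target_language)}
           for cue in cc1]                                            # step 206
    out = dict(video_signal)
    out["captions"] = dict(video_signal["captions"]) if keep_original else {}
    out["captions"][target_language] = cc2                            # step 208: "encode" CC2
    return out                                                        # step 210: deliver signal 26
```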
  • In this embodiment, synchronization of the translated closed captioning text (CC2) with the video of the video program is provided in the CC processor 16 during the closed captioning encoding process. The customer equipment 14 will simply decode the appropriate closed captioning stream, which includes the translated closed captioning text (CC2), and present the translated closed captioning text (CC2) to the viewer along with the video in traditional fashion.
  • Those skilled in the art will recognize that different closed captioning encoding and decoding techniques are available and known in the art. In light of the different closed captioning processes and the various manners in which video programs may be recorded, broadcast, or delivered to customer equipment 14, the concepts of the present invention may take corresponding forms, as will be appreciated by those skilled in the art.
  • With reference to FIG. 7, a block representation of a CC processor 16 is illustrated. The CC processor 16 may include a control system 30 associated with sufficient memory 32 for the requisite software 34 and data 36 to operate as described above. The control system 30 may also be associated with one or more communication interfaces 38 to facilitate communications directly or indirectly with the video source 12 and the customer equipment 14.
  • A block representation of customer equipment 14 is provided in FIG. 8. The customer equipment 14 may include a control system 40 having sufficient memory 42 for the requisite software 44 and data 46 to operate as described above. The control system 40 may also be associated with one or more communication interfaces 48 to facilitate communications with other communication equipment 14, the CC processor 16, and the video source 12, in a direct or indirect fashion. The control system 40 may also be associated with a user interface 50, which may represent a display, monitor, keypad, mouse, remote control input, or the like.
  • Those skilled in the art will recognize improvements and modifications to the preferred embodiments of the present invention. All such improvements and modifications are considered within the scope of the concepts disclosed herein and the claims that follow.

Claims (40)

1. A video display system, comprising:
a user interface operable to receive a user input requesting information presentation in a user-specified format;
at least one communication interface operable to:
in response to the user input, transmit a request for information presentation in the user-specified format over a communications network to a remote source of requested information; and
receive a video signal and at least one requested information signal over a communications network from at least one remote signal source, the at least one requested information signal comprising information in the user-specified format;
a video display; and
a control system operable to present the video signal and the at least one requested information signal in the user-specified format on the video display.
2. The video display system of claim 1, wherein the requested information is textual information.
3. The video display system of claim 2, wherein the textual information is closed captioning.
4. The video display system of claim 1, wherein the user-specified format is a user-specified language.
5. The video display system of claim 1, wherein the at least one communication interface comprises a video interface operable to receive the video signal from a remote video signal source.
6. The video display system of claim 5, wherein the video interface is operable to receive the video signal and the at least one requested information signal from the remote video signal source.
7. The video display system of claim 6, wherein the at least one requested information signal is incorporated in the video signal.
8. The video display system of claim 6, wherein the remote video signal source is the remote source of the requested information.
9. The video display system of claim 6, wherein the at least one communication interface further comprises at least one requested information signal interface operable to receive the at least one requested information signal from the remote source of the requested information, the at least one requested information signal comprising the requested information in the user-specified format.
10. The video display system of claim 1, wherein the control system is operable to synchronize the requested information in the user-specified format with video information from the video signal.
11. The video display system of claim 10, wherein the control system is operable to use synchronization information in the at least one requested information signal to synchronize the requested information in the user-specified format with the video information.
12. The video display system of claim 10, wherein the control system is operable to communicate with the remote source of the requested information over the at least one requested information signal interface to synchronize the requested information in the user-specified format with the video information.
13. The video display system of claim 10, further comprising a video buffer controlled by the control system to synchronize the requested information with the video information.
14. A method of operating a video display system, comprising:
receiving a user input requesting information presentation in a user-specified format;
in response to the user input, transmitting a request for information presentation in the user-specified format over a communications network to a remote source of requested information; and
receiving a video signal and at least one requested information signal over a communications network from at least one remote signal source, the at least one requested information signal comprising information in the user-specified format;
presenting the video signal and the requested information in the user-specified format on a video display.
15. The method of claim 14, wherein the requested information is textual information.
16. The method of claim 15, wherein the textual information is closed captioning.
17. The method of claim 14, wherein the user-specified format is a user-specified language.
18. The method of claim 14, further comprising synchronizing the requested information in the user-specified format with video information from the video signal.
19. The method of claim 14, comprising using synchronization information in the at least one requested information signal to synchronize the requested information in the user-specified format with the video information.
20. The method of claim 14, comprising communicating with the remote source of the requested information to synchronize the requested information in the user-specified format with the video information.
21. An apparatus for providing requested information in a user-requested format system, comprising:
at least one communication interface operable to receive a request for information presentation in a user-specified format over a communications network from a remote user video display system; and
a control system operable to provide a requested information signal comprising requested information in the user-specified format;
the at least one communication interface being further operable to transmit the requested information signal to the remote user video display system.
22. The apparatus of claim 21, wherein the control system is operable to provide the requested information signal by generating the requested information signal in response to the received request.
23. The apparatus of claim 21, wherein the requested information is textual information.
24. The apparatus of claim 21, wherein the user-specified format is a user-specified language.
25. The apparatus of claim 24, wherein the control system is operable to translate an information signal into the user-specified language in response to the request.
26. The apparatus of claim 21, wherein the at least one communication interface is further operable to receive at least one video signal and to transmit at least one video signal to the remote user video display system.
27. The apparatus of claim 26, wherein the control system is operable to incorporate the requested information signal in an associated video signal, and the communication interface is operable to transmit the video signal with the incorporated requested information signal to the remote user video display system.
28. The apparatus of claim 26, wherein the control system is operable to provide the requested information signal by generating the requested information signal from a selected received video signal in response to the received request.
29. The apparatus of claim 28, wherein the user-specified format is a user-specified language, and the control system is operable to translate an information signal into the user-specified language in response to the request.
30. The apparatus of claim 29, wherein the requested information is textual information.
31. A method of providing requested information in a user-requested format from a central information source, the method comprising:
receiving a request for information presentation in a user-specified format over a communications network from a remote user video display system;
generating a requested information signal comprising the requested information in the user-specified format; and
transmitting the requested information signal to the remote user video display system.
32. The method of claim 31, wherein generating the requested information signal comprises generating the requested information signal in response to the request.
33. The method of claim 31, wherein the requested information is textual information.
34. The apparatus of claim 31, wherein the user-specified format is a user-specified language.
35. The method of claim 34, wherein generating the requested information signal comprises translating an information signal into the user-specified language in response to the received request.
36. The method of claim 31, further comprising receiving at least one video signal and transmitting at least one video signal to the remote user video display system.
37. The method of claim 36, comprising incorporating the requested information signal in an associated video signal, and transmitting the associated video signal with the incorporated requested information signal to the remote user video display system.
38. The method of claim 36, wherein generating the requested information signal comprises generating the requested information signal from a selected received video signal in response to the request.
39. The method of claim 38, wherein the user-specified format is a user-specified language, and generating the requested information signal comprises translating an information signal into the user-specified language in response to the request.
40. The method of claim 39, wherein the requested information is textual information.
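Claims 21 through 40 cover the remote source of the requested information: it receives the format request, produces the requested information in that format (for instance by translating caption text extracted from a selected video signal), optionally incorporates the result into the associated video signal, and transmits it back to the user's display system. The outline below is a hypothetical sketch of that flow; the translate callable stands in for whatever machine-translation backend a real head-end might use, and every name in it is invented for illustration.

```python
# Hypothetical sketch of the remote-source flow of claims 21-40. The 'translate'
# callable stands in for a machine-translation backend; no function, class, or
# field here is taken from the patent beyond the overall sequence of steps.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class VideoSignal:
    program_id: str
    captions: Dict[float, str]   # original-language caption text keyed by timestamp
    embedded_captions: Dict[str, Dict[float, str]] = field(default_factory=dict)


def handle_caption_request(language: str,
                           signal: VideoSignal,
                           translate: Callable[[str, str], str],
                           incorporate: bool = True) -> Dict[float, str]:
    """Generate the requested-information signal for one received request:
    extract caption text from the selected video signal, translate it into the
    user-specified language, and optionally incorporate the result into the
    associated video signal before transmission."""
    translated = {ts: translate(text, language) for ts, text in signal.captions.items()}
    if incorporate:
        signal.embedded_captions[language] = translated
    return translated


if __name__ == "__main__":
    # A toy dictionary lookup stands in for a real translation service.
    toy = {("Hello", "fr"): "Bonjour", ("Welcome", "fr"): "Bienvenue"}
    translate = lambda text, lang: toy.get((text, lang), text)

    signal = VideoSignal("news-01", {0.0: "Hello", 1.0: "Welcome"})
    requested = handle_caption_request("fr", signal, translate)
    print(requested)                   # the requested-information signal
    print(signal.embedded_captions)    # captions incorporated into the video signal
```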
US13/276,833 2006-09-13 2011-10-19 Closed captioning language translation Abandoned US20120033133A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/276,833 US20120033133A1 (en) 2006-09-13 2011-10-19 Closed captioning language translation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/531,562 US8045054B2 (en) 2006-09-13 2006-09-13 Closed captioning language translation
US13/276,833 US20120033133A1 (en) 2006-09-13 2011-10-19 Closed captioning language translation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/531,562 Continuation US8045054B2 (en) 2006-09-13 2006-09-13 Closed captioning language translation

Publications (1)

Publication Number Publication Date
US20120033133A1 true US20120033133A1 (en) 2012-02-09

Family

ID=39171300

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/531,562 Expired - Fee Related US8045054B2 (en) 2006-09-13 2006-09-13 Closed captioning language translation
US13/276,833 Abandoned US20120033133A1 (en) 2006-09-13 2011-10-19 Closed captioning language translation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/531,562 Expired - Fee Related US8045054B2 (en) 2006-09-13 2006-09-13 Closed captioning language translation

Country Status (3)

Country Link
US (2) US8045054B2 (en)
EP (2) EP2479982A1 (en)
WO (1) WO2008032184A2 (en)

Families Citing this family (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0619972D0 (en) * 2006-10-10 2006-11-15 Ibm Method, apparatus and computer network for producing special effects to complement displayed video information
US9710553B2 (en) 2007-05-25 2017-07-18 Google Inc. Graphical user interface for management of remotely stored videos, and captions or subtitles thereof
US8239767B2 (en) * 2007-06-25 2012-08-07 Microsoft Corporation Audio stream management for television content
US7992183B1 (en) 2007-11-09 2011-08-02 Google Inc. Enabling users to create, to edit and/or to rate online video captions over the web
US20090150951A1 (en) * 2007-12-06 2009-06-11 At&T Knowledge Ventures, L.P. Enhanced captioning data for use with multimedia content
US8621505B2 (en) * 2008-03-31 2013-12-31 At&T Intellectual Property I, L.P. Method and system for closed caption processing
US20090249406A1 (en) * 2008-03-31 2009-10-01 Broadcom Corporation Mobile video device with enhanced video navigation
DE202009018608U1 (en) * 2008-10-29 2012-06-12 Google, Inc. System for translating timed text into web video
US8913188B2 (en) * 2008-11-12 2014-12-16 Cisco Technology, Inc. Closed caption translation apparatus and method of translating closed captioning
US20100265397A1 (en) * 2009-04-20 2010-10-21 Tandberg Television, Inc. Systems and methods for providing dynamically determined closed caption translations for vod content
TWI416935B (en) * 2009-06-03 2013-11-21 Via Tech Inc Video preview modules, systems, user equipment, and methods
US9547642B2 (en) * 2009-06-17 2017-01-17 Empire Technology Development Llc Voice to text to voice processing
US8707381B2 (en) * 2009-09-22 2014-04-22 Caption Colorado L.L.C. Caption and/or metadata synchronization for replay of previously or simultaneously recorded live programs
KR101716274B1 (en) * 2010-02-12 2017-03-27 삼성전자주식회사 Electronic Device and Control Method Thereof
US20110234900A1 (en) * 2010-03-29 2011-09-29 Rovi Technologies Corporation Method and apparatus for identifying video program material or content via closed caption data
WO2011146059A1 (en) * 2010-05-19 2011-11-24 Ericsson Television Inc. Preferred language for video on demand & cable tv and audio
US8527268B2 (en) 2010-06-30 2013-09-03 Rovi Technologies Corporation Method and apparatus for improving speech recognition and identifying video program material or content
US8799774B2 (en) 2010-10-07 2014-08-05 International Business Machines Corporation Translatable annotated presentation of a computer program operation
US8761545B2 (en) 2010-11-19 2014-06-24 Rovi Technologies Corporation Method and apparatus for identifying video program material or content via differential signals
US9792612B2 (en) 2010-11-23 2017-10-17 Echostar Technologies L.L.C. Facilitating user support of electronic devices using dynamic matrix code generation
US9329966B2 (en) 2010-11-23 2016-05-03 Echostar Technologies L.L.C. Facilitating user support of electronic devices using matrix codes
US9781465B2 (en) 2010-11-24 2017-10-03 Echostar Technologies L.L.C. Tracking user interaction from a receiving device
US9280515B2 (en) 2010-12-03 2016-03-08 Echostar Technologies L.L.C. Provision of alternate content in response to QR code
US8886172B2 (en) 2010-12-06 2014-11-11 Echostar Technologies L.L.C. Providing location information using matrix code
US8875173B2 (en) 2010-12-10 2014-10-28 Echostar Technologies L.L.C. Mining of advertisement viewer information using matrix code
US9596500B2 (en) 2010-12-17 2017-03-14 Echostar Technologies L.L.C. Accessing content via a matrix code
US8640956B2 (en) 2010-12-17 2014-02-04 Echostar Technologies L.L.C. Accessing content via a matrix code
US9148686B2 (en) 2010-12-20 2015-09-29 Echostar Technologies, Llc Matrix code-based user interface
US8856853B2 (en) 2010-12-29 2014-10-07 Echostar Technologies L.L.C. Network media device with code recognition
US8292166B2 (en) 2011-01-07 2012-10-23 Echostar Technologies L.L.C. Performing social networking functions using matrix codes
US8534540B2 (en) 2011-01-14 2013-09-17 Echostar Technologies L.L.C. 3-D matrix barcode presentation
US8786410B2 (en) 2011-01-20 2014-07-22 Echostar Technologies L.L.C. Configuring remote control devices utilizing matrix codes
US8553146B2 (en) 2011-01-26 2013-10-08 Echostar Technologies L.L.C. Visually imperceptible matrix codes utilizing interlacing
US8468610B2 (en) 2011-01-27 2013-06-18 Echostar Technologies L.L.C. Determining fraudulent use of electronic devices utilizing matrix codes
US9571888B2 (en) 2011-02-15 2017-02-14 Echostar Technologies L.L.C. Selection graphics overlay of matrix code
US8511540B2 (en) 2011-02-18 2013-08-20 Echostar Technologies L.L.C. Matrix code for use in verification of data card swap
US8931031B2 (en) * 2011-02-24 2015-01-06 Echostar Technologies L.L.C. Matrix code-based accessibility
US9367669B2 (en) 2011-02-25 2016-06-14 Echostar Technologies L.L.C. Content source identification using matrix barcode
US8443407B2 (en) 2011-02-28 2013-05-14 Echostar Technologies L.L.C. Facilitating placeshifting using matrix code
US9736469B2 (en) 2011-02-28 2017-08-15 Echostar Technologies L.L.C. Set top box health and configuration
US8833640B2 (en) 2011-02-28 2014-09-16 Echostar Technologies L.L.C. Utilizing matrix codes during installation of components of a distribution system
US8550334B2 (en) 2011-02-28 2013-10-08 Echostar Technologies L.L.C. Synching one or more matrix codes to content related to a multimedia presentation
WO2012151479A2 (en) 2011-05-05 2012-11-08 Ortsbo, Inc. Cross-language communication between proximate mobile devices
EP2525281B1 (en) 2011-05-20 2019-01-02 EchoStar Technologies L.L.C. Improved progress bar
JP5903924B2 (en) * 2012-02-17 2016-04-13 ソニー株式会社 Receiving apparatus and subtitle processing method
US8695048B1 (en) * 2012-10-15 2014-04-08 Wowza Media Systems, LLC Systems and methods of processing closed captioning for video on demand content
ITTO20120966A1 * 2012-11-06 2014-05-07 Inst Rundfunktechnik Gmbh Multilingual graphics control in television broadcasts
GB2510116A (en) * 2013-01-23 2014-07-30 Sony Corp Translating the language of text associated with a video
US10244203B1 (en) * 2013-03-15 2019-03-26 Amazon Technologies, Inc. Adaptable captioning in a video broadcast
US8782722B1 (en) * 2013-04-05 2014-07-15 Wowza Media Systems, LLC Decoding of closed captions at a media server
US8913187B1 (en) * 2014-02-24 2014-12-16 The Directv Group, Inc. System and method to detect garbled closed captioning
US10298987B2 (en) 2014-05-09 2019-05-21 At&T Intellectual Property I, L.P. Delivery of media content to a user device at a particular quality based on a personal quality profile
CN104219459A (en) * 2014-09-30 2014-12-17 上海摩软通讯技术有限公司 Video language translation method and system and intelligent display device
JP6930639B2 (en) * 2016-04-25 2021-09-01 ヤマハ株式会社 Terminal device, terminal device operation method and program
EP3422203A1 * 2017-06-29 2019-01-02 Vestel Elektronik Sanayi ve Ticaret A.S. Computer implemented simultaneous translation method and simultaneous translation device
US10224057B1 (en) * 2017-09-25 2019-03-05 Sorenson Ip Holdings, Llc Presentation of communications
KR102452644B1 (en) * 2017-10-31 2022-10-11 삼성전자주식회사 Electronic apparatus, voice recognition method and storage medium
KR102468214B1 (en) * 2018-02-19 2022-11-17 삼성전자주식회사 The system and an appratus for providig contents based on a user utterance
EP3841754A4 (en) * 2018-09-13 2022-06-15 iChannel.io Ltd. A system and computerized method for subtitles synchronization of audiovisual content using the human voice detection for synchronization
JP7205839B2 (en) * 2019-05-24 2023-01-17 日本電信電話株式会社 Data generation model learning device, latent variable generation model learning device, translation data generation device, data generation model learning method, latent variable generation model learning method, translation data generation method, program
US20200401910A1 (en) * 2019-06-18 2020-12-24 International Business Machines Corporation Intelligent causal knowledge extraction from data sources
US11270123B2 (en) * 2019-10-22 2022-03-08 Palo Alto Research Center Incorporated System and method for generating localized contextual video annotation
KR20210100368A * 2020-02-06 2021-08-17 삼성전자주식회사 Electronic device and control method thereof
US11032620B1 (en) * 2020-02-14 2021-06-08 Sling Media Pvt Ltd Methods, systems, and apparatuses to respond to voice requests to play desired video clips in streamed media based on matched close caption and sub-title text
US11683558B2 (en) * 2021-06-29 2023-06-20 The Nielsen Company (Us), Llc Methods and apparatus to determine the speed-up of media programs using speech recognition
US11736773B2 (en) * 2021-10-15 2023-08-22 Rovi Guides, Inc. Interactive pronunciation learning system
US11902690B2 (en) * 2021-10-27 2024-02-13 Microsoft Technology Licensing, Llc Machine learning driven teleprompter
US11785278B1 (en) * 2022-03-18 2023-10-10 Comcast Cable Communications, Llc Methods and systems for synchronization of closed captions with content output

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR940002277B1 (en) 1991-08-27 1994-03-19 주식회사 금성사 On-screen display apparatus and method of vcr
US5982448A (en) 1997-10-30 1999-11-09 Reyes; Frances S. Multi-language closed captioning system
JP2000092460A (en) 1998-09-08 2000-03-31 Nec Corp Device and method for subtitle-voice data translation
KR100367675B1 * 2000-04-27 2003-01-15 엘지전자 주식회사 TV text information translation system and control method of the same
EP1158799A1 (en) * 2000-05-18 2001-11-28 Deutsche Thomson-Brandt Gmbh Method and receiver for providing subtitle data in several languages on demand
US7130790B1 (en) 2000-10-24 2006-10-31 Global Translations, Inc. System and method for closed caption data translation
AU2003206009A1 (en) 2002-03-21 2003-10-08 Koninklijke Philips Electronics N.V. Multi-lingual closed-captioning
US7054804B2 (en) 2002-05-20 2006-05-30 International Business Machines Corporation Method and apparatus for performing real-time subtitles translation
US20050075857A1 (en) * 2003-10-02 2005-04-07 Elcock Albert F. Method and system for dynamically translating closed captions
US7830408B2 (en) * 2005-12-21 2010-11-09 Cisco Technology, Inc. Conference captioning
US7711543B2 (en) * 2006-04-14 2010-05-04 At&T Intellectual Property Ii, Lp On-demand language translation for television programs

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5347365A (en) * 1991-09-27 1994-09-13 Sanyo Electric Co., Ltd. Device for receiving closed captioned broadcasts
US5659368A (en) * 1992-04-28 1997-08-19 Thomson Consumer Electronics, Inc. Auxiliary video information system including extended data services
US20010002204A1 (en) * 1997-08-11 2001-05-31 Marshall, O'toole, Gerstein, Murray & Borun Data management and order delivery system
US6272673B1 (en) * 1997-11-25 2001-08-07 Alphablox Corporation Mechanism for automatically establishing connections between executable components of a hypertext-based application
US20040139070A1 (en) * 1999-05-11 2004-07-15 Norbert Technologies, L.L.C. Method and apparatus for storing data as objects, constructing customized data retrieval and data processing requests, and performing householding queries
US20060064716A1 (en) * 2000-07-24 2006-03-23 Vivcom, Inc. Techniques for navigating multiple video streams
US20050172318A1 (en) * 2000-11-16 2005-08-04 Mydtv, Inc. System and method for determining the desirability of video programming events using keyword matching
US20040153288A1 (en) * 2001-01-23 2004-08-05 Intel Corporation Method and system for detecting semantic events
US20020101537A1 (en) * 2001-01-31 2002-08-01 International Business Machines Corporation Universal closed caption portable receiver
US20020120577A1 (en) * 2001-02-27 2002-08-29 Hans Mathieu C. Managing access to digital content
US7908628B2 (en) * 2001-08-03 2011-03-15 Comcast Ip Holdings I, Llc Video and digital multimedia aggregator content coding and formatting
US20030065503A1 (en) * 2001-09-28 2003-04-03 Philips Electronics North America Corp. Multi-lingual transcription system
US20060130112A1 (en) * 2002-06-14 2006-06-15 Patrick Stewart Streaming or real-time data television programming
US20060098641A1 (en) * 2003-03-05 2006-05-11 Samsung Electronics Co., Ltd. Method and apparatus for detecting format of closed caption data automatically and displaying the caption data
US20060149781A1 (en) * 2004-12-30 2006-07-06 Massachusetts Institute Of Technology Techniques for relating arbitrary metadata to media files

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11358063B2 (en) 2020-03-06 2022-06-14 International Business Machines Corporation Generation of audience appropriate content

Also Published As

Publication number Publication date
EP2479982A1 (en) 2012-07-25
EP2080368A4 (en) 2010-01-06
US8045054B2 (en) 2011-10-25
WO2008032184A3 (en) 2008-06-12
WO2008032184A2 (en) 2008-03-20
EP2080368A2 (en) 2009-07-22
US20080066138A1 (en) 2008-03-13
WO2008032184A8 (en) 2008-07-24

Similar Documents

Publication Publication Date Title
US8045054B2 (en) Closed captioning language translation
US11463779B2 (en) Video stream processing method and apparatus, computer device, and storage medium
US10244291B2 (en) Authoring system for IPTV network
US6903779B2 (en) Method and system for displaying related components of a media stream that has been transmitted over a computer network
US20160066055A1 (en) Method and system for automatically adding subtitles to streaming media content
US9634880B2 (en) Method for displaying user interface and display device thereof
US20100194979A1 (en) Multi-lingual transmission and delay of closed caption content through a delivery system
US8745683B1 (en) Methods, devices, and mediums associated with supplementary audio information
US20060174315A1 (en) System and method for providing sign language video data in a broadcasting-communication convergence system
KR101192207B1 (en) System for providing real-time subtitles service of many languages for online live broadcasting and method thereof
US8782721B1 (en) Closed captions for live streams
KR20130029055A (en) System for translating spoken language into sign language for the deaf
JP6399725B1 (en) Text content generation device, transmission device, reception device, and program
JP6700957B2 (en) Subtitle data generation device and program
US20100175082A1 (en) System and method for inserting sponsor information into closed caption content of a video signal
EP2574054B1 (en) Method for synchronising subtitles with audio for live subtitling
US10924779B2 (en) Location agnostic media control room and broadcasting facility
KR20180083132A (en) Electronic apparatus, and operating method for the same
CN112616062A (en) Subtitle display method and device, electronic equipment and storage medium
US20050165606A1 (en) System and method for providing a printing capability for a transcription service or multimedia presentation
US20080276289A1 (en) System for video presentations with adjustable display elements
US20100188573A1 (en) Media metadata transportation
US10796089B2 (en) Enhanced timed text in video streaming
KR102214598B1 (en) Contents playing apparatus, and control method thereof
KR20160011158A (en) Screen sharing system and method

Legal Events

Date Code Title Description
AS Assignment

Owner name: ROCKSTAR CONSORTIUM US LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ROCKSTAR BIDCO, LP;REEL/FRAME:032097/0753

Effective date: 20120509

AS Assignment

Owner name: RPX CLEARINGHOUSE LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROCKSTAR CONSORTIUM US LP;ROCKSTAR CONSORTIUM LLC;BOCKSTAR TECHNOLOGIES LLC;AND OTHERS;REEL/FRAME:034924/0779

Effective date: 20150128

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT, IL

Free format text: SECURITY AGREEMENT;ASSIGNORS:RPX CORPORATION;RPX CLEARINGHOUSE LLC;REEL/FRAME:038041/0001

Effective date: 20160226

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION

AS Assignment

Owner name: RPX CORPORATION, CALIFORNIA

Free format text: RELEASE (REEL 038041 / FRAME 0001);ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:044970/0030

Effective date: 20171222

Owner name: RPX CLEARINGHOUSE LLC, CALIFORNIA

Free format text: RELEASE (REEL 038041 / FRAME 0001);ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:044970/0030

Effective date: 20171222