US20130159853A1 - Managing playback of supplemental information - Google Patents
Managing playback of supplemental information
- Publication number
- US20130159853A1
- Authority
- US
- United States
- Prior art keywords
- audio content
- output
- audio information
- supplemental
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
- G06F40/169—Annotation, e.g. comment data or footnotes
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B21/00—Teaching, or communicating with, the blind, deaf or mute
- G09B21/001—Teaching or communicating with blind persons
- G09B21/006—Teaching or communicating with blind persons using audible presentation of the information
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
Definitions
- Traditional printed books, electronic books or other printed media often contain a primary text and additional, supplementary information, such as footnotes, end notes, glossaries and appendices. These items of supplementary information often contain useful or interesting information, such as additional background or explanatory text regarding the primary text, external references, or commentary.
- Audio content may be played back on a wide variety of devices, such as notebook and tablet computers, mobile phones, personal music players, electronic book (“eBook”) readers, and other audio playback devices or devices with audio playback capability.
- FIG. 1 is a block diagram depicting an illustrative embodiment of a computing device configured to implement playback of supplemental audio information;
- FIG. 2 is a schematic block diagram depicting an illustrative operating environment in which a computing device of FIG. 1 may interact with an electronic marketplace to acquire audio content and supplemental information;
- FIG. 3 is an illustrative graphical representation or visualization of audio content including supplemental information;
- FIGS. 4A-4D depict illustrative user interfaces that may be used to facilitate playback of supplemental audio information.
- FIG. 5 is a flow diagram depicting an illustrative routine for playback of supplemental audio information.
- aspects of the present disclosure relate to the output of supplemental audio information on a computing device.
- systems and methods are disclosed for controlling the playback of audio content including one or more items of supplemental information, such as footnotes, endnotes, glossary information, or commentary.
- a user may utilize a computing device such as a personal music player in order to obtain access to audio content including supplemental audio information. While listening to the audio content, the user may receive an indication that supplemental information is available and associated with recently played or upcoming audio content.
- This supplemental information may correspond to information contained within a printed text from which the audio content was created. For example, footnotes, endnotes, glossaries, and appendices may all constitute supplemental information.
- supplemental information may correspond to other information, such as author or editor commentary, or commentary from other users who have purchased the audio content.
- supplemental information may correspond to commentary generated by contacts of a user. Such contacts may be associated with the user via an electronic marketplace used to retrieve audio content, through one or more social networking systems, or through other systems.
- One example for determining contacts of a user is given in U.S. patent application Ser. No. 12/871,733, filed on Aug. 30, 2010, and entitled “CUSTOMIZING CONTENT BASED ON SOCIAL NETWORK INFORMATION,” which is hereby incorporated by reference in its entirety.
- the supplemental information provided with an audio content may or may not be contained within a corresponding printed text.
- the user may enter a command indicating they wish to listen to the supplemental information.
- the command may be received for some period before or after the point at which the supplemental information is available (e.g., the user may be given n seconds during which a command can be entered).
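The acceptance window described above can be sketched in a few lines of Python. This is purely illustrative: the function name and the 5-second default are assumptions, not details taken from the disclosure.

```python
def command_targets_supplement(command_time, supplement_time, window_seconds=5.0):
    """Return True if a user command issued at command_time (seconds into
    playback) falls within the acceptance window around the point at which
    an item of supplemental information becomes available."""
    return abs(command_time - supplement_time) <= window_seconds
```

A playback device might evaluate such a check against the nearest supplemental point whenever a play-supplement command is received.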
- This command may be obtained at the personal music player by any input, such as the press of a button, or, where the personal music player is equipped with a microphone or other audio input, speaking a command.
- the personal music player may then output the supplemental information to the user.
- the personal music player may return to the audio content at the position at which the user entered the command, or a position near that at which the user entered the command.
- the personal music player may not output an indication that supplemental information is available, or may only indicate the availability of supplemental information that would be of particular interest to the user. Preventing indication of every item of supplemental information may be beneficial, for example, where the amount of supplemental information is large or where supplemental information is frequently available.
- a glossary or appendix of an audio content may be provided which contains explanatory information regarding characters, terms, locations, or entities within an audio content. It may not be desirable to indicate the presence of this supplemental information at every mention of those characters, terms, etc., within the audio content.
- the personal music player may be configured to receive a command to play supplemental information at any point within an audio content, and may search for supplemental information corresponding to a word recently recited in the audio content.
- the user may input a command to search for supplemental information.
- the personal music player may detect that the audio recently discussed the character, and locate corresponding supplemental information associated with the character. In this manner, a user listening to audio content may be provided with access to the same reference information given to a reader of text.
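The glossary lookup just described can be sketched as follows; the function and field names are hypothetical, and the recently recited words are assumed to come from a transcript correlated with the audio.

```python
def find_supplement_for_recent_words(recent_words, glossary):
    """Return the glossary entry for the most recently recited word that has
    one, or None if no recent word appears in the glossary (e.g., so the
    device can play an error indication instead)."""
    for word in reversed(list(recent_words)):  # check most recent word first
        entry = glossary.get(word.lower())
        if entry is not None:
            return entry
    return None
```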
- an error message may be played if no supplemental information associated with the current playback position can be found.
- the audio content may be associated with corresponding textual content.
- a personal music player may store an audio content and the corresponding text.
- the corresponding text may be a book from which the audio content is created.
- the text may be a transcript created from the audio content.
- the correlation of audio and text content may be used to provide additional functionality or to further enhance features described above.
- play may continue from a point at or near the point where the user issued a command to play the supplemental information.
- playback may resume at the exact point the command was received, or at some fixed time period prior to that point (e.g., 3 or 5 seconds).
- playback may resume at a point determined at least in part based on the corresponding text. For example, playback may resume at the beginning of the sentence that was being spoken when the user indicated that supplemental information should be played.
- playback may resume at the beginning of a paragraph, a word, or other unit of text.
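The resume-position alternatives above can be sketched as a single policy function. The policy names and the representation of sentence boundaries (a sorted list of start positions in seconds, derived from text correlated with the audio) are assumptions for illustration.

```python
import bisect

def resume_position(command_pos, policy="exact", fixed_offset=3.0,
                    sentence_starts=None):
    """Compute where primary playback resumes after supplemental information
    finishes, given the position at which the play-supplement command arrived."""
    if policy == "exact":
        return command_pos
    if policy == "fixed_offset":
        # Back up a fixed period, e.g., 3 or 5 seconds, but not before zero.
        return max(0.0, command_pos - fixed_offset)
    if policy == "sentence_start" and sentence_starts:
        # Start of the sentence (or paragraph, word, etc.) containing command_pos.
        i = bisect.bisect_right(sentence_starts, command_pos)
        return sentence_starts[i - 1] if i > 0 else 0.0
    raise ValueError("unknown policy: %s" % policy)
```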
- an indicator of supplemental information may be suppressed for supplemental information that has already been played.
- Correlations between audio and text may further be used to enhance the above embodiments.
- a user may request supplemental information regarding any character, term, etc., by issuing a command to play supplemental information.
- the personal music player may use text corresponding to recently played audio to search for such terms in a provided glossary or appendix.
- an item of supplemental information may be associated with multiple positions within a primary audio content.
- supplemental information may be associated with a position within a chapter of an audio content, and also associated with the end of that chapter. In this manner, a user may have multiple opportunities to hear an item of supplemental information.
- an item of supplemental information may be associated with a position within another item of supplemental information.
- a first item of supplemental information may be associated with a position in a primary audio content
- a second item of supplemental information may be associated with a position in the first item of supplemental information.
- the second item of supplemental information may itself have one or more additional items of supplemental information associated with it.
- multiple levels of supplemental information may be provided, each new level associated with a previous level or the primary audio content.
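The layered association described above (items anchored to one or more positions, each within either the primary audio content or another item of supplemental information) can be modeled as follows. The class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    parent_id: str   # "primary", or the id of another supplemental item
    position: float  # seconds into the parent content

@dataclass
class Supplement:
    item_id: str
    anchors: list = field(default_factory=list)

def items_at(supplements, parent_id, position, window=0.5):
    """All supplemental items anchored near `position` within `parent_id`,
    whether the parent is the primary content or another supplement."""
    return [s for s in supplements
            if any(a.parent_id == parent_id and
                   abs(a.position - position) <= window for a in s.anchors)]
```

Because an item may carry several anchors, the same footnote can surface both mid-chapter and again at the chapter's end.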
- indication of the availability of that supplemental information may be suppressed after the first indication that the supplemental information is available. In other embodiments, indication of supplemental information may be suppressed after the supplemental information has been fully played.
- audio content and supplemental information may be stored within data storage of a playback device.
- audio content and/or supplemental information may be stored remote from the playback device, such as on a remote server.
- the playback device may be configured to retrieve audio content and/or supplemental information from the remote server.
- supplemental information associated with audio content may be retrieved at substantially the same time as the audio content.
- a playback device may be configured to retrieve supplemental information periodically. For example, a playback device may query a remote server associated with audio content every n hours in order to determine whether new supplemental information is available.
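The periodic query above can be sketched as a simple elapsed-time check; `fetch_updates` is a hypothetical stand-in for whatever network call the device would make to the remote server.

```python
import time

def poll_if_due(last_poll, interval_hours, fetch_updates, now=None):
    """Call fetch_updates() if at least interval_hours have elapsed since
    last_poll (both epoch seconds); return (new_last_poll, updates_or_None)."""
    now = time.time() if now is None else now
    if now - last_poll >= interval_hours * 3600:
        return now, fetch_updates()
    return last_poll, None
```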
- a playback device may query a remote server for supplemental information associated with a currently played audio content.
- a remote server may be configured to notify a playback device of available supplemental information.
- a user of a playback device may specify types of supplemental information which are desired. For example, a user may specify that supplemental information associated with the author (e.g., footnotes, glossaries, author commentary, etc.) should be played, while supplemental information associated with the publisher (e.g., editor commentary, etc.) should not be played. Further, a user may specify that supplemental information associated with contacts of the user should be played, while supplemental information associated with general users of an electronic marketplace from which the audio content was acquired should not be played.
- a user of a playback device may specify categories of supplemental information which are desired. For example, where audio content has been acquired from an electronic marketplace, the electronic marketplace may categorize items of supplemental information into one or more categories. Examples of such categories include, but are not limited to, “Top Rated,” “Funny,” “Insightful,” “Informative,” and “Interesting.” Illustratively, a user may specify that only supplemental information listed as “Top Rated” or “Funny” should be presented for playback, while other supplemental information should be excluded. In some embodiments, where a user has excluded some types of supplemental information and where that supplemental information is stored remotely from a playback device, it may not be necessary for the playback device to retrieve the remotely stored supplemental information.
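Both preference mechanisms above (by source of the supplemental information and by marketplace-assigned category) amount to a filter over the available items. The field names below are assumptions for illustration.

```python
def filter_supplements(items, allowed_sources=None, allowed_categories=None):
    """Keep only items whose source (e.g., author, publisher, contact) is
    allowed and which carry at least one allowed category; a None constraint
    means "no restriction". Excluded items need not be retrieved at all."""
    kept = []
    for item in items:
        if allowed_sources is not None and item["source"] not in allowed_sources:
            continue
        if allowed_categories is not None and not (
                set(item.get("categories", [])) & set(allowed_categories)):
            continue
        kept.append(item)
    return kept
```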
- any computing device capable of presenting audio content to a user may be used in accordance with the present disclosure.
- a computing device can include, but is not limited to, a laptop, personal computer, a tablet computer, personal digital assistant (PDA), hybrid PDA/mobile phone, mobile phone, electronic book reader, digital media player, integrated components for inclusion in computing devices, appliances, electronic devices for inclusion in vehicles or machinery, gaming devices, set top boxes, electronic devices for inclusion in televisions, and the like.
- These computing devices may be associated with any of a number of visual, tactile, or auditory output devices, and may be associated with a number of devices for user input, including, but not limited to, keyboards, mice, trackballs, trackpads, joysticks, input tablets, trackpoints, touch screens, remote controls, game controllers, motion detectors and the like.
- audio content can refer to any data containing audio information that can be directly or indirectly accessed by a user, including, but not limited to, multi-media data, digital video, audio data, electronic books (“eBooks”), electronic documents, electronic publications, computer-executable code, portions of the above, and the like.
- references to textual content or other visually displayed content should be understood to include any form of visual or tactile content, including text, images, charts, graphs, slides, maps, Braille, embossed images, or any other content capable of being displayed in a visual or tactile medium.
- Content may be stored on a computing device, may be generated by the computing device, or may be streamed across a network for display or output on the computing device.
- content may be obtained from any of a number of sources, including a network content provider, a local data store, computer readable media, a content generation algorithm (e.g., a text-to-speech algorithm) running remotely or locally, or through user input (e.g., text entered by a user).
- Content may be obtained, stored, or delivered from any one or combination of sources as described above.
- FIG. 1 is a block diagram illustrating an embodiment of a computing device 100 configured to implement playback of content including supplemental information.
- the computing device 100 may have one or more processors 102 in communication with a network interface 104 , a display interface 106 , a computer readable medium drive 108 , and an input/output device interface 110 , all of which communicate with one another by way of a communication bus.
- the network interface 104 may provide connectivity to one or more networks or computing systems.
- the processor(s) 102 may thus receive information and instructions from other computing systems or services via a network.
- the processor(s) 102 may also communicate to and from memory 112 and further provide output information or receive input information via the display interface 106 and/or the input/output device interface 110 .
- the input/output device interface 110 may accept input from one or more input devices 124 , including, but not limited to, keyboards, mice, trackballs, trackpads, joysticks, input tablets, trackpoints, touch screens, remote controls, game controllers, heart rate monitors, velocity sensors, voltage or current sensors, motion detectors, transponders, global positioning systems, radio frequency identification tags, or any other input device capable of obtaining a position or magnitude value from a user.
- the input/output interface may also provide output via one or more output devices 122 , including, but not limited to, one or more speakers or any of a variety of digital or analog audio capable output ports, including, but not limited to, headphone jacks, ¼-inch jacks, XLR jacks, stereo jacks, Bluetooth links, RCA jacks, optical ports or USB ports, as described above.
- the display interface 106 may be associated with any number of visual or tactile interfaces incorporating any of a number of active or passive display technologies (e.g., electronic-ink, LCD, LED or OLED, CRT, projection, holographic imagery, three dimensional imaging systems, etc.) or technologies for the display of Braille or other tactile information.
- Memory 112 may include computer program instructions that the processor(s) 102 executes in order to implement one or more embodiments.
- the memory 112 generally includes RAM, ROM and/or other persistent or non-transitory computer-readable storage media.
- Memory 112 may store a presentation module 114 for managing the output of information to a display and/or other output device(s) 122 via the display interface 106 and/or input/output interface 110 .
- the memory 112 may further include a user control module 116 for managing and obtaining user input information received from one or more input devices 124 via the input/output device interface 110 .
- the user control module 116 may additionally interpret user input information in order to initiate playback of supplemental information.
- Memory 112 may further store a supplemental information module 118 .
- the supplemental information module 118 may detect the presence of supplemental information associated with a recently played or upcoming item of audio content (e.g., output via the presentation module 114 ). The supplemental information module 118 may cause the presentation module 114 to output an indication that the supplemental information is available.
- This indication may correspond to any type of output possible via the output devices 122 .
- the indication may correspond to audio output via a speaker or headphone. This audio output may include a tone, bell, voice indication, or other sound indicating the presence and availability of additional content.
- the indication may correspond to visual output via display interface 106 .
- the indication may correspond to a haptic indication, such as a vibration caused by a haptic feedback device included with the display interface 106 or otherwise provided.
- the supplemental information module 118 may receive and interpret user input via the user control module 116 to determine whether to cause playback of supplemental information.
- the supplemental information module 118 may cause playback of supplemental information associated with the current point of audio playback via the output device 122 .
- the supplemental information module 118 may interpret commands received during the playback of supplemental information.
- supplemental information itself may be associated with one or more items of supplemental information.
- the supplemental information module 118 may interpret input received during playback of a first item of supplemental information to indicate a command to play a second item of supplemental information associated with the first supplemental information.
- a user may issue a command to stop playback of an item of supplemental information.
- the supplemental information module 118 may interpret a received input as such a command and cause the presentation module 114 to return to playback for the previous item of content (i.e., the supplemental information or primary audio content played before playback of a current item of audio content).
- FIG. 2 is a schematic block diagram depicting an illustrative operating environment in which a computing device of FIG. 1 may interact with an electronic marketplace 150 to acquire audio content and supplemental information.
- the operating environment includes one or more user computing devices 100 , such as a computing device of FIG. 1 , in communication with the electronic marketplace 150 via a network 130 .
- the network 130 may be any wired network, wireless network or combination thereof.
- the network 130 may be a personal area network, local area network, wide area network, cable network, satellite network, cellular telephone network, or combination thereof.
- the network 130 is the Internet. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art of computer communications and thus, need not be described in more detail herein.
- a user, using his or her user computing device 100 , may communicate with the electronic marketplace 150 regarding audio content. Supplemental information regarding audio content may also be made available by the electronic marketplace 150 .
- a user, utilizing his or her computing device 100 , may browse descriptions of audio content made available by the electronic marketplace 150 .
- a user, utilizing his or her computing device 100 , may acquire desired audio content or supplemental information regarding audio content from the electronic marketplace 150 .
- the electronic marketplace 150 is illustrated in FIG. 2 as operating in a distributed computing environment comprising several computer systems that are interconnected using one or more networks. More specifically, the electronic marketplace 150 may include a marketplace server 156 , a content catalog 152 , a supplemental information catalog 154 , and a distributed computing environment 155 discussed in greater detail below. However, it may be appreciated by those skilled in the art that the electronic marketplace 150 may have fewer or greater components than are illustrated in FIG. 2 . In addition, the electronic marketplace 150 could include various Web services and/or peer-to-peer network configurations. Thus, the depiction of the electronic marketplace 150 in FIG. 2 should be taken as illustrative and not limiting to the present disclosure.
- Any one or more of the marketplace server 156 , the content catalog 152 , and the supplemental information catalog 154 may be embodied in a plurality of components, each executing an instance of the respective marketplace server 156 , content catalog 152 , and supplemental information catalog 154 .
- a server or other computing component implementing any one of the marketplace server 156 , the content catalog 152 , and the supplemental information catalog 154 may include a network interface, memory, processing unit, and computer readable medium drive, all of which may communicate with each other by way of a communication bus.
- the network interface may provide connectivity over the network 130 and/or other networks or computer systems.
- the processing unit may communicate to and from memory containing program instructions that the processing unit executes in order to operate the respective marketplace server 156 , content catalog 152 , and supplemental information catalog 154 .
- the memory may generally include RAM, ROM, other persistent and auxiliary memory, and/or any non-transitory computer-readable media.
- the content catalog 152 and the supplemental information catalog 154 can be implemented by the distributed computing environment 155 .
- the marketplace server 156 or other components of the electronic marketplace 150 may be implemented by the distributed computing environment.
- the entirety of the electronic marketplace 150 may be implemented by the distributed computing environment 155 .
- the distributed computing environment 155 may include a collection of rapidly provisioned and released computing resources hosted in connection with the electronic marketplace 150 or a third party.
- the computing resources may include a number of computing, networking and storage devices in communication with one another.
- the computing devices may correspond to physical computing devices.
- the computing devices may correspond to virtual machine instances implemented by one or more physical computing devices.
- computing devices may correspond to both virtual computing devices and physical computing devices.
- One example of a distributed computing environment is given in U.S. Pat. No. 7,865,586, issued on Jan. 4, 2011 and entitled “Configuring Communications Between Computing Nodes” which is hereby incorporated by reference in its entirety.
- a distributed computing environment may also be referred to as a cloud computing environment.
- the marketplace server 156 may enable browsing and acquisition of audio content and/or supplemental information relating to audio content available from the electronic marketplace 150 . Further, the marketplace server 156 may transmit audio content and/or supplemental information to user computing devices 100 .
- the content catalog 152 may include information on audio content available from the electronic marketplace 150 .
- the supplemental information catalog 154 may include supplemental information available from the electronic marketplace 150 .
- Such supplemental information may include, by way of non-limiting example, supplemental information provided or generated by authors, editors, publishers, users of the electronic marketplace 150 , or other third parties.
- the marketplace server 156 may obtain audio content information for audio content offered by the electronic marketplace 150 , as well as supplemental information offered by the electronic marketplace 150 , and may make such audio content and supplemental information available to a user from a single network resource, such as a Web site. A user may then acquire audio content and/or supplemental information from the electronic marketplace 150 .
- marketplace server 156 may generate one or more user interfaces through which a user, utilizing a user computing device 100 or a distinct computing device, may browse audio content and/or supplemental information made available by the electronic marketplace 150 , submit queries for matching audio content and/or supplemental information, view information and details regarding specific audio content and/or supplemental information, and acquire audio content and/or supplemental information.
- the marketplace server 156 may facilitate the acquisition of the audio content and/or supplemental information.
- the marketplace server 156 may receive payment information from the user computing device 100 or distinct computing device. Further, the marketplace server 156 may transmit the audio content and/or supplemental information to the user computing device 100 .
- the marketplace server 156 may, subsequent to acquisition of an item of audio content, inform a user computing device 100 of newly available supplemental information which is associated with the audio content. In still more embodiments, the marketplace server 156 may enable streaming of audio content and/or supplemental information from the content catalog 152 or the supplemental information catalog 154 to a user computing device 100 .
- a playback device may obtain audio information or supplemental information from additional or alternative sources, such as third party content catalogs or supplemental information catalogs.
- FIG. 3 is an illustrative graphical representation or visualization of audio content including supplemental information.
- the audio content corresponds to the audio book “The Adventures of Tom Sawyer.”
- the primary audio content 204 represents the content of the audio book excluding supplemental information.
- the primary audio content 204 can represent a visual map of audio content, such that the duration of the audio content 204 is displayed from left to right.
- the primary audio content 204 is associated with supplemental information 206 - 214 .
- Each item of supplemental information 206 - 214 is associated with one or more points X 1 -X 6 in either the primary audio content 204 or another item of supplemental information 206 - 214 .
- supplemental information may be associated with a range of points within the audio content 204 , or with a specified duration of audio content 204 .
- supplemental information may include footnotes (supplemental information 206 and 210 ), editor commentary (supplemental information 208 and 212 ), or additional content (supplemental information 214 ). Additional content may correspond to other types of supplemental information described above, such as author commentary or commentary of other readers of the audio content.
- the audio content of the book 202 may be played by a computing device, such as device 100 of FIG. 1 , beginning at the left of 204 and proceeding to the right.
- the computing device 100 may indicate to the user that supplemental information 206 is available for playback.
- the user may input a command that the supplemental information 206 should be played.
- playback of the audio content 202 may temporarily cease, and playback of the supplemental information 206 may begin.
- playback of the audio content 204 may resume.
- playback may resume at or near the point X 1 , such as a set amount of time before X 1 (e.g., 3 seconds), or the beginning of the sentence or paragraph containing X 1 .
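- The resume behavior just described — resuming exactly at X 1 , a set amount of time before it, or at the beginning of the sentence containing it — could be sketched as a small helper. The function name, mode strings, and the use of a sorted list of sentence-start times are assumptions for illustration:

```python
import bisect

def resume_position(stop_point, mode="fixed_offset", offset=3.0, sentence_starts=None):
    """Choose where primary playback resumes after supplemental playback ends.

    mode: "exact"        -> resume at the stop point itself
          "fixed_offset" -> a set amount of time before the stop point (e.g. 3 seconds)
          "sentence"     -> beginning of the sentence containing the stop point,
                           given a sorted list of sentence-start times in seconds
    """
    if mode == "exact":
        return stop_point
    if mode == "fixed_offset":
        return max(0.0, stop_point - offset)
    if mode == "sentence":
        # rightmost sentence start at or before the stop point
        i = bisect.bisect_right(sentence_starts, stop_point) - 1
        return sentence_starts[max(i, 0)]
    raise ValueError(f"unknown mode: {mode}")
```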
- the device 100 can indicate the availability of supplemental information 208 .
- such indication may correspond to a visual indication, audio indication, haptic indication, or any combination thereof.
- the user may be given a period of time (e.g., 10 seconds) in which to command playback of supplemental information. In this example, if the user does not input a command to play supplemental information 208 , playback of the audio content 202 continues.
- the device 100 may indicate the availability of supplemental information 210 , and receive a user command to play the content 210 .
- supplemental information 210 is itself associated with supplemental information 212 .
- the device 100 can indicate availability of supplemental information 212 and receive a command to play the content 212 .
- the device 100 may receive a command from the user to stop playback of supplemental information 212 . In some embodiments, this may cease playback of content 212 and resume playback of content 210 at or near the point X 6 .
- the command received during playback of supplemental information 212 may resume playback of the primary audio content 204 at or near the point X 3 , the last point played of the audio content 204 .
- a point X 4 may be encountered which is associated with a previously played item of supplemental information 210 .
- the device 100 may not indicate the presence of supplemental information 210 at X 4 .
- an indication may be suppressed only if supplemental information 210 was played completely.
- an indication may always be played.
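- The three variants above (suppress any repeat indication, suppress only if the item was played completely, or always indicate) amount to a small policy decision. A hedged sketch, with invented policy names and a play-history mapping assumed for illustration:

```python
def should_indicate(item_id, play_history, policy="suppress_if_completed"):
    """Decide whether to indicate availability at a repeated anchor point.

    play_history maps item_id -> "completed", "partial", or absent (never played).
    policy: "always"                -> always indicate
            "suppress_if_played"    -> suppress after any playback, even partial
            "suppress_if_completed" -> suppress only if the item was played to the end
    """
    state = play_history.get(item_id)
    if policy == "always":
        return True
    if policy == "suppress_if_played":
        return state is None
    if policy == "suppress_if_completed":
        return state != "completed"
    raise ValueError(f"unknown policy: {policy}")
```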
- the device 100 may indicate the presence of supplemental information 214 .
- the user may input a command to the device 100 to indicate a desire to play supplemental information 214 . If this input is received, supplemental information 214 is played. Otherwise, playback of the primary audio content 204 proceeds, until it reaches the end of audio content 204 or until user input is received which causes the audio content 204 to stop playing.
- FIGS. 4A-4C depict an illustrative user interface 300 displayed by a computing device, such as computing device 100 of FIG. 1 , that allows playback and interaction with supplemented audio content.
- the audio content illustratively corresponds to the audio content 204 of FIG. 3 .
- the title of the audio content 204 is displayed as “The Adventures of Tom Sawyer” 301 .
- the user interface 300 contains a number of input controls 302 - 304 , each of which may be selected by a user to display a different aspect of the user interface 300 .
- the input control 302 corresponding to “Now Playing” is currently selected.
- Further input controls 316 - 320 allow various inputs by the user, such as rewinding playback for a period of time with input control 316 , starting and stopping playback with input control 318 (the display of which may alter between play and pause symbols depending on the playback state), and bookmarking a current position with input control 320 .
- the interface includes audio content information 312 , such as a graphic associated with the audio content, title, author, and narrator information, and a chapter indicator 309 that displays the current chapter of the audio content 204 that is ready for playback.
- the interface 300 further includes content indicator 310 that indicates the content of the current chapter, as well as a progress indicator 311, which indicates the position of playback within the currently selected chapter.
- the position of the progress indicator 311 may correspond to position X 1 of FIG. 3 , which is associated with supplemental information 206 .
- the user interface 300 further includes an input control 314 which indicates the availability of the supplemental information 206 .
- the input control 314 may be displayed for a period before or after the position X 1 , to allow a user time to request playback of the supplemental information 206 .
- playback of the audio content 204 may be temporarily ceased, and playback of the supplemental information 206 may begin.
- FIG. 4B depicts the illustrative user interface 300 during playback of an item of supplemental information, such as supplemental information 206 .
- content indicator 310 may be altered to indicate that playback of the primary audio content has been temporarily halted.
- the user interface 300 may include a supplemental information title indicator 402 which describes the currently playing supplemental information.
- a supplemental information indicator 404 may be provided that displays the content and position within the currently playing supplemental information.
- the user interface 300 also includes an input control 406 which allows the user to stop playback of the current supplemental information and return to the primary audio content.
- playback of the primary audio content may resume at the point at which it ceased or a point nearby, such as the beginning of a paragraph or sentence, or a point some period (e.g., 3 seconds) prior.
- an additional input control may be provided to play additional supplemental information associated with the currently playing supplemental information, as is described above.
- FIG. 4C depicts the illustrative user interface 300 after user selection of input control 308 , which is configured to cause display of the supplemental information associated with the currently loaded audio content.
- each input control 506 - 510 is selectable by a user to play the associated supplemental information independent of the primary audio content.
- selection of an item of supplemental information via input controls 502 - 510 may cause the content to be played independent of the audio content.
- selection of supplemental information via input controls 502 - 510 may cause the selected content to be played, and further cause playback of the audio content from a point with which the supplement content is associated.
- the user interface 300 of FIG. 4C may further include an input control 520 which is selectable by a user to display a different aspect of the user interface 300 , such as a portion of the user interface 300 which enables the user to specify preferences regarding supplemental information.
- FIG. 4D depicts the illustrative user interface 300 after user selection of input control 520 , which is configured to cause display of a portion of the user interface 300 which enables a user to specify preferences regarding supplemental information.
- user preferences may be specific to the currently depicted audio content (e.g., “The Adventures of Tom Sawyer”). In other embodiments, user preferences may be specified for all audio content, or for specific sets of audio content.
- the illustrative user interface 300 of FIG. 4D contains user selectable input controls 552 - 558 which enable the user to specify various types of supplemental information which should be provided.
- input control 552 corresponds to author provided supplemental information, which may include, for example, footnotes, endnotes, glossary information, or author commentary.
- Input control 554 corresponds to publisher provided supplemental information, which may include editor commentary or additional information provided by a publisher of audio content.
- Input controls 556 and 558 correspond to supplemental information associated with other users of the electronic marketplace 150 .
- input control 556 corresponds to users associated with the current user of the user interface 300 . In some embodiments, these associated users may correspond to users designated as contacts or “friends” in the electronic marketplace 150 .
- these associated users may correspond to users designated as contacts or “friends” through external systems, such as one or more social networking systems in communication with the electronic marketplace 150 .
- input control 558 corresponds to general users of the electronic marketplace 150 who are not necessarily designated as a contact or “friend” of the current user.
- selection of one or more of the input controls 552 - 558 may cause the computing device 100 to retrieve supplemental information provided by the corresponding source (e.g., the author, the publisher, friends, or other users) during or prior to playback of audio content with which the supplemental information is associated.
- Supplemental information may be retrieved, for example, from the electronic marketplace 150 of FIG. 2 via the network 130 .
- supplemental information may be retrieved from the electronic marketplace 150 regardless of user selection of input controls 552 - 558 .
- retrieval of supplemental information regardless of user selection of input controls 552 - 558 may enable a user to select new supplemental information for playback immediately, without waiting for the new supplemental information to be retrieved.
- user selection of one or more of the input controls 552 - 558 may enable or disable the availability of the associated supplemental information during playback of audio content. For example, de-selection of input control 552 may disable playback of supplemental content generated by the author of the audio content.
- the user interface 300 as depicted in FIG. 4D further includes input controls 560, which may enable a user to specify categories of desired supplemental information. For example, each item of supplemental information may be classified as one or more of "Top Rated," "Funny," "Insightful," "Informative," or "Interesting." In some embodiments, such classification may be accomplished by the operator of the electronic marketplace 150. In further embodiments, such classification may be accomplished by users of the electronic marketplace 150. Illustratively, de-selection of one or more of the input controls 560 may disable playback of correspondingly tagged supplemental information.
- if a particular item of supplemental information is classified by the electronic marketplace 150 as "funny," and the user of the computing device 100 de-selects the input control 560 corresponding to "Funny," then that particular item of supplemental content may not be available during playback of audio content.
- if a particular item of supplemental information is classified by the electronic marketplace 150 as "top rated," and the user of the computing device 100 selects the input control 560 corresponding to "Top Rated," then that particular item of supplemental content may be made available during playback of audio content.
- only specific types of supplemental information may be categorized. For example, supplemental information generated by contacts or general users may be categorized, while supplemental information generated by the author or publisher may not be.
- input controls 560 may apply only to supplemental information which is categorized. In other embodiments, input controls 560 may apply to all supplemental information.
- the user interface 300 as depicted in FIG. 4D further includes input controls 562 and 563 , which may enable a user to specify a limit to the amount of supplemental information provided during playback of audio content.
- input control 562 may enable a user to specify that no more than three items of supplemental information should be presented during any one minute of audio playback.
- input control 563 may enable a user to specify a maximum duration of supplemental information that should be presented. In addition to limits per time period (e.g., minutes, hours, etc.), limits may, for example, be imposed over the course of a paragraph, page, chapter, book, or other measurement period.
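- The limits set via input controls 562 and 563 — a cap on the number of items per measurement period and a cap on total supplemental duration — could be enforced with a small budget object. A sketch under assumed semantics (sliding window over audio time; all names are illustrative):

```python
class SupplementalBudget:
    """Caps how much supplemental information is offered per measurement period.

    max_items: most items indicated per period (e.g. 3 per minute of audio)
    max_seconds: most seconds of supplemental playback allowed per period
    period: length of the sliding measurement period, in seconds of audio time
    """
    def __init__(self, max_items=3, max_seconds=60.0, period=60.0):
        self.max_items, self.max_seconds, self.period = max_items, max_seconds, period
        self._events = []  # (audio_position, duration) of items already offered

    def allow(self, audio_position, duration):
        # keep only events that fall within the current sliding period
        self._events = [(p, d) for p, d in self._events
                        if audio_position - p < self.period]
        if len(self._events) >= self.max_items:
            return False
        if sum(d for _, d in self._events) + duration > self.max_seconds:
            return False
        self._events.append((audio_position, duration))
        return True
```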
- the user may return to the portion of the user interface 300 displayed in FIG. 4C by selection of input control 564.
- a user may provide preferences indicating one or more items, categories, or types of supplemental information that should be presented automatically, without output of an indication and without requiring user input. For example, a user may specify that all “top rated” items of supplemental information should be automatically presented, indicators should be provided for “funny” supplemental information, and no indicator should be provided for supplemental information only marked as “interesting.” As will be appreciated by those skilled in the art, such preferences may be combined to specify that any given item of supplemental information should be automatically presented, indicated for presentation or playback, or not indicated for presentation or playback.
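- The three-way outcome described above — automatically present, indicate for playback, or neither — could be resolved per item from category preferences along these lines. The category tags and precedence rule (auto-play beats an indicator, which beats suppression) are assumptions for illustration:

```python
def presentation_mode(item_categories, auto_play=(), indicate=(), default="suppress"):
    """Map an item's categories to "auto_play", "indicate", or "suppress".

    The most permissive matching preference wins: any auto-play category
    triggers automatic presentation; otherwise any indicate category
    triggers an availability indication; otherwise the default applies.
    """
    cats = set(item_categories)
    if cats & set(auto_play):
        return "auto_play"
    if cats & set(indicate):
        return "indicate"
    return default

# The example preferences from the text: auto-present "top rated",
# indicate "funny", and suppress everything else (e.g. "interesting").
prefs = {"auto_play": ("top_rated",), "indicate": ("funny",)}
```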
- FIG. 5 is a flow diagram depicting an illustrative routine 600 for playback of supplemented audio content.
- the routine 600 may illustratively be implemented by the supplemental information module 118 of the computing device 100 .
- the routine 600 begins at block 602 , which causes the playback of a primary audio content, such as audio content 204 . Playback may begin, for example, in response to user selection of input control 318 of FIGS. 4A-4C .
- the computing device 100 determines whether supplemental information is associated with a current position within the primary audio content.
- supplemental information may be associated with a range of positions within a primary audio content (e.g., with a continuous 10 second range). If supplemental information is not available, playback continues at block 614, described below. If the current playback position is within such a range, and supplemental information is therefore available, the routine 600 proceeds to block 606, which outputs to the user an indication that supplemental information is available.
- this indication may correspond to audio output by the device 100 , such as a tone, bell, voice, or sound, to a visual output, such as the appearance of an input control on a display, or to haptic feedback, such as a vibration of the device 100 .
- the computing device 100 tests whether the user has entered a command to play the detected supplemental information.
- a command may correspond to input via a display device or other input control, such as a physical button on the computing device 100 or an accessory connected to the computing device 100 (e.g., headphones).
- the command may further correspond to a voice command from the user. If a command is not received, playback continues at block 614 , described below.
- routine 600 continues to block 610 , which causes the playback of the supplemental information (and temporarily stops playback of the primary audio content). After playback of the supplemental information is completed, playback of the primary audio content resumes at block 614 .
- routine 600 may also be configured to receive a user command at block 610 to cease the playback of the supplemental information and immediately resume playback of the primary audio content.
- a further instance of routine 600 may be executed at block 610, such that the user may indicate that secondary supplemental audio content should be played.
- each additional instance of block 610 may create an instance of routine 600, such that playback of any configuration of supplemental information may be facilitated.
- playback of the primary audio content resumes.
- playback may be resumed at or near the point at which it was ceased.
- playback may be resumed at a point prior to where playback was ceased, such as the beginning of a previous paragraph.
- the routine 600 tests whether to end playback of the audio content. Playback may be ended, for example, in response to a user command or completion of an item of audio content. If playback is not ended, the routine continues at block 604 , as described above. If playback is ended, the routine 600 may end.
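- The flow of routine 600 (blocks 602 through 614) can be sketched as a loop over a discretized audio timeline. This is a simplified model, not the disclosed implementation: segments stand in for continuous audio, and a callable stands in for the user-command test of block 608:

```python
def play_supplemented(content, supplements, wants_supplement):
    """Simplified sketch of routine 600.

    content: list of primary-audio segment labels, played in order (block 602)
    supplements: dict mapping a segment index to supplemental info at that point
    wants_supplement: callable deciding whether the user commands playback
                      after an availability indication (blocks 606/608)
    Returns the ordered sequence of output events.
    """
    output = []
    for i, segment in enumerate(content):             # block 614: playback continues
        output.append(("audio", segment))
        if i in supplements:                          # block 604: supplement here?
            label = supplements[i]
            output.append(("indicate", label))        # block 606: indicate availability
            if wants_supplement(label):               # block 608: await user command
                output.append(("supplement", label))  # block 610: play supplement
    return output                                     # playback ends with the content
```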
- All of the processes described herein may be embodied in, and fully automated via, software code modules executed by one or more general purpose computers or processors, thus transforming the general purpose computers or processors into specifically configured devices.
- the code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all the methods may alternatively be embodied in specialized computer hardware.
- the components referred to herein may be implemented in hardware, software, firmware or a combination thereof.
Description
- Traditional printed books, electronic books or other printed media (whether in electronic or physical form) often contain a primary text and additional, supplementary information, such as footnotes, end notes, glossaries and appendices. These items of supplementary information often contain useful or interesting information, such as additional background or explanatory text regarding the primary text, external references, or commentary.
- Frequently, printed media are converted into audio format. Generally, this may involve narrating and recording a reading of the printed medium. The resulting audio book or audio content may then be made available to users. Audio content may be played back on a wide variety of devices, such as notebook and tablet computers, mobile phones, personal music players, electronic book (“eBook”) readers, and other audio playback devices or devices with audio playback capability.
- The foregoing aspects and many of the attendant advantages of the present disclosure will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
FIG. 1 is a block diagram depicting an illustrative embodiment of a computing device configured to implement playback of supplemental audio information;
FIG. 2 is a schematic block diagram depicting an illustrative operating environment in which a computing device of FIG. 1 may interact with an electronic marketplace to acquire audio content and supplemental information;
FIG. 3 is an illustrative graphical representation or visualization of audio content including supplemental information;
FIGS. 4A-4D depict illustrative user interfaces that may be used to facilitate playback of supplemental audio information; and
FIG. 5 is a flow diagram depicting an illustrative routine for playback of supplemental audio information.
- Generally described, aspects of the present disclosure relate to the output of supplemental audio information on a computing device. Specifically, systems and methods are disclosed for controlling the playback of audio content including one or more items of supplemental information, such as footnotes, endnotes, glossary information, or commentary. For example, a user may utilize a computing device such as a personal music player in order to obtain access to audio content including supplemental audio information. While listening to the audio content, the user may receive an indication that supplemental information is available and associated with recently played or upcoming audio content. This supplemental information may correspond to information contained within a printed text from which the audio content was created. For example, footnotes, endnotes, glossaries, and appendices may all constitute supplemental information. Further, supplemental information may correspond to other information, such as author or editor commentary, or commentary from other users who have purchased the audio content. Still further, supplemental information may correspond to commentary generated by contacts of a user. Such contacts may be associated with the user via an electronic marketplace used to retrieve audio content, through one or more social networking systems, or through other systems. One example for determining contacts of a user is given in U.S. patent application Ser. No. 12/871733, filed on Aug. 30, 2010 and entitled "CUSTOMIZING CONTENT BASED ON SOCIAL NETWORK INFORMATION" which is hereby incorporated by reference in its entirety.
- The supplemental information provided with an audio content may or may not be contained within a corresponding printed text. After receiving an indication that supplemental information is available, the user may enter a command indicating they wish to listen to the supplemental information. In some embodiments, the command may be received for some period before or after the point at which the supplemental information is available (e.g., the user may be given n seconds during which a command can be entered). This command may be obtained at the personal music player by any input, such as the press of a button, or, where the personal music player is equipped with a microphone or other audio input, speaking a command. After receiving the command, the personal music player may then output the supplemental information to the user. Illustratively, after playing the supplemental information, the personal music player may return to the audio content at the position at which the user entered the command, or a position near that at which the user entered the command.
- As will be described below, various embodiments may be used exclusively or in combination with the illustrative example described above. For example, in one embodiment, the personal music player may not output an indication that supplemental information is available, or may only indicate the availability of supplemental information that would be of particular interest to the user. Preventing indication of every item of supplemental information may be beneficial, for example, where the amount of supplemental information is large or where supplemental information is frequently available. Illustratively, a glossary or appendix of an audio content may be provided which contains explanatory information regarding characters, terms, locations, or entities within an audio content. It may not be desirable to indicate the presence of this supplemental information at every mention of those characters, terms, etc., within the audio content. Instead, the personal music player may be configured to receive a command to play supplemental information at any point within an audio content, and may search for supplemental information corresponding to a word recently recited in the audio content. Illustratively, if playback of an audio content mentions a character name with which the user is not familiar, the user may input a command to search for supplemental information. The personal music player may detect that the audio recently discussed the character, and locate corresponding supplemental information associated with the character. In this manner, a user listening to audio content may be provided with access to the same reference information given to a reader of the text. In these embodiments, an error indication may be played if no supplemental information associated with the current playback position can be found.
- In some embodiments, the audio content may be associated with corresponding textual content. For example, a personal music player may store an audio content and the corresponding text. In some instances, the corresponding text may be a book from which the audio content is created. In other instances, the text may be a transcript created from the audio content. One example of synchronization of textual and audio content is given in U.S. patent application Ser. No. 13/070,313, filed on Mar. 23, 2011 and entitled "SYSTEMS AND METHODS FOR SYNCHRONIZING DIGITAL CONTENT" which is hereby incorporated by reference in its entirety.
- In these embodiments, the correlation of audio and text content may be used to provide additional functionality or to further enhance features described above. For example, as described above, when playback of an item of supplemental information has concluded, play may continue from a point at or near the point where the user issued a command to play the supplemental information. Illustratively, playback may resume at the exact point the command was received, or at some fixed time period prior to that point (e.g., 3 or 5 seconds). However, in embodiments where information regarding the text corresponding to the played audio is known, playback may resume at a point determined at least in part based on the corresponding text. For example, playback may resume at the point in the audio content just prior to the sentence spoken in which the user indicated supplemental information should be played. In other embodiments, playback may resume at the beginning of a paragraph, a word, or other unit of text. In these embodiments, an indicator of supplemental information may be suppressed for supplemental information that has already been played.
- Correlations between audio and text may further be used to enhance the above embodiments. For example, in embodiments where a user may request supplemental information regarding any character, term, etc., by issuing a command to play supplemental information, the personal music player may use text corresponding to recently played audio to search for such terms in a provided glossary or appendix.
- In some embodiments, an item of supplemental information may be associated with multiple positions within a primary audio content. For example, supplemental information may be associated with a position within a chapter of an audio content, and also associated with the end of that chapter. In this manner, a user may have multiple opportunities to hear an item of supplemental information. Further, an item of supplemental information may be associated with a position within another item of supplemental information. For example, a first item of supplemental information may be associated with a position in a primary audio content, and a second item of supplemental information may be associated with a position in the first item of supplemental information. The second item of supplemental information may itself have one or more additional items of supplemental information associated with it. As such, multiple levels of supplemental information may be provided, each new level associated with a previous level or the primary audio content. In embodiments where associations of supplemental information would cause the supplemental information to be available multiple times within a playback of audio content, indication of the availability of that supplemental information may be suppressed after the first indication that the supplemental information is available. In other embodiments, indication of supplemental information may be suppressed after the supplemental information has been fully played.
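- The multiple levels of supplemental information described above naturally form a tree, with playback descending into each nested item and returning to its parent. A sketch of that depth-first traversal, using FIG. 3's nesting; the function and the mapping structure are illustrative assumptions:

```python
def play_nested(item, children, listen_all=True, depth=0, log=None):
    """Depth-first playback of nested supplemental information.

    children maps an item to the supplemental items anchored inside it; each
    level of nesting recursively offers its own supplements, mirroring how
    supplemental information 212 is anchored within supplemental information 210.
    Returns a log of (nesting_depth, item) pairs in playback order.
    """
    log = [] if log is None else log
    log.append((depth, item))
    if listen_all:
        for child in children.get(item, []):
            play_nested(child, children, listen_all, depth + 1, log)
    return log

# FIG. 3's structure: 206, 210, and 214 hang off the primary content,
# and 212 is anchored inside 210.
tree = {"primary": ["206", "210", "214"], "210": ["212"]}
```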
- In some embodiments, audio content and supplemental information may be stored within data storage of a playback device. In other embodiments, audio content and/or supplemental information may be stored remote from the playback device, such as on a remote server. Illustratively, the playback device may be configured to retrieve audio content and/or supplemental information from the remote server. In some embodiments, supplemental information associated with audio content may be retrieved at substantially the same time as the audio content. In other embodiments, a playback device may be configured to retrieve supplemental information periodically. For example, a playback device may query a remote server associated with audio content every n hours in order to determine whether new supplemental information is available. In further embodiments, a playback device may query a remote server for supplemental information associated with a currently played audio content. In still other embodiments, a remote server may be configured to notify a playback device of available supplemental information.
- In some embodiments, a user of a playback device may specify types of supplemental information which are desired. For example, a user may specify that supplemental information associated with the author (e.g., footnotes, glossaries, author commentary, etc.) should be played, while supplemental information associated with the publisher (e.g., editor commentary, etc.) should not be played. Further, a user may specify that supplemental information associated with contacts of the user should be played, while supplemental information associated with general users of an electronic marketplace from which the audio content was acquired should not be played.
- In further embodiments, a user of a playback device may specify categories of supplemental information which are desired. For example, where audio content has been acquired from an electronic marketplace, the electronic marketplace may categorize items of supplemental information into one or more categories. Examples of such categories include, but are not limited to, “Top Rated,” “Funny,” “Insightful,” “Informative,” and “Interesting.” Illustratively, a user may specify that only supplemental information listed as “Top Rated” or “Funny” should be presented for playback, while other supplemental information should be excluded. In some embodiments, where a user has excluded some types of supplemental information and where that supplemental information is stored remotely from a playback device, it may not be necessary for the playback device to retrieve the remotely stored supplemental information.
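- The note above that excluded supplemental information need not be retrieved suggests filtering at the catalog level, before download. A hedged sketch of that split; the (id, categories) pair format is an assumption about how a remote catalog might describe its entries:

```python
def select_for_retrieval(items, allowed_categories):
    """Split catalog entries into those to retrieve and those to skip.

    items: iterable of (item_id, categories) pairs. An item is retrieved if it
    carries any allowed category, so supplemental information the user has
    excluded never needs to be downloaded to the playback device.
    """
    allowed = set(allowed_categories)
    keep, skip = [], []
    for item_id, categories in items:
        (keep if allowed & set(categories) else skip).append(item_id)
    return keep, skip
```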
- Although the preceding description refers to a personal music player, any computing device capable of presenting audio content to a user may be used in accordance with the present disclosure. Such a computing device can include, but is not limited to, a laptop, personal computer, a tablet computer, personal digital assistant (PDA), hybrid PDA/mobile phone, mobile phone, electronic book reader, digital media player, integrated components for inclusion in computing devices, appliances, electronic devices for inclusion in vehicles or machinery, gaming devices, set top boxes, electronic devices for inclusion in televisions, and the like. These computing devices may be associated with any of a number of visual, tactile, or auditory output devices, and may be associated with a number of devices for user input, including, but not limited to, keyboards, mice, trackballs, trackpads, joysticks, input tablets, trackpoints, touch screens, remote controls, game controllers, motion detectors and the like.
- In addition, audio content can refer to any data containing audio information that can be directly or indirectly accessed by a user, including, but not limited to, multi-media data, digital video, audio data, electronic books (“eBooks”), electronic documents, electronic publications, computer-executable code, portions of the above, and the like. References to textual content or other visually displayed content should be understood to include any form of visual or tactile content, including text, images, charts, graphs, slides, maps, Braille, embossed images, or any other content capable of being displayed in a visual or tactile medium. Content may be stored on a computing device, may be generated by the computing device, or may be streamed across a network for display or output on the computing device. Moreover, content may be obtained from any of a number of sources, including a network content provider, a local data store, computer readable media, a content generation algorithm (e.g., a text-to-speech algorithm) running remotely or locally, or through user input (e.g., text entered by a user). Content may be obtained, stored, or delivered from any one or combination of sources as described above.
FIG. 1 is a block diagram illustrating an embodiment of a computing device 100 configured to implement playback of content including supplemental information. The computing device 100 may have one or more processors 102 in communication with a network interface 104, a display interface 106, a computer readable medium drive 108, and an input/output device interface 110, all of which communicate with one another by way of a communication bus. The network interface 104 may provide connectivity to one or more networks or computing systems. The processor(s) 102 may thus receive information and instructions from other computing systems or services via a network. The processor(s) 102 may also communicate to and from memory 112 and further provide output information or receive input information via the display interface 106 and/or the input/output device interface 110. The input/output device interface 110 may accept input from one or more input devices 124, including, but not limited to, keyboards, mice, trackballs, trackpads, joysticks, input tablets, trackpoints, touch screens, remote controls, game controllers, heart rate monitors, velocity sensors, voltage or current sensors, motion detectors, transponders, global positioning systems, radio frequency identification tags, or any other input device capable of obtaining a position or magnitude value from a user. The input/output interface may also provide output via one or more output devices 122, including, but not limited to, one or more speakers or any of a variety of digital or analog audio capable output ports, including, but not limited to, headphone jacks, ¼ inch jacks, XLR jacks, stereo jacks, Bluetooth links, RCA jacks, optical ports or USB ports, as described above.
The display interface 106 may be associated with any number of visual or tactile interfaces incorporating any of a number of active or passive display technologies (e.g., electronic-ink, LCD, LED or OLED, CRT, projection, holographic imagery, three dimensional imaging systems, etc.) or technologies for the display of Braille or other tactile information.
Memory 112 may include computer program instructions that the processor(s) 102 execute in order to implement one or more embodiments. The memory 112 generally includes RAM, ROM and/or other persistent or non-transitory computer-readable storage media. Memory 112 may store a presentation module 114 for managing the output of information to a display and/or other output device(s) 122 via the display interface 106 and/or input/output interface 110. The memory 112 may further include a user control module 116 for managing and obtaining user input information received from one or more input devices 124 via the input/output device interface 110. In one embodiment, the user control module 116 may additionally interpret user input information in order to initiate playback of supplemental information. Memory 112 may further store a supplemental information module 118. In one embodiment, the supplemental information module 118 may detect the presence of supplemental information associated with a recently played or upcoming item of audio content (e.g., output via the presentation module 114). The supplemental information module 118 may cause the presentation module 114 to output an indication that the supplemental information is available. This indication may correspond to any type of output possible via the output devices 122. For example, the indication may correspond to audio output via a speaker or headphone. This audio content may include a tone, bell, voice indication, or other sound indicating the presence and availability of additional content. In addition, the indication may correspond to visual output via the display interface 106. Still further, the indication may correspond to a haptic indication, such as a vibration caused by a haptic feedback device included with the display interface 106 or otherwise provided. - In addition, the
supplemental information module 118 may receive and interpret user input via the user control module 116 to determine whether to cause playback of supplemental information. When a command to play supplemental information is received, the supplemental information module 118 may cause playback of supplemental information associated with the current point of audio playback via the output device 122. Still further, the supplemental information module 118 may interpret commands received during the playback of supplemental information. As described above, supplemental information itself may be associated with one or more items of supplemental information. Illustratively, the supplemental information module 118 may interpret input received during playback of a first item of supplemental information to indicate a command to play a second item of supplemental information associated with the first supplemental information. In some embodiments, a user may issue a command to stop playback of an item of supplemental information. The supplemental information module 118 may interpret a received input as such a command and cause the presentation module 114 to return to playback of the previous item of content (i.e., the supplemental information or primary audio content played before playback of a current item of audio content). -
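The return-to-previous behavior described above — descending into nested supplemental information and unwinding back to the interrupted item — can be modeled with a simple playback stack. This is only an illustrative sketch; the class and method names, and the numeric positions, are assumptions rather than part of the disclosure:

```python
class PlaybackStack:
    """Tracks interrupted items so playback can unwind to the previous one.

    Each stack entry is (item_name, position): the position at which that
    item was interrupted to begin a nested item of supplemental information.
    """

    def __init__(self, primary_content: str):
        self._stack = [(primary_content, 0.0)]

    @property
    def current(self) -> str:
        return self._stack[-1][0]

    def descend(self, position: float, supplemental_item: str) -> None:
        """Interrupt the current item at `position` and play a nested item."""
        name, _ = self._stack.pop()
        self._stack.append((name, position))          # remember where we stopped
        self._stack.append((supplemental_item, 0.0))  # nested item starts fresh

    def stop_current(self) -> tuple:
        """Stop the current item; return the (item, position) to resume."""
        self._stack.pop()
        return self._stack[-1]

# Primary content interrupted at 700.0 s by a first item of supplemental
# information, which is itself interrupted at 5.0 s by a second item.
stack = PlaybackStack("primary audio content 204")
stack.descend(700.0, "supplemental information 210")
stack.descend(5.0, "supplemental information 212")
```

Calling `stack.stop_current()` at this point returns `("supplemental information 210", 5.0)`, i.e., playback resumes the first item of supplemental information at the point it was interrupted, matching the module's return-to-previous-item behavior.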
FIG. 2 is a schematic block diagram depicting an illustrative operating environment in which a computing device of FIG. 1 may interact with an electronic marketplace 150 to acquire audio content and supplemental information. As illustrated in FIG. 2, the operating environment includes one or more user computing devices 100, such as a computing device of FIG. 1, in communication with the electronic marketplace 150 via a network 130. - Those skilled in the art will appreciate that the
network 130 may be any wired network, wireless network or combination thereof. In addition, the network 130 may be a personal area network, local area network, wide area network, cable network, satellite network, cellular telephone network, or combination thereof. In the illustrated embodiment, the network 130 is the Internet. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art of computer communications and thus, need not be described in more detail herein. - Accordingly, a user, using his or her
user computing device 100, may communicate with the electronic marketplace 150 regarding audio content. Supplemental information regarding audio content may also be made available by the electronic marketplace 150. In one embodiment, a user, utilizing his or her computing device 100, may browse descriptions of audio content made available by the electronic marketplace 150. In another embodiment, a user, utilizing his or her computing device 100, may acquire desired audio content or supplemental information regarding audio content from the electronic marketplace 150. - The
electronic marketplace 150 is illustrated in FIG. 2 as operating in a distributed computing environment comprising several computer systems that are interconnected using one or more networks. More specifically, the electronic marketplace 150 may include a marketplace server 156, a content catalog 152, a supplemental information catalog 154, and a distributed computing environment 155 discussed in greater detail below. However, it may be appreciated by those skilled in the art that the electronic marketplace 150 may have fewer or greater components than are illustrated in FIG. 2. In addition, the electronic marketplace 150 could include various Web services and/or peer-to-peer network configurations. Thus, the depiction of the electronic marketplace 150 in FIG. 2 should be taken as illustrative and not limiting to the present disclosure. - Any one or more of the
marketplace server 156, the content catalog 152, and the supplemental information catalog 154 may be embodied in a plurality of components, each executing an instance of the respective marketplace server 156, content catalog 152, and supplemental information catalog 154. A server or other computing component implementing any one of the marketplace server 156, the content catalog 152, and the supplemental information catalog 154 may include a network interface, memory, processing unit, and computer readable medium drive, all of which may communicate with each other by way of a communication bus. The network interface may provide connectivity over the network 130 and/or other networks or computer systems. The processing unit may communicate to and from memory containing program instructions that the processing unit executes in order to operate the respective marketplace server 156, content catalog 152, and supplemental information catalog 154. The memory may generally include RAM, ROM, other persistent and auxiliary memory, and/or any non-transitory computer-readable media. - In this illustrative example, the
content catalog 152 and the supplemental information catalog 154 can be implemented by the distributed computing environment 155. In addition, in some embodiments, the marketplace server 156 or other components of the electronic marketplace 150 may be implemented by the distributed computing environment. In some embodiments, the entirety of the electronic marketplace 150 may be implemented by the distributed computing environment 155. - The distributed
computing environment 155 may include a collection of rapidly provisioned and released computing resources hosted in connection with the electronic marketplace 150 or a third party. The computing resources may include a number of computing, networking and storage devices in communication with one another. In some embodiments, the computing devices may correspond to physical computing devices. In other embodiments, the computing devices may correspond to virtual machine instances implemented by one or more physical computing devices. In still other embodiments, computing devices may correspond to both virtual computing devices and physical computing devices. One example of a distributed computing environment is given in U.S. Pat. No. 7,865,586, issued on Jan. 4, 2011 and entitled "Configuring Communications Between Computing Nodes," which is hereby incorporated by reference in its entirety. A distributed computing environment may also be referred to as a cloud computing environment. - With further reference to
FIG. 2, illustrative components of the electronic marketplace 150 will now be discussed. The marketplace server 156 may enable browsing and acquisition of audio content and/or supplemental information relating to audio content available from the electronic marketplace 150. Further, the marketplace server 156 may transmit audio content and/or supplemental information to user computing devices 100. - The
content catalog 152 may include information on audio content available from the electronic marketplace 150. The supplemental information catalog 154 may include supplemental information available from the electronic marketplace 150. Such supplemental information may include, by way of non-limiting example, supplemental information provided or generated by authors, editors, publishers, users of the electronic marketplace 150, or other third parties. Accordingly, the marketplace server 156 may obtain audio content information for audio content offered by the electronic marketplace 150, as well as supplemental information offered by the electronic marketplace 150, and may make such audio content and supplemental information available to a user from a single network resource, such as a Web site. A user may then acquire audio content and/or supplemental information from the electronic marketplace 150. - Illustratively,
marketplace server 156 may generate one or more user interfaces through which a user, utilizing a user computing device 100 or a distinct computing device, may browse audio content and/or supplemental information made available by the electronic marketplace 150, submit queries for matching audio content and/or supplemental information, view information and details regarding specific audio content and/or supplemental information, and acquire audio content and/or supplemental information. - After the user selects desired audio content and/or supplemental information from the
electronic marketplace 150, the marketplace server 156 may facilitate the acquisition of the audio content and/or supplemental information. In this regard, the marketplace server 156 may receive payment information from the user computing device 100 or distinct computing device. Further, the marketplace server 156 may transmit the audio content and/or supplemental information to the user computing device 100. - In some embodiments, the
marketplace server 156 may, subsequent to acquisition of an item of audio content, inform a user computing device 100 of newly available supplemental information which is associated with the audio content. In still more embodiments, the marketplace server 156 may enable streaming of audio content and/or supplemental information from the content catalog 152 or the supplemental information catalog 154 to a user computing device 100. - Though described herein with reference to catalogs associated with an
electronic marketplace 150, in some embodiments, a playback device may obtain audio information or supplemental information from additional or alternative sources, such as third party content catalogs or supplemental information catalogs. -
FIG. 3 is an illustrative graphical representation or visualization of audio content including supplemental information. As shown in FIG. 3, the audio content corresponds to the audio book "The Adventures of Tom Sawyer." The primary audio content 204 represents the content of the audio book excluding supplemental information. Illustratively, the primary audio content 204 can represent a visual map of audio content, such that the duration of the audio content 204 is displayed from left to right. As also shown in FIG. 3, the primary audio content 204 is associated with supplemental information 206-214. Each item of supplemental information 206-214 is associated with one or more points X1-X6 in either the primary audio content 204 or another item of supplemental information 206-214. Though described herein with reference to specific points within the audio content 204, in some embodiments, supplemental information may be associated with a range of points within the audio content 204, or with a specified duration of audio content 204. As discussed above, supplemental information may include footnotes (supplemental information 206 and 210), editor commentary (supplemental information 208 and 212), or additional content (supplemental information 214). Additional content may correspond to other types of supplemental information described above, such as author commentary or commentary of other readers of the audio content. - An illustrative user interaction with the
audio content 202 will now be described with reference to FIG. 3. Illustratively, the audio content of the book 202 may be played by a computing device, such as device 100 of FIG. 1, beginning at the left of the primary audio content 204 and proceeding to the right. At point X1, the computing device 100 may indicate to the user that supplemental information 206 is available for playback. The user may input a command that the supplemental information 206 should be played. At this point, playback of the audio content 202 may temporarily cease, and playback of the supplemental information 206 may begin. After completion of playback of supplemental information 206, playback of the audio content 204 may resume. As discussed above, playback may resume at or near the point X1, such as a set amount of time before X1 (e.g., 3 seconds), or the beginning of the sentence or paragraph containing X1. - As playback proceeds to X2, the
device 100 can indicate the availability of supplemental information 208. As described above, such indication may correspond to a visual indication, audio indication, haptic indication, or any combination thereof. Also as described above, the user may be given a period of time (e.g., 10 seconds) in which to command playback of supplemental information. In this example, if the user does not input a command to play supplemental information 208, playback of the audio content 202 continues. - At point X3, the
device 100 may indicate the availability of supplemental information 210, and receive a user command to play the content 210. As displayed in FIG. 3, supplemental information 210 is itself associated with supplemental information 212. At the point X6 during playback of the supplemental information 210, the device 100 can indicate availability of supplemental information 212 and receive a command to play the content 212. During playback of supplemental information 212, the device 100 may receive a command from the user to stop playback of supplemental information 212. In some embodiments, this may cease playback of content 212 and resume playback of content 210 at or near the point X6. In other embodiments, the command received during playback of supplemental information 212 may resume playback of the primary audio content 204 at or near the point X3, the last point played of the audio content 204. - As playback of the primary audio content continues, a point X4 may be encountered which is associated with a previously played item of
supplemental information 210. In some embodiments, if the supplemental information 210 has already been played, the device 100 may not indicate the presence of supplemental information 210 at X4. In other embodiments, an indication may be suppressed only if supplemental information 210 was played completely. In still other embodiments, an indication may always be played. - At point X5, the
device 100 may indicate the presence of supplemental information 214. As described above, the user may input a command to the device 100 to indicate a desire to play supplemental information 214. If this input is received, supplemental information 214 is played. Otherwise, playback of the primary audio content 204 proceeds until it reaches the end of the audio content 204 or until user input is received which causes the audio content 204 to stop playing. -
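The arrangement just walked through — items of supplemental information attached at points within the primary content or within other items — can be captured in a small data model. The following sketch is a partial mirror of FIG. 3 and is purely illustrative; the class names, field names, and numeric positions are invented for this example:

```python
from dataclasses import dataclass, field

@dataclass
class SupplementalItem:
    """One item of supplemental information (footnote, commentary, etc.)."""
    item_id: str
    kind: str
    # Nested items keyed by position (seconds) within this item's playback.
    children: dict = field(default_factory=dict)

@dataclass
class AudioContent:
    """Primary audio content with supplemental items keyed by position."""
    title: str
    duration: float
    supplements: dict = field(default_factory=dict)

# Partial mirror of FIG. 3: footnotes at X1 and X3, editor commentary at
# X2, and a nested item of editor commentary at X6 inside the X3 footnote.
note_210 = SupplementalItem("210", "footnote")
note_210.children[5.0] = SupplementalItem("212", "editor commentary")   # X6
book = AudioContent("The Adventures of Tom Sawyer", duration=3600.0)
book.supplements[120.0] = SupplementalItem("206", "footnote")           # X1
book.supplements[300.0] = SupplementalItem("208", "editor commentary")  # X2
book.supplements[700.0] = note_210                                      # X3
```

Single positions could equally be ranges (e.g., `(start, end)` tuples) to model the range-of-positions embodiments mentioned above.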
FIGS. 4A-4C depict an illustrative user interface 300 displayed by a computing device, such as computing device 100 of FIG. 1, that allows playback and interaction with supplemented audio content. In these examples, the audio content illustratively corresponds to the audio content 204 of FIG. 3. The title of the audio content 204 is displayed as "The Adventures of Tom Sawyer" 301. The user interface 300 contains a number of input controls 302-304, each of which may be selected by a user to display a different aspect of the user interface 300. As illustrated in FIG. 4A, the input control 302, corresponding to "Now Playing," is currently selected. Further input controls 316-320 allow various inputs by the user, such as rewinding playback for a period of time with input control 316, starting and stopping playback with input control 318 (the display of which may alternate between play and pause symbols depending on the playback state), and bookmarking a current position with input control 320. The interface includes audio content information 312, such as a graphic associated with the audio content, title, author, and narrator information, and a chapter indicator 309 that displays the current chapter of the audio content 204 that is ready for playback. - The
interface 300 further includes a content indicator 310 that indicates the content of the current chapter, as well as a progress indicator 311, which indicates the position of playback within the currently selected chapter. Illustratively, the position of the progress indicator 311 may correspond to position X1 of FIG. 3, which is associated with supplemental information 206. As such, the user interface 300 further includes an input control 314 which indicates the availability of the supplemental information 206. As described above, the input control 314 may be displayed for a period before or after the position X1, to allow a user time to request playback of the supplemental information 206. After selection of the input control 314, playback of the audio content 204 may be temporarily ceased, and playback of the supplemental information 206 may begin. -
FIG. 4B depicts the illustrative user interface 300 during playback of an item of supplemental information, such as supplemental information 206. As shown in FIG. 4B, the content indicator 310 may be altered to indicate that playback of the primary audio content has been temporarily halted. Further, the user interface 300 may include a supplemental information title indicator 402 which describes the currently playing supplemental information. A supplemental information indicator 404 may be provided that displays the content and position within the currently playing supplemental information. The user interface 300 also includes an input control 406 which allows the user to stop playback of the current supplemental information and return to the primary audio content. As described above, playback of the primary audio content may resume at the point at which it ceased or a point nearby, such as the beginning of a paragraph or sentence, or a point some period (e.g., 3 seconds) prior. Though the user interface 300 of FIG. 4B displays only input control 406 to return to the primary audio content, in some embodiments, an additional input control may be provided to play additional supplemental information associated with the currently playing supplemental information, as is described above. -
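The resume behavior described above — returning to the interruption point, a fixed interval earlier, or the start of the containing sentence or paragraph — might be computed as follows. This is a minimal sketch; the function name, the 3-second default, and the boundary-list representation are assumptions:

```python
def resume_position(interrupt_point: float, rewind_seconds: float = 3.0,
                    boundaries: tuple = ()) -> float:
    """Choose where primary playback resumes after supplemental playback.

    If sentence or paragraph start times are known, resume at the latest
    boundary at or before the interruption point; otherwise rewind a
    fixed interval, clamped to the start of the content.
    """
    candidates = [b for b in boundaries if b <= interrupt_point]
    if candidates:
        return max(candidates)
    return max(0.0, interrupt_point - rewind_seconds)
```

For example, `resume_position(120.0)` yields 117.0 (a 3-second rewind), while `resume_position(120.0, boundaries=(110.0, 118.5, 130.0))` yields 118.5, the start of the sentence containing the interruption point.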
FIG. 4C depicts the illustrative user interface 300 after user selection of input control 308, which is configured to cause display of the supplemental information associated with the currently loaded audio content. In the current example, each of the input controls 502-510 is selectable by a user to play the associated supplemental information independent of the primary audio content. In some embodiments, selection of an item of supplemental information via input controls 502-510 may cause the content to be played independent of the audio content. In other embodiments, selection of supplemental information via input controls 502-510 may cause the selected content to be played, and further cause playback of the audio content from a point with which the supplemental content is associated. The user interface 300 of FIG. 4C may further include an input control 520 which is selectable by a user to display a different aspect of the user interface 300, such as a portion of the user interface 300 which enables the user to specify preferences regarding supplemental information. -
FIG. 4D depicts the illustrative user interface 300 after user selection of input control 520, which is configured to cause display of a portion of the user interface 300 which enables a user to specify preferences regarding supplemental information. In some embodiments, such user preferences may be specific to the currently depicted audio content (e.g., "The Adventures of Tom Sawyer"). In other embodiments, user preferences may be specified for all audio content, or for specific sets of audio content. - The
illustrative user interface 300 of FIG. 4D contains user selectable input controls 552-558 which enable the user to specify various types of supplemental information which should be provided. For example, input control 552 corresponds to author provided supplemental information, which may include, for example, footnotes, endnotes, glossary information, or author commentary. Input control 554 corresponds to publisher provided supplemental information, which may include editor commentary or additional information provided by a publisher of audio content. Input controls 556 and 558 correspond to supplemental information associated with other users of the electronic marketplace 150. Specifically, input control 556 corresponds to users associated with the current user of the user interface 300. In some embodiments, these associated users may correspond to users designated as contacts or "friends" in the electronic marketplace 150. In other embodiments, these associated users may correspond to users designated as contacts or "friends" through external systems, such as one or more social networking systems in communication with the electronic marketplace 150. In addition, input control 558 corresponds to general users of the electronic marketplace 150 who are not necessarily designated as a contact or "friend" of the current user. - Illustratively, selection of one or more of the input controls 552-558 may cause the
computing device 100 to retrieve supplemental information provided by the corresponding source (e.g., the author, the publisher, friends, or other users) during or prior to playback of audio content with which the supplemental information is associated. Supplemental information may be retrieved, for example, from the electronic marketplace 150 of FIG. 2 via the network 130. In some embodiments, supplemental information may be retrieved from the electronic marketplace 150 regardless of user selection of input controls 552-558. For example, where supplemental information is retrieved prior to playback of corresponding audio content, retrieval of supplemental information regardless of user selection of input controls 552-558 may enable a user to select new supplemental information for playback immediately, without waiting for the new supplemental information to be retrieved. In still more embodiments, user selection of one or more of the input controls 552-558 may enable or disable the availability of the associated supplemental information during playback of audio content. For example, de-selection of input control 552 may disable playback of supplemental content generated by the author of the audio content. - The
user interface 300 as depicted in FIG. 4D further includes input controls 560, which may enable a user to specify categories of desired supplemental information. For example, each item of supplemental information may be classified as one or more of "Top Rated," "Funny," "Insightful," "Informative," or "Interesting." In some embodiments, such classification may be accomplished by the operator of the electronic marketplace 150. In further embodiments, such classification may be accomplished by users of the electronic marketplace 150. Illustratively, de-selection of one or more of the input controls 560 may disable playback of correspondingly tagged supplemental information. For example, if a particular item of supplemental information is classified by the electronic marketplace 150 as "funny," and the user of the computing device 100 de-selects the input control 560 corresponding to "Funny," then that particular item of supplemental content may not be available during playback of audio content. As a further example, if a particular item of supplemental information is classified by the electronic marketplace 150 as "top rated," and the user of the computing device 100 selects the input control 560 corresponding to "Top Rated," then that particular item of supplemental content may be made available during playback of audio content. In some embodiments, only specific types of supplemental information may be categorized. For example, supplemental information generated by contacts or general users may be categorized, while supplemental information generated by the author or publisher may not be. In these embodiments, input controls 560 may apply only to supplemental information which is categorized. In other embodiments, input controls 560 may apply to all supplemental information. - The
user interface 300 as depicted in FIG. 4D further includes input controls 562 and 563, which may enable a user to specify a limit to the amount of supplemental information provided during playback of audio content. For example, input control 562 may enable a user to specify that no more than three items of supplemental information should be presented during any one minute of audio playback. As a further example, input control 563 may enable a user to specify a maximum duration of supplemental information that should be presented. In addition to limits per time period (e.g., minutes, hours, etc.), limits may, for example, be imposed over the course of a paragraph, page, chapter, book, or other measurement period. - After specifying one or more preferences regarding supplemental information, the user may return to the portion of the
user interface 300 displayed in FIG. 4C by selection of input control 564. - Though not shown in
FIG. 4, in some embodiments, a user may provide preferences indicating one or more items, categories, or types of supplemental information that should be presented automatically, without output of an indication and without requiring user input. For example, a user may specify that all "top rated" items of supplemental information should be automatically presented, indicators should be provided for "funny" supplemental information, and no indicator should be provided for supplemental information only marked as "interesting." As will be appreciated by those skilled in the art, such preferences may be combined to specify that any given item of supplemental information should be automatically presented, indicated for presentation or playback, or not indicated for presentation or playback. -
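Taken together, the source selections (input controls 552-558), category selections (input controls 560), and automatic-presentation preferences described above amount to a per-item decision: present automatically, indicate availability, or suppress. A sketch of that decision follows; the source names, tag names, and function signature are illustrative assumptions, not part of the disclosure:

```python
def disposition(source: str, tags: set, enabled_sources: set,
                auto_tags: set, indicate_tags: set) -> str:
    """Decide how one item of supplemental information is handled.

    Returns "auto" (present without an indication), "indicate" (output an
    indication and await a user command), or "suppress". Items with no
    tags at all (e.g., author or publisher material that is never
    categorized in some embodiments) bypass the category filter.
    """
    if source not in enabled_sources:
        return "suppress"          # source de-selected via controls 552-558
    if tags & auto_tags:
        return "auto"              # e.g., all "top rated" items auto-play
    if not tags or tags & indicate_tags:
        return "indicate"
    return "suppress"              # categorized, but no enabled category

# The example from the text: "top rated" auto-plays, "funny" is indicated,
# and items marked only "interesting" produce no indication.
prefs = dict(enabled_sources={"author", "friends"},
             auto_tags={"top rated"}, indicate_tags={"funny"})
```

With these preferences, `disposition("friends", {"top rated"}, **prefs)` returns `"auto"`, while an item from a de-selected source is suppressed regardless of its tags.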
FIG. 5 is a flow diagram depicting an illustrative routine 600 for playback of supplemented audio content. The routine 600 may illustratively be implemented by the supplemental information module 118 of the computing device 100. The routine 600 begins at block 602, which causes the playback of a primary audio content, such as audio content 204. Playback may begin, for example, in response to user selection of input control 318 of FIGS. 4A-4C. - At
block 604, the computing device 100 determines whether supplemental information is associated with a current position within the primary audio content. As described above, supplemental information may be associated with a range of positions within a primary audio content (e.g., with a continuous 10 second range). If supplemental information is not available, playback continues at block 614, described below. If the current playback position is within such a range, and supplemental information is therefore available, the routine 600 proceeds to block 606, which outputs to the user an indication that supplemental information is available. As described above, this indication may correspond to audio output by the device 100, such as a tone, bell, voice, or sound, to a visual output, such as the appearance of an input control on a display, or to haptic feedback, such as a vibration of the device 100. At block 608, the computing device 100 tests whether the user has entered a command to play the detected supplemental information. As described above, such a command may correspond to input via a display device or other input control, such as a physical button on the computing device 100 or an accessory connected to the computing device 100 (e.g., headphones). In some embodiments, the command may further correspond to a voice command from the user. If a command is not received, playback continues at block 614, described below. If a command is received, the routine 600 continues to block 610, which causes the playback of the supplemental information (and temporarily stops playback of the primary audio content). After playback of the supplemental information is completed, playback of the primary audio content resumes at block 614. Optionally, the routine 600 may also be configured to receive a user command at block 610 to cease the playback of the supplemental information and immediately resume playback of the primary audio content.
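The flow of blocks 602 through 616 can be condensed into a short control loop. The following is a sketch only; the callback structure and names are assumptions, and a real implementation would be driven by audio timing rather than a list of segments:

```python
def run_routine_600(segments: list, supplements: dict,
                    wants_playback, play) -> list:
    """Play primary content, offering supplemental information en route.

    `segments` is the primary content in order; `supplements` maps a
    segment to an associated item; `wants_playback(item)` models the user
    command tested at block 608; `play(x)` models output at blocks 602/610.
    Returns everything played, in order, for inspection.
    """
    played = []
    for segment in segments:               # blocks 602/614: primary playback
        play(segment)
        played.append(segment)
        item = supplements.get(segment)    # block 604: supplemental here?
        if item is not None and wants_playback(item):   # blocks 606/608
            play(item)                     # block 610: play supplemental
            played.append(item)
    return played                          # block 616: end of content

order = run_routine_600(["s1", "s2", "s3"], {"s2": "note 206"},
                        wants_playback=lambda item: True,
                        play=lambda x: None)
```

Here `order` is `["s1", "s2", "note 206", "s3"]`; declining the command (`wants_playback=lambda item: False`) yields the primary segments alone.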
- Additionally, as described above, some supplemental information may itself be associated with supplemental information. In these embodiments, additional instances of routine 600 may be executed at
block 610, such that the user may indicate that secondary supplemental audio content should be played. As will be appreciated by one skilled in the art, each additional instance of block 610 may create an instance of routine 600, such that playback of any configuration of supplemental information may be facilitated. - At
block 614, playback of the primary audio content resumes. As discussed above, playback may be resumed at or near the point at which it was ceased. For example, playback may be resumed at a point prior to where playback was ceased, such as the beginning of a previous paragraph. At block 616, the routine 600 tests whether to end playback of the audio content. Playback may be ended, for example, in response to a user command or completion of an item of audio content. If playback is not ended, the routine continues at block 604, as described above. If playback is ended, the routine 600 may end. - It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein.
- All of the processes described herein may be embodied in, and fully automated via, software code modules executed by one or more general purpose computers or processors, thus transforming the general purpose computers or processors into specifically configured devices. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in specialized computer hardware. In addition, the components referred to herein may be implemented in hardware, software, firmware or a combination thereof.
- Conditional language such as, among others, “can,” “could,” “might” or “may,” unless specifically stated otherwise, is otherwise understood within the context as used in general to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
- Conjunctive language such as the phrase “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood within the context as used in general to convey that an item, term, etc. may be either X, Y or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y and at least one of Z to each be present.
- Any process descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted or executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
- It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
Claims (31)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/332,157 US9348554B2 (en) | 2011-12-20 | 2011-12-20 | Managing playback of supplemental information |
JP2014548824A JP2015510602A (en) | 2011-12-20 | 2012-12-19 | Management of auxiliary information playback |
AU2012359080A AU2012359080B2 (en) | 2011-12-20 | 2012-12-19 | Managing playback of supplemental information |
PCT/US2012/070565 WO2013096422A1 (en) | 2011-12-20 | 2012-12-19 | Managing playback of supplemental information |
CN201280063654.8A CN104205791A (en) | 2011-12-20 | 2012-12-19 | Managing playback of supplemental information |
EP12860523.5A EP2795885B1 (en) | 2011-12-20 | 2012-12-19 | Managing playback of supplemental information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/332,157 US9348554B2 (en) | 2011-12-20 | 2011-12-20 | Managing playback of supplemental information |
Publications (2)
Publication Number | Publication Date |
---|---|
US20130159853A1 true US20130159853A1 (en) | 2013-06-20 |
US9348554B2 US9348554B2 (en) | 2016-05-24 |
Family
ID=48611538
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/332,157 Active 2033-03-10 US9348554B2 (en) | 2011-12-20 | 2011-12-20 | Managing playback of supplemental information |
Country Status (6)
Country | Link |
---|---|
US (1) | US9348554B2 (en) |
EP (1) | EP2795885B1 (en) |
JP (1) | JP2015510602A (en) |
CN (1) | CN104205791A (en) |
AU (1) | AU2012359080B2 (en) |
WO (1) | WO2013096422A1 (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140040715A1 (en) * | 2012-07-25 | 2014-02-06 | Oliver S. Younge | Application for synchronizing e-books with original or custom-created scores |
US20140115449A1 (en) * | 2012-10-22 | 2014-04-24 | Apple Inc. | Generating sample documents |
US20140331126A1 (en) * | 2013-05-06 | 2014-11-06 | Dropbox, Inc. | Animating Edits to Documents |
WO2014210034A1 (en) * | 2013-06-25 | 2014-12-31 | Audible, Inc. | Synchronous presentation of content with a braille translation |
US20150169279A1 (en) * | 2013-12-17 | 2015-06-18 | Google Inc. | Audio book smart pause |
US9342229B2 (en) * | 2014-03-28 | 2016-05-17 | Acast AB | Method for associating media files with additional content |
US9678637B1 (en) | 2013-06-11 | 2017-06-13 | Audible, Inc. | Providing context-based portions of content |
US20170300294A1 (en) * | 2016-04-18 | 2017-10-19 | Orange | Audio assistance method for a control interface of a terminal, program and terminal |
US9927957B1 (en) * | 2014-12-11 | 2018-03-27 | Audible, Inc. | Rotary navigation of synchronized content |
US20210345003A1 (en) * | 2019-12-19 | 2021-11-04 | Rovi Guides, Inc. | Systems and methods for providing timeline of content items on a user interface |
US20210397491A1 (en) * | 2020-06-18 | 2021-12-23 | Apple Inc. | Providing Access to Related Content in Media Presentations |
WO2024047141A1 (en) * | 2022-08-30 | 2024-03-07 | Varinder Kullar | Book apparatus |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108885869B (en) * | 2016-03-16 | 2023-07-18 | 索尼移动通讯有限公司 | Method, computing device, and medium for controlling playback of audio data containing speech |
WO2020023070A1 (en) * | 2018-07-24 | 2020-01-30 | Google Llc | Text-to-speech interface featuring visual content supplemental to audio playback of text documents |
US11803590B2 (en) * | 2018-11-16 | 2023-10-31 | Dell Products L.P. | Smart and interactive book audio services |
WO2021107932A1 (en) | 2019-11-26 | 2021-06-03 | Google Llc | Dynamic insertion of supplemental audio content into audio recordings at request time |
CN112712806A (en) * | 2020-12-31 | 2021-04-27 | 南方科技大学 | Auxiliary reading method and device for visually impaired people, mobile terminal and storage medium |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020099552A1 (en) * | 2001-01-25 | 2002-07-25 | Darryl Rubin | Annotating electronic information with audio clips |
US20020120925A1 (en) * | 2000-03-28 | 2002-08-29 | Logan James D. | Audio and video program recording, editing and playback systems using metadata |
US6587127B1 (en) * | 1997-11-25 | 2003-07-01 | Motorola, Inc. | Content player method and server with user profile |
US6601103B1 (en) * | 1996-08-22 | 2003-07-29 | Intel Corporation | Method and apparatus for providing personalized supplemental programming |
US20050193335A1 (en) * | 2001-06-22 | 2005-09-01 | International Business Machines Corporation | Method and system for personalized content conditioning |
US7114170B2 (en) * | 2001-02-07 | 2006-09-26 | Neoris Usa, Inc. | Method and apparatus for providing interactive media presentation |
US7321887B2 (en) * | 2002-09-30 | 2008-01-22 | Sap Aktiengesellschaft | Enriching information streams with contextual content |
US20080119132A1 (en) * | 2006-11-22 | 2008-05-22 | Bindu Rama Rao | Media distribution server that presents interactive media to a mobile device |
US20090210779A1 (en) * | 2008-02-19 | 2009-08-20 | Mihai Badoiu | Annotating Video Intervals |
US20100017694A1 (en) * | 2008-07-18 | 2010-01-21 | Electronic Data Systems Corporation | Apparatus, and associated method, for creating and annotating content |
US20100049741A1 (en) * | 2008-08-22 | 2010-02-25 | Ensequence, Inc. | Method and system for providing supplementary content to the user of a stored-media-content device |
US20100324709A1 (en) * | 2009-06-22 | 2010-12-23 | Tree Of Life Publishing | E-book reader with voice annotation |
US8028314B1 (en) * | 2000-05-26 | 2011-09-27 | Sharp Laboratories Of America, Inc. | Audiovisual information management system |
US8046689B2 (en) * | 2004-11-04 | 2011-10-25 | Apple Inc. | Media presentation with supplementary media |
US20120197648A1 (en) * | 2011-01-27 | 2012-08-02 | David Moloney | Audio annotation |
US8316303B2 (en) * | 2009-11-10 | 2012-11-20 | At&T Intellectual Property I, L.P. | Method and apparatus for presenting media programs |
US8316302B2 (en) * | 2007-05-11 | 2012-11-20 | General Instrument Corporation | Method and apparatus for annotating video content with metadata generated using speech recognition technology |
US8516375B2 (en) * | 2006-07-31 | 2013-08-20 | Litrell Bros. Limited Liability Company | Slide kit creation and collaboration system with multimedia interface |
US8543454B2 (en) * | 2011-02-18 | 2013-09-24 | Bluefin Labs, Inc. | Generating audience response metrics and ratings from social interest in time-based media |
US8645991B2 (en) * | 2006-03-30 | 2014-02-04 | Tout Industries, Inc. | Method and apparatus for annotating media streams |
US20150106854A1 (en) * | 2001-02-06 | 2015-04-16 | Rovi Guides, Inc. | Systems and methods for providing audio-based guidance |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6073589A (en) * | 1983-09-30 | 1985-04-25 | 株式会社日立製作所 | Voice synthesization system |
JPH04217023A (en) * | 1990-12-17 | 1992-08-07 | Fujitsu Ltd | Source program presenting device |
JP3446758B2 (en) * | 1991-09-18 | 2003-09-16 | ソニー株式会社 | Video tape recorder with memo function |
JP2003308341A (en) * | 2002-04-17 | 2003-10-31 | Brother Ind Ltd | Device and method for providing composition, and program |
JP3998187B2 (en) * | 2002-10-09 | 2007-10-24 | 日本放送協会 | Content commentary data generation device, method and program thereof, and content commentary data presentation device, method and program thereof |
JP2004157457A (en) * | 2002-11-08 | 2004-06-03 | Nissan Motor Co Ltd | Speech presentation system |
US9275052B2 (en) * | 2005-01-19 | 2016-03-01 | Amazon Technologies, Inc. | Providing annotations of a digital work |
JP4384074B2 (en) * | 2005-03-18 | 2009-12-16 | キヤノン株式会社 | Broadcast content processing apparatus and control method thereof |
US20080120330A1 (en) * | 2005-04-07 | 2008-05-22 | Iofy Corporation | System and Method for Linking User Generated Data Pertaining to Sequential Content |
JP2008051883A (en) * | 2006-08-22 | 2008-03-06 | Canon Inc | Voice synthesis control method and apparatus |
US7865586B2 (en) | 2008-03-31 | 2011-01-04 | Amazon Technologies, Inc. | Configuring communications between computing nodes |
US20090251440A1 (en) | 2008-04-03 | 2009-10-08 | Livescribe, Inc. | Audio Bookmarking |
US8973153B2 (en) | 2009-03-30 | 2015-03-03 | International Business Machines Corporation | Creating audio-based annotations for audiobooks |
US8392186B2 (en) * | 2010-05-18 | 2013-03-05 | K-Nfb Reading Technology, Inc. | Audio synchronization for document narration with user-selected playback |
- 2011
  - 2011-12-20 US US13/332,157 patent/US9348554B2/en active Active
- 2012
  - 2012-12-19 CN CN201280063654.8A patent/CN104205791A/en active Pending
  - 2012-12-19 JP JP2014548824A patent/JP2015510602A/en active Pending
  - 2012-12-19 EP EP12860523.5A patent/EP2795885B1/en active Active
  - 2012-12-19 WO PCT/US2012/070565 patent/WO2013096422A1/en active Application Filing
  - 2012-12-19 AU AU2012359080A patent/AU2012359080B2/en active Active
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6601103B1 (en) * | 1996-08-22 | 2003-07-29 | Intel Corporation | Method and apparatus for providing personalized supplemental programming |
US6587127B1 (en) * | 1997-11-25 | 2003-07-01 | Motorola, Inc. | Content player method and server with user profile |
US20020120925A1 (en) * | 2000-03-28 | 2002-08-29 | Logan James D. | Audio and video program recording, editing and playback systems using metadata |
US8028314B1 (en) * | 2000-05-26 | 2011-09-27 | Sharp Laboratories Of America, Inc. | Audiovisual information management system |
US20020099552A1 (en) * | 2001-01-25 | 2002-07-25 | Darryl Rubin | Annotating electronic information with audio clips |
US20150106854A1 (en) * | 2001-02-06 | 2015-04-16 | Rovi Guides, Inc. | Systems and methods for providing audio-based guidance |
US7114170B2 (en) * | 2001-02-07 | 2006-09-26 | Neoris Usa, Inc. | Method and apparatus for providing interactive media presentation |
US20050193335A1 (en) * | 2001-06-22 | 2005-09-01 | International Business Machines Corporation | Method and system for personalized content conditioning |
US7321887B2 (en) * | 2002-09-30 | 2008-01-22 | Sap Aktiengesellschaft | Enriching information streams with contextual content |
US8046689B2 (en) * | 2004-11-04 | 2011-10-25 | Apple Inc. | Media presentation with supplementary media |
US8645991B2 (en) * | 2006-03-30 | 2014-02-04 | Tout Industries, Inc. | Method and apparatus for annotating media streams |
US8516375B2 (en) * | 2006-07-31 | 2013-08-20 | Litrell Bros. Limited Liability Company | Slide kit creation and collaboration system with multimedia interface |
US20080119132A1 (en) * | 2006-11-22 | 2008-05-22 | Bindu Rama Rao | Media distribution server that presents interactive media to a mobile device |
US8316302B2 (en) * | 2007-05-11 | 2012-11-20 | General Instrument Corporation | Method and apparatus for annotating video content with metadata generated using speech recognition technology |
US20090210779A1 (en) * | 2008-02-19 | 2009-08-20 | Mihai Badoiu | Annotating Video Intervals |
US20100017694A1 (en) * | 2008-07-18 | 2010-01-21 | Electronic Data Systems Corporation | Apparatus, and associated method, for creating and annotating content |
US20100049741A1 (en) * | 2008-08-22 | 2010-02-25 | Ensequence, Inc. | Method and system for providing supplementary content to the user of a stored-media-content device |
US20100324709A1 (en) * | 2009-06-22 | 2010-12-23 | Tree Of Life Publishing | E-book reader with voice annotation |
US8316303B2 (en) * | 2009-11-10 | 2012-11-20 | At&T Intellectual Property I, L.P. | Method and apparatus for presenting media programs |
US20120197648A1 (en) * | 2011-01-27 | 2012-08-02 | David Moloney | Audio annotation |
US8543454B2 (en) * | 2011-02-18 | 2013-09-24 | Bluefin Labs, Inc. | Generating audience response metrics and ratings from social interest in time-based media |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140040715A1 (en) * | 2012-07-25 | 2014-02-06 | Oliver S. Younge | Application for synchronizing e-books with original or custom-created scores |
US20140115449A1 (en) * | 2012-10-22 | 2014-04-24 | Apple Inc. | Generating sample documents |
US20140331126A1 (en) * | 2013-05-06 | 2014-11-06 | Dropbox, Inc. | Animating Edits to Documents |
US11074396B2 (en) | 2013-05-06 | 2021-07-27 | Dropbox, Inc. | Animating edits to documents |
US10579715B2 (en) | 2013-05-06 | 2020-03-03 | Dropbox, Inc. | Animating edits to documents |
US9727544B2 (en) * | 2013-05-06 | 2017-08-08 | Dropbox, Inc. | Animating edits to documents |
US9678637B1 (en) | 2013-06-11 | 2017-06-13 | Audible, Inc. | Providing context-based portions of content |
WO2014210034A1 (en) * | 2013-06-25 | 2014-12-31 | Audible, Inc. | Synchronous presentation of content with a braille translation |
US10282162B2 (en) * | 2013-12-17 | 2019-05-07 | Google Llc | Audio book smart pause |
US20150169279A1 (en) * | 2013-12-17 | 2015-06-18 | Google Inc. | Audio book smart pause |
US20160274862A1 (en) * | 2013-12-17 | 2016-09-22 | Google Inc. | Audio book smart pause |
US9378651B2 (en) * | 2013-12-17 | 2016-06-28 | Google Inc. | Audio book smart pause |
US10452250B2 (en) | 2014-03-28 | 2019-10-22 | Acast AB | Method for associating media files with additional content |
US9342229B2 (en) * | 2014-03-28 | 2016-05-17 | Acast AB | Method for associating media files with additional content |
US9715338B2 (en) | 2014-03-28 | 2017-07-25 | Acast AB | Method for associating media files with additional content |
US9927957B1 (en) * | 2014-12-11 | 2018-03-27 | Audible, Inc. | Rotary navigation of synchronized content |
FR3050293A1 (en) * | 2016-04-18 | 2017-10-20 | Orange | METHOD FOR AUDIO ASSISTANCE OF TERMINAL CONTROL INTERFACE, PROGRAM AND TERMINAL |
EP3236347A1 (en) * | 2016-04-18 | 2017-10-25 | Orange | Sound assistance method of a control interface of a terminal, a program and a terminal |
US20170300294A1 (en) * | 2016-04-18 | 2017-10-19 | Orange | Audio assistance method for a control interface of a terminal, program and terminal |
US20210345003A1 (en) * | 2019-12-19 | 2021-11-04 | Rovi Guides, Inc. | Systems and methods for providing timeline of content items on a user interface |
US20210397491A1 (en) * | 2020-06-18 | 2021-12-23 | Apple Inc. | Providing Access to Related Content in Media Presentations |
US11650867B2 (en) * | 2020-06-18 | 2023-05-16 | Apple Inc. | Providing access to related content in media presentations |
WO2024047141A1 (en) * | 2022-08-30 | 2024-03-07 | Varinder Kullar | Book apparatus |
Also Published As
Publication number | Publication date |
---|---|
EP2795885A4 (en) | 2015-08-12 |
CN104205791A (en) | 2014-12-10 |
EP2795885A1 (en) | 2014-10-29 |
JP2015510602A (en) | 2015-04-09 |
AU2012359080B2 (en) | 2015-09-17 |
AU2012359080A1 (en) | 2014-07-03 |
EP2795885B1 (en) | 2020-05-20 |
WO2013096422A1 (en) | 2013-06-27 |
US9348554B2 (en) | 2016-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9348554B2 (en) | Managing playback of supplemental information | |
JP6492069B2 (en) | Environment-aware interaction policy and response generation | |
US11836180B2 (en) | System and management of semantic indicators during document presentations | |
JP5855223B2 (en) | Synchronized content playback management | |
US9471203B1 (en) | Presenting animated visual supplemental content | |
US20140377722A1 (en) | Synchronous presentation of content with a braille translation | |
US20150248886A1 (en) | Model Based Approach for On-Screen Item Selection and Disambiguation | |
US20140377721A1 (en) | Synchronous presentation of content with a braille translation | |
WO2013181158A2 (en) | Synchronizing translated digital content | |
KR101746052B1 (en) | Method and apparatus for providing e-book service in a portable terminal | |
KR102023157B1 (en) | Method and apparatus for recording and playing of user voice of mobile terminal | |
WO2014154097A1 (en) | Automatic page content reading-aloud method and device thereof | |
CN114008610A (en) | Information processing system, information processing method, and recording medium | |
US10089059B1 (en) | Managing playback of media content with location data | |
KR102208361B1 (en) | Keyword search method and apparatus | |
US9280905B2 (en) | Media outline | |
US20190129683A1 (en) | Audio app user interface for playing an audio file of a book that has associated images capable of rendering at appropriate timings in the audio file | |
CN109416581B (en) | Method, system and storage device for enhancing text narration using haptic feedback | |
US20220207029A1 (en) | Systems and methods for pushing content | |
WO2014210034A1 (en) | Synchronous presentation of content with a braille translation | |
US20190205014A1 (en) | Customizable content sharing with intelligent text segmentation | |
US11789696B2 (en) | Voice assistant-enabled client application with user view context | |
JP2022051500A (en) | Related information provision method and system | |
US10198245B1 (en) | Determining hierarchical user interface controls during content playback | |
CN113362802A (en) | Voice generation method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: AUDIBLE, INC., NEW JERSEY. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:STORY, GUY A., JR.;GOLDSTEIN, DOUG S.;SIGNING DATES FROM 20111215 TO 20111219;REEL/FRAME:027531/0004 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| CC | Certificate of correction | |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 8 |