US20060107303A1 - Content specification for media streams - Google Patents
- Publication number
- US20060107303A1 (application US10/989,136)
- Authority
- US
- United States
- Prior art keywords
- media stream
- graphical object
- drag
- video signal
- document
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
Definitions
- the illustrative embodiment comprises: (a) transmitting to a remote telecommunications terminal a first media stream that comprises a first video signal and an audio signal; (b) receiving from the remote telecommunications terminal a second media stream; and (c) when a first graphical object that is associated with a document is drag-and-dropped in a graphical user interface onto a second graphical object that is associated with the first media stream, supplanting the first video signal in the first media stream with a second video signal that is based on the document.
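Steps (a) through (c) can be sketched in Python. The class, method, and object names below are illustrative assumptions for exposition, not the patent's implementation:

```python
# Illustrative sketch of steps (a)-(c): an outgoing media stream whose
# video signal is supplanted when a graphical object associated with a
# document is drag-and-dropped onto the object associated with the stream.
# All names here are assumptions for exposition.

class OutgoingMediaStream:
    """A first media stream comprising a video signal and an audio signal."""

    def __init__(self, video_source: str, audio_source: str) -> None:
        self.video_source = video_source   # first video signal (e.g., webcam)
        self.audio_source = audio_source   # audio signal (e.g., microphone)

    def on_drag_and_drop(self, dropped: str, target: str) -> bool:
        # Step (c): only a drop onto the graphical object associated with
        # this stream supplants the video signal; the audio is untouched.
        if target == "outgoing-stream-object":
            self.video_source = dropped    # second video signal, per document
            return True
        return False

stream = OutgoingMediaStream(video_source="webcam", audio_source="microphone")
stream.on_drag_and_drop("D1.ppt", "outgoing-stream-object")
print(stream.video_source, stream.audio_source)  # D1.ppt microphone
```

Note that a drop onto any other target leaves both signals unchanged, mirroring the claim's requirement that the drop land on the object associated with the first media stream.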
- FIG. 1 depicts a telecommunications terminal in accordance with the prior art.
- FIG. 2 depicts telecommunications terminal 100 , as shown in FIG. 1 , communicating with another telecommunications terminal, in accordance with the prior art.
- FIG. 3 depicts a telecommunications terminal in accordance with the illustrative embodiments of the present invention.
- FIG. 4 depicts a drag-and-drop operation performed by a user of telecommunications terminal 300 , as shown in FIG. 3 , in accordance with the illustrative embodiments of the present invention.
- FIG. 5 depicts telecommunications terminal 300 , as shown in FIG. 3 , after the drag-and-drop operation of FIG. 4 , in accordance with the illustrative embodiments of the present invention.
- FIG. 6 depicts an alternative drag-and-drop operation to the drag-and-drop operation of FIG. 4 , in accordance with the illustrative embodiments of the present invention.
- FIG. 7 depicts a drag-and-drop operation performed by a user of telecommunications terminal 300 after the drag-and-drop operation of FIG. 4 , in accordance with the illustrative embodiments of the present invention.
- FIG. 8 depicts an alternative drag-and-drop operation to the drag-and-drop operation of FIG. 7 , in accordance with the illustrative embodiments of the present invention.
- FIG. 9 depicts a block diagram of the salient components of processing unit 301 , as shown in FIG. 3 , in accordance with the illustrative embodiments of the present invention.
- FIG. 10 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7 , in accordance with the first illustrative embodiment of the present invention.
- FIG. 11 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7 , in accordance with the second illustrative embodiment of the present invention.
- FIG. 12 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7 , in accordance with the third illustrative embodiment of the present invention.
- FIG. 13 depicts a client/server architecture in accordance with the illustrative embodiments of the present invention.
- the detailed description is organized into two sections: the first section describes how a user can specify, via the graphical user interface, what content is transmitted by telecommunications terminal 300 ; and the second section describes the salient hardware and software of telecommunications terminal 300 .
- FIG. 3 depicts telecommunications terminal 300 in accordance with the illustrative embodiments of the present invention.
- Telecommunications terminal 300 comprises processing unit 301 , display 302 , speaker 303 , webcam 304 , and microphone 305 , interconnected as shown.
- Processing unit 301 is capable of executing programs, of storing and retrieving data, and of receiving messages from and transmitting messages to telecommunications network 110 , in well-known fashion.
- processing unit 301 is capable of outputting signals to display 302 and speaker 303 , and of receiving signals from webcam 304 , microphone 305 , and other input devices (not shown) such as a keyboard, a mouse, a joystick, etc.
- the internal architecture of processing unit 301 is described in detail below and with respect to FIG. 9 .
- Display 302 like display 102 of the prior art, is capable of receiving electric signals and of generating visual output (e.g., text, images, etc.) based on these signals, in well-known fashion.
- Speaker 303 is a transducer that is capable of receiving electric signals and of generating acoustic output signals based on the electric signals, in well-known fashion.
- Webcam 304 like webcam 104 , is capable of receiving photonic signals and of generating electronic image signals, in well-known fashion.
- Microphone 305 like microphone 105 , is capable of receiving acoustic signals and of generating electric signals based on the acoustic signals, in well-known fashion.
- display 302 displays window 306 , tabbed panels 307 and 308 , and icons 309 through 311 , in well-known fashion.
- Window 306 is a rectangular graphical object that is capable of containing text, images, and other graphical objects (e.g., an icon, a drop-down box, a tabbed panel, a subwindow, etc.), in well-known fashion.
- Tabbed panels 307 and 308 are graphical objects that, when selected (indicated by boldface), make visible in window 306 an associated set of graphical objects. As shown in FIG. 3 , tabbed panel 307 corresponds to an incoming media stream (e.g., a received videoconference stream, etc.) and tabbed panel 308 corresponds to an outgoing media stream (e.g., a transmitted videoconference stream, etc.).
- Icon 309 is an image that represents a folder (i.e., a directory) entitled F 1 in the file system of processing unit 301 , as is commonplace in the art.
- Icon 310 is an image that represents a data file D 1 in the file system of processing unit 301 , as is commonplace in the art.
- File D 1 might contain a word-processing document, a spreadsheet, a PowerPoint® document, etc.
- Icon 311 is an image that represents a videoconferencing application, and thus is also associated with the outgoing and incoming media streams of the videoconferencing application.
- Icon 312 is an image located in the upper-left corner of window 306 that indicates the source of the video content of the outgoing media stream.
- icon 312 has the same image as the videoconferencing application icon, which indicates that the video capture of webcam 304 is currently being transmitted in the outgoing media stream.
- FIG. 4 depicts a drag-and-drop operation performed by a user of telecommunications terminal 300 , in accordance with the illustrative embodiments of the present invention.
- the user is drag-and-dropping icon 310 , via cursor 413 , onto videoconferencing application window 306 .
- Because icon 310 represents document D 1 , the effect of the drag-and-drop operation, in the first illustrative embodiment of the present invention, is that the video content of document D 1 supplants the live-video capture in the outgoing media stream. (The second and third illustrative embodiments of the present invention are described below at the end of this section.)
- FIG. 5 depicts telecommunications terminal 300 , as shown in FIG. 3 , after the drag-and-drop operation of FIG. 4 , in accordance with the illustrative embodiments of the present invention.
- outgoing tabbed panel 308 which is selected, now shows the video content of D 1 (in this case, an illustrative PowerPoint® presentation), and icon 312 is replaced with icon 512 , indicating that document D 1 is currently the video source for the outgoing media stream.
- icon 310 is back in its original position; the reason for this is that the drag-and-drop operation did not move the file for D 1 in the file system of telecommunications terminal 300 .
- FIG. 6 depicts an alternative drag-and-drop operation to the drag-and-drop operation of FIG. 4 , in accordance with the illustrative embodiments of the present invention.
- the user can drag-and-drop icon 310 onto icon 311 (the icon for the videoconferencing application) instead of onto window 306 in order to transmit the video content of D 1 .
- display 302 will appear as in FIG. 5 , just as for the drag-and-drop operation of FIG. 4 .
- FIG. 7 depicts a drag-and-drop operation performed by a user of telecommunications terminal 300 after the drag-and-drop operation of FIG. 4 , in accordance with the illustrative embodiments of the present invention.
- the user is drag-and-dropping icon 512 , via cursor 413 , away from videoconferencing application window 306 .
- the effect of this drag-and-drop operation is that the video of the outgoing video stream reverts to live-video capture.
- display 302 will appear once again as in FIG. 3 .
- FIG. 8 depicts an alternative drag-and-drop operation to the drag-and-drop operation of FIG. 7 , in accordance with the illustrative embodiments of the present invention.
- the user can drag-and-drop icon 311 (the icon for the videoconferencing application) onto videoconferencing application window 306 to revert to live-video capture, instead of drag-and-dropping icon 512 away from window 306 .
- display 302 will appear as in FIG. 3 , just as for the drag-and-drop operation of FIG. 7 .
- the second illustrative embodiment of the present invention augments the behavior of the first illustrative embodiment such that when a user drag-and-drops an icon associated with a document into application window 306 , as in FIG. 3 , audio content from the document is also added to the outgoing media stream. Similarly, when a user drag-and-drops the upper-left icon (e.g., icon 512 , etc.) away from window 306 , the audio content of the document represented by the upper-left icon is also removed from the outgoing media stream.
- In the third illustrative embodiment of the present invention, the roles of the audio content and video content are reversed.
- the video content of a drag-and-dropped document is added to the current video content of the outgoing media stream (e.g., shown side-by-side in a split-screen window, superimposed, etc.) and the audio content of the outgoing media stream is supplanted with the audio content of the document.
- FIG. 9 depicts a block diagram of the salient components of processing unit 301 , in accordance with the illustrative embodiments of the present invention.
- Processing unit 301 comprises receiver 901 , processor 902 , memory 903 , and transmitter 904 , interconnected as shown.
- Receiver 901 receives signals from remote telecommunications terminals via telecommunications network 110 , and forwards the information encoded in the signals to processor 902 , in well-known fashion. It will be clear to those skilled in the art how to make and use receiver 901 .
- Processor 902 is a general-purpose processor that is capable of: receiving information from receiver 901 , webcam 304 , microphone 305 , and other input devices; reading data from and writing data into memory 903 ; executing the tasks described below and with respect to FIGS. 10 through 12 ; outputting signals to display 302 and speaker 303 ; and transmitting information to transmitter 904 .
- processor 902 might be a special-purpose processor. In either case, it will be clear to those skilled in the art, after reading this specification, how to make and use processor 902 .
- Memory 903 stores data and executable instructions, as is well-known in the art, and might be any combination of random-access memory (RAM), flash memory, disk drive memory, etc. It will be clear to those skilled in the art, after reading this specification, how to make and use memory 903 .
- Transmitter 904 receives information from processor 902 , and transmits signals that encode this information to remote telecommunications terminals via telecommunications network 110 , in well-known fashion. It will be clear to those skilled in the art, after reading this specification, how to make and use transmitter 904 .
- FIG. 10 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7 , in accordance with the first illustrative embodiment of the present invention.
- telecommunications terminal 300 transmits an outgoing media stream and receives an incoming media stream via telecommunications network 110 , in well-known fashion.
- telecommunications terminal 300 initializes variable S to an empty stack, and pushes onto stack S an identifier associated with the video of the outgoing media stream (e.g., a file descriptor for a document, a special identifier that indicates live-video capture, etc.).
- the use of a stack enables the outgoing video stream to revert to previous video content when either (i) the current video content concludes, or (ii) the current video content is stopped by the user (i.e., by drag-and-dropping the upper-left icon away from window 306 , as in FIG. 7 ).
- Task 1030 checks whether a GUI event has been generated indicating that a graphical object G (e.g., icon 310 , etc.) has been drag-and-dropped onto a graphical object associated with an outgoing media-stream (e.g., icon 311 , window 306 , etc.). If so, execution proceeds to task 1040 ; otherwise, execution continues at task 1060 .
- telecommunications terminal 300 supplants the current video content of the outgoing media stream with video content V that is associated with graphical object G (e.g., the video content of a Windows Media Video file that is associated with icon G, live-capture video associated with icon G, etc.), in well-known fashion.
- telecommunications terminal 300 pushes an identifier associated with video content V onto stack S, in well-known fashion. After task 1050 is completed, execution continues back at task 1030 .
- Task 1060 checks whether the depth of stack S is greater than one. If so, execution proceeds to task 1070 ; otherwise, execution continues back at task 1030 .
- Task 1070 checks whether either: (i) the video content currently in the outgoing media stream has concluded, or (ii) a GUI event has been generated indicating that the upper-left icon has been drag-and-dropped away from window 306 . If either condition holds, execution proceeds to task 1080 ; otherwise, execution continues back at task 1030 .
- At task 1080 , telecommunications terminal 300 pops identifier videoID 1 off stack S and reads identifier videoID 2 , now at the top of stack S , in well-known fashion.
- telecommunications terminal 300 supplants the video content associated with identifier videoID 1 in the outgoing media stream with the video content associated with identifier videoID 2 , in well-known fashion. After task 1090 is completed, execution continues back at task 1030 .
- Although the first illustrative embodiment (as well as the second and third illustrative embodiments, described below) employs a stack to enable the outgoing video stream to revert to previous video content when the upper-left icon is drag-and-dropped away from window 306 , in some embodiments it might be advantageous to always revert back to live-video capture in response to such drag-and-drop events. In such embodiments, the use of a stack would be unnecessary.
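The stack discipline of tasks 1020 through 1090 might be realized as sketched below. This is a hypothetical sketch with illustrative identifiers and names, not the patent's code:

```python
# Hypothetical sketch of the stack discipline of FIG. 10.
LIVE_CAPTURE = "live-video-capture"    # special identifier (task 1020)

class VideoSourceStack:
    def __init__(self):
        self.stack = [LIVE_CAPTURE]    # task 1020: push the initial source

    @property
    def current(self):
        return self.stack[-1]          # video content of the outgoing stream

    def drop_onto_stream(self, video_id):
        # Tasks 1040-1050: supplant the current video and remember it.
        self.stack.append(video_id)

    def revert(self):
        # Tasks 1060-1090: revert only when a prior source exists
        # (stack depth > 1), i.e., when the current content concludes
        # or its icon is drag-and-dropped away from the window.
        if len(self.stack) > 1:
            self.stack.pop()           # pop videoID1
        return self.current            # videoID2 becomes the outgoing video

s = VideoSourceStack()
s.drop_onto_stream("D1.ppt")           # presentation supplants live video
s.drop_onto_stream("clip.wmv")         # clip supplants the presentation
print(s.revert())                      # D1.ppt (clip concluded)
print(s.revert())                      # live-video-capture
```

In the stackless variant described above, `revert()` would simply reset to `LIVE_CAPTURE` regardless of stack depth.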
- FIG. 11 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7 , in accordance with the second illustrative embodiment of the present invention.
- the second illustrative embodiment of the present invention augments the first illustrative embodiment by also adding the audio content associated with the drag-and-dropped document to the audio content of the outgoing media stream.
- adding audio content might be implemented by a simple superposition of signals, while in some other embodiments, one or more adjustments (e.g., volume, etc.) might be made to audio content before it is added to the outgoing media stream in order to improve intelligibility.
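Such superposition can be sketched minimally as follows, assuming floating-point samples in [-1.0, 1.0] and an illustrative 0.5 gain on the added content (both assumptions, not details from the patent):

```python
# Hypothetical sketch: add document audio to the live audio by simple
# superposition, attenuating the document audio so speech stays intelligible.
# Samples are floats in [-1.0, 1.0]; the 0.5 gain is an illustrative choice.

def mix_audio(live_samples, document_samples, document_gain=0.5):
    mixed = []
    for live, doc in zip(live_samples, document_samples):
        sample = live + document_gain * doc        # superposition
        mixed.append(max(-1.0, min(1.0, sample)))  # clip to the valid range
    return mixed

live = [0.25, -0.5, 0.9]          # live-capture speech samples
background = [0.5, 0.5, 0.5]      # document audio (e.g., background music)
print(mix_audio(live, background))  # [0.5, -0.25, 1.0]
```

A production mixer would operate on encoded frames rather than raw sample lists, but the volume adjustment before summation is the point being illustrated.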
- Similarly, when a user drag-and-drops the upper-left icon away from window 306 , the audio content of the document represented by the upper-left icon is also removed from the outgoing media stream. Note that, as disclosed below in the description of the flowchart, when stack S has a depth of one, which indicates that the videoconferencing application is in its initial state or has returned to its initial state, a drag-and-drop of the upper-left icon away from window 306 is not processed because there is no other video content to “revert to.”
- telecommunications terminal 300 transmits an outgoing media stream and receives an incoming media stream via telecommunications network 110 , in well-known fashion.
- telecommunications terminal 300 initializes variable S to an empty stack, and pushes onto stack S an identifier associated with the video of the outgoing media stream (e.g., a file descriptor for a document, a special identifier that indicates live-video capture, etc.).
- Task 1130 checks whether a GUI event has been generated indicating that a graphical object G (e.g., icon 310 , etc.) has been drag-and-dropped onto a graphical object associated with an outgoing media-stream (e.g., icon 311 , window 306 , etc.). If so, execution proceeds to task 1140 ; otherwise, execution continues at task 1160 .
- telecommunications terminal 300 supplants the current video content of the outgoing media stream with video content V that is associated with graphical object G (e.g., the video content of a Windows Media Video file that is associated with icon G, live-capture video associated with icon G, etc.), in well-known fashion.
- telecommunications terminal 300 adds audio content A that is associated with graphical object G (e.g., the audio content of a Windows Media Video file represented by icon G, live-capture audio associated with icon G, etc.) to the outgoing media stream, in well-known fashion.
- telecommunications terminal 300 pushes onto stack S a first identifier associated with video content V and a second identifier associated with audio content A, in well-known fashion. After task 1150 is completed, execution continues back at task 1130 .
- Task 1160 checks whether the depth of stack S is greater than one. If so, execution proceeds to task 1170 ; otherwise, execution continues back at task 1130 .
- Task 1170 checks whether either: (i) the video content currently in the outgoing media stream has concluded, or (ii) a GUI event has been generated indicating that the upper-left icon has been drag-and-dropped away from window 306 . If either condition holds, execution proceeds to task 1180 ; otherwise, execution continues back at task 1130 .
- At task 1180 , telecommunications terminal 300 pops identifiers videoID 1 and audioID 1 off stack S and reads identifiers videoID 2 and audioID 2 , now at the top of stack S , in well-known fashion.
- telecommunications terminal 300 supplants the video content associated with identifier videoID 1 in the outgoing media stream with the video content associated with identifier videoID 2 , in well-known fashion.
- telecommunications terminal 300 removes the audio content associated with identifier audioID 1 from the outgoing media stream, in well-known fashion. After task 1190 is completed, execution continues back at task 1130 .
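The paired bookkeeping of FIG. 11, in which a video identifier and an audio identifier are pushed and popped together, might be sketched as follows (names are illustrative assumptions):

```python
# Hypothetical sketch of FIG. 11: the stack holds (videoID, audioID) pairs
# so that one revert both restores the prior video source and removes the
# document's audio from the outgoing mix.

class AVSourceStack:
    def __init__(self):
        # Task 1120: initial state -- live video, no added document audio.
        self.stack = [("live-video-capture", None)]
        self.added_audio = []            # audio content mixed into the stream

    def drop_onto_stream(self, video_id, audio_id):
        # Tasks 1140-1150: supplant the video, add the audio, push both IDs.
        self.added_audio.append(audio_id)
        self.stack.append((video_id, audio_id))

    def revert(self):
        # Tasks 1160-1190: pop (videoID1, audioID1); the video reverts to
        # the pair now on top, and audioID1 leaves the outgoing mix.
        if len(self.stack) > 1:
            _, audio_id = self.stack.pop()
            self.added_audio.remove(audio_id)
        return self.stack[-1][0]

s = AVSourceStack()
s.drop_onto_stream("clip.wmv", "clip-audio")
print(s.stack[-1][0], s.added_audio)   # clip.wmv ['clip-audio']
print(s.revert(), s.added_audio)       # live-video-capture []
```

Pushing the two identifiers as one tuple keeps the video and audio state from drifting apart across repeated drops and reverts.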
- FIG. 12 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7 , in accordance with the third illustrative embodiment of the present invention.
- the third illustrative embodiment is similar to the second illustrative embodiment with the roles of the audio content and video content reversed (i.e., the video content of a drag-and-dropped document is added to the current video content and the audio content is supplanted with the document's audio content.)
- telecommunications terminal 300 transmits an outgoing media stream and receives an incoming media stream via telecommunications network 110 , in well-known fashion.
- telecommunications terminal 300 initializes variable S to an empty stack, and pushes onto stack S an identifier associated with the audio of the outgoing media stream (e.g., a file descriptor for a document, a special identifier that indicates live-audio capture, etc.).
- Task 1230 checks whether a GUI event has been generated indicating that a graphical object G (e.g., icon 310 , etc.) has been drag-and-dropped onto a graphical object associated with an outgoing media-stream (e.g., icon 312 , window 306 , etc.). If so, execution proceeds to task 1240 ; otherwise, execution continues at task 1260 .
- telecommunications terminal 300 supplants the current audio content of the outgoing media stream with audio content A that is associated with graphical object G (e.g., the audio content of a Windows Media Audio file that is associated with icon G, live-capture audio associated with icon G, etc.), in well-known fashion.
- telecommunications terminal 300 adds video content V that is associated with graphical object G (e.g., the video content of a Windows Media Video file represented by icon G, live-capture video associated with icon G, etc.) to the outgoing media stream, in well-known fashion.
- telecommunications terminal 300 pushes onto stack S a first identifier associated with audio content A and a second identifier associated with video content V, in well-known fashion. After task 1250 is completed, execution continues back at task 1230 .
- Task 1260 checks whether the depth of stack S is greater than one. If so, execution proceeds to task 1270 ; otherwise, execution continues back at task 1230 .
- Task 1270 checks whether either: (i) the audio content currently in the outgoing media stream has concluded, or (ii) a GUI event has been generated indicating that the upper-left icon has been drag-and-dropped away from window 306 . If either condition holds, execution proceeds to task 1280 ; otherwise, execution continues back at task 1230 .
- At task 1280 , telecommunications terminal 300 pops identifiers audioID 1 and videoID 1 off stack S and reads identifiers audioID 2 and videoID 2 , now at the top of stack S , in well-known fashion.
- telecommunications terminal 300 supplants the audio content associated with identifier audioID 1 in the outgoing media stream with the audio content associated with identifier audioID 2 , in well-known fashion.
- telecommunications terminal 300 removes the video content associated with identifier videoID 1 from the outgoing media stream, in well-known fashion. After task 1290 is completed, execution continues back at task 1230 .
- FIG. 13 depicts an illustrative client/server architecture comprising telecommunications terminal 1301 and server 1302 , interconnected as shown.
- telecommunications terminal 1301 provides its user with the same graphical user interface (GUI) as telecommunications terminal 300 , but, upon receiving pertinent events generated by the GUI, sends an appropriate message to server 1302 to supplant, add, or remove content accordingly with respect to the outgoing media stream.
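One way terminal 1301's GUI events might be relayed to server 1302 is sketched below. The JSON message schema and all field names are assumptions for illustration, not part of the patent:

```python
# Hypothetical sketch of the client side of FIG. 13: translate GUI events
# into small messages that server 1302 applies to the outgoing media stream.
import json

def message_for_event(event, stream_id, document=None):
    """Map a GUI event at terminal 1301 to a message for server 1302."""
    if event == "drop-onto-stream":
        body = {"action": "supplant-video", "document": document}
    elif event == "drag-away-from-stream":
        body = {"action": "revert-video"}
    else:
        raise ValueError("unrecognized GUI event: " + event)
    body["stream"] = stream_id
    return json.dumps(body)

print(message_for_event("drop-onto-stream", "out-1", document="D1.ppt"))
```

Keeping the messages declarative (supplant, add, remove) lets the server own the stack state, so the thin client never has to track which content is currently being transmitted.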
Description
- The present invention relates to telecommunications in general, and, more particularly, to specifying the content of transmitted media streams.
- As bandwidth has become more abundant and available, transmission of multimedia content is gaining in popularity with both home and business users. For example, a user might record a message that comprises video and audio and transmit the message to a remote user (e.g., as an email attachment, as streaming content, etc.). As another example, in a videoconference, video and audio that are captured at a telecommunications terminal (e.g., a desktop computer, a personal digital assistant [PDA], a cellular telephone, etc.) are transmitted to one or more remote telecommunications terminals that participate in the conference.
- FIG. 1 depicts telecommunications terminal 100 , in this case a desktop personal computer, in accordance with the prior art. Telecommunications terminal 100 comprises processing unit 101 , display 102 , speaker 103 , webcam 104 , and microphone 105 , interconnected as shown. As shown in FIG. 2 , a user of telecommunications terminal 100 might use videoconferencing software to transmit video and audio captured at webcam 104 and microphone 105 , respectively, over telecommunications network 110 (e.g., the Internet, etc.) to remote telecommunications terminal 120 . Similarly, a user of telecommunications terminal 100 might use a content authoring application to create multimedia content, and communications software (e.g., an email client, a streaming application, etc.) to transmit the content to remote telecommunications terminal 120 via telecommunications network 110 . Multimedia content at telecommunications terminal 100 (e.g., remote content received via telecommunications network 110 , content stored locally, etc.) is output to a user via display 102 (e.g., in window 106 , etc.) and speaker 103 , in well-known fashion.
- In many situations, it would be advantageous if a telecommunications terminal user who is engaged in a videoconference could dynamically supplant the video content of the outgoing media stream (e.g., video of the user talking, video of a whiteboard that the user is writing on, etc.) with alternative video content (e.g., a PowerPoint® presentation, a recorded video segment, etc.), while maintaining the audio content of the outgoing media stream (e.g., the user's speech, etc.). It would also be advantageous for the user to be able to easily switch back to the transmission of the original video content at any time, and for the original video content to automatically resume when the alternative video content has concluded.
- The present invention enables a user of a telecommunications terminal to dynamically supplant the video content of an outgoing media stream (e.g., an outgoing videoconference stream, etc.) with video associated with a document (e.g., a PowerPoint® file, a Windows Media Video [WMV] file, etc.) via the terminal's graphical user interface (GUI). In particular, in the first illustrative embodiment of the present invention, when a user drag-and-drops a first graphical object that is associated with a document (e.g., an icon, etc.) onto a second graphical object that is associated with the outgoing media stream (e.g., an icon, a videoconference application window, etc.), the video content of the outgoing video stream is supplanted with video content associated with the document. Subsequently, a user can drag-and-drop a document icon away from the second graphical object to restore the video content of the outgoing media stream to its prior source (e.g., webcam live-video capture, another document, etc.). In addition, if the video content associated with the document concludes, the video content of the outgoing media stream automatically resumes to its prior source.
- The second illustrative embodiment of the present invention augments the first illustrative embodiment by adding the audio content associated with the drag-and-dropped document to the audio content of the outgoing media stream. For example, if a user drag-and-drops an icon for a Windows Media Video (WMV) file onto a videoconference application window, audio content from the WMV file (e.g., background music, etc.) is transmitted in addition to the live-audio capture, and the live-video capture is supplanted with the video content of the WMV file. When the user subsequently drag-and-drops the WMV file icon away from the window, the transmitted audio content reverts to the live-audio capture only, and the transmitted video content reverts to the live-video capture.
- In the third illustrative embodiment of the present invention, the roles of the audio content and video content are reversed. In other words, the video content of a drag-and-dropped document is added to the live-video capture (e.g., shown side-by-side in a split-screen window, superimposed, etc.) and the live-audio capture is supplanted with the audio content of the document.
- The illustrative embodiment comprises: (a) transmitting to a remote telecommunications terminal a first media stream that comprises a first video signal and an audio signal; (b) receiving from the remote telecommunications terminal a second media stream; and (c) when a first graphical object that is associated with a document is drag-and-dropped in a graphical user interface onto a second graphical object that is associated with the first media stream, supplanting the first video signal in the first media stream with a second video signal that is based on the document.
-
FIG. 1 depicts a telecommunications terminal in accordance with the prior art. -
FIG. 2 depicts telecommunications terminal 100, as shown in FIG. 1, communicating with another telecommunications terminal, in accordance with the prior art. -
FIG. 3 depicts a telecommunications terminal in accordance with the illustrative embodiments of the present invention. -
FIG. 4 depicts a drag-and-drop operation performed by a user of telecommunications terminal 300, as shown in FIG. 3, in accordance with the illustrative embodiments of the present invention. -
FIG. 5 depicts telecommunications terminal 300, as shown in FIG. 3, after the drag-and-drop operation of FIG. 4, in accordance with the illustrative embodiments of the present invention. -
FIG. 6 depicts an alternative drag-and-drop operation to the drag-and-drop operation of FIG. 4, in accordance with the illustrative embodiments of the present invention. -
FIG. 7 depicts a drag-and-drop operation performed by a user of telecommunications terminal 300 after the drag-and-drop operation of FIG. 4, in accordance with the illustrative embodiments of the present invention. -
FIG. 8 depicts an alternative drag-and-drop operation to the drag-and-drop operation of FIG. 7, in accordance with the illustrative embodiments of the present invention. -
FIG. 9 depicts a block diagram of the salient components of processing unit 301, as shown in FIG. 3, in accordance with the illustrative embodiments of the present invention. -
FIG. 10 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7, in accordance with the first illustrative embodiment of the present invention. -
FIG. 11 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7, in accordance with the second illustrative embodiment of the present invention. -
FIG. 12 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7, in accordance with the third illustrative embodiment of the present invention. -
FIG. 13 depicts a client/server architecture in accordance with the illustrative embodiments of the present invention. - The detailed description is organized into two sections: the first section describes how a user can specify, via the graphical user interface, what content is transmitted by
telecommunications terminal 300; and the second section describes the salient hardware and software of telecommunications terminal 300. - User Operation of the Graphical User Interface (GUI)
-
FIG. 3 depicts telecommunications terminal 300 in accordance with the illustrative embodiments of the present invention. Telecommunications terminal 300 comprises processing unit 301, display 302, speaker 303, webcam 304, and microphone 305, interconnected as shown. -
Processing unit 301, like processing unit 101 of the prior art, is capable of executing programs, of storing and retrieving data, and of receiving messages from and transmitting messages to telecommunications network 110, in well-known fashion. In addition, processing unit 301 is capable of outputting signals to display 302 and speaker 303, and of receiving signals from webcam 304, microphone 305, and other input devices (not shown) such as a keyboard, a mouse, a joystick, etc. The internal architecture of processing unit 301 is described in detail below and with respect to FIG. 9. -
Display 302, like display 102 of the prior art, is capable of receiving electric signals and of generating visual output (e.g., text, images, etc.) based on these signals, in well-known fashion. - Speaker 303, like
speaker 103, is a transducer that is capable of receiving electric signals and of generating acoustic output signals based on the electric signals, in well-known fashion. - Webcam 304, like
webcam 104, is capable of receiving photonic signals and of generating electronic image signals, in well-known fashion. - Microphone 305, like microphone 105, is capable of receiving acoustic signals and of generating electric signals based on the acoustic signals, in well-known fashion.
- As shown in
FIG. 3, display 302 displays window 306 and icons 307 through 310, in well-known fashion. -
Window 306 is a rectangular graphical object that is capable of containing text, images, and other graphical objects (e.g., an icon, a drop-down box, a tabbed panel, a subwindow, etc.), in well-known fashion. - Tabbed
panels 307 and 308 are graphical objects that enable a user to alternate between two or more views occupying the same area of window 306, each tabbed panel having an associated set of graphical objects. As shown in FIG. 3, tabbed panel 307 corresponds to an incoming media stream (e.g., a received videoconference stream, etc.) and tabbed panel 308 corresponds to an outgoing media stream (e.g., a transmitted videoconference stream, etc.). - Icon 309 is an image that represents a folder (i.e., a directory) entitled F1 in the file system of
processing unit 301, as is commonplace in the art. - Icon 310 is an image that represents a data file D1 in the file system of
processing unit 301, as is commonplace in the art. File D1 might contain a word-processing document, a spreadsheet, a PowerPoint® document, etc. - Icon 311 is an image that represents a videoconferencing application, and thus is also associated with the outgoing and incoming media streams of the videoconferencing application.
- Icon 312 is an image located in the upper-left corner of
window 306 that indicates the source of the video content of the outgoing media stream. In FIG. 3, icon 312 has the same image as the videoconferencing application icon, which indicates that the video capture of webcam 304 is currently being transmitted in the outgoing media stream. -
FIG. 4 depicts a drag-and-drop operation performed by a user of telecommunications terminal 300, in accordance with the illustrative embodiments of the present invention. As shown in FIG. 4, the user is drag-and-dropping icon 310, via cursor 413, onto videoconferencing application window 306. As described above, icon 310 represents a document D1. The effect of the drag-and-drop operation, in the first illustrative embodiment of the present invention, is that the video content of document D1 supplants the live-video capture in the outgoing media stream. (The second and third illustrative embodiments of the present invention are described below at the end of this section.) -
FIG. 5 depicts telecommunications terminal 300, as shown in FIG. 3, after the drag-and-drop operation of FIG. 4, in accordance with the illustrative embodiments of the present invention. As shown in FIG. 5, outgoing tabbed panel 308, which is selected, now shows the video content of D1 (in this case, an illustrative PowerPoint® presentation), and icon 312 is replaced with icon 512, indicating that document D1 is currently the video source for the outgoing media stream. Note that icon 310 is back in its original position; the reason for this is that the drag-and-drop operation did not move the file for D1 in the file system of telecommunications terminal 300. -
FIG. 6 depicts an alternative drag-and-drop operation to the drag-and-drop operation of FIG. 4, in accordance with the illustrative embodiments of the present invention. As shown in FIG. 6, the user can drag-and-drop icon 310 onto icon 311 (the icon for the videoconferencing application) instead of onto window 306 in order to transmit the video content of D1. After performing the drag-and-drop operation of FIG. 6, display 302 will appear as in FIG. 5, just as for the drag-and-drop operation of FIG. 4. -
FIG. 7 depicts a drag-and-drop operation performed by a user of telecommunications terminal 300 after the drag-and-drop operation of FIG. 4, in accordance with the illustrative embodiments of the present invention. As shown in FIG. 7, the user is drag-and-dropping icon 512, via cursor 413, away from videoconferencing application window 306. The effect of this drag-and-drop operation, in the first illustrative embodiment of the present invention, is that the video of the outgoing video stream reverts to live-video capture. Thus, after this drag-and-drop operation is performed, display 302 will appear once again as in FIG. 3. -
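The drop gestures described above all reduce to one of two actions on the outgoing stream: supplant the video with a document's content, or revert to the prior source. The following Python sketch shows one way such GUI drop events might be routed; every name in it (handle_drop, STREAM_TARGETS, the icon and window labels) is an illustrative assumption, not part of the specification.

```python
# Hypothetical dispatcher for the gestures of FIGS. 4, 6, and 7. The window
# (306) and the application icon (311) are both drop targets tied to the
# outgoing media stream; dragging the upper-left icon (512) away reverts.
STREAM_TARGETS = {"window_306", "icon_311"}

def handle_drop(dragged, target, supplant_video, revert_video):
    if dragged == "icon_512" and target not in STREAM_TARGETS:
        revert_video()              # FIG. 7: icon 512 dragged away from the window
    elif target in STREAM_TARGETS:
        supplant_video(dragged)     # FIGS. 4 and 6: document icon dropped on stream

events = []
handle_drop("icon_310", "window_306",
            supplant_video=lambda doc: events.append(("supplant", doc)),
            revert_video=lambda: events.append(("revert",)))
handle_drop("icon_512", "desktop",
            supplant_video=lambda doc: events.append(("supplant", doc)),
            revert_video=lambda: events.append(("revert",)))
# events is now [("supplant", "icon_310"), ("revert",)]
```

Routing both drop targets to the same action is what makes the FIG. 4 and FIG. 6 gestures equivalent from the user's point of view.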
FIG. 8 depicts an alternative drag-and-drop operation to the drag-and-drop operation of FIG. 7, in accordance with the illustrative embodiments of the present invention. As shown in FIG. 8, the user can drag-and-drop icon 311 (the icon for the videoconferencing application) onto videoconferencing application window 306 to revert to live-video capture, instead of drag-and-dropping icon 512 away from window 306. After performing the drag-and-drop operation of FIG. 8, display 302 will appear as in FIG. 3, just as for the drag-and-drop operation of FIG. 7. - The second illustrative embodiment of the present invention augments the behavior of the first illustrative embodiment such that when a user drag-and-drops an icon associated with a document into application window 306, as in FIG. 4, audio content from the document is also added to the outgoing media stream. Similarly, when a user drag-and-drops the upper-left icon (e.g., icon 512, etc.) away from window 306, the audio content of the document represented by the upper-left icon is also removed from the outgoing media stream. - In the third illustrative embodiment of the present invention, the roles of the audio content and video content are reversed. In other words, the video content of a drag-and-dropped document is added to the current video content of the outgoing media stream (e.g., shown side-by-side in a split-screen window, superimposed, etc.) and the audio content of the outgoing media stream is supplanted with the audio content of the document.
- Hardware and Software
-
FIG. 9 depicts a block diagram of the salient components of processing unit 301, in accordance with the illustrative embodiments of the present invention. Processing unit 301 comprises receiver 901, processor 902, memory 903, and transmitter 904, interconnected as shown. -
Receiver 901 receives signals from remote telecommunications terminals via telecommunications network 110, and forwards the information encoded in the signals to processor 902, in well-known fashion. It will be clear to those skilled in the art how to make and use receiver 901. -
Processor 902 is a general-purpose processor that is capable of: receiving information from receiver 901, webcam 304, microphone 305, and other input devices; reading data from and writing data into memory 903; executing the tasks described below and with respect to FIGS. 10 through 12; outputting signals to display 302 and speaker 303; and transmitting information to transmitter 904. In some alternative embodiments of the present invention, processor 902 might be a special-purpose processor. In either case, it will be clear to those skilled in the art, after reading this specification, how to make and use processor 902. -
Memory 903 stores data and executable instructions, as is well-known in the art, and might be any combination of random-access memory (RAM), flash memory, disk drive memory, etc. It will be clear to those skilled in the art, after reading this specification, how to make and use memory 903. -
Transmitter 904 receives information from processor 902, and transmits signals that encode this information to remote telecommunications terminals via telecommunications network 110, in well-known fashion. It will be clear to those skilled in the art, after reading this specification, how to make and use transmitter 904. -
FIG. 10 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7, in accordance with the first illustrative embodiment of the present invention. - At
task 1010, telecommunications terminal 300 transmits an outgoing media stream and receives an incoming media stream via telecommunications network 110, in well-known fashion. - At
task 1020, telecommunications terminal 300 initializes variable S to an empty stack, and pushes onto stack S an identifier associated with the video of the outgoing media stream (e.g., a file descriptor for a document, a special identifier that indicates live-video capture, etc.). As described below, the use of a stack enables the outgoing video stream to revert to previous video content when either (i) the current video content concludes, or (ii) the current video content is stopped by the user (i.e., by drag-and-dropping the upper-left icon away from window 306, as in FIG. 7). -
Task 1030 checks whether a GUI event has been generated indicating that a graphical object G (e.g., icon 310, etc.) has been drag-and-dropped onto a graphical object associated with an outgoing media stream (e.g., icon 311, window 306, etc.). If so, execution proceeds to task 1040; otherwise, execution continues at task 1060. - At
task 1040, telecommunications terminal 300 supplants the current video content of the outgoing media stream with video content V that is associated with graphical object G (e.g., the video content of a Windows Media Video file that is associated with icon G, live-capture video associated with icon G, etc.), in well-known fashion. - At
task 1050, telecommunications terminal 300 pushes an identifier associated with video content V onto stack S, in well-known fashion. After task 1050 is completed, execution continues back at task 1030. -
Task 1060 checks whether the depth of stack S is greater than one. If so, execution proceeds to task 1070; otherwise, execution continues back at task 1030. -
Task 1070 checks whether either: -
- (i) a GUI event has been generated indicating that the upper-left icon in the media-stream window (e.g.,
videoconferencing application window 306, etc.) has been drag-and-dropped away from the window; or - (ii) the current video content of the outgoing media stream has concluded.
If either of these two events has occurred, execution proceeds to task 1080; otherwise, execution continues back at task 1030.
- At
task 1080, telecommunications terminal 300: -
- (i) pops the top element from stack S and sets the value of variable videoID1 to this element; and
- (ii) sets the value of variable videoID2 to the element that is on top of stack S after the pop operation.
- At
task 1090, telecommunications terminal 300 supplants the video content associated with identifier videoID1 in the outgoing media stream with the video content associated with identifier videoID2, in well-known fashion. After task 1090 is completed, execution continues back at task 1030. - As will be appreciated by those skilled in the art, although the first illustrative embodiment (as well as the second and third illustrative embodiments, described below) employs a stack to enable the outgoing video stream to revert to previous video content when the left-hand icon is drag-and-dropped away from
window 306, in some embodiments it might be advantageous to always revert back to live-video capture in response to such drag-and-drop events. In such embodiments, the use of a stack would be unnecessary. -
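The stack discipline of tasks 1020 through 1090 can be modeled in a few lines of Python. This is a minimal sketch under assumed names (VideoSourceStack, LIVE_CAPTURE); the string identifiers stand in for the file descriptors and the special live-capture identifier mentioned at task 1020.

```python
LIVE_CAPTURE = "live-video-capture"   # special identifier pushed at task 1020

class VideoSourceStack:
    """Models tasks 1020-1090: the top of the stack is the video source
    currently transmitted in the outgoing media stream."""

    def __init__(self):
        self.stack = [LIVE_CAPTURE]   # task 1020: initial source

    def current(self):
        return self.stack[-1]

    def on_drop(self, video_id):
        # Tasks 1040-1050: supplant the outgoing video and record the source.
        self.stack.append(video_id)
        return self.current()

    def on_revert(self):
        # Tasks 1060-1090: revert only when a prior source exists (stack
        # depth greater than one); the same path handles both a drag-away
        # gesture and the current video content concluding (task 1070).
        if len(self.stack) > 1:
            self.stack.pop()          # videoID1 (task 1080, step i)
        return self.current()         # videoID2 (task 1080, step ii)

s = VideoSourceStack()
s.on_drop("document-D1")              # FIG. 4: D1 supplants live capture
assert s.current() == "document-D1"
s.on_revert()                         # FIG. 7: revert to the prior source
assert s.current() == LIVE_CAPTURE
```

Because sources are popped rather than reset, nested drops revert in last-in, first-out order, matching the "revert to previous video content" behavior described above.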
FIG. 11 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7, in accordance with the second illustrative embodiment of the present invention. As described above, the second illustrative embodiment of the present invention augments the first illustrative embodiment by also adding the audio content associated with the drag-and-dropped document to the audio content of the outgoing media stream. As will be appreciated by those skilled in the art, in some embodiments adding audio content might be implemented by a simple superposition of signals, while in some other embodiments, one or more adjustments (e.g., volume, etc.) might be made to audio content before it is added to the outgoing media stream in order to improve intelligibility. - Similarly, when the user of telecommunications terminal 300 drag-and-drops the upper-left icon away from videoconferencing application window 306 in the second illustrative embodiment, the audio content of the document represented by the upper-left icon is also removed from the outgoing media stream. Note that, as disclosed below in the description of the flowchart, when stack S has a depth of one, which indicates that the videoconferencing application is in its initial state or has returned to its initial state, a drag-and-drop of the upper-left icon away from window 306 is not processed because there is no other video content to “revert to.” - At
task 1110, telecommunications terminal 300 transmits an outgoing media stream and receives an incoming media stream via telecommunications network 110, in well-known fashion. - At
task 1120, telecommunications terminal 300 initializes variable S to an empty stack, and pushes onto stack S an identifier associated with the video of the outgoing media stream (e.g., a file descriptor for a document, a special identifier that indicates live-video capture, etc.). - Task 1130 checks whether a GUI event has been generated indicating that a graphical object G (e.g.,
icon 310, etc.) has been drag-and-dropped onto a graphical object associated with an outgoing media stream (e.g., icon 311, window 306, etc.). If so, execution proceeds to task 1140; otherwise, execution continues at task 1160. - At
task 1140, telecommunications terminal 300 supplants the current video content of the outgoing media stream with video content V that is associated with graphical object G (e.g., the video content of a Windows Media Video file that is associated with icon G, live-capture video associated with icon G, etc.), in well-known fashion. - At
task 1145, telecommunications terminal 300 adds audio content A that is associated with graphical object G (e.g., the audio content of a Windows Media Video file represented by icon G, live-capture audio associated with icon G, etc.) to the outgoing media stream, in well-known fashion. - At
task 1150, telecommunications terminal 300 pushes onto stack S a first identifier associated with video content V and a second identifier associated with audio content A, in well-known fashion. After task 1150 is completed, execution continues back at task 1130. -
Task 1160 checks whether the depth of stack S is greater than one. If so, execution proceeds to task 1170; otherwise, execution continues back at task 1130. -
Task 1170 checks whether either: -
- (i) a GUI event has been generated indicating that the upper-left icon in the media-stream window (e.g.,
videoconferencing application window 306, etc.) has been drag-and-dropped away from the window; or - (ii) the current video content of the outgoing media stream has concluded.
If either of these two events has occurred, execution proceeds to task 1180; otherwise, execution continues back at task 1130.
- At
task 1180, telecommunications terminal 300: -
- (i) pops the top element, which is an ordered pair consisting of two identifiers, from stack S, and sets variables videoID1 and audioID1 to the first and second values of this ordered pair, respectively; and
- (ii) sets variable videoID2 to the first value (i.e., head) of the ordered pair that is on top of stack S after the pop operation.
- At
task 1185, telecommunications terminal 300 supplants the video content associated with identifier videoID1 in the outgoing media stream with the video content associated with identifier videoID2, in well-known fashion. - At
task 1190, telecommunications terminal 300 removes the audio content associated with identifier audioID1 from the outgoing media stream, in well-known fashion. After task 1190 is completed, execution continues back at task 1130. -
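The bookkeeping of FIG. 11 differs from FIG. 10 only in that each stack element carries an ordered (video, audio) pair: the video on top supplants, while every audio identifier on the stack contributes to the outgoing mix. A minimal sketch follows; for uniformity it stores a pair at every stack level, including the initial live-capture level, and all names are assumptions rather than specification terms.

```python
LIVE = ("live-video", "live-audio")    # initial pair (cf. task 1120)

class AVSourceStack:
    def __init__(self):
        self.stack = [LIVE]

    def on_drop(self, video_id, audio_id):
        # Tasks 1140-1150: supplant video, add audio, push the ordered pair.
        self.stack.append((video_id, audio_id))

    def on_revert(self):
        # Tasks 1160-1190: pop the pair, restoring the prior video and
        # removing the popped document's audio from the mix.
        if len(self.stack) > 1:
            self.stack.pop()

    def outgoing_video(self):
        return self.stack[-1][0]            # video is supplanted: top wins

    def outgoing_audio(self):
        return [a for _, a in self.stack]   # audio is additive: all sources mix

s = AVSourceStack()
s.on_drop("wmv-video", "wmv-audio")        # WMV icon dropped onto the window
assert s.outgoing_video() == "wmv-video"
assert s.outgoing_audio() == ["live-audio", "wmv-audio"]
s.on_revert()                              # icon dragged away from the window
assert s.outgoing_audio() == ["live-audio"]
```

In a real implementation the list returned by outgoing_audio would feed a mixer (simple superposition or level-adjusted, as noted above).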
FIG. 12 depicts a flowchart of the salient tasks of telecommunications terminal 300 in response to the drag-and-drop operations of FIGS. 4 and 7, in accordance with the third illustrative embodiment of the present invention. As described above, the third illustrative embodiment is similar to the second illustrative embodiment with the roles of the audio content and video content reversed (i.e., the video content of a drag-and-dropped document is added to the current video content and the audio content is supplanted with the document's audio content). - At
task 1210, telecommunications terminal 300 transmits an outgoing media stream and receives an incoming media stream via telecommunications network 110, in well-known fashion. - At
task 1220, telecommunications terminal 300 initializes variable S to an empty stack, and pushes onto stack S an identifier associated with the audio of the outgoing media stream (e.g., a file descriptor for a document, a special identifier that indicates live-audio capture, etc.). -
Task 1230 checks whether a GUI event has been generated indicating that a graphical object G (e.g., icon 310, etc.) has been drag-and-dropped onto a graphical object associated with an outgoing media stream (e.g., icon 312, window 306, etc.). If so, execution proceeds to task 1240; otherwise, execution continues at task 1260. - At
task 1240, telecommunications terminal 300 supplants the current audio content of the outgoing media stream with audio content A that is associated with graphical object G (e.g., the audio content of a Windows Media Audio file that is associated with icon G, live-capture audio associated with icon G, etc.), in well-known fashion. - At
task 1245, telecommunications terminal 300 adds video content V that is associated with graphical object G (e.g., the video content of a Windows Media Audio file represented by icon G, live-capture video associated with icon G, etc.) to the outgoing media stream, in well-known fashion. - At
task 1250, telecommunications terminal 300 pushes onto stack S a first identifier associated with audio content A and a second identifier associated with video content V, in well-known fashion. After task 1250 is completed, execution continues back at task 1230. -
Task 1260 checks whether the depth of stack S is greater than one. If so, execution proceeds to task 1270; otherwise, execution continues back at task 1230. -
Task 1270 checks whether either: -
- (i) a GUI event has been generated indicating that the upper-left icon in the media-stream window (e.g.,
audioconferencing application window 306, etc.) has been drag-and-dropped away from the window; or - (ii) the current audio content of the outgoing media stream has concluded.
If either of these two events has occurred, execution proceeds to task 1280; otherwise, execution continues back at task 1230.
- At
task 1280, telecommunications terminal 300: -
- (i) pops the top element, which is an ordered pair consisting of two identifiers, from stack S, and sets variables audioID1 and videoID1 to the first and second values of this ordered pair, respectively; and
- (ii) sets variable audioID2 to the first value (i.e., head) of the ordered pair that is on top of stack S after the pop operation.
- At
task 1285, telecommunications terminal 300 supplants the audio content associated with identifier audioID1 in the outgoing media stream with the audio content associated with identifier audioID2, in well-known fashion. - At
task 1290, telecommunications terminal 300 removes the video content associated with identifier videoID1 from the outgoing media stream, in well-known fashion. After task 1290 is completed, execution continues back at task 1230. - As will be appreciated by those skilled in the art, although in the illustrative embodiments above
telecommunications terminal 300 does the supplanting, adding, and removing of audio and video content, some other embodiments of the present invention might employ a client/server architecture in which a server performs these tasks. For example, FIG. 13 depicts an illustrative client/server architecture comprising telecommunications terminal 1301 and server 1302, interconnected as shown. In the illustrative architecture of FIG. 13, telecommunications terminal 1301 provides its user with the same graphical user interface (GUI) as telecommunications terminal 300, but, upon receiving pertinent events generated by the GUI, sends an appropriate message to server 1302 to supplant, add, or remove content accordingly with respect to the outgoing media stream. It will be clear to those skilled in the art, after reading this specification, how to make and use embodiments of the present invention that employ a client/server architecture such as the illustrative architecture of FIG. 13. - It is to be understood that the above-described embodiments are merely illustrative of the present invention and that many variations of the above-described embodiments can be devised by those skilled in the art without departing from the scope of the invention. For example, in this Specification, numerous specific details are provided in order to provide a thorough description and understanding of the illustrative embodiments of the present invention. Those skilled in the art will recognize, however, that the invention can be practiced without one or more of those details, or with other methods, materials, components, etc.
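In the client/server variant of FIG. 13, the terminal's GUI only needs to translate drop events into messages; the server performs the supplanting, adding, and removing. The sketch below shows what such a message might look like; the JSON schema, field names, and action vocabulary are assumptions for illustration, since the specification does not define a wire format.

```python
import json

def make_stream_message(action, document_id=None):
    # action mirrors the operations the server performs on the outgoing
    # media stream: "supplant", "add", or "remove" (assumed vocabulary).
    msg = {"action": action}
    if document_id is not None:
        msg["document"] = document_id
    return json.dumps(msg)

# e.g., sent by terminal 1301 when a document icon is dropped onto window 306:
print(make_stream_message("supplant", "D1"))   # {"action": "supplant", "document": "D1"}
```

Keeping the terminal message-only means the GUI logic of FIGS. 4 through 8 is unchanged; only the handlers' bodies differ between the standalone and client/server architectures.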
- Furthermore, in some instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the illustrative embodiments. It is understood that the various embodiments shown in the Figures are illustrative, and are not necessarily drawn to scale. Reference throughout the specification to “one embodiment” or “an embodiment” or “some embodiments” means that a particular feature, structure, material, or characteristic described in connection with the embodiment(s) is included in at least one embodiment of the present invention, but not necessarily all embodiments. Consequently, the appearances of the phrase “in one embodiment,” “in an embodiment,” or “in some embodiments” in various places throughout the Specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, materials, or characteristics can be combined in any suitable manner in one or more embodiments. It is therefore intended that such variations be included within the scope of the following claims and their equivalents.
Claims (23)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/989,136 US20060107303A1 (en) | 2004-11-15 | 2004-11-15 | Content specification for media streams |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060107303A1 true US20060107303A1 (en) | 2006-05-18 |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5206721A (en) * | 1990-03-08 | 1993-04-27 | Fujitsu Limited | Television conference system |
US5801700A (en) * | 1996-01-19 | 1998-09-01 | Silicon Graphics Incorporated | System and method for an iconic drag and drop interface for electronic file transfer |
US6288753B1 (en) * | 1999-07-07 | 2001-09-11 | Corrugated Services Corp. | System and method for live interactive distance learning |
US6601087B1 (en) * | 1998-11-18 | 2003-07-29 | Webex Communications, Inc. | Instant document sharing |
US6654032B1 (en) * | 1999-12-23 | 2003-11-25 | Webex Communications, Inc. | Instant sharing of documents on a remote server |
US20040083266A1 (en) * | 2000-04-24 | 2004-04-29 | Comstock Elizabeth M. | Media role management in a video conferencing network |
US6760749B1 (en) * | 2000-05-10 | 2004-07-06 | Polycom, Inc. | Interactive conference content distribution device and methods of use thereof |
USRE38609E1 (en) * | 2000-02-28 | 2004-10-05 | Webex Communications, Inc. | On-demand presentation graphical user interface |
US20050081155A1 (en) * | 2003-10-02 | 2005-04-14 | Geoffrey Martin | Virtual player capable of handling dissimilar content |
US20060152575A1 (en) * | 2002-08-12 | 2006-07-13 | France Telecom | Method for real-time broadcasting of multimedia files during a videoconference, without interrupting communication, and a man-machine interface therefor |
US7213206B2 (en) * | 2003-09-09 | 2007-05-01 | Fogg Brian J | Relationship user interface |
US7237197B2 (en) * | 2000-04-25 | 2007-06-26 | Microsoft Corporation | Method and system for presenting a video stream of a video streaming device |
2004-11-15: US application US10/989,136 filed; published as US20060107303A1 (en); status: not active (Abandoned)
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5206721A (en) * | 1990-03-08 | 1993-04-27 | Fujitsu Limited | Television conference system |
US5801700A (en) * | 1996-01-19 | 1998-09-01 | Silicon Graphics Incorporated | System and method for an iconic drag and drop interface for electronic file transfer |
US6601087B1 (en) * | 1998-11-18 | 2003-07-29 | Webex Communications, Inc. | Instant document sharing |
US6288753B1 (en) * | 1999-07-07 | 2001-09-11 | Corrugated Services Corp. | System and method for live interactive distance learning |
US6654032B1 (en) * | 1999-12-23 | 2003-11-25 | Webex Communications, Inc. | Instant sharing of documents on a remote server |
USRE38609E1 (en) * | 2000-02-28 | 2004-10-05 | Webex Communications, Inc. | On-demand presentation graphical user interface |
US20040083266A1 (en) * | 2000-04-24 | 2004-04-29 | Comstock Elizabeth M. | Media role management in a video conferencing network |
US7237197B2 (en) * | 2000-04-25 | 2007-06-26 | Microsoft Corporation | Method and system for presenting a video stream of a video streaming device |
US6760749B1 (en) * | 2000-05-10 | 2004-07-06 | Polycom, Inc. | Interactive conference content distribution device and methods of use thereof |
US20060152575A1 (en) * | 2002-08-12 | 2006-07-13 | France Telecom | Method for real-time broadcasting of multimedia files during a videoconference, without interrupting communication, and a man-machine interface therefor |
US7213206B2 (en) * | 2003-09-09 | 2007-05-01 | Fogg Brian J | Relationship user interface |
US20050081155A1 (en) * | 2003-10-02 | 2005-04-14 | Geoffrey Martin | Virtual player capable of handling dissimilar content |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060119595A1 (en) * | 2004-12-03 | 2006-06-08 | Wei-Yi Hsuan | Computer system of combining user interface and a display device |
US20080235609A1 (en) * | 2007-03-19 | 2008-09-25 | Carraher Theodore R | Function switching during drag-and-drop |
US20080307324A1 (en) * | 2007-06-08 | 2008-12-11 | Apple Inc. | Sharing content in a videoconference session |
US20100107067A1 (en) * | 2008-10-27 | 2010-04-29 | Nokia Corporation | Input on touch based user interfaces |
US20100105443A1 (en) * | 2008-10-27 | 2010-04-29 | Nokia Corporation | Methods and apparatuses for facilitating interaction with touch screen apparatuses |
US20100107116A1 (en) * | 2008-10-27 | 2010-04-29 | Nokia Corporation | Input on touch user interfaces |
US9900280B2 (en) | 2009-03-30 | 2018-02-20 | Avaya Inc. | System and method for managing incoming requests for a communication session using a graphical connection metaphor |
US11460985B2 (en) | 2009-03-30 | 2022-10-04 | Avaya Inc. | System and method for managing trusted relationships in communication sessions using a graphical metaphor |
US20100251158A1 (en) * | 2009-03-30 | 2010-09-30 | Avaya Inc. | System and method for graphically managing communication sessions |
US10574623B2 (en) | 2009-03-30 | 2020-02-25 | Avaya Inc. | System and method for graphically managing a communication session with a context based contact set |
US9413820B2 (en) | 2009-04-14 | 2016-08-09 | Lg Electronics Inc. | Terminal and controlling method thereof |
US20150072675A1 (en) * | 2009-04-14 | 2015-03-12 | Lg Electronics Inc. | Terminal and controlling method thereof |
US9753629B2 (en) | 2009-04-14 | 2017-09-05 | Lg Electronics Inc. | Terminal and controlling method thereof |
US9456028B2 (en) | 2009-04-14 | 2016-09-27 | Lg Electronics Inc. | Terminal and controlling method thereof |
US9792028B2 (en) | 2009-04-14 | 2017-10-17 | Lg Electronics Inc. | Terminal and controlling method thereof |
EP2507996A4 (en) * | 2009-11-30 | 2014-10-08 | Lg Electronics Inc | A network television and a method of controlling the same |
EP2507996A2 (en) * | 2009-11-30 | 2012-10-10 | LG Electronics Inc. | A network television and a method of controlling the same |
US9641872B2 (en) | 2009-11-30 | 2017-05-02 | Lg Electronics Inc. | Network television and a method of controlling the same |
WO2011153623A3 (en) * | 2010-06-08 | 2012-02-02 | Aastra Technologies Limited | Method and system for video communication |
US9648279B2 (en) | 2010-06-08 | 2017-05-09 | Mitel Networks Corporation | Method and system for video communication |
US20120297339A1 (en) * | 2011-01-27 | 2012-11-22 | Kyocera Corporation | Electronic device, control method, and storage medium storing control program |
US20120254793A1 (en) * | 2011-03-31 | 2012-10-04 | France Telecom | Enhanced user interface to transfer media content |
US9632688B2 (en) * | 2011-03-31 | 2017-04-25 | France Telecom | Enhanced user interface to transfer media content |
US9088426B2 (en) | 2011-12-13 | 2015-07-21 | Google Inc. | Processing media streams during a multi-user video conference |
US9088697B2 (en) * | 2011-12-13 | 2015-07-21 | Google Inc. | Processing media streams during a multi-user video conference |
US20130147905A1 (en) * | 2011-12-13 | 2013-06-13 | Google Inc. | Processing media streams during a multi-user video conference |
US10120989B2 (en) * | 2013-06-04 | 2018-11-06 | NOWWW.US Pty. Ltd. | Login process for mobile phones, tablets and other types of touch screen devices or computers |
US11422678B2 (en) * | 2013-08-02 | 2022-08-23 | Samsung Electronics Co., Ltd. | Method and device for managing tab window indicating application group including heterogeneous applications |
US10007410B2 (en) * | 2015-08-19 | 2018-06-26 | Google Llc | Incorporating user content within a communication session interface |
US10732806B2 (en) | 2015-08-19 | 2020-08-04 | Google Llc | Incorporating user content within a communication session interface |
US20180373550A1 (en) * | 2017-06-22 | 2018-12-27 | Canon Kabushiki Kaisha | Information processing apparatus |
US10970093B2 (en) * | 2017-06-22 | 2021-04-06 | Canon Kabushiki Kaisha | Information processing apparatus for displaying a software screen on a foreground of a display based on a setting |
US20230008575A1 (en) * | 2021-07-09 | 2023-01-12 | Prezi, Inc. | Relocation of content item to motion picture sequences at multiple devices |
US11704626B2 (en) * | 2021-07-09 | 2023-07-18 | Prezi, Inc. | Relocation of content item to motion picture sequences at multiple devices |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20060107303A1 (en) | Content specification for media streams | |
US10362072B2 (en) | Systems and methods for multimedia multipoint real-time conferencing allowing real-time bandwidth management and prioritized media distribution | |
KR101163434B1 (en) | Networked chat and media sharing systems and methods | |
EP2940940B1 (en) | Methods for sending and receiving video short message, apparatus and handheld electronic device thereof | |
US10904179B2 (en) | System and method for voice networking | |
US20120017149A1 (en) | Video whisper sessions during online collaborative computing sessions | |
US20080307324A1 (en) | Sharing content in a videoconference session | |
US20110153768A1 (en) | E-meeting presentation relevance alerts | |
WO2021190341A1 (en) | Information interaction method and apparatus, and electronic device | |
CN114584736B (en) | Sharing method and device based on video conference, electronic equipment and computer medium | |
CN103475572A (en) | Method, device and system for sending multiple pictures in instant messaging application | |
WO2021218555A1 (en) | Information display method and apparatus, and electronic device | |
WO2021218556A1 (en) | Information display method and apparatus, and electronic device | |
WO2023143299A1 (en) | Message display method and apparatus, device, and storage medium | |
US20070011232A1 (en) | User interface for starting presentations in a meeting | |
WO2008033649A1 (en) | Adding video effects for video enabled applications | |
US20220303503A1 (en) | Parameters for overlay handling for immersive teleconferencing and telepresence for remote terminals | |
US20100061276A1 (en) | Dedicated Call User Interface (UI) for Organizing Collaborative Exchange During A Telephony or Softphone Call | |
KR102506604B1 (en) | Method for providing speech video and computing device for executing the method | |
CN113891135B (en) | Multimedia data playing method and device, electronic equipment and storage medium | |
US8832587B2 (en) | Video window with integrated content | |
CN114422468A (en) | Message processing method, device, terminal and storage medium | |
JP2007158685A (en) | Moving video distribution system | |
US11381628B1 (en) | Browser-based video production | |
CN112751819B (en) | Processing method and device for online conference, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: AVAYA TECHNOLOGY CORPORATION, NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ERHART, GEORGE WILLIAM;MATULA, VALENTINE C.;SKIBA, DAVID JOSEPH;REEL/FRAME:015425/0099;SIGNING DATES FROM 20041124 TO 20041205 |
|
AS | Assignment |
Owner name: CITIBANK, N.A., AS ADMINISTRATIVE AGENT, NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNORS:AVAYA, INC.;AVAYA TECHNOLOGY LLC;OCTEL COMMUNICATIONS LLC;AND OTHERS;REEL/FRAME:020156/0149 Effective date: 20071026 |
|
AS | Assignment |
Owner name: CITICORP USA, INC., AS ADMINISTRATIVE AGENT, NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNORS:AVAYA, INC.;AVAYA TECHNOLOGY LLC;OCTEL COMMUNICATIONS LLC;AND OTHERS;REEL/FRAME:020166/0705 Effective date: 20071026 |
|
AS | Assignment |
Owner name: AVAYA INC, NEW JERSEY Free format text: REASSIGNMENT;ASSIGNORS:AVAYA TECHNOLOGY LLC;AVAYA LICENSING LLC;REEL/FRAME:021156/0082 Effective date: 20080626 |
|
AS | Assignment |
Owner name: AVAYA TECHNOLOGY LLC, NEW JERSEY Free format text: CONVERSION FROM CORP TO LLC;ASSIGNOR:AVAYA TECHNOLOGY CORP.;REEL/FRAME:022677/0550 Effective date: 20050930 |
|
AS | Assignment |
Owner name: BANK OF NEW YORK MELLON TRUST, NA, AS NOTES COLLATERAL AGENT, THE, PENNSYLVANIA Free format text: SECURITY AGREEMENT;ASSIGNOR:AVAYA INC., A DELAWARE CORPORATION;REEL/FRAME:025863/0535 Effective date: 20110211 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., PENNSYLVANIA Free format text: SECURITY AGREEMENT;ASSIGNOR:AVAYA, INC.;REEL/FRAME:029608/0256 Effective date: 20121221 |
|
AS | Assignment |
Owner name: BANK OF NEW YORK MELLON TRUST COMPANY, N.A., THE, PENNSYLVANIA Free format text: SECURITY AGREEMENT;ASSIGNOR:AVAYA, INC.;REEL/FRAME:030083/0639 Effective date: 20130307 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |
|
AS | Assignment |
Owner name: AVAYA INC., CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 029608/0256;ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.;REEL/FRAME:044891/0801 Effective date: 20171128 Owner name: AVAYA INC., CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 025863/0535;ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST, NA;REEL/FRAME:044892/0001 Effective date: 20171128 Owner name: AVAYA INC., CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 030083/0639;ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.;REEL/FRAME:045012/0666 Effective date: 20171128 |
|
AS | Assignment |
Owner name: VPNET TECHNOLOGIES, INC., NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITICORP USA, INC.;REEL/FRAME:045032/0213 Effective date: 20171215 Owner name: AVAYA TECHNOLOGY, LLC, NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITICORP USA, INC.;REEL/FRAME:045032/0213 Effective date: 20171215 Owner name: OCTEL COMMUNICATIONS LLC, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITICORP USA, INC.;REEL/FRAME:045032/0213 Effective date: 20171215 Owner name: SIERRA HOLDINGS CORP., NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITICORP USA, INC.;REEL/FRAME:045032/0213 Effective date: 20171215 Owner name: AVAYA, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITICORP USA, INC.;REEL/FRAME:045032/0213 Effective date: 20171215 |