US7274740B2 - Wireless video transmission system - Google Patents


Info

Publication number
US7274740B2
US7274740B2 (application US10/607,391; US60739103A)
Authority
US
United States
Prior art keywords
frames
target bit
frame
bit rate
receiver
Prior art date
Legal status
Expired - Fee Related, expires
Application number
US10/607,391
Other versions
US20050008074A1 (en
Inventor
Petrus J. L. Van Beek
Hao Pan
Baoxin Li
M. Ibrahim Sezan
Current Assignee
Sharp Corp
Original Assignee
Sharp Laboratories of America Inc
Priority date
Filing date
Publication date
Application filed by Sharp Laboratories of America Inc filed Critical Sharp Laboratories of America Inc
Priority to US10/607,391
Assigned to SHARP LABORATORIES OF AMERICA, INC. Assignors: LI, BAOXIN; PAN, HAO; SEZAN, IBRAHIM; VAN BEEK, PETRUS J.L.
Publication of US20050008074A1
Application granted
Publication of US7274740B2
Assigned to SHARP KABUSHIKI KAISHA. Assignors: SHARP LABORATORIES OF AMERICA INC.

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/115: Selection of the code volume for a coding unit prior to coding
    • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/146: Data rate or code amount at the encoder output
    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/149: Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
    • H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/162: User input
    • H04N19/164: Feedback from the receiver or from the transmission channel
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/172: the region being a picture, frame or field
    • H04N19/177: the unit being a group of pictures [GOP]
    • H04N19/30: using hierarchical techniques, e.g. scalability
    • H04N19/37: with arrangements for assigning different transmission priorities to video input data or to video coded data
    • H04N19/40: using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N19/50: using predictive coding
    • H04N19/503: involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/577: Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/60: using transform coding
    • H04N19/61: in combination with predictive coding
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236: Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate
    • H04N21/2365: Multiplexing of several video streams
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations; Client middleware
    • H04N21/434: Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams; Extraction or processing of SI
    • H04N21/4347: Demultiplexing of several video streams
    • H04N21/436: Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/43615: Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
    • H04N21/4363: Adapting the video or multiplex stream to a specific local network, e.g. an IEEE 1394 or Bluetooth® network
    • H04N21/43637: involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402: involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/44029: for generating different versions
    • H04N21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213: Monitoring of end-user related data
    • H04N21/44222: Analytics of user selections, e.g. selection of programs or purchase activity
    • H04N21/44227: Monitoring of local network, e.g. connection or bandwidth variations; Detecting new devices in the local network
    • H04N21/44231: Monitoring of peripheral device or external card, e.g. to detect processing problems in a handheld device or the failure of an external recording device
    • H04N21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462: Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4621: Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen

Definitions

  • the present invention relates generally to wireless transmission systems, and relates more particularly to a wireless video transmission system.
  • a display device may be utilized to view program information received from a program source.
  • the conventional display device is typically positioned in a stationary location because of restrictions imposed by various physical connections that electrically couple the display device to input devices, output devices, and operating power. Other considerations such as display size and display weight may also significantly restrict viewer mobility in traditional television systems.
  • Portable television displays may advantageously provide viewers with additional flexibility when choosing an appropriate viewing location. For example, in a home environment, a portable television may readily be relocated to view programming at various remote locations throughout the home. A user may thus flexibly view television programming, even while performing other tasks in locations that are remote from a stationary display device.
  • portable television systems typically possess certain detrimental operational characteristics that diminish their effectiveness for use in modern television systems.
  • portable televisions typically receive television signals that are propagated from a remote terrestrial television transmitter to an antenna that is integral with the portable television. Because of the size and positioning constraints associated with a portable antenna, such portable televisions typically exhibit relatively poor reception characteristics, and the subsequent display of the transmitted television signals is therefore often of inadequate quality.
  • an economical wireless television system for flexible home use may enable television viewers to significantly improve their television-viewing experience by facilitating portability while simultaneously providing an increased number of program source selections.
  • a number of media playback systems use continuous media streams, such as video image streams, to output media content.
  • continuous media streams in their raw form often require high transmission rates, or bandwidth, for effective and/or timely transmission.
  • in many cases, the cost and/or effort of providing the required transmission rate is prohibitive.
  • This transmission rate problem is often solved by compression schemes that take advantage of the continuity in content to create highly packed data. Compression methods such as Moving Picture Experts Group (MPEG) methods and their variants for video are well known. MPEG and similar variants use motion estimation of blocks of images between frames to perform this compression. At extremely high resolutions, such as the 1080i resolution used in high definition television (HDTV), the data transmission rate of such a video image stream remains very high even after compression.
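To get a rough sense of the scale involved, the following sketch compares the raw bit rate of a 1080i stream against an illustrative compressed rate. The ~19.4 Mbps figure is the ATSC broadcast HD rate, used here only as an assumed target; it is not a figure taken from this patent:

```python
# Back-of-the-envelope bit-rate estimate for a raw 1080i stream.
WIDTH, HEIGHT = 1920, 1080     # 1080i frame dimensions
FRAMES_PER_SEC = 30            # 60 interlaced fields/s = 30 full frames/s
BITS_PER_PIXEL = 24            # 8 bits per channel, 3 channels, uncompressed

raw_bps = WIDTH * HEIGHT * BITS_PER_PIXEL * FRAMES_PER_SEC
compressed_bps = 19.4e6        # assumed MPEG-2 HD target rate (illustrative)

print(f"raw:        {raw_bps / 1e9:.2f} Gbit/s")          # raw: 1.49 Gbit/s
print(f"compressed: {compressed_bps / 1e6:.1f} Mbit/s")
print(f"ratio:      {raw_bps / compressed_bps:.0f}:1")    # ratio: 77:1
```

Even after a roughly 77:1 compression, the stream still demands tens of megabits per second, which is the regime the rest of this description addresses.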
  • for example, a first program may use 75 megabits/second, a second program may use 25 megabits/second, and a third program may use 50 megabits/second, with the channel bandwidth being distributed by measuring the bit-rate being transmitted.
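The measurement-driven distribution described above can be sketched as a proportional allocator: each program's measured bit-rate determines its share of the channel when total demand exceeds capacity. This is a minimal illustration under assumed names, not the patent's actual algorithm:

```python
def allocate_bandwidth(capacity_mbps, measured_rates_mbps):
    """Split channel capacity among streams in proportion to their
    measured bit-rates (simple proportional-share sketch)."""
    total = sum(measured_rates_mbps)
    if total <= capacity_mbps:
        # Channel can carry everything at full rate.
        return list(measured_rates_mbps)
    scale = capacity_mbps / total
    return [r * scale for r in measured_rates_mbps]

# Programs measured at 75, 25 and 50 Mbit/s sharing a 120 Mbit/s channel:
print(allocate_bandwidth(120, [75, 25, 50]))   # [60.0, 20.0, 40.0]
```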
  • FIG. 1 illustrates a gateway, media sources, receiving units, and a network.
  • FIG. 2 illustrates an analog extender.
  • FIG. 3 illustrates a digital extender.
  • FIG. 4 illustrates GOPs.
  • FIG. 5 illustrates virtual GOPs.
  • FIG. 6 illustrates a more detailed view of an extender.
  • FIG. 7 illustrates an analog source single stream.
  • FIG. 8 illustrates a digital source single stream.
  • FIG. 9 illustrates multiple streams.
  • FIG. 10 illustrates MPEG-2 TM5.
  • FIG. 11 illustrates dynamic rate adaptation with virtual GOPs.
  • FIG. 12 illustrates super GOP by GOP bit allocation for slowly varying channel conditions.
  • FIG. 13 illustrates virtual super GOP by virtual super GOP bit allocation for dynamic channel conditions.
  • FIG. 14 illustrates super frame by super frame bit allocation for dynamic channel conditions.
  • FIG. 15 illustrates user preferences and priority determination for streams.
  • FIG. 16 illustrates the weight of a stream resulting from preferences at a particular point in time.
  • FIG. 17 illustrates the relative weight of streams set or changed at arbitrary times or on user demand.
  • FIG. 18 illustrates a MAC layer model.
  • FIG. 19 illustrates an APPLICATION layer model-based approach.
  • FIG. 20 illustrates an APPLICATION layer packet burst approach.
  • FIG. 1 illustrates a system for transmission of multiple data streams in a network that may have limited bandwidth.
  • the system includes a central gateway media server 210 and a plurality of client receiver units 230, 240, 250.
  • the central gateway media server may be any device that can transmit multiple data streams.
  • the input data streams may be stored on the media server or arrive from an external source, such as a satellite television transmission 260, a digital video disc player, a video cassette recorder, or a cable head end 265, and are transmitted to the client receiver units 230, 240, 250 in a compressed format.
  • the data streams can include display data, graphics data, digital data, analog data, multimedia data, audio data and the like.
  • An adaptive bandwidth system on the gateway media server 210 determines the network bandwidth characteristics and adjusts the bandwidth for the output data streams in accordance with the bandwidth characteristics.
  • the start time of each unit of media for each stream is matched against the estimated transmission time for that unit.
  • if a unit's estimated transmission time lags its start time, the network is deemed to be close to saturation, or already saturated, and the system may select at least one stream as a target for lowering total bandwidth usage.
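The deadline test described above can be sketched as follows: each media unit's scheduled start time is compared with the time its transmission is estimated to finish, and the network is flagged as near saturation when transmissions fall behind. Names and the slack tolerance are hypothetical:

```python
def near_saturation(units, slack_sec=0.0):
    """units: (start_time_sec, est_finish_sec) pairs per media unit, where
    start_time is when the unit is due at the receiver and est_finish is
    the estimated completion of its transmission. The network is deemed
    close to saturation when any transmission is estimated to finish late."""
    return any(est_finish > start + slack_sec for start, est_finish in units)

# The third unit is due at t=3.0 s but estimated to finish at t=3.4 s:
schedule = [(1.0, 0.8), (2.0, 1.9), (3.0, 3.4)]
print(near_saturation(schedule))   # True
```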
  • once the target stream associated with a client receiver unit is chosen, the target stream is modified to transmit less data, which may result in a lower data transmission rate. For example, a decrease in the data to be transmitted can be accomplished by a gradual escalation of the degree of data compression performed on the target stream, thereby reducing the resolution of the target stream.
  • alternatively, the resolution of the target stream can be reduced directly. For example, if the target stream is a video stream, the frame size could be scaled down, reducing the amount of data per frame and thereby reducing the data transmission rate.
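A minimal sketch of these two reduction strategies, gradually escalating compression first and scaling the frame size down only once compression is exhausted. The thresholds, step sizes, and function names are hypothetical, not taken from the patent:

```python
def reduce_stream_rate(quant_scale, frame_scale, overload_ratio,
                       quant_step=1.25, max_quant=4.0, min_frame=0.5):
    """One adaptation step for a target stream: if the stream exceeds its
    bandwidth share (overload_ratio > 1), first coarsen quantization
    (more compression); once quantization is maxed out, scale the frame
    size down instead. Returns the new (quant_scale, frame_scale)."""
    if overload_ratio <= 1.0:
        return quant_scale, frame_scale          # stream fits; no change
    if quant_scale < max_quant:
        return min(quant_scale * quant_step, max_quant), frame_scale
    # Compression exhausted: halve the frame area, down to a floor.
    return quant_scale, max(frame_scale * 0.5, min_frame)

q, f = reduce_stream_rate(1.0, 1.0, overload_ratio=1.4)
print(q, f)   # 1.25 1.0  (compression escalated, frame size untouched)
```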
  • the devices may be located at different physical locations, and, over time, the users may change the location of these devices relative to the gateway. For example, the user may relocate the device near the gateway or farther away from the gateway, or, the physical environment may change significantly over time, both of which affect the performance of the wireless network for that device, and in turn the available bandwidth for other devices. This results in unpredictable and dynamically varying bandwidth.
  • Different devices interconnected to the network have different resources and different usage paradigms.
  • different devices may have different microprocessors, different memory requirements, different display characteristics, different connection bandwidth capabilities, and different battery resources.
  • different usage paradigms may include for example, a mobile handheld device versus a television versus a personal video recorder. This results in unpredictable and dynamically varying network maximum throughput.
  • the transmitted data may need to be in different formats, such as for example, MPEG-2, MPEG-1, H.263, H.261, H.264, MPEG-4, analog, and digital. These different formats may have different impacts on the bandwidth. This results in unpredictable and dynamically varying network maximum throughput.
  • the data provided to the gateway may be in the form of compressed bit streams which may include a constant bit rate (CBR) or variable bit rate (VBR). This results in unpredictable and dynamically varying network maximum throughput.
  • several network technologies may be used for gateway reception and transmission, such as, for example, IEEE 802.11, Ethernet, and power-line networks (e.g., HomePlug Powerline Alliance). While such networks are suitable for data transmission, they tend not to be especially suitable for audio/video content because of the stringent requirements imposed by the nature of audio/video data transmission. Moreover, the network capabilities, and in particular the maximum data throughput offered, are inherently unpredictable and may change dynamically due to the varying conditions described above. Data throughput may be defined as the amount of actual (application) payload bits per second transmitted successfully from the sender to the receiver. It is noted that while the system may refer to audio/video, the concepts apply likewise to video alone and/or audio alone.
  • IEEE 802.11 networks, such as IEEE 802.11a and IEEE 802.11b, can operate at several different data link rates.
  • the report quotes average throughput values within a circular cell with a radius of 65 feet (typical for large houses in the US) of 22.6 Mbps and 5.1 Mbps for 802.11a and 802.11b, respectively. Accordingly, it may be observed that it is not feasible to stream a standard definition and a high definition video signal to two client devices at the same time using an 802.11a system, unless the video rates are significantly reduced. Other situations likewise involve competing traffic from several different audiovisual signals. Moreover, wireless communications suffer from radio frequency interference from devices that are unaware of the network, such as cordless phones and microwave ovens, as previously described. Such interference leads to unpredictable and dynamic variations in network performance, i.e., losses in maximum data throughput/bandwidth.
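The infeasibility noted above follows from simple arithmetic. The 22.6 Mbps average cell throughput is the figure quoted from the cited report; the per-stream rates below are assumed typical MPEG-2 values (SD ≈ 5 Mbps, HD ≈ 19 Mbps), not figures from this patent:

```python
CELL_THROUGHPUT_MBPS = 22.6   # avg 802.11a throughput in a 65 ft cell (cited report)
SD_STREAM_MBPS = 5.0          # assumed standard-definition MPEG-2 rate
HD_STREAM_MBPS = 19.0         # assumed high-definition MPEG-2 rate

demand = SD_STREAM_MBPS + HD_STREAM_MBPS
print(f"demand {demand} Mbps vs capacity {CELL_THROUGHPUT_MBPS} Mbps")
print("feasible without rate reduction:", demand <= CELL_THROUGHPUT_MBPS)  # False
```

One SD plus one HD stream already exceed the average cell throughput, so at least one stream's rate must be reduced.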
  • Wireless Local Area Networks (WLANs), such as 802.11 systems, include efficient error detection and correction techniques at the Physical (PHY) and Medium Access Control (MAC) layers.
  • Such retransmission of frames by the source effectively reduces the inherent error rate of the medium, at the cost of lowering the effective maximum throughput.
  • high error rates may cause the sending stations in the network to switch to lower raw data link rates, again reducing the error rate while decreasing the data rates available to applications.
  • Networks based on power-line communication address similar challenges due to the unpredictable and harsh nature of the underlying channel medium.
  • Systems based on the HomePlug standard include technology for adapting the data link rate to the channel conditions. Similar to 802.11 wireless networks, HomePlug technology contains techniques such as error detection, error correction, and retransmission of frames to reduce the channel error rate, while lowering effective maximum throughput. Due to the dynamic nature of these conditions, the maximum throughput offered by the network may (e.g., temporarily) drop below the data rate required for transmission of AV data streams. This results in loss of AV data, which leads to an unacceptable decrease in the perceived AV quality.
  • a system may robustly stream audio/visual data over (wireless) networks by:
  • FIGS. 2 and 3 depict the transmitting portion of the system, the first having analog video inputs, the second having digital (compressed) video inputs.
  • the extender includes a video encoding or transcoding module, depending on whether the input video is in analog or digital (compressed) format. If the input is analog, the processing steps may include A/D conversion, as well as digital compression, such as by an MPEG-2 encoder, and eventual transmission over the network.
  • the processing may include transcoding of the incoming bit stream to compress the incoming video into an output stream at a different bit rate, as opposed to a regular encoder.
  • a transcoding module normally reduces the bit rate of a digitally compressed input video stream, such as an MPEG-2 bit stream or any other suitable format.
  • the coding/transcoding module is provided with a desired output bit rate (or other similar information) and uses a rate control mechanism to achieve this bit rate.
  • the value of the desired output bit rate is part of information about the transmission channel provided to the extender by a network monitor module.
  • the network monitor monitors the network and estimates the bandwidth available to the video stream in real time.
  • the information from the network monitor is used to ensure that the video stream sent from the extender to a receiver has a bit rate that is matched in some fashion to the available bandwidth (e.g., channel rate).
  • the coder/transcoder may increase the level of compression applied to the video data, thereby gradually decreasing visual quality.
  • In the case of a transcoder, this may be referred to as transrating.
  • the resulting decrease in visual quality by modifying the bit stream is minimal in comparison to the loss in visual quality that would be incurred if a video stream is transmitted at bit rates that can not be supported by the network.
  • the loss of video data incurred by a bit rate that can not be supported by the network may lead to severe errors in video frames, such as dropped frames, followed by error propagation (due to the nature of video coding algorithms such as MPEG).
  • the feedback obtained from the network monitor keeps the output bit rate near an optimum level, so that any loss in quality incurred by transrating is minimal.
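The feedback loop just described can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the class name, the safety factor, and the exponential smoothing constant are all assumptions.

```python
class RateController:
    """Keeps the encoder's target bit rate below the estimated bandwidth."""

    def __init__(self, safety_factor=0.8, smoothing=0.5):
        self.safety_factor = safety_factor  # keep output rate below capacity
        self.smoothing = smoothing          # low-pass filter on estimates
        self.estimate = None                # smoothed bandwidth estimate (bps)

    def update(self, measured_bandwidth):
        """Fold in a new bandwidth measurement from the network monitor."""
        if self.estimate is None:
            self.estimate = measured_bandwidth
        else:
            a = self.smoothing
            self.estimate = a * measured_bandwidth + (1 - a) * self.estimate
        return self.target_bit_rate()

    def target_bit_rate(self):
        """Desired output bit rate handed to the encoder/transcoder."""
        return self.safety_factor * self.estimate
```

The safety factor keeps the output rate strictly below the available bandwidth at all times, while the smoothing dampens reaction to short-lived measurement noise.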
  • the receiver portion of the system may include a regular video decoder module, such as an MPEG-2 decoder.
  • This decoder may be integrated with the network interface (e.g., built into the hardware of a network interface card).
  • the receiving device may rely on a software decoder (e.g., if it is a PC).
  • the receiver portion of the system may also include a counterpart to the network monitoring module at the transmitter. In that case, the network monitoring modules at the transmitter and receiver cooperate to provide the desired estimate of the network resources to the extender system. In some cases the network monitor may be only at the receiver.
  • the extender may further increase or decrease the output video quality in accordance with the device resources by adjusting bandwidth usage accordingly. For example, consider an MPEG-1 source stream at 4 Mbps with 640 by 480 spatial resolution at 30 fps. If it is being transmitted to a resource-limited device, e.g., a handheld with playback capability of 320 by 240 picture resolution at 15 fps, the transcoder may reduce the rate to 0.5 Mbps by simply subsampling the video without increasing the quantization levels. Otherwise, without subsampling, the transcoder may have to increase the level of quantization.
  • a transcoder may also convert the compression format of an incoming digital video stream, e.g., from MPEG-2 format to MPEG-4 format. Therefore, a transcoder may for example: change bit rate, change frame rate, change spatial resolution, and change the compression format.
  • the extender may also process the video using various error control techniques, e.g. such methods as forward error correction and interleaving.
  • dynamic rate adaptation, which generally uses feedback to control the bit rate.
  • the rate of the output video is modified to be smaller than the currently available network bandwidth from the sender to the receiver, most preferably smaller at all times. In this manner the system can adapt to a network that does not have a constant bit rate, which is especially suitable for wireless networks.
  • TM5: MPEG-2 Test Model 5
  • GOP: Group-of-Pictures
  • N_GOP: the length of a GOP in number of pictures.
  • An extension for a time-varying channel can be applied if one can assume that the available bandwidth varies only slowly relative to the duration of a GOP. This may be the case when the actual channel conditions for some reason change only slowly or relatively infrequently. Alternatively, one may only be able to measure the changing channel conditions with coarse time granularity. In either case, the bandwidth can be modeled as a piece-wise constant signal, where changes are allowed only on the boundaries of a (super) GOP. Thus, G_GOP, the number of bits available to a GOP, is allowed to vary on a GOP-by-GOP basis.
  • Each virtual GOP may be the same length as an actual MPEG GOP, any other length, or may have a length that is an integer multiple of the length of an actual MPEG GOP.
  • a virtual GOP typically contains the same number of I-, P- and B-type pictures within a single picture sequence. However, virtual GOPs may overlap each other, where the next virtual GOP is shifted by one (or more) frame with respect to the current virtual GOP. The order of I-, P- and B-type pictures changes from one virtual GOP to the next, but this does not influence the overall bit allocation to each virtual GOP. Therefore, a similar method, as used e.g. in TM5, can be used to allocate bits to a virtual GOP (instead of a regular GOP), but the GOP-level bit allocation is in a sense “re-started” at every frame (or otherwise “re-started” at different intervals).
  • Let R_t denote the remaining number of bits available to code the remaining frames of a GOP, at frame t.
  • Let S_t denote the number of bits actually spent to code the frame at time t.
  • Let N_t denote the number of frames left to code in the current GOP, starting from frame t.
  • R_t is set to 0 at the start of the sequence, and is incremented by G_GOP at the start of every GOP. Also, S_t is subtracted from R_t at the end of coding a picture. It can be shown that R_t can be written as follows, in closed form:
  • G_P = G_GOP / N_GOP denotes the average number of bits available to code a single frame.
  • the constant G_P may be replaced by G_t, which may vary with t.
  • the system may re-compute (1) at every frame t, i.e., for each virtual GOP. Since the remaining number of frames in a virtual GOP is N_GOP, the system may replace N_t by N_GOP, resulting in:
  • the next step is to allocate bits to the current frame at time t, which may be of type I, P, or B.
  • This step takes into account the complexity of coding a particular frame, denoted by C_t.
  • the encoder maintains estimates of the complexity of each type of frame (I, P, or B), which are updated after coding each frame.
  • C_I, C_P and C_B denote the current estimates of the complexity for I, P and B frames.
  • N_I, N_P and N_B denote the number of frames of type I, P and B left to encode in a virtual GOP (note that these are constants in the case of virtual GOPs).
  • TM5 prescribes a method for computing T_I, T_P and T_B, the target number of bits for an I, P, or B picture to be encoded, based on the above parameters.
  • the TM5 equations may be slightly modified to handle virtual GOPs as follows:
  • I, B, P refer to I frames, B frames, and P frames, and C is a complexity measure. It is to be understood that any type of compression rate distortion model, defined in the general sense, may likewise be used.
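A simplified version of this complexity-driven frame-level allocation can be sketched as below. It is only a sketch: TM5's actual equations include additional weighting constants (e.g., K_P and K_B) that are omitted here, and the function name is a placeholder.

```python
def frame_targets(R, counts, complexities):
    """Allocate a (virtual) GOP bit budget R over the remaining frames,
    proportionally to the per-type complexity estimates (simplified;
    TM5's K_P/K_B weighting constants are omitted).

    counts:       {'I': N_I, 'P': N_P, 'B': N_B} frames left to encode
    complexities: {'I': C_I, 'P': C_P, 'B': C_B} complexity estimates
    returns:      per-frame bit targets {'I': T_I, 'P': T_P, 'B': T_B}
    """
    total = sum(counts[k] * complexities[k] for k in ('I', 'P', 'B'))
    return {k: R * complexities[k] / total for k in ('I', 'P', 'B')}
```

Note that the targets for all remaining frames sum back to the GOP budget R, so frames that are more complex to code receive proportionally more bits.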
  • this scheme permits the reallocation of bits on a virtual GOP basis from frame to frame (or other basis consistent with virtual GOP spacing).
  • the usage and bit allocation for one virtual GOP may be tracked and the unused bit allocation for a virtual GOP may be allocated for the next virtual GOP.
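The virtual-GOP bookkeeping described above (re-starting GOP-level allocation at every frame and carrying unused bits forward) might be sketched as follows; the class and variable names are illustrative assumptions, not from the patent.

```python
class VirtualGopAllocator:
    """Re-starts GOP-level bit allocation at every frame (virtual GOPs).
    Unused (or overspent) bits carry over to the next virtual GOP."""

    def __init__(self, n_gop):
        self.n_gop = n_gop   # frames per virtual GOP (N_GOP)
        self.carry = 0.0     # unused (+) or overspent (-) bits so far

    def budget(self, G_t):
        """Bits available to the virtual GOP starting at the current frame,
        given G_t, the per-frame bit budget derived from the current
        target bit rate (target_bit_rate / frame_rate)."""
        return G_t * self.n_gop + self.carry

    def frame_coded(self, allocated, spent):
        """Update the carry-over after coding one frame."""
        self.carry += allocated - spent
```

Because the budget is recomputed at every frame from the current G_t, a change in the target bit rate takes effect immediately rather than at the next GOP boundary.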
  • the basic extender for a single AV stream described above will encode an analog input stream or adapt the bit rate of an input digital bit stream to the available bandwidth without being concerned about the cause of the bandwidth limitations, or about other, competing streams, if any.
  • the system may include a different extender system that processes multiple video streams, where the extender system assumes the responsibility of controlling or adjusting the bit rate of multiple streams in the case of competing traffic.
  • the multi-stream extender employs a “(trans)coder manager” on top of multiple video encoders/transcoders.
  • the system operates on n video streams, where each source may be either analog (e.g. composite) or digital (e.g. MPEG-2 compressed bitstreams).
  • V n denotes input stream n
  • V′ n denotes output stream n
  • R n denotes the bit rate of input stream n (this exists only if input stream n is in already compressed digital form; it is not used if the input is analog), while R′ n denotes the bit rate of output stream n.
  • Each input stream is encoded or transcoded separately, although their bit rates are controlled by the (trans)coder manager.
  • the (trans)coder manager handles competing requests for bandwidth dynamically.
  • the (trans)coding manager allocates bit rates to multiple video streams in such a way that the aggregate of the bit rates of the output video streams matches the desired aggregate channel bit rate.
  • the desired aggregate bit rate again, is obtained from a network monitor module, ensuring that the aggregate rate of multiple video streams does not exceed available bandwidth.
  • Each coder/transcoder again uses some form of rate control to achieve the allocated bit rate for its stream.
  • the system may include multiple receivers (not shown in the diagram). Each receiver in this system has similar functionality as the receiver for the single-stream case.
  • bit rate of the multiple streams should be controlled by some form of bit allocation and rate control in order to satisfy such constraints.
  • a more general and flexible framework is useful for dynamic bit rate adaptation. There are several reasons for this, as follows:
  • the resulting heterogeneity of the environment may be taken into account during optimization of the system.
  • the multi-stream extender system may optionally receive further information as input to the transcoder manager (in addition to information about the transmission channel), as shown in FIG. 6 .
  • bit rates of individual audio/video streams on the network are subject to various constraints.
  • the aggregate rates of individual streams may be smaller than or equal to the overall channel capacity or network bandwidth from sender to receiver. This bandwidth may vary dynamically, due to increases or decreases in the number of streams, due to congestion in the network, due to interference, etc.
  • rate of each individual stream may be bound by both a minimum and a maximum.
  • a maximum constraint may be imposed due to the following reasons.
  • the (trans)coder manager discussed above may employ several strategies. It may attempt to allocate an equal amount of available bits to each stream; however, in this case the quality of streams may vary strongly from one stream to the other, as well as in time. It may also attempt to allocate the available bits such that the quality of each stream is approximately equal; in this case, streams with highly active content will be allocated more bits than streams with less active content. Another approach is to allow users to assign different priorities to different streams, such that the quality of different streams is allowed to vary, based on the preferences of the user(s). This approach is generally equivalent to weighting the individual distortion of each stream when the (trans)coder manager minimizes the overall distortion.
  • the priority or weight of an audio/video stream may be obtained in a variety of manners, but is generally related to the preferences of the users of the client devices.
  • the weights (priorities) discussed here are different from the type of weights or coefficients seen often in literature that correspond to the encoding complexity of a macro block, video frame, group of frames, or video sequence (related to the amount of motion or texture variations in the video), which may be used to achieve a uniform quality among such parts of the video.
  • weights will purposely result in a non-uniform quality distribution across several audio/video streams, where one (or more) such audio/video stream is considered more important than others.
  • Various cases, for example, may include the following, and combinations of the following.
  • the weight of a stream may be the result of a preference that is related to the client device (see FIG. 15 ). That is, in the case of conflicting streams requesting bandwidth from the channel, one device is assigned a priority such that the distortion of streams received by this device is deemed more severe than an equal amount of distortion in a stream received by another device. For instance, the user(s) may decide to assign priority to one TV receiver over another due to their locations. The user(s) may assign a higher weight to the TV in the living room (since it is likely to be used by multiple viewers) compared to a TV in the bedroom or den. In that case, the content received on the TV in the living room will suffer less distortion due to transcoding than the content received on other TVs.
  • priorities may be assigned to different TV receivers due to their relative screen sizes, i.e., a larger reduction in rate (and higher distortion) may be acceptable if a TV set's screen size is sufficiently small.
  • Other device resources may also be translated into weights or priorities.
  • Such weighting could by default be set to fixed values, or using a fixed pattern. Such weighting may require no input from the user, if desired.
  • Such weighting may be set once (during set up and installation). For instance, this setting could be entered by the user, once he/she decides which client devices are part of the network and where they are located. This set up procedure could be repeated periodically, when the user(s) connect new client devices to the network.
  • Such weighting may also be the result of interaction between the gateway and client device.
  • the client device may announce and describe itself to the gateway as a certain type of device. This may result in the assignment by the gateway of a certain weighting or priority value to this device.
  • the weight of a stream may be the result of a preference that is related to a content item (such as a TV program) carried by a particular stream at a particular point in time (see FIG. 16 ). That is, for the duration that a certain type of content is transmitted over a stream, this stream is assigned a priority such that the distortion of this stream is deemed more severe than an equal amount of distortion in other streams with a different type of content, received by the same or other devices. For instance, the user(s) may decide to assign priority to TV programs on the basis of their genre, or other content-related attributes. These attributes, e.g. genre information, about a program can be obtained from an electronic program guide. These content attributes may also be based on knowledge of the channel of the content (e.g.
  • the user(s) may, for example, assign a higher weight to movies compared to other TV programs such as game shows.
  • the first stream is assigned a priority such that it will be distorted less by transcoding than the second stream.
  • Such weighting could by default be set to fixed values, or using a fixed pattern. Such weighting may require no input from the user, if desired.
  • Such weighting may be set once (during set up and installation). For instance, this setting could be entered by the user, once he/she decides which type(s) of content are important to him/her. Then, during operation, the gateway may match the description of user preferences (one or more user preferences) to descriptions of the programs transmitted. The actual weight could be set as a result of this matching procedure. The procedure to set up user preferences could be repeated periodically.
  • the user preference may be any type of preference, such as those of MPEG-7 or TV Anytime.
  • the system may likewise take the user's presence (any user or a particular user) into account to select, at least in part, the target bit rate.
  • the user may provide direct input, such as via a remote control. Also, the system may include priorities among the user preferences to select the target bit rate.
  • Such weighting may also be the result of the gateway tracking the actions of the user. For instance, the gateway may be able to track the type of content that the user(s) consume frequently. The gateway may be able to infer user preferences from the actions of the user(s). This may result in the assignment by the gateway of a certain weighting or priority value to certain types of content.
  • the relative weight of streams may also be set or changed at arbitrary times or on user demand (see FIG. 17 ).
  • Such weighting may be bound to a particular person in the household. For instance, one person in a household may wish to receive the highest possible quality content, no matter what device he/she uses. In this case, the weighting can be changed according to which device that person is using at any particular moment.
  • Such weighting could be set or influenced at an arbitrary time, for instance, using a remote control device.
  • Weighting could also be based on whether a user is recording content, as opposed to viewing. Weighting could be such that a stream is considered higher priority (hence should suffer less distortion) if that stream is being recorded (instead of viewed).
  • the relative weight of streams may also be set based on their modality.
  • the audio and video streams of an audiovisual stream may be separated and treated differently during their transmission.
  • the audio part of an audiovisual stream may be assigned a higher priority than the video part. This case is motivated by the fact that when viewing a TV program, in many cases, loss of audio information is deemed more severe by users than loss of video information from the TV signal. This may be the case, for instance, when the viewer is watching a sports program, where a commentator provides crucial information. As another example, it may be that users do not wish to degrade the quality of audio streams containing high-quality music. Also, the audio quality could vary among different speakers or be sent to different speakers.
  • the physical and data link layers of the aforementioned networks are designed to mitigate the adverse conditions of the channel medium.
  • One of the characteristics of these networks specifically affects bit allocation among multiple streams as in a multi-stream extender system discussed here.
  • a gateway system may be communicating at different data link rates with different client devices.
  • WLANs based on IEEE 802.11 can operate at several data link rates, and may switch or select data link rates adaptively to reduce the effects of interference or distance between the access point and the client device. Greater distances and higher interference cause the stations in the network to switch to lower raw data link rates. This may be referred to as multi-rate support.
  • the fact that the gateway may be communicating with different client devices at different data rates, in a single wireless channel, affects the model of the channel as used in bit allocation for joint coding of multiple streams.
  • Prior work in rate control and bit allocation uses a conventional channel model, where there is a single bandwidth that can simply be divided among AV streams in direct proportion to the requested rates for individual AV streams.
  • the present inventors determined that this is not the case in LANs such as 802.11 WLANs due to their multi-rate capability.
  • Such a wireless system may be characterized in that the sum of the rates of the individual links is not necessarily the same as the total bandwidth available from the system for allocation among the different links. In this manner, a 10 Mbps video signal and a 20 Mbps video signal may not be capable of being transmitted together by a system having a maximum bandwidth of 30 Mbps.
  • the bandwidth used by a particular wireless link in an 802.11 wireless system is temporal in nature and is related to the maximum bandwidth of that particular wireless link.
  • If link 1 has a capacity of 36 Mbps and the data is transmitted at a rate of 18 Mbps, the usage of that link is 50%. This results in using 50% of the system's overall bandwidth.
  • If link 2 has a capacity of 24 Mbps and the data is transmitted at a rate of 24 Mbps, the usage of link 2 is 100%. Using link 2 results in using 100% of the system's overall bandwidth, leaving no bandwidth for other links; thus only one stream can be transmitted.
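The airtime accounting in these two examples can be expressed as a small helper; this is a sketch of the multi-rate channel model, with an assumed function name.

```python
def channel_utilization(rates, capacities):
    """Fraction of the shared channel consumed when stream n is sent at
    rates[n] over a link whose maximum throughput is capacities[n].
    Each link's airtime share is rate/capacity; the shares of all links
    must sum to at most 1.0 for the set of streams to be feasible."""
    return sum(r / h for r, h in zip(rates, capacities))

# The example from the text: link 1 at 18 of 36 Mbps uses 50% of the
# channel; link 2 at 24 of 24 Mbps alone uses 100% of it.
u = channel_utilization([18e6, 24e6], [36e6, 24e6])
# u == 1.5, so this pair of streams is infeasible on one shared channel
```

This is why the sum of the link rates (30 Mbps here, versus 42 Mbps of per-link capacity) is not the right feasibility test: airtime fractions, not raw rates, are what the shared channel actually budgets.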
  • a more optimal approach to rate adaptation of multiple streams is to apply joint bit allocation/rate control. This approach applies to the case where the input streams to the multi-stream extender system are analog, as well as the case where the input streams are already in compressed digital form.
  • R n , R′ n and R C may be time-varying in general; hence, these are functions of time t.
  • One form of the overall distortion criterion D is a weighted average of the distortion of the individual streams:
  • D = max_n { p_n · D_n(R′_n) }  (5)
  • a conventional channel model is used, similar to cable TV, where an equal amount of bit rate offered to a stream corresponds to an equal amount of utilization of the channel, though the approach may be extended to the wireless-type utilization described above. Therefore, the goal is to minimize a criterion such as (4) or (5), subject to the following constraints:
  • Σ_{n=1}^{N_L} R′_n ≤ R_C  (6) and, for all n, 0 ≤ a_n ≤ R′_n ≤ b_n ≤ R_n  (7) at any given time t.
  • Σ_{n=1}^{N_L} R_n > R_C  (8)
  • an optimal solution that minimizes the distortion criterion as in (5) is one where the (weighted) distortion values of individual streams are all equal.
  • (6) embodies a constraint imposed by the channel under a conventional channel model. This constraint is determined by the characteristics of the specific network. A different type of constraint will be used when applied to LANs with multi-rate support.
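Under an assumed toy distortion model D_n(R) = C_n / R (an illustration, not the patent's model), the min-max criterion (5) together with the sum-rate constraint (6) has a closed-form solution in which all weighted distortions come out equal, matching the observation above:

```python
def equalize_weighted_distortion(p, C, R_channel):
    """Rates R'_n minimizing max_n p_n*D_n(R'_n) under sum(R'_n) = R_channel,
    assuming the toy model D_n(R) = C_n / R. Setting p_n*C_n/R'_n equal
    to a common value d for all n and summing gives
    R'_n = R_channel * p_n*C_n / sum_j(p_j*C_j).
    (The per-stream bounds of constraint (7) are omitted here.)"""
    s = sum(pn * cn for pn, cn in zip(p, C))
    return [R_channel * pn * cn / s for pn, cn in zip(p, C)]
```

Note how a higher weight p_n draws more rate to stream n, lowering its distortion, which is exactly the intended effect of the user-assigned priorities.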
  • a normal MPEG-2 GOP (Group-of-Pictures)
  • a super GOP is formed over multiple MPEG-2 streams and consists of N_GOP super frames, where a super frame is a set of frames containing one frame from each stream and all frames in a super frame coincide in time.
  • a super GOP always contains an integer number of stream-level MPEG-2 GOPs, even when the GOPs of individual streams are not the same length and not aligned.
  • the bit allocation method includes a target number of bits assigned to a super GOP. This target number T_s is the same for every super GOP and is derived from the channel bit rate, which is assumed fixed.
  • Given T_s, the bit allocation is done for each super frame within a super GOP.
  • the resulting target number of bits for a super frame, T_t, depends on the number of I, P, and B frames in the given streams.
  • Given T_t, the bit allocation is done for each frame within a super frame.
  • the resulting target number of bits for a frame within a super frame at time t is denoted by T_t,n.
  • the existing technique is based on the use of a complexity measure C for a video frame, that represents the “complexity” of encoding that frame.
  • streams are allocated bits proportional to the estimated complexity of the frames in each stream. That is, streams with frames that are more “complex” to code, receive more bits during bit allocation compared to streams that are less “complex” to code, resulting in an equal amount of distortion in each stream.
  • the complexity measure C for a video frame is defined as the product of the quantization value used to compress the DCT coefficients of that video frame, and the resulting number of bits generated to code that video frame (using that quantization value). Therefore, the target number of bits T_t,n for a particular frame within a super frame can be computed on the basis of an estimate of the complexity of that frame, C_t,n, and the quantizer used for that frame, Q_t,n: T_t,n = C_t,n / Q_t,n.
  • The value of C_t,n is assumed constant within a stream for all future frames of the same type (I, P or B) to be encoded. Therefore, C_t,n equals either C_I,n, C_P,n, or C_B,n depending on the frame type.
  • the sum of the number of bits allocated to all frames within a super frame should be equal to the number of bits allocated to that super frame, i.e.,
  • the technique uses an equal quantizer value Q for all frames in all streams, in order to achieve uniform picture quality.
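The common-quantizer allocation just described can be sketched directly: solving T_t,n = C_t,n / Q together with Σ_n T_t,n = T_t gives Q = Σ_n C_t,n / T_t, so each stream's frame receives bits in proportion to its complexity. The function name is an assumption.

```python
def superframe_allocation(T_t, complexities):
    """Split the super-frame bit budget T_t among the coinciding frames
    of all streams using one common quantizer Q for uniform quality.
    From T_t,n = C_t,n / Q and sum_n T_t,n = T_t it follows that
    Q = sum_n C_t,n / T_t, i.e. allocation proportional to complexity."""
    Q = sum(complexities) / T_t          # common quantizer for all streams
    return [c / Q for c in complexities]
```

A usage example: with a budget of 300 bits and per-stream complexities [100, 200], the second stream's frame receives twice the bits of the first, and the targets sum back to the super-frame budget.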
  • An extension can be applied if one can assume that the target bit rate varies only slowly relative to the duration of a (super) GOP. This may be the case when the actual channel conditions for some reason change only slowly or relatively infrequently. Alternatively, one may only be able to measure the changing channel conditions with coarse time granularity. In either case, the target bit rate cannot be made to vary more quickly than a certain value dictated by the physical limitations. Therefore, the target bit rate can be modeled as a piece-wise constant signal, where changes are allowed only on the boundaries of a (super) GOP.
  • T_s is allowed to vary on a (super) GOP-by-GOP basis.
  • Another extension is to use the concept of virtual GOPs for the case where the target bit rate varies quickly relative to the duration of a (super) GOP, i.e., the case where adjustments to the target bit rate and bit allocation must be made on a (super) frame-by-frame basis.
  • the use of virtual GOPs was explained for the single-stream dynamic rate adaptation above.
  • the concept of virtual GOPs extends to the concept of virtual super GOPs.
  • Another bit allocation approach in joint coding of multiple streams in a LAN environment is suitable for those networks that have multi-rate support.
  • an access point in the gateway may be communicating at different data link rates with different client devices.
  • the maximum data throughput from the gateway to one device may be different from the maximum throughput from the gateway to another device, while transmission to each device contributes to the overall utilization of a single, shared, channel.
  • There are N_L devices on a network sharing available channel capacity. It may be assumed there are N_L streams being transmitted to these N_L devices (one stream per device).
  • the system employs a multi-stream manager (i.e., multi-stream transcoder or encoder manager) that is responsible for ensuring the best possible quality of video transmitted to these devices.
  • this throughput varies per device and varies with time due to variations in the network: H_n(t).
  • H_n, the maximum data throughput, can be measured at a sufficiently fine granularity in time.
  • the maximum data throughput H_n is measured in bits per second. Note that the maximum throughput H_n is actually an average over a certain time interval, e.g., over the duration of a video frame or group of frames.
  • the bandwidth or maximum data throughput for device n may be estimated from knowledge about the raw data rate used for communication between the access point and device n, the packet length (in bits), and measurements of the packet error rate. Other methods to measure the maximum throughput may also be used.
  • One particular model of the (shared) channel is such that the gateway communicates with each client device n for a fraction f_n of the time. For example, during a fraction f_1 of the time, the home gateway is transmitting video stream 1 to device 1, and during a fraction f_2 of the time, the gateway is transmitting video stream 2 to device 2, and so on. Therefore, an effective throughput is obtained from the gateway to client n that is equal to: f_n · H_n.
  • the following channel constraint holds over any time interval:
  • Σ_{n=1}^{N_L} f_n ≤ 1.0  (19), i.e., the sum of channel utilization fractions must be smaller than (or equal to) 1.0. If these fractions add up to 1.0, the channel is utilized to its full capacity.
  • Let R_n denote the rate of the original (source) video stream n.
  • Let R′_n denote the rate of the transrated (output) video stream n.
  • the problem of determining a set of fractions f n is an under-constrained problem.
  • the above relationships do not provide a unique solution.
  • the goal is to find a solution to this problem that maximizes some measure of the overall quality of all video streams combined.
  • An embodiment is based on a joint coding principle, where the bit rates of different streams are allowed to vary based on their relative coding complexity, in order to achieve a generally uniform picture quality.
  • This approach maximizes the minimum quality among the video streams that are jointly coded, i.e., this approach attempts to minimize distortion criterion (5).
  • There are N_L video streams, where each stream is MPEG-2 encoded with GOPs of equal size N_G.
  • the first step in some bit allocation techniques is to assign a target number of bits to each GOP in a super GOP, where each GOP belongs to a different stream n. The allocation is performed in proportion to the relative complexity of each GOP in a super GOP.
  • the second step in the bit allocation procedure is to assign a target number of bits to each frame of the GOP of each video stream.
  • Let T_n denote the target number of bits assigned to the GOP of stream n (within a super GOP).
  • Let S_n,t denote the number of bits generated by the encoder/transcoder for frame t of video stream n.
  • the total number of bits generated for stream n over the course of a GOP should be equal (or close) to T_n, i.e., Σ_t S_{n,t} ≈ T_n.
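The first allocation step, assigning GOP targets T_n in proportion to relative complexity, can be sketched as below; the function name, complexity values, and budget are illustrative assumptions, not taken from this description.

```python
def allocate_gop_targets(super_gop_bits, complexities):
    """Split a super-GOP bit budget among the N_L streams in direct
    proportion to the coding complexity of each stream's GOP."""
    total_complexity = sum(complexities)
    return [super_gop_bits * x / total_complexity for x in complexities]

# Illustrative: a 4 Mbit super-GOP budget shared by three streams whose
# GOPs have relative complexities 3 : 2 : 1.
targets = allocate_gop_targets(4_000_000, [3.0, 2.0, 1.0])
print([round(t) for t in targets])  # [2000000, 1333333, 666667]
```

The more complex a stream's GOP, the more bits it receives, which is what drives all jointly coded streams toward a generally uniform picture quality.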
  • Equation (27) would simply be modified with an additional factor to allow for this. For instance, there may be non-AV streams active in the network that consume some of the channel capacity. In that case, some capacity has to be set aside for such streams, and the optimization of the rates of the AV streams should take this into account by keeping the mentioned sum below 1.0.
  • the actual target rate for each GOP can be computed with (26).
  • This second step, assigning a target number of bits to each frame of the GOP of each video stream, can be achieved using existing bit allocation methods, such as the one provided in TM5. Subsequent coding or transcoding can be performed with any standard method, in this case any encoder/transcoder compliant with MPEG-2 (see FIG. 12).
  • Equation (27) would be executed at every frame time, to assign a target number of bits to a virtual GOP of a particular stream n. Subsequently, a target number of bits for each frame within each virtual GOP must be assigned, using, for instance, equations (3).
  • the above method can be applied in the case where GOPs are not used, i.e., the above method can be applied on a frame-by-frame basis, instead of on a GOP-by-GOP basis (see FIG. 14 ).
  • rate control is applied on a frame-by-frame basis.
  • the above method can still be used to assign a target number of bits to each frame, in accordance to the relative coding complexity of each frame within the set of frames from all streams.
  • One embodiment uses a single-stream system, as illustrated in FIG. 7 .
  • This single-stream system has a single analog AV source.
  • the analog AV source is input to a processing module that contains an AV encoder that produces a digitally compressed bit stream, e.g., an MPEG-2 or MPEG-4 bit stream.
  • the bit rate of this bit stream is dynamically adapted to the conditions of the channel.
  • This AV bit stream is transmitted over the channel.
  • the connection between transmitter and receiver is strictly point-to-point.
  • the receiver contains an AV decoder that decodes the digitally compressed bit stream.
  • Another embodiment is a single-stream system, as illustrated in FIG. 8.
  • This single-stream system has a single digital AV source, e.g. an MPEG-2 or MPEG-4 bit stream.
  • the digital source is input to a processing module that contains a transcoder/transrater that outputs a second digital bit stream.
  • the bit rate of this bit stream is dynamically adapted to the conditions of the channel.
  • This AV bit stream is transmitted over the channel.
  • the connection between transmitter and receiver is strictly point-to-point.
  • the receiver contains an AV decoder that decodes the digitally compressed bit stream.
  • Another embodiment is a multi-stream system, as illustrated in FIG. 9.
  • This multi-stream system has multiple AV sources, where some sources may be in analog form, and other sources may be in digital form (e.g., MPEG-2 or MPEG-4 bit streams).
  • These AV sources are input to a processing module that contains zero or more encoders (analog inputs) as well as zero or more transcoders (digital inputs).
  • Each encoder and/or transcoder produces a corresponding output bitstream.
  • the bit rates of these bit streams are dynamically adapted to the conditions of the channel, so as to optimize the overall quality of all streams.
  • the system may also adapt these streams based on information about the capabilities of receiver devices.
  • Each receiver contains an AV decoder that decodes the digitally compressed bit stream.
  • the implementation of a system may be done in such a manner that the system is free from additional probing “traffic” from the transmitter to the receiver. In this manner, no additional burden is placed on the network bandwidth by the transmitter.
  • a limited amount of network traffic from the receiver to the transmitter may carry feedback that can be used as a mechanism to monitor the network.
  • the network monitoring for bandwidth utilization may be performed at the MAC layer
  • the system described herein preferably does the network monitoring at the APPLICATION layer.
  • at the application layer, the implementation is less dependent on the particular network implementation and may be used in a broader range of networks.
  • many wireless protocol systems include a physical layer, a MAC layer, a transport/network layer, and an application layer.
  • the parameters may be measured at the receiver and then sent back over the channel to the transmitter. While measuring the parameters at the receiver may be implemented without impacting the system excessively, it does increase channel usage and involves a delay between the measurement at the receiver and the arrival of information at the transmitter.
  • the parameters may be measured at the transmitter.
  • the MAC layer of the transmitter has knowledge of what has been sent and when.
  • the transmitter MAC also has knowledge of what has been received and when it was received through the acknowledgments.
  • the system may use the data link rate and/or packet error rate (number of retries) from the MAC layer.
  • the data link rate and/or packet error rate may be obtained directly from the MAC layer, the 802.11 management information base parameters, or otherwise obtained in some manner.
  • FIG. 18 illustrates the re-transmission of lost packets and the fall-back to lower data link rates between the transmitter and the receiver for a wireless transmission (or communication) system.
  • the packets carry P payload bits.
  • the time T it takes to transmit a packet with P bits may be computed, given the data link rate, number of retries, and a priori knowledge of MAC and PHY overhead (e.g., duration of the contention window, length of headers, time it takes to send an acknowledgment, etc.). Accordingly, the maximum throughput may be calculated as P/T (bits/second).
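As a sketch of the P/T computation (the overhead figure and retry count below are hypothetical, and modeling a fixed per-attempt overhead is a simplification):

```python
def max_throughput(payload_bits, link_rate_bps, retries, overhead_s):
    """Estimate maximum throughput as P/T: T is the time to deliver one
    packet of P payload bits, counting each (re)transmission attempt's
    airtime plus fixed MAC/PHY overhead (contention, headers, ACK)."""
    attempts = 1 + retries
    t = attempts * (payload_bits / link_rate_bps + overhead_s)
    return payload_bits / t

# Illustrative: 12000-bit packets at a 24 Mbps 802.11a link rate, one
# retry on average, 200 microseconds of overhead per attempt.
print(max_throughput(12_000, 24e6, 1, 200e-6))
```

With no retries the same parameters give exactly twice the throughput, showing how retransmissions cut into the effective rate even when the nominal link rate is unchanged.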
  • the packets are submitted to the transmitter, which may require retransmission in some cases.
  • the receiver receives the packets from the transmitter, and at some point thereafter indicates that the packet has been received to the application layer.
  • the receipt of packets may be used to indicate the rate at which they are properly received, or otherwise whether that rate is increasing or decreasing. This information may be used to determine the available bandwidth or maximum throughput.
  • FIG. 20 illustrates an approach based on forming bursts of packets at the transmitter, releasing such bursts periodically into the channel as fast as possible, and measuring the maximum throughput of the system. By repeating the process on a periodic basis the maximum throughput of a particular link may be estimated, while the effective throughput of the data may be lower than the maximum.
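The burst-probing idea of FIG. 20 can be sketched as below; `send` is a placeholder for the application-layer transmission call, and the injectable `clock` parameter is our addition to keep the sketch testable.

```python
import time

def probe_throughput(send, packets, payload_bits, clock=time.monotonic):
    """Release a burst of packets back-to-back into the channel and
    estimate the link's maximum throughput from the elapsed time."""
    start = clock()
    for packet in packets:
        send(packet)
    elapsed = clock() - start
    return len(packets) * payload_bits / elapsed
```

Repeating the probe periodically tracks the link's maximum throughput even while the stream's effective throughput stays lower than that maximum.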

Abstract

A transmission system suitable for video.

Description

BACKGROUND OF THE INVENTION
The present invention relates generally to wireless transmission systems, and relates more particularly to a wireless video transmission system.
Developing an effective method for implementing enhanced television systems is a significant consideration for contemporary television designers and manufacturers. In conventional television systems, a display device may be utilized to view program information received from a program source. The conventional display device is typically positioned in a stationary location because of restrictions imposed by various physical connections that electrically couple the display device to input devices, output devices, and operating power. Other considerations such as display size and display weight may also significantly restrict viewer mobility in traditional television systems.
Portable television displays may advantageously provide viewers with additional flexibility when choosing an appropriate viewing location. For example, in a home environment, a portable television may readily be relocated to view programming at various remote locations throughout the home. A user may thus flexibly view television programming, even while performing other tasks in locations that are remote from a stationary display device.
However, portable television systems typically possess certain detrimental operational characteristics that diminish their effectiveness for use in modern television systems. For example, in order to eliminate restrictive physical connections, portable televisions typically receive television signals that are propagated from a remote terrestrial television transmitter to an antenna that is integral with the portable television. Because of the size and positioning constraints associated with a portable antenna, such portable televisions typically exhibit relatively poor reception characteristics, and the subsequent display of the transmitted television signals is therefore often of inadequate quality.
Other factors and considerations are also relevant to effectively implementing an enhanced wireless television system. For example, the evolution of digital data network technology and wireless digital transmission techniques may provide additional flexibility and increased quality to portable television systems. However, current wireless data networks typically are not optimized for flexible transmission and reception of video information.
Furthermore, a significant proliferation in the number of potential program sources (both analog and digital) may benefit a system user by providing an abundance of program material for selective viewing. In particular, an economical wireless television system for flexible home use may enable television viewers to significantly improve their television-viewing experience by facilitating portability while simultaneously providing an increased number of program source selections.
However, because of the substantially increased system complexity, such an enhanced wireless television system may require additional resources for effectively managing the control and interaction of various system components and functionalities. Therefore, for all the foregoing reasons, developing an effective method for implementing enhanced television systems remains a significant consideration for designers and manufacturers of contemporary television systems.
A number of media playback systems use continuous media streams, such as video image streams, to output media content. However, some continuous media streams in their raw form often require high transmission rates, or bandwidth, for effective and/or timely transmission. In many cases, the cost and/or effort of providing the required transmission rate is prohibitive. This transmission rate problem is often solved by compression schemes that take advantage of the continuity in content to create highly packed data. Compression methods such as Motion Picture Experts Group (MPEG) methods and their variants for video are well known. MPEG and similar variants use motion estimation of blocks of images between frames to perform this compression. With extremely high resolutions, such as the resolution of 1080i used in high definition television (HDTV), the data transmission rate of such a video image stream will be very high even after compression.
One problem posed by such a high data transmission rate is data storage. Recording or saving high resolution video image streams for any reasonable length of time requires considerably large amounts of storage that can be prohibitively expensive. Another problem presented by a high data transmission rate is that many output devices are incapable of handling the transmission. For example, display systems that can be used to view video image streams having a lower resolution may not be capable of displaying such a high resolution. Yet another problem is the transmission of continuous media in networks with a limited bandwidth or capacity. For example, in a local area network with multiple receiving/output devices, such a network will often have a limited bandwidth or capacity, and hence be physically and/or logistically incapable of simultaneously supporting multiple receiving/output devices.
Laksono, U.S. Patent Application Publication Number 2002/0140851 A1 published Oct. 3, 2002 discloses an adaptive bandwidth footprint matching for multiple compressed video streams in a limited bandwidth network.
Wang and Vincent in a paper entitled Bit Allocation and Constraints for Joint Coding of Multiple Video Programs, IEEE Transaction on Circuits and Systems for Video Technology, Vol. 9, No. 6, September 1999 discuss a multi-program transmission system in which several video programs are compressed, multiplexed, and transmitted over a single channel. The aggregate bit rate of the programs has to be equal to (or less than) the bandwidth (e.g., channel rate). This can be achieved by controlling either each individual program bit rate (independent coding) or the aggregate bit rate (joint coding). Thus in order to achieve such bit rate allocation, with a channel having 150 megabits/second of bandwidth, a first program may use 75 megabits/second, a second program may use 25 megabits/second, and a third program may use 50 megabits/second, with the channel bandwidth being distributed by measuring the bit-rate being transmitted.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a gateway, media sources, receiving units, and a network.
FIG. 2 illustrates an analog extender.
FIG. 3 illustrates a digital extender.
FIG. 4 illustrates GOPs.
FIG. 5 illustrates virtual GOPs.
FIG. 6 illustrates a more detailed view of an extender.
FIG. 7 illustrates an analog source single stream.
FIG. 8 illustrates a digital source single stream.
FIG. 9 illustrates multiple streams.
FIG. 10 illustrates MPEG-2 TM5.
FIG. 11 illustrates dynamic rate adaptation with virtual GOPs.
FIG. 12 illustrates slowly varying channel conditions of super GOP by super GOP bit allocation.
FIG. 13 illustrates dynamic channel conditions of virtual super GOP by virtual super GOP bit allocation.
FIG. 14 illustrates dynamic channel conditions of super frame by super frame bit allocation.
FIG. 15 illustrates user preferences and priority determination for streams.
FIG. 16 illustrates the weight of a stream resulting from preferences at a particular point in time.
FIG. 17 illustrates the relative weight of streams set or changed at arbitrary times or on user demand.
FIG. 18 illustrates a MAC layer model.
FIG. 19 illustrates an APPLICATION layer model-based approach.
FIG. 20 illustrates an APPLICATION layer packet burst approach.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
FIG. 1 illustrates a system for transmission of multiple data streams in a network that may have limited bandwidth. The system includes a central gateway media server 210 and a plurality of client receiver units 230, 240, 250. The central gateway media server may be any device that can transmit multiple data streams. The input data streams may be stored on the media server or arrive from an external source, such as a satellite television transmission 260, a digital video disc player, a video cassette recorder, or a cable head end 265, and are transmitted to the client receiver units 230, 240, 250 in a compressed format. The data streams can include display data, graphics data, digital data, analog data, multimedia data, audio data and the like. An adaptive bandwidth system on the gateway media server 210 determines the network bandwidth characteristics and adjusts the bandwidth for the output data streams in accordance with the bandwidth characteristics.
In one existing system, the start time of each unit of media for each stream is matched against the estimated transmission time for that unit. When any one actual transmission time exceeds its estimated transmission time by a predetermined threshold, the network is deemed to be close to saturation, or already saturated, and the system may select at least one stream as a target for lowering total bandwidth usage. Once the target stream associated with a client receiver unit is chosen, the target stream is modified to transmit less data, which may result in a lower data transmission rate. For example, a decrease in the data to be transmitted can be accomplished by a gradual escalation of the degree of data compression performed on the target stream, thereby reducing the resolution of the target stream. If escalation of the degree of data compression alone does not adequately reduce the data to be transmitted to prevent bandwidth saturation, the resolution of the target stream can also be reduced. For example, if the target stream is a video stream, the frame size could be scaled down, reducing the amount of data per frame, and thereby reducing the data transmission rate.
By way of background, the bandwidth requirements for acceptable quality of different types of content vary significantly:
    • CD audio is generally transmitted at about 1 Mbps;
    • Standard definition video (MPEG-2) is generally transmitted at about 6 Mbps;
    • High Definition video (MPEG-2) is generally transmitted at about 20 Mbps; and
    • Multiple audio/video streams are generally transmitted at about 50-150 Mbps or more.
      The overall quality can be expressed in many different ways, such as for example, the peak signal-to-noise ratio, delay (<100 ms for effective real-time two-way communication), synchronization between audio and video (<10 ms typically), and jitter (time varying delay). In many cases the audio/video streams are unidirectional, but may include a back-channel for communication.
There are many characteristics that the present inventors identified that may be considered for an audio/visual transmission system in order to achieve improved results over the technique described above.
(1) The devices may be located at different physical locations, and, over time, the users may change the location of these devices relative to the gateway. For example, the user may relocate the device near the gateway or farther away from the gateway, or, the physical environment may change significantly over time, both of which affect the performance of the wireless network for that device, and in turn the available bandwidth for other devices. This results in unpredictable and dynamically varying bandwidth.
(2) Different devices interconnected to the network have different resources and different usage paradigms. For example, different devices may have different microprocessors, different memory requirements, different display characteristics, different connection bandwidth capabilities, and different battery resources. In addition, different usage paradigms may include for example, a mobile handheld device versus a television versus a personal video recorder. This results in unpredictable and dynamically varying network maximum throughput.
(3) Multiple users may desire to access the data from the system simultaneously using different types of devices. As users access and stop accessing data, the network conditions will tend to change dynamically. This results in unpredictable and dynamically varying network maximum throughput.
(4) Depending on the client device the transmitted data may need to be in different formats, such as for example, MPEG-2, MPEG-1, H.263, H.261, H.264, MPEG-4, analog, and digital. These different formats may have different impacts on the bandwidth. This results in unpredictable and dynamically varying network maximum throughput.
(5) The data provided to the gateway may be in the form of compressed bit streams which may include a constant bit rate (CBR) or variable bit rate (VBR). This results in unpredictable and dynamically varying network maximum throughput.
Various network technologies may be used for the gateway reception and transmission, such as for example, IEEE 802.11, Ethernet, and power-line networks (e.g., HomePlug Powerline Alliance). While such networks are suitable for data transmission, they do not tend to be especially suitable for audio/video content because of the stringent requirements imposed by the nature of audio/video data transmission. Moreover, the network capabilities, and in particular the data maximum throughput offered, are inherently unpredictable and may dynamically change due to varying conditions described above. The data throughput may be defined in terms of the amount of actual (application) payload bits (per second) being transmitted from the sender to the receiver successfully. It is noted that while the system may refer to audio/video, the concepts are likewise used for video alone and/or audio alone.
With reference to one particular type of wireless network, namely IEEE 802.11 (such as IEEE 802.11a and 802.11b), such networks can operate at several different data link rates:
    • 6, 9, 12, 18, 24, 36, 48, or 54 Mbps for 802.11(a), and
    • 1, 2, 5.5, or 11 Mbps for 802.11(b).
      However, the actual maximum throughput as seen by the application layer is lower due to protocol overhead and depends on the distance between the client device and the access point (AP), and the orientation of the client device. Accordingly, the potential maximum throughput for a device within a cell (e.g., a generally circular area centered around the AP) is highest when the device is placed close to the AP and lower when it is farther away. In addition to the distance, other factors contribute to lowering the actual data maximum throughput, such as the presence of walls and other building structures, and radio-frequency interference due to the use of cordless phones and microwave ovens. Furthermore, multiple devices within the same cell communicating with the same AP must share the available cell maximum throughput.
A case study by Chen and Gilbert, "Measured Performance of 5-GHz 802.11a wireless LAN systems", Atheros Communications, 27 Aug. 2001 shows that the actual maximum throughput of an IEEE 802.11a system in an office environment is only about 23 Mbps at 24 feet, and falls below 20 Mbps (approximately the rate of a single high definition video signal) at ranges over 70 feet. The maximum throughput of an 802.11(b) system is barely 6 Mbps and falls below 6 Mbps (approximately the rate of a single standard definition video signal at DVD quality) at ranges over 25 feet. The report quotes average throughput values for within a circular cell with radius of 65 feet (typical for large size houses in the US) as 22.6 Mbps and 5.1 Mbps for 802.11a and 802.11b, respectively. Accordingly, it may be observed that it is not feasible to stream a standard definition and a high definition video signal to two client devices at the same time using an 802.11a system, unless the video rates are significantly reduced. Other situations likewise involve competing traffic from several different audiovisual signals. Moreover, wireless communications suffer from radio frequency interference from devices that are unaware of the network, such as cordless phones and microwave ovens, as previously described. Such interference leads to unpredictable and dynamic variations in network performance, i.e., losses in data maximum throughput/bandwidth.
Wireless Local Area Networks (WLANs), such as 802.11 systems, include efficient error detection and correction techniques at the Physical (PHY) and Medium Access Control (MAC) layer. This includes the transmission of acknowledgment frames (packets) and retransmission of frames that are believed to be lost. Such retransmission of frames by the source effectively reduces the inherent error rate of the medium, at the cost of lowering the effective maximum throughput. Also, high error rates may cause the sending stations in the network to switch to lower raw data link rates, again reducing the error rate while decreasing the data rates available to applications.
Networks based on power-line communication address similar challenges due to the unpredictable and harsh nature of the underlying channel medium. Systems based on the HomePlug standard include technology for adapting the data link rate to the channel conditions. Similar to 802.11 wireless networks, HomePlug technology contains techniques such as error detection, error correction, and retransmission of frames to reduce the channel error rate, while lowering effective maximum throughput. Due to the dynamic nature of these conditions, the maximum throughput offered by the network may (e.g., temporarily) drop below the data rate required for transmission of AV data streams. This results in loss of AV data, which leads to an unacceptable decrease in the perceived AV quality.
To reduce such limitations one may (1) improve network technology to make networks more suitable to audio/visual data and/or (2) modify the audio/visual data to make it more suitable to such transmission networks. Therefore, a system may robustly stream audio/visual data over (wireless) networks by:
    • (1) optimizing the quality of the AV data continuously, in real-time; and
    • (2) adapting to the unpredictable and dynamically changing conditions of the network.
      Accordingly a system that includes dynamic rate adaptation is suitable to accommodate distribution of high quality audio/video streams over networks that suffer from significant dynamic variations in performance. These variations may be caused by varying distance of the receiving device from the transmitter, by interference, or by other factors.
The following discussion includes single-stream dynamic rate adaptation, followed by multi-stream dynamic rate adaptation, and then various other embodiments.
Single Stream Dynamic Rate Adaptation
A system that uses dynamic rate adaptation for robust streaming of video over networks may be referred to as an extender. A basic form of an extender that processes a single video stream is shown in FIGS. 2 and 3. FIGS. 2 and 3 depict the transmitting portion of the system, the first having analog video inputs, the second having digital (compressed) video inputs. The extender includes a video encoding or transcoding module, depending on whether the input video is in analog or digital (compressed) format. If the input is analog, the processing steps may include A/D conversion, as well as digital compression, such as by an MPEG-2 encoder, and eventual transmission over the network. If the input is already in digital format, such as an MPEG-2 bit stream, the processing may include transcoding of the incoming bit stream to compress the incoming video into an output stream at a different bit rate, as opposed to a regular encoder. A transcoding module normally reduces the bit rate of a digitally compressed input video stream, such as an MPEG-2 bit stream or any other suitable format.
The coding/transcoding module is provided with a desired output bit rate (or other similar information) and uses a rate control mechanism to achieve this bit rate. The value of the desired output bit rate is part of information about the transmission channel provided to the extender by a network monitor module. The network monitor monitors the network and estimates the bandwidth available to the video stream in real time. The information from the network monitor is used to ensure that the video stream sent from the extender to a receiver has a bit rate that is matched in some fashion to the available bandwidth (e.g., channel rate). With a fixed video bit rate, the quality normally varies on a frame-by-frame basis. To achieve the optimal output bit rate, the coder/transcoder may increase the level of compression applied to the video data, thereby decreasing visual quality slowly. In the case of a transcoder, this may be referred to as transrating. Note that the resulting decrease in visual quality by modifying the bit stream is minimal in comparison to the loss in visual quality that would be incurred if a video stream is transmitted at bit rates that cannot be supported by the network. The loss of video data incurred by a bit rate that cannot be supported by the network may lead to severe errors in video frames, such as dropped frames, followed by error propagation (due to the nature of video coding algorithms such as MPEG). The feedback obtained from the network monitor ensures that the output bit rate is kept near an optimum level so that any loss in quality incurred by transrating is minimal.
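A minimal sketch of this feedback loop, assuming the network monitor supplies a bandwidth estimate in bits per second; the safety margin and rate clamps are hypothetical tuning values of ours, not parameters from this description.

```python
def target_bit_rate(available_bw_bps, safety_margin=0.8,
                    min_rate=1e6, max_rate=20e6):
    """Map the network monitor's bandwidth estimate to a coder/transcoder
    target bit rate kept safely below the channel rate, clamped to the
    range the codec can usefully produce."""
    rate = available_bw_bps * safety_margin
    return max(min_rate, min(rate, max_rate))

# Illustrative: the monitor reports 10 Mbps available, so the
# encoder/transcoder is asked for an 8 Mbps output stream.
print(target_bit_rate(10e6))  # 8000000.0
```

Keeping the output rate strictly below the estimate leaves headroom for estimation error and sudden channel degradation, which is what prevents the dropped frames and error propagation described above.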
The receiver portion of the system may include a regular video decoder module, such as an MPEG-2 decoder. This decoder may be integrated with the network interface (e.g., built into the hardware of a network interface card). Alternatively, the receiving device may rely on a software decoder (e.g., if it is a PC). The receiver portion of the system may also include a counterpart to the network monitoring module at the transmitter. In that case, the network monitoring modules at the transmitter and receiver cooperate to provide the desired estimate of the network resources to the extender system. In some cases the network monitor may be only at the receiver.
If the system, including for example the extender, has information about the resources available to the client device consuming the video signal as previously described, the extender may further increase or decrease the output video quality in accordance with the device resources by adjusting bandwidth usage accordingly. For example, consider an MPEG-1 source stream at 4 Mbps with 640 by 480 spatial resolution at 30 fps. If it is being transmitted to a resource-limited device, e.g., a handheld with playback capability of 320 by 240 picture resolution at 15 fps, the transcoder may reduce the rate to 0.5 Mbps by simply subsampling the video without increasing the quantization levels. Otherwise, without subsampling, the transcoder may have to increase the level of quantization. In addition, the information about the device resources also helps prevent wasting shared network resources. A transcoder may also convert the compression format of an incoming digital video stream, e.g., from MPEG-2 format to MPEG-4 format. Therefore, a transcoder may for example: change bit rate, change frame rate, change spatial resolution, and change the compression format.
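The 4 Mbps to 0.5 Mbps figure follows directly from the drop in pixel rate. As a sketch (assuming bits scale linearly with pixels per second at a fixed quantization level, which is only a rough approximation):

```python
def subsampled_rate(rate_bps, src_w, src_h, src_fps, dst_w, dst_h, dst_fps):
    """Approximate the output bit rate after spatial and temporal
    subsampling, assuming bits are proportional to the pixel rate."""
    src_pixel_rate = src_w * src_h * src_fps
    dst_pixel_rate = dst_w * dst_h * dst_fps
    return rate_bps * dst_pixel_rate / src_pixel_rate

# The example from the text: 640x480 at 30 fps down to 320x240 at
# 15 fps is one eighth of the pixel rate, so 4 Mbps becomes 0.5 Mbps.
print(subsampled_rate(4e6, 640, 480, 30, 320, 240, 15))  # 500000.0
```

Halving width, height, and frame rate each contributes a factor of two, so the reduction can be achieved without touching quantization at all.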
The extender may also process the video using various error control techniques, e.g., forward error correction and interleaving.
Dynamic Rate Adaptation
Another technique that may be used to manage available bandwidth is dynamic rate adaptation, which generally uses feedback to control the bit rate. The rate of the output video is modified to be smaller than the currently available network bandwidth from the sender to the receiver, most preferably smaller at all times. In this manner the system can adapt to a network that does not have a constant bit rate, which is especially suitable for wireless networks.
One technique for rate control of MPEG video streams is that of the so-called MPEG-2 Test Model 5 (TM5), which is a reference MPEG-2 codec algorithm published by the MPEG group (see FIG. 10). Referring to FIG. 4, rate control in TM5 starts at the level of a Group-of-Pictures (GOP), consisting of a number of I, P, and B-type video frames. The length of a GOP in number of pictures is denoted by N_GOP. Rate control for a constant-bit-rate (CBR) channel starts by allocating a fixed number of bits G_GOP to a GOP that is in direct proportion to the (constant) bandwidth offered. Subsequently, a target number of bits is allocated to a specific frame in the GOP. Each subsequent frame in a GOP is allocated bits just before it is coded. After coding all frames in a GOP, the next GOP is allocated bits. This is illustrated in FIG. 4 where N_GOP = 9 for illustration purposes.
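The CBR GOP budget described above can be sketched as follows; the channel rate, GOP length, and frame rate are illustrative numbers, not values from this description.

```python
def gop_bit_budget(channel_rate_bps, n_gop, frame_rate):
    """Fixed bit budget G_GOP for a GOP of N_GOP pictures on a CBR
    channel: the channel rate times the GOP's duration in seconds."""
    return channel_rate_bps * n_gop / frame_rate

# Illustrative: a 6 Mbps channel with 9-frame GOPs at 30 frames/second
# yields a budget of 1.8 Mbit per GOP.
print(gop_bit_budget(6e6, 9, 30.0))  # 1800000.0
```

Because the channel rate is constant, every GOP receives the same budget; the time-varying extension below lets this budget change on GOP (or virtual GOP) boundaries.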
An extension for a time-varying channel can be applied if one can assume that the available bandwidth varies only slowly relative to the duration of a GOP. This may be the case when the actual channel conditions for some reason change only slowly or relatively infrequently. Alternatively, one may only be able to measure the changing channel conditions with coarse time granularity. In either case, the bandwidth can be modeled as a piece-wise constant signal, where changes are allowed only on the boundaries of a (super) GOP. Thus, GGOP is allowed to vary on a GOP-by-GOP basis.
However, this does not resolve the issues when the bandwidth varies quickly relative to the duration of a GOP, i.e., the case where adjustments to the target bit rate and bit allocation should be made on a frame-by-frame basis or otherwise a much more frequent basis. To allow adjustments to the target bit rate on a frame-by-frame basis, one may introduce the concept of a virtual GOP, as shown in FIG. 5 (see FIG. 11).
Each virtual GOP may be the same length as an actual MPEG GOP, any other length, or may have a length that is an integer multiple of the length of an actual MPEG GOP. A virtual GOP typically contains the same number of I-, P- and B-type pictures within a single picture sequence. However, virtual GOPs may overlap each other, where the next virtual GOP is shifted by one (or more) frame with respect to the current virtual GOP. The order of I-, P- and B-type pictures changes from one virtual GOP to the next, but this does not influence the overall bit allocation to each virtual GOP. Therefore, a similar method, as used e.g. in TM5, can be used to allocate bits to a virtual GOP (instead of a regular GOP), but the GOP-level bit allocation is in a sense “re-started” at every frame (or otherwise “re-started” at different intervals).
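The sliding-window behavior of virtual GOPs described above can be illustrated with a short sketch; the picture pattern and helper function are hypothetical:

```python
def virtual_gops(pattern, n_gop, num_windows):
    """Return successive windows of length n_gop, each shifted by one
    frame, over a repeating I/P/B picture pattern."""
    # Repeat the pattern enough times to cover all requested windows.
    seq = pattern * ((num_windows + n_gop) // len(pattern) + 1)
    return [seq[t:t + n_gop] for t in range(num_windows)]

windows = virtual_gops("IBBPBBPBB", 9, 3)
# Every virtual GOP contains the same number of I, P and B pictures;
# only their order changes from one window to the next.
counts = [(w.count("I"), w.count("P"), w.count("B")) for w in windows]
```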
Let Rt denote the remaining number of bits available to code the remaining frames of a GOP, at frame t. Let St denote the number of bits actually spent to code the frame at time t. Let Nt denote the number of frames left to code in the current GOP, starting from frame t.
In TM5, Rt is set to 0 at the start of the sequence, and is incremented by GGOP at the start of every GOP. Also, St is subtracted from Rt at the end of coding a picture. It can be shown that Rt can be written as follows, in closed form:
$R_t = N_t G_P + \sum_{j=1}^{t-1} (G_P - S_j)$  (1)
where GP is a constant given by:
$G_P = G_{GOP} / N_{GOP}$
indicating the average number of bits available to code a single frame.
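Equation (1) can be checked against TM5's recursive bookkeeping with a small sketch (bit counts are hypothetical; frame indices are 1-based within a GOP):

```python
def r_recursive(G_gop, spent, t):
    """TM5 bookkeeping: start with G_gop bits for the GOP and subtract
    the bits spent on each frame coded before frame t."""
    return G_gop - sum(spent[:t - 1])

def r_closed_form(G_p, N_gop, spent, t):
    """Equation (1): R_t = N_t*G_p + sum_{j=1}^{t-1} (G_p - S_j),
    with N_t frames left to code starting from frame t."""
    n_t = N_gop - (t - 1)
    return n_t * G_p + sum(G_p - s for s in spent[:t - 1])

N_gop, G_p = 9, 100_000
spent = [120_000, 90_000, 80_000, 110_000]  # hypothetical bits per frame
vals = [(r_recursive(N_gop * G_p, spent, t),
         r_closed_form(G_p, N_gop, spent, t)) for t in range(1, 5)]
```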
To handle a time varying bandwidth, the constant GP may be replaced by Gt, which may vary with t. Also, the system may re-compute (1) at every frame t, i.e., for each virtual GOP. Since the remaining number of frames in a virtual GOP is NGOP, the system may replace Nt by NGOP, resulting in:
$R_t = N_{GOP} G_t + \sum_{j=1}^{t-1} (G_j - S_j)$  (2)
Given Rt, the next step is to allocate bits to the current frame at time t, which may be of type I, P, or B. This step takes into account the complexity of coding a particular frame, denoted by Ct. Frames that are more complex to code, e.g., due to complex object motion in the scene, require more bits to achieve a given quality. In TM5, the encoder maintains estimates of the complexity of each type of frame (I, P, or B), which are updated after coding each frame. Let CI, CP and CB denote the current estimates of the complexity for I, P and B frames. Let NI, NP and NB denote the number of frames of type I, P and B left to encode in a virtual GOP (note that these are constants in the case of virtual GOPs).
TM5 prescribes a method for computing TI, TP and TB, which are the target number of bits for an I, B, or P picture to be encoded, based on the above parameters. The TM5 equations may be slightly modified to handle virtual GOPs as follows:
$T_I = R_t / \left( N_I + N_P \frac{K_I C_P}{K_P C_I} + N_B \frac{K_I C_B}{K_B C_I} \right)$
$T_P = R_t / \left( N_P + N_I \frac{K_P C_I}{K_I C_P} + N_B \frac{K_P C_B}{K_B C_P} \right)$
$T_B = R_t / \left( N_B + N_I \frac{K_B C_I}{K_I C_B} + N_P \frac{K_B C_P}{K_P C_B} \right)$  (3)
where KI, KP, and KB are constants; the subscripts I, P, and B refer to I frames, P frames, and B frames; and C is a complexity measure. It is to be understood that any type of compression rate distortion model, defined in the general sense, may likewise be used.
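A minimal sketch of the modified target computation in (3); the K constants and complexity values below are illustrative placeholders, not values prescribed by the patent:

```python
def frame_targets(R_t, N, K, C):
    """N, K, C map frame type ('I','P','B') to the remaining frame
    count, the TM5-style constant, and the complexity estimate.
    Returns target bits T_I, T_P, T_B per equation (3)."""
    T = {}
    for x in ("I", "P", "B"):
        others = [y for y in ("I", "P", "B") if y != x]
        denom = N[x] + sum(N[y] * K[x] * C[y] / (K[y] * C[x])
                           for y in others)
        T[x] = R_t / denom
    return T

T = frame_targets(900_000,
                  N={"I": 1, "P": 2, "B": 6},
                  K={"I": 1.0, "P": 1.0, "B": 1.4},
                  C={"I": 160_000, "P": 100_000, "B": 60_000})
```

Note that the targets of all frames in the virtual GOP sum back to $R_t$, which follows algebraically from (3).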
As it may be observed, this scheme permits the reallocation of bits on a virtual GOP basis from frame to frame (or other basis consistent with virtual GOP spacing). The usage and bit allocation for one virtual GOP may be tracked and the unused bit allocation for a virtual GOP may be allocated for the next virtual GOP.
Multi-Stream Dynamic Rate Adaptation
The basic extender for a single AV stream described above will encode an analog input stream or adapt the bit rate of an input digital bit stream to the available bandwidth without being concerned about the cause of the bandwidth limitations, or about other, competing streams, if any. In the following, the system may include a different extender system that processes multiple video streams, where the extender system assumes the responsibility of controlling or adjusting the bit rate of multiple streams in the case of competing traffic.
The multi-stream extender, depicted in FIG. 6, employs a “(trans)coder manager” on top of multiple video encoders/transcoders. As shown in FIG. 6, the system operates on n video streams, where each source may be either analog (e.g. composite) or digital (e.g. MPEG-2 compressed bitstreams). Here, Vn denotes input stream n, while V′n denotes output stream n. Rn denotes the bit rate of input stream n (this exists only if input stream n is in already compressed digital form; it is not used if the input is analog), while R′n denotes the bit rate of output stream n.
Each input stream is encoded or transcoded separately, although their bit rates are controlled by the (trans)coder manager. The (trans)coder manager handles competing requests for bandwidth dynamically. The (trans)coding manager allocates bit rates to multiple video streams in such a way that the aggregate of the bit rates of the output video streams matches the desired aggregate channel bit rate. The desired aggregate bit rate, again, is obtained from a network monitor module, ensuring that the aggregate rate of multiple video streams does not exceed available bandwidth. Each coder/transcoder again uses some form of rate control to achieve the allocated bit rate for its stream.
In this case, the system may include multiple receivers (not shown in the diagram). Each receiver in this system has similar functionality as the receiver for the single-stream case.
As in the single-stream case, the bit rate of the multiple streams should be controlled by some form of bit allocation and rate control in order to satisfy such constraints. However, in the case of a multi-stream system, a more general and flexible framework is useful for dynamic bit rate adaptation. There are several reasons for this, as follows:
    • (1) The system should deal with multiple AV streams that may have different characteristics, and should allocate the available bits as supported by the channel accordingly;
    • (2) The system should deal with the network characteristics, which are partly unpredictable, and need special attention in the case of multiple receivers as described later;
    • (3) The system should handle any differences between the receiving devices themselves, such as differences in screen sizes, supported frame rates, etc.; and
    • (4) The different video sources may be regarded as different in importance due to their content. Also, since the different video streams are viewed by different people (users), possibly in different locations (e.g., different rooms in a home), the system may want to take the preferences of the different users into account.
The resulting heterogeneity of the environment may be taken into account during optimization of the system.
To this end, the multi-stream extender system may optionally receive further information as input to the transcoder manager (in addition to information about the transmission channel), as shown in FIG. 6. This includes, for example:
    • Information about each receiving device;
    • Information about each video source; and
    • Information about the preferences of each user.
In the following subsections, first is listed the type of constraints that the bit rate of the multiple streams in this system are subject to. Then, the notion of stream prioritizing is described, which is used to incorporate certain heterogeneous characteristics of the network as discussed above. Then, various techniques are described to achieve multi-stream (or joint) dynamic rate adaptation.
Bit Rate Constraints for Multiple Streams
The bit rates of individual audio/video streams on the network are subject to various constraints.
Firstly, the aggregate rate of the individual streams may be smaller than or equal to the overall channel capacity or network bandwidth from sender to receiver. This bandwidth may vary dynamically, due to increases or decreases in the number of streams, congestion in the network, interference, etc.
Further, the rate of each individual stream may be bounded by both a minimum and a maximum. Such constraints may be imposed for the following reasons.
    • (1) A stream may have a maximum rate due to limitations of the channel or network used. For instance, if a wireless network is used, the maximum throughput to a single device depends on the distance between the access point and the client device. Note that this maximum may be time-varying. For instance, if the client device in a wireless network is portable and its distance to the access point is increasing (e.g. while being carried), the maximum throughput is expected to decrease.
    • (2) A stream may have a maximum rate due to limitations of the client device. The client device may have limited capabilities or resources, e.g., a limited buffer size or limited processing power, resulting in an upper bound on the rate of an incoming audio/video stream.
    • (3) A stream may have a minimum rate imposed by the system or by the user(s), in order to guarantee a minimum quality. If this minimum rate cannot be provided by the system, transmission to the device may not be performed. This helps achieve some minimum quality. A stream may also have a minimum rate imposed in order to prevent buffer underflow.
Stream Prioritizing or Weighting
The (trans)coder manager discussed above may employ several strategies. It may attempt to allocate an equal amount of available bits to each stream; however, in this case the quality of streams may vary strongly from one stream to the other, as well as in time. It may also attempt to allocate the available bits such that the quality of each stream is approximately equal; in this case, streams with highly active content will be allocated more bits than streams with less active content. Another approach is to allow users to assign different priorities to different streams, such that the quality of different streams is allowed to vary, based on the preferences of the user(s). This approach is generally equivalent to weighting the individual distortion of each stream when the (trans)coder manager minimizes the overall distortion.
The priority or weight of an audio/video stream may be obtained in a variety of manners, but is generally related to the preferences of the users of the client devices. Note that the weights (priorities) discussed here are different from the type of weights or coefficients seen often in literature that correspond to the encoding complexity of a macro block, video frame, group of frames, or video sequence (related to the amount of motion or texture variations in the video), which may be used to achieve a uniform quality among such parts of the video. Here, weights will purposely result in a non-uniform quality distribution across several audio/video streams, where one (or more) such audio/video stream is considered more important than others. Various cases, for example, may include the following, and combinations of the following.
Case A
The weight of a stream may be the result of a preference that is related to the client device (see FIG. 15). That is, in the case of conflicting streams requesting bandwidth from the channel, one device is assigned a priority such that the distortion of streams received by this device is deemed more severe than an equal amount of distortion in a stream received by another device. For instance, the user(s) may decide to assign priority to one TV receiver over another due to their locations. The user(s) may assign a higher weight to the TV in the living room (since it is likely to be used by multiple viewers) compared to a TV in the bedroom or den. In that case, the content received on the TV in the living room will suffer less distortion due to transcoding than the content received on other TVs. As another instance, priorities may be assigned to different TV receivers due to their relative screen sizes, i.e., a larger reduction in rate (and higher distortion) may be acceptable if a TV set's screen size is sufficiently small. Other device resources may also be translated into weights or priorities.
Such weighting could by default be set to fixed values, or using a fixed pattern. Such weighting may require no input from the user, if desired.
Such weighting may be set once (during set up and installation). For instance, this setting could be entered by the user, once he/she decides which client devices are part of the network and where they are located. This set up procedure could be repeated periodically, when the user(s) connect new client devices to the network.
Such weighting may also be the result of interaction between the gateway and client device. For instance, the client device may announce and describe itself to the gateway as a certain type of device. This may result in the assignment by the gateway of a certain weighting or priority value to this device.
Case B
The weight of a stream may be the result of a preference that is related to a content item (such as a TV program) carried by a particular stream at a particular point in time (see FIG. 16). That is, for the duration that a certain type of content is transmitted over a stream, this stream is assigned a priority such that the distortion of this stream is deemed more severe than an equal amount of distortion in other streams with a different type of content, received by the same or other devices. For instance, the user(s) may decide to assign priority to TV programs on the basis of their genre, or other content-related attributes. These attributes, e.g. genre information, about a program can be obtained from an electronic program guide. These content attributes may also be based on knowledge of the channel carrying the content (e.g. Movie Channel, Sports Channel, etc.). The user(s) may for example assign a higher weight to movies, compared to other TV programs such as game shows. In this case, when multiple streams contend for limited channel bandwidth, and one stream carries a movie to one TV receiver, while another stream simultaneously carries a game show to another TV, the first stream is assigned a priority such that it will be distorted less by transcoding than the second stream.
Such weighting could by default be set to fixed values, or using a fixed pattern. Such weighting may require no input from the user, if desired.
Such weighting may be set once (during set up and installation). For instance, this setting could be entered by the user, once he/she decides which type(s) of content are important to him/her. Then, during operation, the gateway may match the description of user preferences (one or more user preferences) to descriptions of the programs transmitted. The actual weight could be set as a result of this matching procedure. The procedure to set up user preferences could be repeated periodically. The user preference may be any type of preference, such as those of MPEG-7 or TV Anytime. The system may likewise include the user's presence (any user or a particular user) to select, at least in part, the target bit rate. The user may include direct input, such as a remote control. Also, the system may include priorities among the user preferences to select the target bit rate.
Such weighting may also be the result of the gateway tracking the actions of the user. For instance, the gateway may be able to track the type of content that the user(s) consume frequently. The gateway may be able to infer user preferences from the actions of the user(s). This may result in the assignment by the gateway of a certain weighting or priority value to certain types of content.
Case C
The relative weight of streams may also be set or changed at arbitrary times or on user demand (see FIG. 17).
Such weighting may be bound to a particular person in the household. For instance, one person in a household may wish to receive the highest possible quality content, no matter what device he/she uses. In this case, the weighting can be changed according to which device that person is using at any particular moment.
Such weighting could be set or influenced at an arbitrary time, for instance, using a remote control device.
Such weighting could also be based on whether a user is recording content, as opposed to viewing. Weighting could be such that a stream is considered higher priority (hence should suffer less distortion) if that stream is being recorded (instead of viewed).
Case D
The relative weight of streams may also be set based on their modality. In particular, the audio and video streams of an audiovisual stream may be separated and treated differently during their transmission. For example, the audio part of an audiovisual stream may be assigned a higher priority than the video part. This case is motivated by the fact that when viewing a TV program, in many cases, loss of audio information is deemed more severe by users than loss of video information from the TV signal. This may be the case, for instance, when the viewer is watching a sports program, where a commentator provides crucial information. As another example, it may be that users do not wish to degrade the quality of audio streams containing high-quality music. Also, the audio quality could vary among different speakers or be sent to different speakers.
Network Characteristics
The physical and data link layers of the aforementioned networks are designed to mitigate the adverse conditions of the channel medium. One of the characteristics of these networks specifically affects bit allocation among multiple streams as in a multi-stream extender system discussed here. In particular, in a network based on IEEE 802.11, a gateway system may be communicating at different data link rates with different client devices. WLANs based on IEEE 802.11 can operate at several data link rates, and may switch or select data link rates adaptively to reduce the effects of interference or distance between the access point and the client device. Greater distances and higher interference cause the stations in the network to switch to lower raw data link rates. This may be referred to as multi-rate support. The fact that the gateway may be communicating with different client devices at different data rates, in a single wireless channel, affects the model of the channel as used in bit allocation for joint coding of multiple streams.
Prior work in rate control and bit allocation uses a conventional channel model, where there is a single bandwidth that can simply be divided among AV streams in direct proportion to the requested rates for individual AV streams. The present inventors determined that this is not the case in LANs such as 802.11 WLANs due to their multi-rate capability. Such a wireless system may be characterized in that the sum of the rates of the individual links is not necessarily the same as the total bandwidth available from the system for allocation among the different links. In this manner, a 10 Mbps video signal and a 20 Mbps video signal may not be capable of being transmitted together by a system having a maximum bandwidth of 30 Mbps. The bandwidth used by a particular wireless link in an 802.11 wireless system is temporal in nature and is related to the maximum bandwidth of that particular wireless link. For example, if link 1 has a capacity of 36 Mbps and its data is transmitted at a rate of 18 Mbps, the usage of that link is 50%, which consumes 50% of the system's overall bandwidth. If link 2 has a capacity of 24 Mbps and its data is transmitted at a rate of 24 Mbps, the usage of link 2 is 100%; using link 2 consumes 100% of the system's overall bandwidth, leaving no bandwidth for other links, so only one stream can be transmitted.
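The air-time argument above can be expressed as a utilization sum: each stream's cost is its rate divided by its own link's maximum rate, and the total must not exceed 1. The helper below is an illustrative sketch, not part of the patent:

```python
def total_utilization(rates, link_capacities):
    """Sum of per-link utilizations for an 802.11-style multi-rate
    channel; the allocation is feasible only if the total is <= 1."""
    return sum(r / h for r, h in zip(rates, link_capacities))

u1 = total_utilization([18e6], [36e6])    # link 1 alone: 50% of channel
u2 = total_utilization([24e6], [24e6])    # link 2 alone: 100% of channel
both = total_utilization([18e6, 24e6], [36e6, 24e6])  # together: > 1
```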
Bit Allocation in Joint Coding of Multiple Streams
A more optimal approach to rate adaptation of multiple streams is to apply joint bit allocation/rate control. This approach applies to the case where the input streams to the multi-stream extender system are analog, as well as the case where the input streams are already in compressed digital form.
Let the following parameters be defined:
  • NL denote the number of streams
  • pn denote a weight or priority assigned to stream n, with pn≧0
  • an denote a minimum output rate for stream n, with an≧0
  • bn denote a maximum output rate for stream n, with bn≧an
  • Dn(r) denote the distortion of output stream n as a function of its output rate r (i.e. the distortion of the output with respect to the input of the encoder or transcoder)
  • RC denote the available bandwidth of the channel or maximum network throughput
  • Rn denotes the bit rate of input stream n
  • R′n denotes the bit rate of output stream n
Note that Rn, R′n and RC may be time-varying in general; hence, these are functions of time t.
The problem of the multi-stream extender can be formulated generically as follows:
The goal is to find the set of output rates R′n, n=1, . . . , NL, that maximizes the overall quality of all output streams or, equivalently, minimizes an overall distortion criterion D, while the aggregate rate of all streams is within the capacity of the channel.
One form of the overall distortion criterion D is a weighted average of the distortion of the individual streams:
$D = \sum_{n=1}^{N_L} p_n D_n(R'_n)$  (4)
Another form is the maximum of the weighted distortion of individual streams:
$D = \max_n \{ p_n D_n(R'_n) \}$  (5)
In this section, a conventional channel model is used, similar to cable TV, where an equal amount of bit rate offered to a stream corresponds to an equal amount of utilization of the channel; the model may be extended to the wireless-type utilization described above. Therefore, the goal is to minimize a criterion such as (4) or (5), subject to the following constraints:
$\sum_{n=1}^{N_L} R'_n \le R_C$  (6)
and, for all n,
$0 \le a_n \le R'_n \le b_n \le R_n$  (7)
at any given time t.
In the case of transcoding, note that the distortion of each output stream V′n is measured with respect to the input stream Vn, which may already have significant distortion with respect to the original data due to the original compression. However, this distortion with respect to the original data is unknown. Therefore, the final distortion of the output stream with respect to the original (not input) stream is also unknown, but bounded below by the distortion already present in the corresponding input stream Vn. It is noted that in the case of transcoding, a trivial solution to this problem is found when the combined input rates do not exceed the available channel bandwidth, i.e., when:
$\sum_{n=1}^{N_L} R_n \le R_C$  (8)
In this case, R′n=Rn and Dn(R′n)=Dn(Rn) for all n, and no transcoding needs to be applied.
It is noted that no solution to the problem exists, when:
$R_C < \sum_{n=1}^{N_L} a_n$  (9)
This may happen when the available channel bandwidth/network maximum throughput would drop (dramatically) due to congestion, interference, or other problems. In this situation, one of the constraints (7) would have to be relaxed, or the system would have to deny access to a stream requesting bandwidth.
It is noted that an optimal solution that minimizes the distortion criterion as in (5) is one where the (weighted) distortion values of individual streams are all equal.
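To make the equal-weighted-distortion remark concrete, the following sketch assumes a simple hypothetical distortion model $D_n(r) = C_n / r$, which the patent does not prescribe; under that model the equalizing rates have a closed form:

```python
def equalizing_rates(p, C, R_c):
    """Rates r_n such that p_n * D_n(r_n) is equal for all n and
    sum(r_n) == R_c, under the assumed model D_n(r) = C_n / r.
    Setting p_n*C_n/r_n = D gives r_n = p_n*C_n/D; summing to R_c
    fixes the common level D = sum(p_n*C_n) / R_c."""
    D = sum(pn * cn for pn, cn in zip(p, C)) / R_c
    return [pn * cn / D for pn, cn in zip(p, C)], D

rates, level = equalizing_rates(p=[2.0, 1.0, 1.0],
                                C=[3e6, 3e6, 1e6], R_c=10e6)
# The higher-priority stream receives proportionally more rate,
# and every stream ends at the same weighted distortion `level`.
```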
It is noted that (6) embodies a constraint imposed by the channel under a conventional channel model. This constraint is determined by the characteristics of the specific network. A different type of constraint will be used as applied to LANs with multi-rate support.
Several existing optimization algorithms can be used to find a solution to the above minimization problem, such as Lagrangian optimization and dynamic programming. Application of such algorithms to this problem may require a search over a large solution space, as well as multiple iterations of compressing the video data, which may be prohibitively computationally expensive. A practical approach to the problem of bit allocation for joint coding of multiple video programs extends the approach used in the so-called MPEG-2 Test Model 5 (TM5).
An existing approach is based on the notions of super GOP and super frame. A normal MPEG-2 GOP (Group-of-Pictures) of a single stream contains a number of I, P and B-type frames. A super GOP is formed over multiple MPEG-2 streams and consists of NGOP super frames, where a super frame is a set of frames containing one frame from each stream and all frames in a super frame coincide in time. A super GOP always contains an integer number of stream-level MPEG-2 GOPs, even when the GOPs of individual streams are not the same length and not aligned. The bit allocation method includes a target number of bits assigned to a super GOP. This target number Ts is the same for every super GOP and is derived from the channel bit rate, which is assumed fixed. Given Ts, the bit allocation is done for each super frame within a super GOP. The resulting target number of bits for a super frame Tt depends on the number of I, P, and B frames in the given streams. Then, given Tt, the bit allocation is done for each frame within a super frame. The resulting target number of bits for a frame within a super frame at time t is denoted by Tt,n.
The existing technique is based on the use of a complexity measure C for a video frame, that represents the “complexity” of encoding that frame. Subsequently, streams are allocated bits proportional to the estimated complexity of the frames in each stream. That is, streams with frames that are more “complex” to code, receive more bits during bit allocation compared to streams that are less “complex” to code, resulting in an equal amount of distortion in each stream.
The complexity measure C for a video frame is defined as the product of the quantization value used to compress the DCT coefficients of that video frame, and the resulting number of bits generated to code that video frame (using that quantization value). Therefore, the target number of bits Tt,n for a particular frame within a super frame can be computed on the basis of an estimate of the complexity of that frame, Ct,n, and the quantizer used for that frame, Qt,n:
$T_{t,n} = \dfrac{C_{t,n}}{Q_{t,n}}$  (10)
The value of Ct,n is assumed constant within a stream for all future frames of the same type (I, P or B) to be encoded. Therefore, Ct,n equals either CI,n, or CP,n, or CB,n depending on the frame type.
The sum of the number of bits allocated to all frames within a super frame should be equal to the number of bits allocated to that super frame, i.e.,
$T_t = \sum_{n=1}^{N_L} T_{t,n}$  (11)
The technique uses an equal quantizer value Q for all frames in all streams, in order to achieve uniform picture quality. However, taking into account the different picture types (I, P and B), the quantizer values for each frame are related to the fixed Q by a constant weighting factor:
$Q_{t,n} = K_{t,n} Q$  (12)
where Kt,n is simply either KI, KP or KB, depending only on the frame type.
Combining (10), (11) and (12) results in the following bit allocation equation for frames within a super frame:
$T_{t,n} = \dfrac{C_{t,n}/K_{t,n}}{\sum_{m=1}^{N_L} C_{t,m}/K_{t,m}} \, T_t$  (13)
This equation expresses that frames from different streams are allocated bits proportional to their estimated complexities.
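Equation (13) amounts to a proportional split of the super frame budget by each frame's C/K value; a minimal sketch with illustrative numbers:

```python
def superframe_allocation(T_t, C, K):
    """Equation (13): split the super frame budget T_t among streams
    in proportion to each frame's complexity C over its constant K."""
    w = [c / k for c, k in zip(C, K)]
    total = sum(w)
    return [T_t * wi / total for wi in w]

# Three streams; the third frame is a B picture with a larger K.
T = superframe_allocation(600_000,
                          C=[200_000, 100_000, 100_000],
                          K=[1.0, 1.0, 1.4])
```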
To accommodate prioritization of streams as discussed above, the existing techniques may be extended as follows:
One may generalize (12) by including stream priorities pn as follows:
$Q_{t,n} = \dfrac{K_{t,n} Q}{p_n}$  (14)
where pn are chosen such that:
$\sum_{n=1}^{N_L} \dfrac{1}{p_n} = N_L$  (15)
For example, if all streams have the same priority, pn=1 for all n, such that (15) holds. Higher priority streams are assigned values pn greater than 1, while lower priority streams are assigned values of pn smaller than 1.
Combining (10), (11) and (14), one obtains:
$T_{t,n} = \dfrac{p_n C_{t,n}/K_{t,n}}{\sum_{m=1}^{N_L} p_m C_{t,m}/K_{t,m}} \, T_t$  (16)
which can be used for bit allocation instead of (13). From (16), it can be seen that the priorities can be used to allocate more bits to frames from high priority streams and less bits to frames from low priority streams. This strategy implicitly attempts to minimize the distortion criterion (5). Note that this extension applies to both encoding and transcoding.
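A sketch of the prioritized allocation (16), with priorities chosen to satisfy the normalization (15); the stream count and all values are hypothetical:

```python
def prioritized_allocation(T_t, p, C, K):
    """Equation (16): weights p_n scale each stream's C/K share of
    the super frame budget T_t."""
    w = [pn * cn / kn for pn, cn, kn in zip(p, C, K)]
    total = sum(w)
    return [T_t * wi / total for wi in w]

p = [2.0, 0.8, 0.8]            # sum(1/p_n) == N_L == 3, per (15)
T = prioritized_allocation(600_000, p,
                           C=[100_000] * 3, K=[1.0] * 3)
# With equal complexities, the high-priority stream simply receives
# a proportionally larger share of the budget.
```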
In the approach described above, intended for encoding, encoding complexities C of frames are estimated from past encoded frames. These estimates are updated every frame and used to allocate bits to upcoming frames. That is, the estimate of complexity for the current frame t and future frames is based on the measurement of the values of the quantizer used in a previous frame as well as the actual amount of bits spent in that previous frame (in the same stream n). Therefore, the estimate is:
$C'_{t,n} = S_{t-\tau,n} Q'_{t-\tau,n}$  (17)
where S indicates the number of bits actually spent on a video frame, t indicates the current frame and t−τ indicates the nearest previous frame of the same type (I, P or B), and the prime indicates that the estimate is computed from the output of the encoder. Note again that in reality only 3 different values for these estimates are kept for a single stream, one for each picture type.
While this approach can also be used in transcoding, the present inventors determined that it is possible to improve these estimates. The reason is that in the case of transcoding, one has information about the complexity of the current frame, because this frame is available in encoded form at the input of the transcoder. However, the complexity of the output frame is not the same as the complexity of the input frame of the transcoder, because the transcoder changes the rate of the bitstream. It has been observed that the ratio of the output complexity to the input complexity remains relatively constant over time. Therefore, an estimate of this ratio, based on both input and output complexity estimates of a previous frame, can be used to scale the given input complexity value of the current frame, to arrive at a better estimate of the output complexity of the current frame:
$C'_{t,n} = \dfrac{S'_{t-\tau,n} Q'_{t-\tau,n}}{S_{t-\tau,n} Q_{t-\tau,n}} \, S_{t,n} Q_{t,n}$  (18)
where S and Q without prime are computed from the input bitstream.
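The ratio-based estimate (18) can be sketched as follows; the bit counts and quantizer values are hypothetical:

```python
def output_complexity_estimate(s_out_prev, q_out_prev,
                               s_in_prev, q_in_prev,
                               s_in_cur, q_in_cur):
    """Equation (18): scale the current frame's input complexity
    (S*Q from the input bitstream) by the output/input complexity
    ratio observed on the previous frame of the same type."""
    ratio = (s_out_prev * q_out_prev) / (s_in_prev * q_in_prev)
    return ratio * (s_in_cur * q_in_cur)

# Previous frame: input 120 kbit at Q=4, output 60 kbit at Q=7,
# so the output/input complexity ratio is 420000/480000 = 0.875.
c_est = output_complexity_estimate(60_000, 7, 120_000, 4, 100_000, 4)
```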
The approaches described above for multi-stream encoding all assumed a constant target bit rate, i.e., a constant-bit-rate channel. This assumption does not hold in certain networks, especially for wireless channels, as previously described. Accordingly, a modified approach that takes into account the time-varying nature of the channel is useful.
An extension can be applied if one can assume that the target bit rate varies only slowly relative to the duration of a (super) GOP. This may be the case when the actual channel conditions for some reason change only slowly or relatively infrequently. Alternatively, one may only be able to measure the changing channel conditions with coarse time granularity. In either case, the target bit rate cannot be made to vary more quickly than a certain value dictated by the physical limitations. Therefore, the target bit rate can be modeled as a piece-wise constant signal, where changes are allowed only on the boundaries of a (super) GOP.
This approach can be combined with the aforementioned approach by providing a new value of Ts to the bit allocation algorithm (possibly with other extensions as discussed above) at the start of every (super) GOP. In other words, Ts is allowed to vary on a (super) GOP-by-GOP basis.
Another extension is to use the concept of virtual GOPs for the case where the target bit rate varies quickly relative to the duration of a (super) GOP, i.e., the case where adjustments to the target bit rate and bit allocation must be made on a (super) frame-by-frame basis. The use of virtual GOPs was explained for single-stream dynamic rate adaptation above. In the multi-stream case, the concept of virtual GOPs extends to the concept of virtual super GOPs.
Another bit allocation approach in joint coding of multiple streams in a LAN environment, such as those based on IEEE 802.11, is suitable for those networks that have multi-rate support. In this case an access point in the gateway may be communicating at different data link rates with different client devices. For this, and other reasons, the maximum data throughput from the gateway to one device may be different from the maximum throughput from the gateway to another device, while transmission to each device contributes to the overall utilization of a single, shared, channel.
As before, there are NL devices on a network sharing the available channel capacity. It may be assumed there are NL streams being transmitted to these NL devices (one stream per device). The system employs a multi-stream manager (i.e., a multi-stream transcoder or encoder manager) that is responsible for ensuring the best possible quality of video transmitted to these devices.
It may be assumed that a mechanism is available to measure the bandwidth or maximum data throughput Hn to each device n=1, 2, . . . , NL. In general, this throughput varies per device and varies with time due to variations in the network: Hn(t). It can be assumed that the maximum data throughput can be measured at a sufficiently fine granularity in time. The maximum data throughput Hn is measured in bits per second. Note that the maximum throughput Hn is actually an average over a certain time interval, e.g., over the duration of a video frame or group of frames.
In the case of 802.11 networks, for instance, the bandwidth or maximum data throughput for device n may be estimated from knowledge about the raw data rate used for communication between the access point and device n, the packet length (in bits), and measurements of the packet error rate. Other methods to measure the maximum throughput may also be used.
One particular model of the (shared) channel is such that the gateway communicates with each client device n for a fraction fn of the time. For example, during a fraction f1 of the time, the home gateway is transmitting video stream 1 to device 1, and during a fraction f2 of the time, the gateway is transmitting video stream 2 to device 2, and so on. Therefore, an effective throughput is obtained from the gateway to client n that is equal to:
f_n H_n.
The following channel constraint holds over any time interval:
\sum_{n=1}^{N_L} f_n \le 1.0 \qquad (19)
i.e., the sum of channel utilization fractions must be smaller than (or equal to) 1.0. If these fractions add up to 1.0, the channel is utilized to its full capacity.
In the case of transcoding, let Rn denote the rate of the original (source) video stream n. To be able to transmit video streams to all devices concurrently, there may exist a set of fn, n=1, 2, . . . , NL, such that the following holds for all n, under the constraint of (19):
f_n H_n \ge R_n \qquad (20)
If such a set of fn does not exist, then the rate of one or more video sources must be reduced. Let R′n denote the rate of the transrated (output) video stream n. To retain the highest possible video quality, the minimum amount of rate reduction should be applied, in order for a set of fn to exist such that the following holds for all n, under the constraint of (19):
f_n H_n = R'_n \qquad (21)
In the case of joint encoding (instead of joint transcoding), the goal is simply to find a solution to (21), under the constraint of (19), where R′n denotes the rate of the encoder output stream n.
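Since setting fn = Rn/Hn is the smallest fraction satisfying (20) for stream n, a feasible set exists under constraint (19) exactly when the sum of Rn/Hn over all streams is at most 1.0. This feasibility check can be sketched as follows (a minimal illustration; all names are hypothetical):

```python
def feasible(source_rates, max_throughputs):
    """Check whether a set of utilization fractions f_n exists.

    Setting f_n = R_n / H_n is the smallest fraction satisfying (20) for
    stream n, so a feasible set exists under constraint (19) exactly when
    sum(R_n / H_n) <= 1.0.
    """
    return sum(r / h for r, h in zip(source_rates, max_throughputs)) <= 1.0

# Two streams at 2 and 3 Mbit/s over links with 8 and 6 Mbit/s maximum
# throughput: 2/8 + 3/6 = 0.75 <= 1.0, so no rate reduction is needed.
print(feasible([2e6, 3e6], [8e6, 6e6]))  # True
```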
In general, the problem of determining a set of fractions fn is an under-constrained problem. The above relationships do not provide a unique solution. Naturally, the goal is to find a solution to this problem that maximizes some measure of the overall quality of all video streams combined.
An embodiment is based on a joint coding principle, where the bit rates of different streams are allowed to vary based on their relative coding complexity, in order to achieve a generally uniform picture quality. This approach maximizes the minimum quality of the video streams that are jointly coded, i.e., this approach attempts to minimize distortion criterion (5).
One may consider NL video streams, where each stream is MPEG-2 encoded with GOPs of equal size NG. One may also consider a set of NL GOPs, one from each stream, concurrent in time. This set, also called super GOP, contains NL×NG video frames. The first step in some bit allocation techniques is to assign a target number of bits to each GOP in a super GOP, where each GOP belongs to a different stream n. The allocation is performed in proportion to the relative complexity of each GOP in a super GOP. The second step in the bit allocation procedure is to assign a target number of bits to each frame of the GOP of each video stream.
Let Tn denote the target number of bits assigned to the GOP of stream n (within a super GOP). Let Sn,t denote the number of bits generated by the encoder/transcoder for frame t of video stream n. The total number of bits generated for stream n over the course of a GOP should be equal (or close) to Tn, i.e.,
T_n = \sum_{t=1}^{N_G} S_{n,t} \qquad (22)
As in MPEG-2 TM5, a coding complexity measure for a frame is used that is the product of the quantizer value used and the number of bits generated for that frame, i.e.,
C_{n,t} = Q_{n,t}\,S_{n,t} \qquad (23)
Therefore, (22) can be rewritten as:
T_n = \sum_{t=1}^{N_G} \frac{C_{n,t}}{Q_{n,t}} \qquad (24)
As in equation (14), a generally constant-quality approach may be used. All quantizer values may be equal, up to a constant factor Kn,t that accounts for the differences in picture types (I, P, and B) and a stream priority pn. Therefore, (24) can be rewritten as:
T_n = \frac{p_n}{Q} \sum_{t=1}^{N_G} \frac{C_{n,t}}{K_{n,t}} \qquad (25)
To achieve (21), the following may hold:
T_n = \frac{f_n H_n N_G}{\text{frame\_rate}} \qquad (26)
Combining equations (25) and (26), together with (19), provides the following solution for the set of NL unknowns fn (factoring out Q):
f_n = \frac{\dfrac{p_n}{H_n} \sum_{t=1}^{N_G} \dfrac{C_{n,t}}{K_{n,t}}}{\sum_{m=1}^{N_L} \dfrac{p_m}{H_m} \sum_{t=1}^{N_G} \dfrac{C_{m,t}}{K_{m,t}}} \qquad (27)
It is assumed that the channel is utilized to its maximum capacity, i.e., the sum of channel utilization fractions adds up to exactly 1.0. Note that the approach is still valid if the utilization fractions need to add up to a lower value than 1.0; equation (27) would simply be modified with an additional factor to allow for this. For instance, there may be non-AV streams active in the network that consume some of the channel capacity. In that case, some capacity has to be set aside for such streams, and the optimization of the rates of the AV streams should take this into account by lowering the mentioned sum below 1.0.
Given fn, the actual target rate for each GOP can be computed with (26).
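The two computations above, equations (27) and (26), can be sketched in Python as follows (a minimal sketch; function and variable names are illustrative, not from the patent):

```python
def allocate_fractions(priorities, throughputs, complexities, k_factors):
    """Channel utilization fractions f_n per equation (27).

    priorities[n] is p_n, throughputs[n] is H_n, and complexities[n][t],
    k_factors[n][t] hold C_{n,t} and K_{n,t} for frame t of the GOP of
    stream n.
    """
    weights = [
        p / h * sum(c / k for c, k in zip(cs, ks))
        for p, h, cs, ks in zip(priorities, throughputs, complexities, k_factors)
    ]
    total = sum(weights)
    return [w / total for w in weights]  # normalized so they sum to 1.0


def gop_target_bits(f_n, h_n, gop_size, frame_rate):
    """Target number of bits T_n for one GOP, per equation (26)."""
    return f_n * h_n * gop_size / frame_rate


# Stream 0 is twice as complex as stream 1; with equal priorities and
# equal throughputs it receives twice the channel time.
f = allocate_fractions([1.0, 1.0], [6e6, 6e6],
                       [[200, 200], [100, 100]], [[1, 1], [1, 1]])
# f is approximately [0.667, 0.333]
```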
As mentioned above, the second step in the bit allocation procedure is to assign a target number of bits to each frame of the GOP of each video stream. This can be achieved using existing bit allocation methods, such as the one provided in TM5. Subsequent coding or transcoding can be performed with any standard method, in this case any encoder/transcoder compliant to MPEG-2 (see FIG. 12).
Although the above method has been derived specifically for the wireless LAN case, it should be noted that the above model and equations hold for any other type of LAN or network where a central gateway, server, or access point may communicate with multiple client devices at different maximum rates.
In the case of dynamic rate adaptation, the maximum throughput rates Hn vary in time. In this case, the above method can be combined with the notion of virtual GOPs, or virtual super GOPs, which consist of virtual GOPs of multiple AV streams that overlap in time (see FIG. 13). Equation (27) would be executed at every frame time, to assign a target number of bits to a virtual GOP of a particular stream n. Subsequently, a target number of bits for each frame within each virtual GOP must be assigned, using, for instance, equation (3).
Note further that the above method can be applied in the case where GOPs are not used, i.e., the above method can be applied on a frame-by-frame basis, instead of on a GOP-by-GOP basis (see FIG. 14). For instance, there may be cases where only P-type pictures are considered, and rate control is applied on a frame-by-frame basis. In this case, there is a need to allocate bits to individual frames from a set of NL co-occurring frames from different video streams. The above method can still be used to assign a target number of bits to each frame, in accordance with the relative coding complexity of each frame within the set of frames from all streams.
One embodiment uses a single-stream system, as illustrated in FIG. 7. This single-stream system has a single analog AV source. The analog AV source is input to a processing module that contains an AV encoder that produces a digitally compressed bit stream, e.g., an MPEG-2 or MPEG-4 bit stream. The bit rate of this bit stream is dynamically adapted to the conditions of the channel. This AV bit stream is transmitted over the channel. The connection between transmitter and receiver is strictly point-to-point. The receiver contains an AV decoder that decodes the digitally compressed bit stream.
Another embodiment is a single-stream system, as illustrated in FIG. 8. This single-stream system has a single digital AV source, e.g., an MPEG-2 or MPEG-4 bit stream. The digital source is input to a processing module that contains a transcoder/transrater that outputs a second digital bit stream. The bit rate of this bit stream is dynamically adapted to the conditions of the channel. This AV bit stream is transmitted over the channel. The connection between transmitter and receiver is strictly point-to-point. The receiver contains an AV decoder that decodes the digitally compressed bit stream.
Another embodiment is a multi-stream system, as illustrated in FIG. 9. This multi-stream system has multiple AV sources, where some sources may be in analog form, and other sources may be in digital form (e.g., MPEG-2 or MPEG-4 bit streams). These AV sources are input to a processing module that contains zero or more encoders (analog inputs) as well as zero or more transcoders (digital inputs). Each encoder and/or transcoder produces a corresponding output bitstream. The bit rates of these bit streams are dynamically adapted to the conditions of the channel, so as to optimize the overall quality of all streams. The system may also adapt these streams based on information about the capabilities of receiver devices. The system may also adapt streams based on information about the preferences of each user. All encoded/transcoded bit streams are sent to a network access point, which transmits each bit stream to the corresponding receiver. Each receiver contains an AV decoder that decodes the digitally compressed bit stream.
Channel Bandwidth Estimation
The implementation of a system may be done in such a manner that the system is free from additional probing “traffic” from the transmitter to the receiver. In this manner, no additional burden is placed on the network bandwidth by the transmitter. A limited amount of network traffic from the receiver to the transmitter may contain feedback that may be used as a mechanism to monitor the network traffic. In the typical wireless implementation there is transmission, feedback, and retransmission of data at the MAC layer of the protocol. While the network monitoring for bandwidth utilization may be performed at the MAC layer, the system described herein preferably does the network monitoring at the APPLICATION layer. By using the application layer the implementation is less dependent on the particular network implementation and may be used in a broader range of networks. By way of background, many wireless protocol systems include a physical layer, a MAC layer, a transport/network layer, and an application layer.
When considering an optimal solution, one should determine (1) which parameters to measure, (2) whether the parameters should be measured at the transmitter or the receiver, and (3) whether to use a model-based approach (having a model of how the system behaves) or a probe-based approach (sending more and more data until the system breaks down, then sending less data before increasing again). In a model-based approach, a more optimal utilization of the available bandwidth is likely possible because more accurate adjustments of the transmitted streams can be made.
The parameters may be measured at the receiver and then sent back over the channel to the transmitter. While measuring the parameters at the receiver may be implemented without impacting the system excessively, it does increase channel usage and involves a delay between the measurement at the receiver and the arrival of information at the transmitter.
MAC Layer
Alternatively, the parameters may be measured at the transmitter. The MAC layer of the transmitter has knowledge of what has been sent and when. The transmitter MAC also has knowledge of what has been received and when it was received through the acknowledgments. For example, the system may use the data link rate and/or packet error rate (number of retries) from the MAC layer. The data link rate and/or packet error rate may be obtained directly from the MAC layer, the 802.11 management information base parameters, or otherwise obtained in some manner. For example, FIG. 18 illustrates the re-transmission of lost packets and the fall-back to lower data link rates between the transmitter and the receiver for a wireless transmission (or communication) system.
In a wireless transmission system the packets carry P payload bits. The time T it takes to transmit a packet with P bits may be computed, given the data link rate, the number of retries, and a priori knowledge of MAC and PHY overhead (e.g., duration of the contention window, length of the header, time it takes to send an acknowledgment, etc.). Accordingly, the maximum throughput may be calculated as P/T (bits/second).
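A MAC-layer throughput estimate of this P/T form can be sketched as follows (a simplified model; the single overhead term lumps together contention window, headers, and acknowledgment time, and all names are illustrative):

```python
def mac_throughput(payload_bits, link_rate_bps, retries, overhead_s):
    """Estimate the maximum throughput as P / T, in bits per second.

    T covers every transmission attempt of the packet (1 + retries),
    each taking the payload time at the current data link rate plus a
    fixed per-attempt MAC/PHY overhead.
    """
    attempts = 1 + retries
    time_per_packet = attempts * (payload_bits / link_rate_bps + overhead_s)
    return payload_bits / time_per_packet

# A 12000-bit payload on an 11 Mbit/s link: overhead and retries push the
# effective maximum throughput well below the raw link rate.
print(mac_throughput(12000, 11e6, 0, 2e-4) < 11e6)  # True
```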
Application Layer
As illustrated in FIG. 19, the packets are submitted to the transmitter, which may require retransmission in some cases. The receiver receives the packets from the transmitter, and at some point thereafter indicates to the application layer that the packet has been received. The receipt of packets may be used to indicate the rate at which they are properly received, or otherwise whether the trend is increasing or decreasing. This information may be used to determine the available bandwidth or maximum throughput. FIG. 20 illustrates an approach based on forming bursts of packets at the transmitter, releasing such bursts periodically into the channel as fast as possible, and measuring the maximum throughput of the system. By repeating the process on a periodic basis, the maximum throughput of a particular link may be estimated, while the effective throughput of the data may be lower than the maximum.
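The burst-probing idea can be sketched as follows (a hedged illustration; `send_packet` and `wait_ack` are stand-ins for whatever transport calls the real system would use):

```python
import time

def probe_burst_throughput(send_packet, packets, wait_ack):
    """Estimate the maximum link throughput at the application layer.

    Submits a burst of packets into the channel back-to-back, blocks
    until the receiver acknowledges the last one, and divides the bits
    sent by the elapsed time. Repeating this periodically tracks the
    maximum throughput even when the effective data rate is lower.
    """
    start = time.monotonic()
    bits_sent = 0
    for packet in packets:
        send_packet(packet)
        bits_sent += len(packet) * 8
    wait_ack()  # block until the whole burst has been delivered
    elapsed = time.monotonic() - start
    return bits_sent / elapsed
```

In a real system, `send_packet` would hand each packet to the transport (e.g., a UDP socket) and `wait_ack` would wait for the receiver's acknowledgment of the final packet of the burst.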
All references cited herein are incorporated by reference.
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.

Claims (31)

1. A method for transmitting video comprising:
(a) defining a first set of frames and a first target bit rate for said first set of frames;
(b) at least one of transcoding and encoding a first frame of said first set of frames based upon said first target bit rate;
(c) transmitting said first frame of said at least one of transcoded and encoded first set of frames;
(d) defining a second set of frames, wherein said second set of frames has at least one frame in common with said first set of frames, wherein said second set of frames has at least one frame not in common with said first set of frames, and a second target bit rate for said second set of frames;
(e) at least one of transcoding and encoding a first frame of said second set of frames based upon said second target bit rate;
(f) transmitting said first frame of said at least one of transcoded and encoded second set of frames;
(g) wherein at least one of said first and second target bit rates is based upon a compression rate distortion model of at least one frame resulting from the output of said at least one of said transcoder and encoder; and
(h) wherein at least one of said first and second target bit rates is based on the ratio of a first complexity measure to a second complexity measure, said first complexity measure based upon at least one frame resulting from the output of at least one of an encoder and a transcoder, and said second complexity measure based upon said at least one frame prior to said at least one of encoding and transcoding.
2. The method of claim 1 wherein said ratio is defined as:
C'_{t,n} = \frac{S'_{t-\tau,n}\,Q'_{t-\tau,n}}{S_{t-\tau,n}\,Q_{t-\tau,n}}\,S_{t,n}\,Q_{t,n}.
3. The method of claim 1 wherein the video is transmitted using a transmission system that includes multiple links, wherein said system is characterized in that the sum of the maximum transmission rate of said multiple links is not the same as the total bandwidth available by the transmission system.
4. A method for transmitting video comprising:
(a) defining a first set of frames and determining a first target bit rate for said first set of frames;
(b) transmitting a first frame of said first set of frames based upon said first target bit rate;
(c) defining a second set of frames, wherein said second set of frames has at least one frame in common with said first set of frames, wherein said second set of frames has at least one frame not in common with said first set of frames, and determining a second target bit rate for said second set of frames;
(d) transmitting said first frame of said second set of frames based upon said second target bit rate;
(e) wherein at least one of said first and second target bit rates is based upon determining the currently available bandwidth for transmission; and
(f) wherein at least one of said first and second target bit rates is based on the ratio of a first complexity measure to a second complexity measure, said first complexity measure based upon at least one frame resulting from the output of at least one of an encoder and a transcoder, and said second complexity measure based upon said at least one frame prior to at least one of encoding and transcoding.
5. The method of claim 4 wherein said available bandwidth is determined based upon information from a MAC layer.
6. The method of claim 4 wherein said available bandwidth is determined based upon information transmitted from a receiving device to the transmitter.
7. The method of claim 4 wherein said available bandwidth is determined based upon information from the transmitter.
8. The method of claim 4 wherein said available bandwidth is determined based upon information within an APPLICATION layer.
9. A method for transmitting video comprising:
(a) defining a first set of frames and a first target bit rate for said first set of frames;
(b) transmitting to a first receiver a first frame of said first set of frames based upon said first target bit rate;
(c) defining a second set of frames and a second target bit rate for said second set of frames;
(d) transmitting to a second receiver a first frame of said second set of frames based upon said second target bit rate;
(e) wherein the values of said first and second target bit rates are dependent upon one another; and
(f) wherein said method is used in a system that includes a shared channel that is characterized by at least one of:
(i) the maximum throughput to said first receiver differing from the maximum throughput to a second receiver;
(ii) an amount of data bits sent to said first receiver utilizing a different fraction of the channel capacity than an identical amount of data bits sent to said second receiver;
(iii) the sum of the maximum transmission rates to all receivers in said system is not the same as the total bandwidth of said system; and
(g) wherein said first bit rate is based upon:
f_n = \frac{\dfrac{p_n}{H_n} \sum_{t=1}^{N_G} \dfrac{C_{n,t}}{K_{n,t}}}{\sum_{m=1}^{N_L} \dfrac{p_m}{H_m} \sum_{t=1}^{N_G} \dfrac{C_{m,t}}{K_{m,t}}}.
10. The method of claim 9 wherein said system is an 802.11 based system.
11. The method of claim 9 wherein said first and second target bit rates are based upon determining the available bandwidth for transmission.
12. A method for transmitting video comprising:
(a) defining a first set of frames and a first target bit rate for said first set of frames;
(b) transmitting to a first receiver a first frame of said first set of frames based upon said first target bit rate;
(c) defining a second set of frames and a second target bit rate for said second set of frames;
(d) transmitting to a second receiver a first frame of said second set of frames based upon said second target bit rate;
(e) wherein the value of at least one of said first and second target bit rates are modified based upon at least one of:
(i) the content of the video source;
(ii) the location of the receiving device;
(iii) a modality of said first set of frames;
(iv) the screen size of the receiving device; and
(v) the frame rate supported by the receiving device.
13. The method of claim 12 wherein said method is used in an 802.11 based system.
14. The method of claim 12 wherein said first and second target bit rates are based upon determining the available bandwidth for transmission.
15. The method of claim 12 wherein the value of said first target bit rate is determined based upon said user preferences.
16. The method of claim 12 wherein the value of said first target bit rate is determined based upon said information about the video source.
17. The method of claim 12 wherein the value of said first target bit rate is determined based upon said information regarding the receiving device.
18. The method of claim 12 wherein the value of said first target bit rate is determined based upon said the screen size of the receiving device.
18. The method of claim 12 wherein the value of said first target bit rate is determined based upon said screen size of the receiving device.
20. The method of claim 12 wherein the value of said first target bit rate is determined based upon said content of the video source.
21. The method of claim 12 wherein the value of said first target bit rate is determined based upon said location of the receiving device.
22. The method of claim 12 wherein the value of said first target bit rate is determined based upon said modality of said first set of frames.
23. A method for transmitting video comprising:
(a) defining a first set of frames and a first target bit rate for said first set of frames;
(b) at least one of transcoding and encoding a first frame of said first set of frames based upon said first target bit rate;
(c) transmitting to a first receiver said first frame of at least one of said transcoded and encoded first set of frames;
(d) defining a second set of frames and a second target bit rate for said second set of frames;
(e) at least one of transcoding and encoding a first frame of said second set of frames based upon said second target bit rate;
(f) transmitting to a second receiver said first frame of at least one of said transcoded and encoded said second set of frames based upon said second target bit rate;
(g) wherein said first target bit rate is based upon a compression rate distortion model of at least one frame resulting from the output of said transcoder; and
(h) wherein at least one of said first and second target bit rates is based on the ratio of a first complexity measure to a second complexity measure, said first complexity measure based upon at least one frame resulting from the output of said at least one of an encoder and a transcoder, and said second complexity measure based upon said at least one frame prior to said at least one of encoding and transcoding.
24. The method of claim 23 further comprising transcoding said first frame of said second set of frames, transmitting in a second channel said first frame of said transcoded second set of frames based upon said second target bit rate, and wherein said second target bit rate is based upon a complexity measure of at least one frame resulting from the output of said transcoder.
25. The method of claim 23 wherein at least one of said first and second target bit rates is based upon both a first complexity measure based upon at least one frame resulting from the output of said transcoder and a second complexity measure based upon said at least one frame prior to transcoding.
26. The method of claim 25 wherein said at least one of said first and second target bit rates is based upon the ratio of said first and second complexity measures.
27. The method of claim 25 wherein said ratio is defined as:
C'_{t,n} = \frac{S'_{t-\tau,n}\,Q'_{t-\tau,n}}{S_{t-\tau,n}\,Q_{t-\tau,n}}\,S_{t,n}\,Q_{t,n}.
28. A method for transmitting video comprising:
(a) defining a first set of frames and a first target bit rate for said first set of frames;
(b) transmitting to a first receiver a first frame of said first set of frames based upon said first target bit rate;
(c) defining a second set of frames and a second target bit rate for said first set of frames;
(d) transmitting to a second receiver a first frame of said second set of frames based upon said second target bit rate;
(e) wherein the value of said first and second target bit rates are dependent upon one another;
(f) wherein the value of said first and second target bit rates are determined based upon the bandwidth available to transmit said first frame of said first set and said first frame of said second set; and
(g) wherein said method is used in a system that includes a shared channel that is characterized by at least one of:
(i) the maximum throughput to said first receiver differing from the maximum throughput to a second receiver;
(ii) an amount of data bits sent to said first receiver utilizing a different fraction of the channel capacity than an identical amount of data bits sent to said second receiver;
(iii) the sum of the maximum transmission rate to said first receiver and said second receiver not being the same as the total bandwidth of said system.
29. The method of claim 28 wherein said system is an 802.11 based system.
30. The method of claim 28 wherein said first and second target bit rates are based upon determining the available bandwidth for transmission.
31. A method for transmitting video comprising:
(a) defining a first set of frames and a first target bit rate for said first set of frames;
(b) at least one of transcoding and encoding a first frame of said first set of frames based upon said first target bit rate;
(c) transmitting said first frame of said at least one of transcoded and encoded first set of frames;
(d) defining a second set of frames, wherein said second set of frames has at least one frame in common with said first set of frames, wherein said second set of frames has at least one frame not in common with said first set of frames, and a second target bit rate for said second set of frames;
(e) at least one of transcoding and encoding a first frame of said second set of frames based upon said second target bit rate;
(f) transmitting said first frame of said at least one of transcoded and encoded second set of frames;
(g) wherein said first and second target bit rates are adapted to a temporal changing bandwidth; and
(h) wherein at least one of said first and second target bit rates is based on the ratio of a first complexity measure to a second complexity measure, said first complexity measure based upon at least one frame resulting from the output of at least one of an encoder and a transcoder, and said second complexity measure based upon said at least one frame prior to said at least one of encoding and transcoding.
US10/607,391 2003-06-25 2003-06-25 Wireless video transmission system Expired - Fee Related US7274740B2 (en)



* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6817028B1 (en) * 1999-06-11 2004-11-09 Scientific-Atlanta, Inc. Reduced screen control system for interactive program guide
US7010801B1 (en) 1999-06-11 2006-03-07 Scientific-Atlanta, Inc. Video on demand system with parameter-controlled bandwidth deallocation
US7150031B1 (en) * 2000-06-09 2006-12-12 Scientific-Atlanta, Inc. System and method for reminders of upcoming rentable media offerings
US7992163B1 (en) * 1999-06-11 2011-08-02 Jerding Dean F Video-on-demand navigational system
US6986156B1 (en) * 1999-06-11 2006-01-10 Scientific Atlanta, Inc Systems and methods for adaptive scheduling and dynamic bandwidth resource allocation management in a digital broadband delivery system
US20060059525A1 (en) * 1999-12-13 2006-03-16 Jerding Dean F Media services window configuration system
WO2001067736A2 (en) * 2000-03-02 2001-09-13 Scientific-Atlanta, Inc. Apparatus and method for providing a plurality of interactive program guide initial arrangements
US7200857B1 (en) * 2000-06-09 2007-04-03 Scientific-Atlanta, Inc. Synchronized video-on-demand supplemental commentary
US8516525B1 (en) 2000-06-09 2013-08-20 Dean F. Jerding Integrated searching system for interactive media guide
US20020007485A1 (en) * 2000-04-03 2002-01-17 Rodriguez Arturo A. Television service enhancements
US7975277B1 (en) 2000-04-03 2011-07-05 Jerding Dean F System for providing alternative services
US7934232B1 (en) * 2000-05-04 2011-04-26 Jerding Dean F Navigation paradigm for access to television services
US8069259B2 (en) * 2000-06-09 2011-11-29 Rodriguez Arturo A Managing removal of media titles from a list
US7962370B2 (en) * 2000-06-29 2011-06-14 Rodriguez Arturo A Methods in a media service system for transaction processing
US7340759B1 (en) * 2000-11-10 2008-03-04 Scientific-Atlanta, Inc. Systems and methods for adaptive pricing in a digital broadband delivery system
US7512964B2 (en) 2001-06-29 2009-03-31 Cisco Technology System and method for archiving multiple downloaded recordable media content
US8006262B2 (en) * 2001-06-29 2011-08-23 Rodriguez Arturo A Graphic user interfaces for purchasable and recordable media (PRM) downloads
US7526788B2 (en) 2001-06-29 2009-04-28 Scientific-Atlanta, Inc. Graphic user interface alternate download options for unavailable PRM content
US7496945B2 (en) * 2001-06-29 2009-02-24 Cisco Technology, Inc. Interactive program guide for bidirectional services
US7334251B2 (en) * 2002-02-11 2008-02-19 Scientific-Atlanta, Inc. Management of television advertising
US8161388B2 (en) 2004-01-21 2012-04-17 Rodriguez Arturo A Interactive discovery of display device characteristics
US20060031564A1 (en) * 2004-05-24 2006-02-09 Brassil John T Methods and systems for streaming data at increasing transmission rates
ITMI20042234A1 (en) * 2004-11-19 2005-02-19 Abb Service Srl AUTOMATIC SWITCH WITH RELEASE KINEMATISM USED BY MOBILE CONTACT
PA8660701A1 (en) * 2005-02-04 2006-09-22 Pfizer Prod Inc SMALL AGONISTS AND THEIR USES
US7664856B2 (en) * 2005-07-28 2010-02-16 Microsoft Corporation Dynamically balancing user experiences in a multi-user computing system
US8189472B2 (en) * 2005-09-07 2012-05-29 Mcdonald James F Optimizing bandwidth utilization to a subscriber premises
US7518503B2 (en) * 2005-12-14 2009-04-14 Sony Ericsson Mobile Communications Ab Portable A/V relay device
US8116317B2 (en) * 2006-01-31 2012-02-14 Microsoft Corporation Preventing quality of service policy abuse in a network
US8451910B1 (en) * 2006-07-13 2013-05-28 Logitech Europe S.A. Wireless multimedia device with real time adjustment of packet retry function and bit rate modulation
US20100017836A1 (en) * 2006-07-25 2010-01-21 Elmar Trojer Method and Device for Stream Adaptation
JP4284353B2 (en) * 2006-12-26 2009-06-24 株式会社東芝 Wireless communication device
MX2009007037A (en) * 2007-01-04 2009-07-10 Qualcomm Inc Method and apparatus for distributed spectrum sensing for wireless communication.
JP2009260818A (en) * 2008-04-18 2009-11-05 Nec Corp Server apparatus, content distribution method, and program
KR20100122518A (en) * 2008-04-18 2010-11-22 닛본 덴끼 가부시끼가이샤 Gateway device, method, and program
JP5451755B2 (en) * 2008-06-04 2014-03-26 コーニンクレッカ フィリップス エヌ ヴェ Apparatus, system and method for adaptive data rate control
US8958475B2 (en) * 2009-07-02 2015-02-17 Qualcomm Incorporated Transmitter quieting and null data encoding
US9112618B2 (en) * 2009-07-02 2015-08-18 Qualcomm Incorporated Coding latency reductions during transmitter quieting
US8537772B2 (en) * 2009-07-02 2013-09-17 Qualcomm Incorporated Transmitter quieting during spectrum sensing
US8902995B2 (en) * 2009-07-02 2014-12-02 Qualcomm Incorporated Transmitter quieting and reduced rate encoding
US8780982B2 (en) * 2009-07-02 2014-07-15 Qualcomm Incorporated Transmitter quieting and different encoding rates for portions of a set of frames
US20110032986A1 (en) * 2009-08-07 2011-02-10 Sling Media Pvt Ltd Systems and methods for automatically controlling the resolution of streaming video content
US20110182257A1 (en) * 2010-01-26 2011-07-28 Qualcomm Incorporated White space spectrum commmunciation device with multiplexing capabilties
US10009647B2 (en) * 2010-03-02 2018-06-26 Qualcomm Incorporated Reducing end-to-end latency for communicating information from a user device to a receiving device via television white space
CN103220550B (en) * 2012-01-19 2016-12-07 华为技术有限公司 The method and device of video conversion
US9357213B2 (en) * 2012-12-12 2016-05-31 Imagine Communications Corp. High-density quality-adaptive multi-rate transcoder systems and methods
US10218981B2 (en) * 2015-02-11 2019-02-26 Wowza Media Systems, LLC Clip generation based on multiple encodings of a media stream
US10593028B2 (en) * 2015-12-03 2020-03-17 Samsung Electronics Co., Ltd. Method and apparatus for view-dependent tone mapping of virtual reality images

Patent Citations (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5995705A (en) 1988-12-27 1999-11-30 Instant Video Technologies, Inc. Burst transmission apparatus and method for audio/video information
US5159447A (en) 1991-05-23 1992-10-27 At&T Bell Laboratories Buffer control for variable bit-rate channel
EP0699368A1 (en) 1994-03-17 1996-03-06 Koninklijke Philips Electronics N.V. An encoder buffer having an effective size which varies automatically with the channel bit-rate
US5541852A (en) 1994-04-14 1996-07-30 Motorola, Inc. Device, method and system for variable bit-rate packet video communications
US5506686A (en) * 1994-11-23 1996-04-09 Motorola, Inc. Method and device for determining bit allocation in a video compression system
US6456591B1 (en) 1995-11-09 2002-09-24 At&T Corporation Fair bandwidth sharing for video traffic sources using distributed feedback control
US6055578A (en) 1996-08-15 2000-04-25 Advanced Micro Devices, Inc. Apparatus and method for selectively controlling transmission of consecutive packets in a network station
US5936940A (en) 1996-08-22 1999-08-10 International Business Machines Corporation Adaptive rate-based congestion control in packet networks
US5982778A (en) 1996-08-30 1999-11-09 Advanced Micro Devices, Inc. Arrangement for regulating packet flow rate in shared-medium, point-to-point, and switched networks
JPH10200484A (en) 1997-01-08 1998-07-31 Toshiba Corp Optical network system provided with mobile communication service, its center device and base station device
US5978236A (en) 1997-01-31 1999-11-02 Silverline Power Conversion Llc Uninterruptible power supply with direction of DC electrical energy depending on predetermined ratio
US6292834B1 (en) 1997-03-14 2001-09-18 Microsoft Corporation Dynamic bandwidth selection for efficient transmission of multimedia streams in a computer network
US6014694A (en) 1997-06-26 2000-01-11 Citrix Systems, Inc. System for adaptive video/audio transport over a network
US6049549A (en) 1997-08-14 2000-04-11 University Of Massachusetts Adaptive media control
US6343085B1 (en) 1997-08-28 2002-01-29 Microsoft Corporation Adaptive bandwidth throttling for individual virtual services supported on a network server
US6434606B1 (en) 1997-10-01 2002-08-13 3Com Corporation System for real time communication buffer management
JPH11215480A (en) 1998-01-21 1999-08-06 Canon Inc Video communication system, video transmitter and video receiver, and their control methods and storage media
US6542467B2 (en) 1998-03-05 2003-04-01 Nec Corporation Bandwidth allocation system of virtual path in communication network of asynchronous transfer mode
US6459811B1 (en) 1998-04-02 2002-10-01 Sarnoff Corporation Bursty data transmission of compressed video data
US6363056B1 (en) 1998-07-15 2002-03-26 International Business Machines Corporation Low overhead continuous monitoring of network performance
US6275531B1 (en) 1998-07-23 2001-08-14 Optivision, Inc. Scalable video coding method and apparatus
US6167084A (en) * 1998-08-27 2000-12-26 Motorola, Inc. Dynamic bit allocation for statistical multiplexing of compressed and uncompressed digital video signals
US20040086268A1 (en) 1998-11-18 2004-05-06 Hayder Radha Decoder buffer for streaming video receiver and method of operation
US6233226B1 (en) 1998-12-14 2001-05-15 Verizon Laboratories Inc. System and method for analyzing and transmitting video over a switched network
EP1026855A2 (en) 1999-02-04 2000-08-09 Fujitsu Limited Measuring network communication performance
US6590936B1 (en) * 1999-04-13 2003-07-08 Matsushita Electric Industrial Co., Ltd. Coded data transform method, transcoding method, transcoding system, and data storage media
JP2000308023A (en) 1999-04-16 2000-11-02 Sony Corp Method and device for transmitting data
US6665751B1 (en) 1999-04-17 2003-12-16 International Business Machines Corporation Streaming media player varying a play speed from an original to a maximum allowable slowdown proportionally in accordance with a buffer state
EP1047223A2 (en) 1999-04-21 2000-10-25 KDD Corporation Band width measuring method and apparatus for a packet switching network
US6587875B1 (en) 1999-04-30 2003-07-01 Microsoft Corporation Network protocol and associated methods for optimizing use of available bandwidth
US6263503B1 (en) 1999-05-26 2001-07-17 Neal Margulis Method for effectively implementing a wireless television system
US6598228B2 (en) 1999-05-26 2003-07-22 Enounde Incorporated Method and apparatus for controlling time-scale modification during multi-media broadcasts
WO2001039508A1 (en) 1999-11-26 2001-05-31 British Telecommunications Public Limited Company Video transmission
US20020075857A1 (en) 1999-12-09 2002-06-20 Leblanc Wilfrid Jitter buffer and lost-frame-recovery interworking
US6310495B1 (en) 2000-02-15 2001-10-30 Hewlett Packard Company Clock wave noise reducer
US6747991B1 (en) 2000-04-26 2004-06-08 Carnegie Mellon University Filter and method for adaptively modifying the bit rate of synchronized video and audio streams to meet packet-switched network bandwidth constraints
EP1300046A2 (en) 2000-07-06 2003-04-09 Telefonaktiebolaget LM Ericsson (publ) System and method for estimating cell rate in an atm network
WO2002003609A2 (en) 2000-07-06 2002-01-10 Telefonaktiebolaget Lm Ericsson (Publ) System and method for estimating cell rate in an atm network
US6741565B1 (en) 2000-07-06 2004-05-25 Telefonaktiebolaget Lm Ericsson (Publ) System and method for estimating cell rate in an ATM network
JP2002094567A (en) 2000-08-09 2002-03-29 Microsoft Corp High speed dynamic measurement of bandwidth in tcp network environment
EP1179925A2 (en) 2000-08-09 2002-02-13 Microsoft Corporation Fast dynamic measurement of bandwith in a TCP network environment
JP2002058002A (en) 2000-08-10 2002-02-22 Nec Corp Video telephone device
GB2367219A (en) 2000-09-20 2002-03-27 Vintage Global Streaming of media file data over a dynamically variable bandwidth channel
US20050041689A1 (en) * 2000-09-25 2005-02-24 General Instrument Corporation Statistical remultiplexing with bandwidth allocation among different transcoding channels
US6300665B1 (en) 2000-09-28 2001-10-09 Xerox Corporation Structure for an optical switch on a silicon on insulator substrate
US20020085587A1 (en) 2000-10-17 2002-07-04 Saverio Mascolo End-to end bandwidth estimation for congestion control in packet switching networks
US6351153B1 (en) 2000-10-30 2002-02-26 Hewlett-Packard Company Phase detector with high precision
US20040153951A1 (en) 2000-11-29 2004-08-05 Walker Matthew D Transmitting and receiving real-time data
US20020126891A1 (en) * 2001-01-17 2002-09-12 Osberger Wilfried M. Visual attention model
US20020136298A1 (en) 2001-01-18 2002-09-26 Chandrashekhara Anantharamu System and method for adaptive streaming of predictive coded video data
JP2002217965A (en) 2001-01-23 2002-08-02 Hitachi Commun Syst Inc Method and device for dynamically measuring bandwidth of ip network
US20020101880A1 (en) * 2001-01-30 2002-08-01 Byoung-Jo Kim Network service for adaptive mobile applications
US20020140851A1 (en) 2001-03-30 2002-10-03 Indra Laksono Adaptive bandwidth footprint matching for multiple compressed video streams in a fixed bandwidth network
WO2002087276A2 (en) 2001-04-19 2002-10-31 Koninklijke Philips Electronics N.V. Method and device for robust real-time estimation of bottleneck bandwidth
US20020169880A1 (en) 2001-04-19 2002-11-14 Koninklijke Philips Electronics N.V. Method and device for robust real-time estimation of the bottleneck bandwidth in the internet
EP1536582A2 (en) 2001-04-24 2005-06-01 Nokia Corporation Methods for changing the size of a jitter buffer and for time alignment, communications system, receiving end, and transcoder
JP2003032316A (en) 2001-05-11 2003-01-31 Kddi Corp Image transmission protocol controller
WO2002101513A2 (en) 2001-06-12 2002-12-19 Smartpackets, Inc. Adaptive control of data packet size in networks
US20020186660A1 (en) 2001-06-12 2002-12-12 Bahadiroglu Murat I. Adaptive control of data packet size in networks
WO2003009581A1 (en) 2001-07-19 2003-01-30 British Telecommunications Public Limited Company Video stream switching
JP2003037649A (en) 2001-07-24 2003-02-07 Nippon Telegr & Teleph Corp <Ntt> Method for estimating contents distribution end time, recording medium and program
US20030067872A1 (en) 2001-09-17 2003-04-10 Pulsent Corporation Flow control method for quality streaming of audio/video/media over packet networks
WO2003032643A2 (en) 2001-10-05 2003-04-17 Matsushita Electric Industrial Co., Ltd Video data transmission method and apparatus
US20030095594A1 (en) * 2001-11-21 2003-05-22 Indra Laksono Method and system for rate control during video transcoding
US20030152032A1 (en) 2002-02-14 2003-08-14 Kddi Corporation Video information transmission system, and apparatus and program used for video information transmission system
WO2003075021A1 (en) 2002-02-28 2003-09-12 Air Magnet, Inc. Measuring the throughput of transmissions over wireless local area networks
US20030189589A1 (en) * 2002-03-15 2003-10-09 Air-Grid Networks, Inc. Systems and methods for enhancing event quality
WO2003096698A1 (en) 2002-05-10 2003-11-20 Mayah Communications Gmbh Method and/or system for transferring/receiving audio and/or video signals and minimizing delay time over internet networks and relating apparatuses
US20040017773A1 (en) 2002-07-23 2004-01-29 Eyeball Networks Inc. Method and system for controlling the rate of transmission for data packets over a computer network
US20040057381A1 (en) 2002-09-24 2004-03-25 Kuo-Kun Tseng Codec aware adaptive playout method and playout device
WO2004062291A1 (en) 2003-01-07 2004-07-22 Koninklijke Philips Electronics N.V. Audio-visual content transmission
US20040170186A1 (en) 2003-02-28 2004-09-02 Huai-Rong Shao Dynamic resource control for high-speed downlink packet access wireless channels
US20040255328A1 (en) 2003-06-13 2004-12-16 Baldwin James Armand Fast start-up for digital video streams
US20050055201A1 (en) 2003-09-10 2005-03-10 Microsoft Corporation, Corporation In The State Of Washington System and method for real-time detection and preservation of speech onset in a signal
US20050094622A1 (en) 2003-10-29 2005-05-05 Nokia Corporation Method and apparatus providing smooth adaptive management of packets containing time-ordered content at a receiving terminal

Non-Patent Citations (26)

* Cited by examiner, † Cited by third party
Title
"IEEE Wireless LAN Edition: A compilation based on IEEE Std 802.11-1999 (R2003) and its amendments," IEEE Standards Information Network, IEEE Press, Copyright 2003.
Amy R. Reibman and Barry G. Haskell, "Constraints on Variable Bit-Rate Video for ATM Networks," IEEE Transactions on Circuits and Systems for Video Technology, vol. 2, No. 4, Dec. 1992, pp. 361-372.
Bolot & Turletti, "Experience with Control Mechanisms for Packet Video in the Internet."
Bolot, Jean-Chrysostome and Turletti, Thierry, A Rate Control Mechanism for Packet Video in the Internet.
Chen and Gilbert, Measured Performance of 5-GHz 802.11a Wireless LAN Systems, Atheros Communications, Inc., Aug. 21, 2001.
Chi-Yuan Hsu, Antonio Ortega, and Masoud Khansari, "Rate Control for Robust Video Transmission over Burst-Error Wireless Channels," IEEE Journal on Selected Areas in Communications, vol. 17, No. 5, May 1999, pp. 1-18.
Daji Qiao, Sunghyun Choi, and Kang G. Shin, "Goodput Analysis and Link Adaptation for IEEE 802.11a Wireless LANs," IEEE Transactions on Mobile Computing, vol. 1, No. 4, Oct.-Dec. 2002, pp. 278-292.
Dmitri Loguinov and Hayder Radha, "Video Receiver Based Real-Time Estimation of Channel Capacity," 2002 IEEE ICIP, pp. III-213-III-216.
Draft Amendment to IEEE Std. 802.11 (1999 Edition), Part 11: MAC and PHY specifications: Medium Access Control (MAC) Quality of Service (QoS) Enhancements, IEEE P802.11e/D8.0, Feb. 2004.
Garner, Markwalter and Yonge, HomePlug Standard Brings Networking to the Home, Communication System Design, vol. 6, No. 12, Dec. 2000.
Girod and Farber, "Wireless Video," in Compressed Video Over Networks, 1999.
ISO/IEC JTC1/SC29/WG11, Test Model 5, Apr. 1993, Sydney, Australia.
James F. Kurose and Keith W. Ross, "Computer Networking: A Top-Down Approach Featuring the Internet," Copyright 2001.
Jun Xin, et al., Bit Allocation for Joint Transcoding of Multiple MPEG Coded Video Streams, Proc. of Int. Conf. on Multimedia and Expo (ICME 2001), Tokyo, Japan.
Kevin Lai and Mary Baker, "Measuring Bandwidth," Jul. 15, 1998, pp. 1-25.
M. Kalman, E. Steinbach, and B. Girod, "Adaptive Media Playout for Low-Delay video Streaming Over Error-Prone Channels," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, No. 6, Jun. 2004, pp. 841-851.
M. Kalman, E. Steinbach, and B. Girod, "Adaptive Playout for Real-Time Media Streaming," Proc. IEEE International Symposium on Circuits and Systems, ISCAS-2002, Scottsdale, AZ, May 2002, 4 pages.
O'Hara and Petrick, IEEE 802.11 Handbook, A Designer's Companion, published May 2001 by Standards Information Network, IEEE Press, New York, NY.
Paxson, "End-to-End Internet Packet Dynamics," Jun. 23, 1997.
Peter Van Beek, Sachin Deshpande, Hao Pan, and Ibrahim Sezan, "Adaptive streaming of high-quality video over wireless LANs," Proc. of SPIE-IS&T Electronic Imaging, SPIE vol. 5308, 2004, pp. 647-660.
Philip A. Chou and Zhourong Miao, "Rate-Distortion Optimized Streaming of Packetized Media," Submitted to IEEE Transactions on Multimedia, Feb. 2001, pp. 1-20.
R. Ramjee, J. Kurose, D. Towsley, and H. Schulzrinne, "Adaptive Playout Mechanisms for Packetized Audio Applications in Wide-Area Networks," IEEE Infocom, Toronto, Canada, Jun. 1994, 9 pages.
Sergio D. Servetto and Klara Nahrstedt, "Broadcast Quality video over IP," IEEE Transactions on Multimedia, vol. 3, No. 1, Mar. 2001, pp. 162-173.
Vasan & Shankar, "An Empirical Characterization of Instantaneous Throughput in 802.11b WLANs."
Wang and Vincent, Bit Allocation and Constraints for Joint Coding of Multiple Video Programs, IEEE Trans. on Circuits and Systems for Video Technology, vol. 9, No. 6, Sep. 1999.
Wendell Smith, "Wireless Video White Paper," ViXS Systems, Inc., Jan. 2002, 15 pages.

Cited By (78)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060198438A1 (en) * 2000-02-29 2006-09-07 Shinji Negishi Scene description generating apparatus and method, scene description converting apparatus and method, scene description storing apparatus and method, scene description decoding apparatus and method, user interface system, recording medium, and transmission medium
US20090290853A1 (en) * 2000-06-24 2009-11-26 Lg Electronics Inc. Method for reproducing data recorded on an interactive recording medium in conjunction with associated auxiliary data
US20090285562A1 (en) * 2000-06-24 2009-11-19 Lg Electronics Inc. Apparatus and method of reproducing audio/video data and additional data associated with the audio/video data
US20100119218A1 (en) * 2000-06-24 2010-05-13 Lg Electronics Inc. Method for reproducing data recorded on an interactive recording medium in conjunction with associated auxiliary data
US7778523B2 (en) 2000-06-24 2010-08-17 Lg Electronics Inc. Method for reproducing data recorded on an interactive recording medium in conjunction with associated auxiliary data
US8699854B2 (en) 2000-06-24 2014-04-15 Lg Electronics Inc. Method for reproducing data recorded on an interactive recording medium in conjunction with associated auxiliary data
US20070122118A1 (en) * 2000-06-24 2007-05-31 Lg Electronics Inc. Method for reproducing data recorded on an interactive recording medium in conjunction with associated auxiliary data
US8676028B2 (en) 2000-06-24 2014-03-18 Lg Electronics Inc. Method for reproducing data recorded on an interactive recording medium in conjunction with associated auxiliary data
US7715694B2 (en) * 2000-06-24 2010-05-11 Lg Electronics Inc. Apparatus and method of reproducing audio/video data and additional data associated with the audio/video data
US20040114906A1 (en) * 2002-12-09 2004-06-17 Lg Electronics Inc. Method of presenting auxiliary data for an interactive recording medium
US8295679B2 (en) 2002-12-09 2012-10-23 Lg Electronics Inc. Method of presenting auxiliary data for an interactive recording medium
US7995900B2 (en) 2002-12-09 2011-08-09 Lg Electronics Inc. Method of presenting auxiliary data for an interactive recording medium
US20100119212A1 (en) * 2002-12-09 2010-05-13 Lg Electronics, Inc. Method of presenting auxiliary data for an interactive recording medium
US20090257737A1 (en) * 2002-12-09 2009-10-15 Lg Electronics Inc. Method of presenting auxiliary data for an interactive recording medium
US20050102357A1 (en) * 2003-09-12 2005-05-12 Nobuhiro Shohga Receiver supporting broadband broadcasting
US20060109915A1 (en) * 2003-11-12 2006-05-25 Sony Corporation Apparatus and method for use in providing dynamic bit rate encoding
US9497513B2 (en) * 2003-11-12 2016-11-15 Sony Corporation Apparatus and method for use in providing dynamic bit rate encoding
US8667178B2 (en) * 2003-12-25 2014-03-04 Funai Electric Co., Ltd. Transmitting apparatus for transmitting data and transceiving system for transmitting and receiving data
US20050141858A1 (en) * 2003-12-25 2005-06-30 Funai Electric Co., Ltd. Transmitting apparatus and transceiving system
US8635345B2 (en) 2003-12-29 2014-01-21 Aol Inc. Network scoring system and method
US8271646B2 (en) 2003-12-29 2012-09-18 Aol Inc. Network scoring system and method
US20100180293A1 (en) * 2003-12-29 2010-07-15 Aol Llc Network scoring system and method
US7412516B1 (en) * 2003-12-29 2008-08-12 Aol Llc Using a network bandwidth setting based on determining the network environment
US20050213502A1 (en) * 2004-03-26 2005-09-29 Stmicroelectronics S.R.I. Method and system for controlling operation of a network, such as a WLAN, related network and computer program product therefor
US8612624B2 (en) 2004-04-30 2013-12-17 DISH Digital L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US10951680B2 (en) 2004-04-30 2021-03-16 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US9407564B2 (en) 2004-04-30 2016-08-02 Echostar Technologies L.L.C. Apparatus, system, and method for adaptive-rate shifting of streaming content
US10225304B2 (en) 2004-04-30 2019-03-05 Dish Technologies Llc Apparatus, system, and method for adaptive-rate shifting of streaming content
US11470138B2 (en) 2004-04-30 2022-10-11 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US11677798B2 (en) 2004-04-30 2023-06-13 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US9071668B2 (en) 2004-04-30 2015-06-30 Echostar Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US10469554B2 (en) 2004-04-30 2019-11-05 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US9571551B2 (en) 2004-04-30 2017-02-14 Echostar Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US8402156B2 (en) 2004-04-30 2013-03-19 DISH Digital L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US10469555B2 (en) 2004-04-30 2019-11-05 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US8868772B2 (en) 2004-04-30 2014-10-21 Echostar Technologies L.L.C. Apparatus, system, and method for adaptive-rate shifting of streaming content
US20060188014A1 (en) * 2005-02-23 2006-08-24 Civanlar M R Video coding and adaptation by semantics-driven resolution control for transport and storage
US8514980B2 (en) 2005-03-31 2013-08-20 At&T Intellectual Property I, L.P. Methods and systems for providing bandwidth adjustment
US20060222110A1 (en) * 2005-03-31 2006-10-05 Christian Kuhtz Methods and systems for providing bandwidth adjustment
US8259861B2 (en) * 2005-03-31 2012-09-04 At&T Intellectual Property I, L.P. Methods and systems for providing bandwidth adjustment
US8370514B2 (en) 2005-04-28 2013-02-05 DISH Digital L.L.C. System and method of minimizing network bandwidth retrieved from an external network
US9344496B2 (en) 2005-04-28 2016-05-17 Echostar Technologies L.L.C. System and method for minimizing network bandwidth retrieved from an external network
US8880721B2 (en) 2005-04-28 2014-11-04 Echostar Technologies L.L.C. System and method for minimizing network bandwidth retrieved from an external network
US7617436B2 (en) * 2005-08-02 2009-11-10 Nokia Corporation Method, device, and system for forward channel error recovery in video sequence transmission over packet-based network
US20070033494A1 (en) * 2005-08-02 2007-02-08 Nokia Corporation Method, device, and system for forward channel error recovery in video sequence transmission over packet-based network
US20070234385A1 (en) * 2006-03-31 2007-10-04 Rajendra Bopardikar Cross-layer video quality manager
US8102853B2 (en) * 2006-08-09 2012-01-24 Samsung Electronics Co., Ltd. System and method for wireless communication of uncompressed video having fixed size MAC header with an extension
US20080037540A1 (en) * 2006-08-09 2008-02-14 Samsung Information Systems America Method and apparatus of fixed size mac header with an extension in wirelesshd
US20080130561A1 (en) * 2006-12-04 2008-06-05 Samsung Electronics Co., Ltd. System and method for wireless communication
US10116722B2 (en) 2007-08-06 2018-10-30 Dish Technologies Llc Apparatus, system, and method for multi-bitrate content streaming
US8683066B2 (en) 2007-08-06 2014-03-25 DISH Digital L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US10165034B2 (en) 2007-08-06 2018-12-25 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US9253063B2 (en) 2008-12-05 2016-02-02 Google Technology Holdings LLC Bi-directional video compression for real-time video streams during transport in a packet switched network
US20100142616A1 (en) * 2008-12-05 2010-06-10 Motorola, Inc. Bi-directional video compression for real-time video streams during transport in a packet switched network
US8798150B2 (en) 2008-12-05 2014-08-05 Motorola Mobility Llc Bi-directional video compression for real-time video streams during transport in a packet switched network
US8989259B2 (en) * 2009-11-04 2015-03-24 Tencent Technology (Shenzhen) Company Limited Method and system for media file compression
US20140146870A1 (en) * 2009-11-04 2014-05-29 Tencent Technology (Shenzhen) Company Limited Method and system for media file compression
US9014261B2 (en) 2009-11-04 2015-04-21 Tencent Technology (Shenzhen) Company Limited Method and system for media file compression
US20110158146A1 (en) * 2009-12-29 2011-06-30 Jeelan Poola Method and system for multicast video streaming over a wireless local area network (wlan)
US8270425B2 (en) 2009-12-29 2012-09-18 Symbol Technologies, Inc. Method and system for multicast video streaming over a wireless local area network (WLAN)
US9510029B2 (en) 2010-02-11 2016-11-29 Echostar Advanced Technologies L.L.C. Systems and methods to provide trick play during streaming playback
US10075744B2 (en) 2010-02-11 2018-09-11 DISH Technologies L.L.C. Systems and methods to provide trick play during streaming playback
US8295224B1 (en) 2010-03-16 2012-10-23 Deshong Rick L Wireless signal transceiver system
CN103222264B (en) * 2010-11-17 2016-10-19 谷歌技术控股有限责任公司 For optionally by signal from the system and method that a kind of format code transferring is one of multiple format
US9258578B2 (en) 2010-11-17 2016-02-09 Google Technology Holdings LLC System and method for selectively transcoding signal from one format to one of plurality of formats
CN103222264A (en) * 2010-11-17 2013-07-24 通用仪表公司 System and method for selectively transcoding signal from one format to one of plurality of formats
US20140012981A1 (en) * 2011-12-28 2014-01-09 Avvasi Inc. Apparatus and methods for optimizing network data transmission
US9438494B2 (en) * 2011-12-28 2016-09-06 Avvasi Inc. Apparatus and methods for optimizing network data transmission
US11284133B2 (en) 2012-07-10 2022-03-22 Avago Technologies International Sales Pte. Limited Real-time video coding system of multiple temporally scaled video and of multiple profile and standards based on shared video coding information
US9426498B2 (en) 2012-07-10 2016-08-23 Broadcom Corporation Real-time encoding system of multiple spatially scaled video based on shared video coding information
US10140545B2 (en) * 2013-02-28 2018-11-27 Facebook, Inc. Methods and systems for differentiating synthetic and non-synthetic images
US20140241629A1 (en) * 2013-02-28 2014-08-28 Facebook, Inc. Methods and systems for differentiating synthetic and non-synthetic images
US8903186B2 (en) * 2013-02-28 2014-12-02 Facebook, Inc. Methods and systems for differentiating synthetic and non-synthetic images
US9280723B2 (en) 2013-02-28 2016-03-08 Facebook, Inc. Methods and systems for differentiating synthetic and non-synthetic images
US20170091576A1 (en) * 2013-02-28 2017-03-30 Facebook, Inc. Methods and systems for differentiating synthetic and non-synthetic images
US9558422B2 (en) * 2013-02-28 2017-01-31 Facebook, Inc. Methods and systems for differentiating synthetic and non-synthetic images
US10166917B2 (en) 2014-08-30 2019-01-01 Mariana Goldhamer Transmission of uncompressed video in cellular networks
US10471895B2 (en) 2014-08-30 2019-11-12 Mariana Goldhamer Video transmission for road safety applications

Also Published As

Publication number Publication date
US20050008074A1 (en) 2005-01-13

Similar Documents

Publication Publication Date Title
US7274740B2 (en) Wireless video transmission system
US9325998B2 (en) Wireless video transmission system
US8018850B2 (en) Wireless video transmission system
US8356327B2 (en) Wireless video transmission system
US9544602B2 (en) Wireless video transmission system
US7797723B2 (en) Packet scheduling for video transmission with sender queue control
US7784076B2 (en) Sender-side bandwidth estimation for video transmission with receiver packet buffer
US7652993B2 (en) Multi-stream pro-active rate adaptation for robust video transmission
Van der Schaar et al. Optimized scalable video streaming over IEEE 802.11 a/e HCCA wireless networks under delay constraints
US7095782B1 (en) Method and apparatus for streaming scalable video
KR100932692B1 (en) Transmission of Video Using Variable Rate Modulation
US9407909B2 (en) Video rate control for video coding standards
US7984179B1 (en) Adaptive media transport management for continuous media stream over LAN/WAN environment
US7668170B2 (en) Adaptive packet transmission with explicit deadline adjustment
US8861597B2 (en) Distributed channel time allocation for video streaming over wireless networks
US20110274180A1 (en) Method and apparatus for transmitting and receiving layered coded video
US20060167987A1 (en) Content delivery system, communicating apparatus, communicating method, and program
US9219934B2 (en) Data stream rate adaptation mechanism
CA2747539A1 (en) Systems and methods for controlling the encoding of a media stream
JP2004529553A (en) Adaptive bandwidth footprint matching for multiple compressed video streams in fixed bandwidth networks
US20050246751A1 (en) Video on demand server system and method
JP2004507985A (en) System and method for dynamically adaptively decoding scalable video to stabilize CPU load
KR20040104063A (en) Packet scheduling method for streaming multimedia
EP1417841B1 (en) Method for transmission control in hybrid temporal-snr fine granular video coding
Van Beek et al. Adaptive streaming of high-quality video over wireless LANs

Legal Events

Date Code Title Description
AS Assignment

Owner name: SHARP LABORATORIES OF AMERICA, INC., WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VAN BEEK, PETRUS J.L.;PAN, HAO;LI, BAOXIN;AND OTHERS;REEL/FRAME:014790/0860

Effective date: 20031111

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: SHARP KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SHARP LABORATORIES OF AMERICA INC.;REEL/FRAME:020986/0354

Effective date: 20080522

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20190925