US20020059394A1 - Content propagation in interactive television - Google Patents

Content propagation in interactive television

Info

Publication number
US20020059394A1
US20020059394A1 (application US09/896,562)
Authority
US
United States
Prior art keywords
assets
asset
viewing
replica
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/896,562
Inventor
Mark Sanders
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SeaChange International Inc
Original Assignee
SeaChange International Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SeaChange International Inc filed Critical SeaChange International Inc
Priority to US09/896,562
Publication of US20020059394A1
Legal status: Abandoned

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23103Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion using load balancing strategies, e.g. by placing or distributing content on different disks, different memories or different servers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/61Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L65/612Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23113Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving housekeeping operations for stored content, e.g. prioritizing content for deletion because of storage space restrictions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309Transmission or handling of upstream communications
    • H04N7/17336Handling of requests in head-ends

Definitions

  • This invention relates to interactive television.
  • Interactive television systems provide viewers with network access to video servers that offer a large plurality of viewing selections.
  • a viewer looks at a menu transmitted by the system and selects a viewing asset.
  • the viewer issues a request for the selected asset through a network that connects his or her television to the interactive television system.
  • the interactive television system uses the network to stream the requested asset from one of the video servers to the viewer's television.
  • the collection of video data objects and related data objects such as posters, descriptions, and preview objects, may together form a complete viewing asset.
  • the selection of available viewing assets is preferably large.
  • the viewing assets themselves often include large video data objects.
  • the desire to offer large selections to viewers means that such systems need very substantial data storage resources for viewing assets.
  • An interactive television system may store a large amount of asset data on an array of servers. Typically, only a subset of the servers is accessible to a single viewer, because one server cannot serve every viewer. In such a system, an individual asset may have to reside on several servers so that different viewers can access the asset.
  • the collection of accessible assets may also change over time. Changes to the collection of assets may respond to asset popularity shifts and/or viewing population changes.
  • the invention features a process of propagating viewing assets on a system of video storages.
  • the process includes copying a missing portion of a replica of a selected viewing asset onto a target video server.
  • the act of copying is responsive to determining that a priority to propagate the selected asset to the target server is higher than a retention value of a replica of one or more viewing assets stored on the target server.
  • the act of copying includes writing the missing portion of the replica of the selected asset onto a storage region of the target video server that previously stored a portion of the replica of one or more viewing assets.
  • the copying may include reading the missing portion from video servers that serve viewers.
  • the act of selecting a portion of the replica of one or more viewing assets is responsive to the replica of one or more viewing assets having a data size at least as large as a data size of the missing portion of the selected asset.
  • the process also includes assigning priorities to propagate to a plurality of viewing assets, ranking the viewing assets according to the assigned priorities, and choosing the selected asset for copying in response to the selected asset being ranked above a preselected minimum rank.
  • the invention features a process for propagating digital viewing assets to video servers.
  • the process includes assigning to each of a plurality of digital viewing assets a priority to propagate the asset onto video servers, ranking the assets based on the assigned priorities; and propagating one of the assets to one or more selected video servers.
  • the act of propagating is responsive to the one of the assets having a preselected minimum ranking.
  • the act of assigning includes assigning a viewing asset to a usage class.
  • the usage class provides a contribution to initial values of the priorities to propagate assets assigned to the class.
  • the process further includes accumulating usage data on individual assets stored on the video servers and updating the priorities to propagate based on the usage data.
  • the usage data may include numbers of viewer requests during predetermined time periods and differences between numbers of viewer requests during earlier and later predetermined periods.
  • the invention features a process for propagating viewing assets onto a video storage.
  • the process includes assigning propagation priorities to viewing assets, constructing a table of element deletion lists for a target video storage, and selecting a group of element deletion lists from the table.
  • the group has a data size at least as large as a data size of a portion of a replica of another asset not stored on the target storage.
  • the process also includes copying the portion of a replica of the other asset onto the target video storage in response to the propagation priority of the other asset being larger than a retention value of the group.
  • the act of copying includes writing the portion onto a region of the target video storage previously storing the group.
  • the act of selecting a group includes constructing a table listing sets of element deletion lists with lower retention value than the priority of the other asset.
  • the act of selecting includes picking one of the lists having a data size at least as large as the portion of the replica of the other asset.
  • the invention features a process of distributing viewing assets to viewers.
  • the process includes assigning priorities to assets, selecting a video server, and copying one of the assets onto the video server.
  • the priorities are priorities for distributing the associated assets to video servers accessible to viewers.
  • the act of copying is responsive to determining that the priority associated with the one of the assets is greater than a retention value associated with a set of viewing assets having replicas on the video server. The replicas occupy enough space to store the one of the assets.
  • the copying includes searching for one or more sets of replicas of asset elements to delete on a table of element deletion lists.
  • the process further includes accumulating data on usage of individual ones of the assets. Then, the act of updating is based at least in part on the accumulated data.
  • the invention features an interactive television system.
  • the system includes a network or bus, a plurality of video servers to store digital replicas of viewing assets for viewers, and a control unit connected to the video servers.
  • the video servers are connected by the network or bus.
  • the control unit orders copying of a missing portion of a replica of a selected asset to one of the video servers if a priority to propagate the selected asset onto the one of the servers is higher than a value of retaining a replica of one or more other assets already stored on the target server.
  • the system also includes a plurality of distribution networks to provide channels for delivering viewing assets to viewer televisions.
  • Each distribution network connects to a portion of the video servers.
  • the invention features a process for propagating digital viewing assets onto video servers.
  • the process includes propagating a plurality of viewing assets onto video servers based on priorities to propagate, accumulating usage data on individual assets stored on the video servers, and updating the priorities based on the usage data.
  • the priorities provide a ranking of the assets.
  • the process includes assigning a viewing asset to a usage class that provides a portion of an initial value for the priorities to propagate the assets assigned to the class.
  • the process may also include calculating the priority to propagate a selected one of the assets onto one of the video servers. The calculation may be based on a global priority to propagate the selected one of the assets and a local priority to propagate a replica of the selected one of the assets onto the one of the video servers.
  • the global priority may be based in part on a counter value that measures usage of the selected one of the assets.
  • the local priority may be based in part on a bandwidth for streaming the selected one of the assets from the one of the video servers to a set of viewers.
  • the invention features a data storage medium storing a computer-executable program of instructions for performing one or more of the above-mentioned processes.
  • FIG. 1 is a block diagram of an interactive television system
  • FIG. 2 illustrates asset delivery pathways of the media clusters shown in FIG. 1;
  • FIG. 3 is a high-level block diagram of software processes that manage and control the interactive television system of FIG. 1;
  • FIG. 4 illustrates interactions between a propagation service process and other processes of FIG. 3;
  • FIG. 5A is a flow chart for a process that ranks viewing assets and evaluates retention values of replicas of asset elements and entries in element deletion lists;
  • FIG. 5B is a flow chart for a process that decides whether to propagate a replica of an asset to a media cluster
  • FIG. 6 is a flow chart illustrating a process that calculates total propagation priorities of replicas of assets
  • FIG. 7 is a flow chart illustrating a process that determines whether to copy a replica of a viewing asset onto a media cluster
  • FIG. 8A illustrates relations between replicas of viewing assets and replicas of asset elements stored on a media cluster
  • FIG. 8B illustrates ELists for the replicas of viewing assets and asset elements shown in FIG. 8A;
  • FIG. 8C illustrates replicas of assets, replicas of asset elements, and ELists that remain on the media cluster of FIG. 8A after deletion of the asset elements of one EList;
  • FIG. 8D is a table enumerating the ELists remaining in FIG. 8C;
  • FIG. 9A is a table showing the ELists shown in FIG. 8B;
  • FIG. 9B is a table that enumerates “combinations of ELists” from FIG. 9A with retention values below 50;
  • FIG. 10 is a flow chart illustrating a process that uses ELists to free space for propagating replicas of new assets.
  • FIG. 11 is a flow chart illustrating a process that initially defines and subsequently updates a new asset's global propagation priority
  • FIG. 1 shows a networked system 10 that provides interactive television to subscribing viewers.
  • the system 10 includes a set of control units 16 .
  • the control units 16 connect to each other through a multi-channel communications bus 12 .
  • the bus 12 may be a network.
  • the bus 12 also couples a plurality of media clusters 40 , 40 ′, 40 ′′, which store replicas of viewing assets for viewers.
  • the media clusters 40 , 40 ′, 40 ′′ couple to node groups of local viewers 20 , 20 ′, 20 ′′, 20 ′′′ through hybrid fiber coaxial (HFC) networks 22 , which function as broadband or multi-channel broadcast networks for the local node groups 20 , 20 ′, 20 ′′, 20 ′′′.
  • the HFC networks 22 carry viewing assets from the media clusters 40 , 40 ′, 40 ′′ to interactive viewer televisions 24 and carry viewing status data and viewing requests from the televisions 24 up to the media clusters 40 , 40 ′, 40 ′′ and control units 16 .
  • the arrangement of control units 16 and media clusters 40 , 40 ′, 40 ′′ may serve a large number of viewers.
  • each media cluster 40 , 40 ′, 40 ′′ may serve between about ten and fifty thousand local viewers.
  • the connectivity between individual media clusters 40 , 40 ′, 40 ′′ and individual node groups 20 , 20 ′, 20 ′′, 20 ′′′ may differ from cluster to cluster.
  • the media cluster 40 couples to node groups 20 and 20 ′ while the media cluster 40 ′′ couples to node groups 20 ′′ and 20 ′′′.
  • different viewers may be served by different subsets of the media clusters 40 , 40 ′, 40 ′′.
  • Each interactive viewer television 24 includes a set top box 26 that connects a normal television 25 to the local HFC network 22 and provides an interface for communicating with a portion of the media clusters 40 , 40 ′, 40 ′′ and control units 16 .
  • the set top boxes 26 receive viewing assets from the associated HFC networks 22 , decode the received assets, and display the decoded assets on a normal television 25 .
  • the set top boxes 26 may be integrated into the televisions 25 .
  • the set top boxes 26 also receive menus of available viewing assets, display the menus on the normal televisions 25 , and transmit viewer requests for viewing assets and streaming-control commands to the control units 16 .
  • the streaming-control commands implemented by the set top boxes 26 may include stop, pause, fast-forward, and reverse.
  • the viewing assets are sequences of encoded digital files for video, text, audio, graphic, and/or interactive control applications. Each file of the sequence, for a viewing asset, will be referred to as an asset element.
  • the displayed viewing assets may, for example, be movies, newscasts, shopping emissions or interfaces, posters, or audio presentations.
  • Each control unit 16 includes a computer 28 and a data storage medium 30 , e.g., a hard drive or compact disk, for storing software processes executable by the computer.
  • the control units 16 manage viewing assets on the media clusters 40 , 40 ′, 40 ′′ and control delivery of these assets to viewers.
  • the management of assets includes propagating assets among the media clusters 40 , 40 ′, 40 ′′ and accumulating asset usage data to ensure that the propagation of assets to the media clusters 40 , 40 ′, 40 ′′ anticipates viewer demand.
  • the controlling of asset delivery includes receiving viewing requests from individual interactive televisions 24 and assigning asset delivery pathways from the media clusters 40 to the node groups 20 , 20 ′, 20 ′′, 20 ′′′ corresponding to requesting viewers.
  • Each media cluster 40 , 40 ′, 40 ′′ stores replicas of viewing assets that are currently available to the node groups 20 , 20 ′, 20 ′′, 20 ′′′ connected to the media cluster 40 , 40 ′, 40 ′′.
  • the selection of viewing assets varies from media cluster to media cluster 40 , 40 ′, 40 ′′.
  • the media clusters 40 , 40 ′, 40 ′′ stream viewing assets to associated local node groups 20 , 20 ′, 20 ′′, 20 ′′′ in response to control commands received from the control units 16 .
  • the control units 16 send such control commands in response to receiving requests for viewing assets from the various node groups 20 , 20 ′, 20 ′′, 20 ′′′.
  • Each media cluster 40 , 40 ′, 40 ′′ has space for storing a limited number of replicas of viewing assets.
  • the media clusters 40 , 40 ′, 40 ′′ store assets for meeting present and near-future viewing demands.
  • the control units 16 regularly update the asset selection on the media clusters 40 , 40 ′, 40 ′′ by copying replicas of new viewing assets to the media clusters 40 , 40 ′, 40 ′′ and/or by copying viewing assets between different media clusters 40 ′′, 40 , 40 ′.
  • To propagate an asset, a control unit 16 first copies a replica of a new asset to a preselected one of the media clusters 40 , 40 ′, 40 ′′ and then orders cluster-to-cluster copying to propagate the asset to other clusters 40 , 40 ′, 40 ′′.
  • the control units 16 update the viewing asset selection on the media clusters 40 , 40 ′, 40 ′′ to maximize the economic value that the entire asset collection provides to the system 10 , as is explained below.
  • although the system 10 of FIG. 1 is not hierarchical, other embodiments may use hierarchical organizations of media clusters and/or control units, e.g., in master-slave relationships.
  • master servers control slave servers and provide for larger video storages.
  • hierarchical organizations of media clusters or video servers are described in U.S. Pat. No. 5,862,312 and U.S. patent application Ser. No. 09/293,011, filed Apr. 16, 1999, which are both incorporated by reference herein.
  • FIG. 2 shows media clusters 40 , 40 ′, 40 ′′ in more detail.
  • Each media cluster 40 , 40 ′, 40 ′′ has several video data servers 42 , 42 ′, 42 ′′ that locally interconnect through a network or bus 34 , 34 ′, 34 ′′.
  • the servers 42 , 42 ′, 42 ′′ of the same cluster 40 , 40 ′, 40 ′′ share a data storage 36 , 36 ′, 36 ′′, which may be physically lumped or spread over the individual local servers 42 , 42 ′, 42 ′′.
  • the servers 42 , 42 ′, 42 ′′ deliver viewing assets from the cluster video storages 36 , 36 ′, 36 ′′ to node groups 20 , 20 ′, 20 ′′, 20 ′′′ connected to the associated media cluster 40 , 40 ′, 40 ′′.
  • the video data storages 36 , 36 ′, 36 ′′ store replicas of the viewing assets, which the media clusters 40 , 40 ′, 40 ′′ can deliver to local node groups 20 , 20 ′, 20 ′′, 20 ′′′.
  • the selection of replicas of assets stored on different clusters 40 , 40 ′, 40 ′′ may differ so that different media clusters 40 , 40 ′, 40 ′′ do not generally provide identical viewing selections to the locally connected node groups 20 , 20 ′, 20 ′′, 20 ′′′.
  • FIG. 2 also shows some of the delivery pathways between various media clusters 40 , 40 ′, 40 ′′ and local node groups 20 , 20 ′, 20 ′′, 20 ′′′.
  • Each delivery pathway includes one of the servers 42 , 42 ′, 42 ′′, an output card of the server, a quadrature amplitude modulator (QAM) 50 , a combiner 52 , 52 ′, 52 ′′, 52 ′′′, and an HFC 22 that connects to the destination node group 20 , 20 ′, 20 ′′, 20 ′′′.
  • the servers 42 , 42 ′, 42 ′′ have one or more output cards, which produce streams of digital data packets for transporting viewing assets to the node groups 20 , 20 ′, 20 ′′, 20 ′′′.
  • the output streams are received by QAMs 50 that connect to the output cards.
  • Each QAM 50 reads headers of received packets and retransmits the packets towards the node group 20 , 20 ′, 20 ′′, 20 ′′′ served by the QAM 50 if that node group 20 , 20 ′, 20 ′′, 20 ′′′ is a destination of the packet.
  • the retransmitted packets are received by combiners 52 , 52 ′, 52 ′′, 52 ′′′, which send broadband transmissions from several QAMs 50 to the associated node groups 20 , 20 ′, 20 ′′, 20 ′′′ via the associated HFC 22 .
  • the different media clusters 40 , 40 ′, 40 ′′ may have different delivery pathways to the node groups 20 , 20 ′, 20 ′′, 20 ′′′.
  • the control of delivery of viewing content over these delivery pathways and the management of replicas of assets stored on the media clusters 40 , 40 ′, 40 ′′ are both performed by the control units 16 .
  • the control units 16 execute processes that perform these functions and are able to manage in excess of ten thousand assets and a variety of control application types.
  • An individual control unit 16 may perform the above functions for some or all of the media clusters 40 , 40 ′, 40 ′′.
  • Assets are the smallest viewable objects that can be requested by or streamed to viewers. Replicas of assets can be activated or deactivated on the media clusters 40 , 40 ′, 40 ′′.
  • An asset may include several elements, e.g., consecutive portions of a movie, a movie poster, and a movie trailer.
  • the elements of an asset are individual files and are the smallest data objects that can be copied to or deleted from a media cluster 40 , 40 ′, 40 ′′.
  • Physical copies of assets and asset elements on a particular media cluster 40 , 40 ′, 40 ′′ are referred to as replicas of assets and replicas of asset elements, respectively.
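  • As an illustration of this data model, the following Python sketch (hypothetical names and sizes, not taken from the patent) represents assets composed of shareable elements and computes which elements a cluster still lacks:

      from dataclasses import dataclass, field

      @dataclass(frozen=True)
      class Element:
          # smallest object that can be copied to or deleted from a cluster
          name: str
          size_mb: int

      @dataclass
      class Asset:
          # smallest viewable object; elements may be shared between assets
          name: str
          elements: tuple

      @dataclass
      class MediaCluster:
          stored: set = field(default_factory=set)  # names of elements on disk

          def missing_elements(self, asset):
              # replicas of shared elements already on the cluster are
              # reused, not recopied
              return [e for e in asset.elements if e.name not in self.stored]

      movie = Asset("movie-1", (Element("part1", 1800),
                                Element("poster", 2),
                                Element("trailer", 120)))
      cluster = MediaCluster(stored={"poster"})
      print([e.name for e in cluster.missing_elements(movie)])  # ['part1', 'trailer']
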
  • FIG. 3 is a block diagram showing software processes executed by the computers 28 of the control units 16 to manage and control viewing assets and asset usage in the interactive television system 10 shown in FIG. 1.
  • the processes include modules 62 , 64 , 66 , 68 , 70 , 72 , 74 , 76 , which perform asset management, directory service, propagation services, connection management, streaming services, movies-on-demand (MOD) control application services, MOD billing services, and program guide services.
  • other processes may provide control application services and billing services for content types, such as news-on-demand (NOD), interactive shopping, and interactive games (not shown).
  • on each media cluster 40 , 40 ′, 40 ′′, resident processes include an instance of a media cluster agent 78 .
  • on each set top box 26 , resident processes include one or more application agents 80 , e.g., a MOD application agent.
  • FIG. 4 illustrates functional relationships between the software process modules 62 , 64 , 66 , 68 , 70 , 78 that manage and control assets. These relationships are further described below.
  • the asset management module 62 provides an interface for receiving new viewing assets into the control unit 16 .
  • the interface may, e.g., support automated downloads of viewing assets from a distribution cable (not shown) and/or manual uploads of viewing assets under control of a user operating a graphical user interface.
  • the asset management module 62 creates an automated work queue 63 that controls the life cycle of the associated asset.
  • the work queues 63 control the life cycle transitions performed by the propagation service (PS) module 66 .
  • An asset's life cycle may include states such as received; encoded; stored on some media clusters 40 , 40 ′, 40 ′′; activated; deactivated; and deleted.
  • the asset management module 62 accepts several types of data objects including encoded data files, e.g., encoded according to the Moving Picture Experts Group (MPEG) standards, nonencoded data files, executable applications, and metadata associated with other assets.
  • the received data assets may be for video, audio, text, graphics, or interactive applications.
  • the directory service module 64 provides assets with filenames arranged in hierarchical namespaces.
  • the directory service module 64 keeps information about the element composition of assets and metadata associated with assets.
  • control applications may include movies on demand (MOD), television on demand (TVOD), news on demand (NOD), interactive shopping, and others.
  • the propagation service module 66 controls copying of assets to and deleting of assets from individual media clusters 40 , 40 ′, 40 ′′.
  • a media cluster 40 , 40 ′, 40 ′′ needs a replica of each element of an asset to be able to play the asset to viewers. But, replicas of different assets may share replicas of some asset elements stored on the same media cluster 40 , 40 ′, 40 ′′.
  • the propagation service module 66 orders copying of new assets to a preselected one of the media clusters, e.g., cluster 40 .
  • the propagation service module 66 also orders copying of the asset replica to other ones of the media clusters, e.g., clusters 40 ′, 40 ′′, to meet anticipated user demand for the assets, e.g., demands that are upcoming in the next few hours.
  • the propagation service module 66 also provides location information on active replicas of assets to the other process modules 64 , 68 , 70 .
  • connection manager module 68 selects pathways for streaming viewing assets from media clusters 40 , 40 ′, 40 ′′ storing replicas of the viewing assets to viewers requesting the viewing assets.
  • the connection manager module 68 uses an abstract representation of each potential delivery pathway. The representations indicate throughputs and bottlenecks along each pathway. The connection manager module 68 selects pathways with the highest available throughputs, i.e. the least restrictive bottlenecks, as the pathways for delivering assets to requesting viewers.
  • the connection manager module 68 also provides the abstract representation of delivery pathways for the propagation service module 66 .
  • This representation indicates available total bandwidths for delivering various viewing assets to local node groups 20 .
  • the propagation service module 66 uses this representation to determine when the available bandwidth for delivering an asset to viewers is so diminished that an additional replica of the asset is needed on another media cluster 40 , 40 ′, 40 ′′.
  • the connection manager module 68 provides the representations of delivery pathways between media clusters 40 , 40 ′, 40 ′′ and node groups 20 , 20 ′, 20 ′′, 20 ′′′ to other ones of the software modules.
  • connection manager module 68 is also an interface that receives requests for assets from viewer televisions 24 and set top boxes 26 .
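  • A minimal Python sketch of this pathway selection rule, assuming the connection manager's abstract representation reduces each pathway to the available bandwidths of its hops (all names are illustrative):

      from dataclasses import dataclass

      @dataclass
      class Hop:
          name: str
          free_bandwidth: float  # Mb/s still available on this link

      @dataclass
      class Pathway:
          hops: list  # e.g., server output card, QAM, combiner, HFC

      def available_throughput(pathway):
          # a pathway can carry no more than its most loaded hop allows,
          # so its bottleneck is the hop with the least free bandwidth
          return min(hop.free_bandwidth for hop in pathway.hops)

      def pick_pathway(pathways):
          # choose the pathway whose bottleneck is least restrictive
          return max(pathways, key=available_throughput)
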
  • the streaming service module 70 provides application-independent streaming services to the connection management module 68 , control application service modules 72 , and media cluster agents 78 .
  • the provided services include stream creation, operation, and tear down of delivery pathways.
  • the streaming service module 70 also has interfaces for controlling media cluster agents 78 that reside on the individual media clusters 40 , 40 ′, 40 ′′.
  • the media cluster agents 78 copy new replicas of asset elements to and delete old replicas of asset elements from the associated media clusters 40 , 40 ′, 40 ′′ in response to commands or orders from the propagation service module 66 .
  • the MOD application service module 72 resides on the control unit 16 and controls processing of viewer requests to purchase movies and other on-demand video assets.
  • the MOD application service module 72 interacts with an application database 82 and the MOD billing service module 74 to check viewer credit status and to bill virtual video rentals.
  • the MOD application service module 72 can suspend or resume asset streaming to viewers, monitor viewing progress, and record viewer activities in the database 82 .
  • the MOD application client 80 resides on each set top box 26 and provides a viewer with a system navigation interface for requesting a programming guide and requesting assets for viewing.
  • the interface transmits a viewer's requests to one of the control units 16 .
  • the program guide service module 76 provides program guide files to set top boxes 26 , which in turn display a program guide on the attached normal television 25 . Viewers can request assets in the program guide for viewing.
  • each EList is identified by a “selected element” belonging to the EList.
  • Each EList indicates a set of replicas of elements that can also be deleted if the identified selected element is deleted without incurring additional loss of retention value (RV) from the media cluster 40 , 40 ′, 40 ′′ storing the replicas.
  • the propagation service module 66 controls propagations of viewing assets to and among the media clusters 40 , 40 ′, 40 ′′.
  • the propagation service module 66 propagates assets to increase the economic value of the entire collection of assets available to viewers.
  • the economic value of propagating a particular asset to a particular media cluster 40 , 40 ′, 40 ′′ is rated by a total propagation priority (TPP).
  • TPPs enable comparisons of the economic values of potential propagations of assets to particular media clusters 40 , 40 ′, 40 ′′ of the interactive television system 10 .
  • the asset propagation process includes a process 100 that ranks potential propagations and a process 110 that selects which asset propagations to perform.
  • the process 100 evaluates the TPP of potential asset propagations (step 102 ).
  • a potential asset propagation identifies an asset and a target media cluster 40 , 40 ′, 40 ′′ to which the identified asset can be propagated.
  • the target cluster 40 , 40 ′, 40 ′′ does not already store a replica of the asset.
  • the process 100 ranks the set of potential asset propagations in a list (step 104 ). Potential asset propagations with higher TPPs are ranked higher and correspond to propagations predicted to provide larger increases in the economic value of the entire collection of replicas of assets stored on media clusters 40 , 40 ′, 40 ′′.
  • the ranking process 100 also assigns a retention value (RV) to each replica of an asset (step 106 ).
  • the assigned RVs depend both on the asset and on the media cluster 40 , 40 ′, 40 ′′.
  • the retention value, RV indicates the value to the entire system 10 of keeping the associated replica of the asset on the associated media cluster 40 , 40 ′, 40 ′′.
  • the process 100 calculates the RVs of element deletion lists (step 108 ). Element deletion lists, which are described below, are groups of replicas of asset elements that can be deleted together.
  • the ranking process 100 is repeated at regular intervals.
  • a flow chart for a propagation selection process 110 is shown.
  • the process 110 selects the highest ranked potential asset propagation that remains on the ranking list (step 112 ).
  • the selected potential propagation has the largest TPP among potential asset propagations, which have not already been processed.
  • the process determines whether the associated target media cluster 40 , 40 ′, 40 ′′ of the selected potential asset propagation has a suitable storage region for a replica of the asset (step 114 ).
  • a suitable storage area is storage space that is large enough to store a replica of any elements of the asset not already on the target media cluster 40 , 40 ′, 40 ′′, i.e., any missing elements, and that has a total RV that is smaller than the TPP of the selected asset.
  • the process 110 selects the most appropriate region; orders copying of replicas of the missing elements of the selected asset from another media cluster 40 , 40 ′, 40 ′′ onto the most appropriate region; and then updates RVs, ELists, and combinations of ELists of the target cluster 40 , 40 ′, 40 ′′ (step 116 ).
  • the most appropriate region has the smallest total RV value and among such regions the smallest size.
  • the copying replaces existing replicas in the most appropriate region with replicas of the missing elements of the asset being propagated.
  • the process 110 then loops back (step 118 ) to select the remaining asset propagation on the ranking list having the next highest TPP.
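  • The selection loop of process 110 might be sketched as follows in Python; find_region and copy_missing stand in for the region search and copying machinery described above (hypothetical names, not the patent's own implementation):

      def run_propagation_round(ranked_propagations, find_region, copy_missing):
          # walk potential propagations in order of decreasing TPP; for each,
          # look for a deletable region on the target cluster whose total RV
          # is below the TPP and whose size fits the asset's missing elements
          for tpp, asset, cluster in sorted(ranked_propagations, reverse=True):
              region = find_region(cluster, asset, tpp)
              if region is not None:
                  copy_missing(asset, cluster, region)
                  # RVs, ELists, and combinations of ELists of the target
                  # cluster would be updated here (step 116)
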
  • FIG. 6 is a flow chart for a process 120 that calculates TPPs of potential asset propagations.
  • the propagation service module 66 determines a global propagation priority (GPP) for a viewing asset, which is available for copying onto the media clusters 40 , 40 ′, 40 ′′ (step 122 ).
  • the GPP is a time-dependent number, e.g., in the range of 0 to 100, that expresses the economic value of making a new replica of the associated asset available to viewers.
  • the process 120 also determines a local propagation priority (LPP) for copying a replica of the asset onto a particular target media cluster 40 , 40 ′, 40 ′′ (step 122 ).
  • the determination of an LPP is performed separately for each target media cluster on which the asset is not already stored. Finally, the process 120 adds the GPP and LPP to obtain the TPP associated with the asset and the particular target media cluster 40 , 40 ′, 40 ′′ (step 126 ).
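  • A short Python sketch of process 120, assuming GPPs and LPPs are held in lookup tables (illustrative names only):

      def rank_potential_propagations(assets, clusters, gpp, lpp, has_replica):
          # TPP = GPP + LPP, evaluated once per (asset, target cluster) pair
          # for clusters that do not already hold a replica of the asset
          ranked = []
          for a in assets:
              for c in clusters:
                  if not has_replica(a, c):
                      ranked.append((gpp[a] + lpp[(a, c)], a, c))
          ranked.sort(reverse=True)  # highest TPP first
          return ranked
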
  • Replication of an asset to a media cluster 40 , 40 ′, 40 ′′ involves copying replicas of the elements of the asset, which are not already present, onto the cluster's video data storage 36 , 36 ′, 36 ′′.
  • replicas of different assets can share replicas of the asset elements.
  • replicas of asset elements already on a media cluster are not recopied onto the cluster during propagation of the asset to the cluster.
  • Copying entails pulling elements of the asset, which are not already on the cluster, from the video data storage 36 , 36 ′, 36 ′′ of another media cluster 40 , 40 ′, 40 ′′ and writing the pulled elements to the video data storage 36 , 36 ′, 36 ′′ of the target media cluster 40 , 40 ′, 40 ′′.
  • the propagation service module 66 updates RVs of the replicas of assets and asset elements on the target cluster 40 , 40 ′, 40 ′′.
  • a process 140 for determining whether to propagate a selected asset to a target media cluster 40 , 40 ′, 40 ′′ is shown.
  • the propagation service module 66 calculates a TPP for a replica of the selected asset on the target media cluster 40 , 40 ′, 40 ′′ (step 142 ).
  • the TPP is the sum of the asset's GPP for propagating a new replica of the asset on the system 10 and the LPP for having a replica of the selected asset on the particular media cluster 40 , 40 ′, 40 ′′.
  • the propagation service module 66 selects a list of regions, e.g., combinations of ELists, of the target media cluster 40 , 40 ′, 40 ′′ that have smaller RVs than the TPP for the selected asset (step 144 ).
  • the propagation service module 66 determines whether any regions on the list have a size sufficient to store replicas of the elements of the selected asset that are not already stored on the media cluster 40 , 40 ′, 40 ′′, i.e., replicas of the missing elements (step 146 ). If at least one such region exists, the propagation service module selects the most appropriate one of such regions (step 147 ).
  • the most appropriate region is a region with the smallest RV and among regions with the smallest RV, the most appropriate region has the smallest size.
  • the propagation service module 66 replaces data in the selected region by replicas of the missing elements of the selected asset (step 148 ).
  • the target region for the copying is the region from the list with the smallest RV and sufficient size to store the elements.
  • the propagation service module 66 protects both source and destination replicas from deletion.
  • the propagation service module 66 identifies a video data storage 36 , 36 ′, 36 ′′ of another media cluster 40 , 40 ′, 40 ′′ to act as a source for the elements being copied.
  • the propagation service module 66 also fixes a minimum transfer rate for the elements being copied and protects the source and target from being overwritten during copying.
  • After ordering the replication of the selected asset, the propagation service module 66 also updates the RVs of replicas of asset elements remaining on the target media cluster 40 , 40 ′, 40 ′′ (step 150 ). Any replicas of asset elements not belonging to a full replica of an asset are updated to have RVs with low values, e.g., the value zero. Replicas of these asset elements will be the first elements removed to provide space for new replicas of assets.
  • if no suitable region exists, the propagation service module 66 does not copy replicas of the missing asset elements to the target media cluster 40 (step 152 ).
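  • The region choice of steps 144-147 can be stated compactly; this Python sketch assumes candidate regions are already summarized by their total RV and data size:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Region:
          rv: float  # total retention value of the replicas it holds
          size: int  # space freed by deleting it, e.g., in megabytes

      def most_appropriate_region(regions, tpp, missing_size):
          # keep regions whose RV is below the asset's TPP and whose size
          # fits the missing elements; among those, prefer the smallest RV
          # and, on ties, the smallest size
          candidates = [r for r in regions
                        if r.rv < tpp and r.size >= missing_size]
          if not candidates:
              return None  # nothing is copied (step 152)
          return min(candidates, key=lambda r: (r.rv, r.size))
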
  • FIG. 8A is a snapshot showing relationships between data objects stored on the video data storage 36 of one media cluster 40 shown in FIG. 1.
  • the data objects include replicas of asset elements A-H and replicas of assets 1 - 5 , which are composed of the replicas of elements A-H.
  • Each replica of an asset 1 - 4 has an associated RV, i.e., a number indicating the asset replica's value with respect to deletion.
  • Each replica of an element A-H has an associated data size and an RV inherited from the assets to which the element belongs.
  • the replicas of elements A-H that belong to an asset are streamed from the video data storage 36 to a viewer in response to a viewing request.
  • the absence of a replica of an element A-H belonging to the replica of the requested asset results in the asset being unstreamable and, thus, unusable for viewers.
  • the deletion of the replica of any element A-H results in each replica of an asset 1 - 5 , to which the deleted replica of the element A-H belonged, ceasing to exist.
  • the RV of a replica of an element is itself equal to a preselected combination of the RVs of the replicas of assets 1 - 5 to which the element A-H belongs, e.g., a sum of the RVs of replicas of such assets.
  • the shared element I has an RV of 30, which is the sum of the RVs of the replicas of assets 1 and 2 to which replica of element I belongs.
  • a replica of a selected element A-H can be deleted or overwritten.
  • Each replica of an element A-H also defines an element deletion list (EList).
  • An EList includes the additional replicas of elements A-H that can be deleted along with the selected replica of an element A-H without making additional replicas of assets of the media cluster 40 disappear.
  • the EList of a selected element includes replicas of all elements A-H that belong only to assets to which the selected replica of an element A-H belongs.
  • the EList for the selected replica of element H includes the replicas of elements H and G.
  • the replica of element G belongs to this EList, because it belongs only to the replicas of assets 4 , 5 to which the selected replica of element H also belongs.
  • Deletion of all replicas of elements A-H of an EList does not produce more loss of RV for the media cluster 40 than the RV loss produced by only deleting the selected replica of element A-H defining the EList.
  • deleting all the replicas of elements in the EList of a selected replica of an element often liberates more space for writing new replicas of assets to the media cluster 40 .
  • FIG. 8B illustrates the ELists 1 ′- 6 ′ for the replicas of elements A-H and replicas of assets 1 - 5 shown in FIG. 8A.
  • an RV and a size can be defined for each EList 1 ′- 6 ′.
  • FIG. 8B shows RV and size of each EList by labels RV/size.
  • the RV of an EList 1 ′- 6 ′ is the preselected combination of the RVs of the replicas of assets 1 - 5 to which the EList's 1 ′- 6 ′ selected replica of an element A-H belongs.
  • the size of an EList 1 ′- 6 ′ is a sum of the data sizes of the replicas of elements A-H that belong to the EList 1 ′- 6 ′.
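  • The EList rules can be made concrete with a small Python sketch; the asset/element layout, RVs, and sizes below are hypothetical, chosen only so that shared element I has an RV of 30 and the EList of element H is {G, H}, as in the examples above:

      # hypothetical composition loosely modeled on FIG. 8A: asset -> elements
      assets = {1: {"A", "B", "I"}, 2: {"C", "I"}, 3: {"D", "E", "F"},
                4: {"F", "G", "H"}, 5: {"G", "H"}}
      asset_rv = {1: 10, 2: 20, 3: 15, 4: 25, 5: 5}
      element_size = {"A": 5, "B": 3, "C": 4, "D": 6, "E": 2,
                      "F": 7, "G": 4, "H": 8, "I": 1}  # arbitrary sizes

      def owners(element):
          return {a for a, els in assets.items() if element in els}

      def element_rv(element):
          # an element inherits the combined (here, summed) RVs of every
          # asset containing it, so shared element "I" gets 10 + 20 = 30
          return sum(asset_rv[a] for a in owners(element))

      def elist(selected):
          # elements that belong only to assets also containing `selected`
          return {e for e in element_size if owners(e) <= owners(selected)}

      def elist_rv_and_size(selected):
          return element_rv(selected), sum(element_size[e] for e in elist(selected))

      print(sorted(elist("H")))  # ['G', 'H'], matching the example
      print(element_rv("I"))     # 30
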
  • FIG. 9A is a table 160 whose entries characterize the ELists 1 ′- 6 ′ of the video data storage 36 , shown in FIGS. 8 A- 8 B.
  • Each entry of table 160 provides an RV, size, and a list of replicas of elements A-I for one of the ELists 1 ′- 6 ′.
  • the last column 162 of the table 160 indicates which replicas of assets 1 - 5 will cease to exist on the associated media cluster 40 if the EList 1 ′- 6 ′ associated with the entry is deleted.
  • FIG. 8C shows the data objects remaining on the video data storage 36 after deletion of EList 3 ′, shown in table 160 .
  • remaining ELists 2 ′′, 3 ′′ have forms that differ from the forms of any ELists 1 ′- 6 ′ prior to the deletion.
  • Table 160 ′ of FIG. 8D characterizes the ELists 1 ′, 2 ′′, 3 ′′, 4 ′ that remain after the above-described deletion.
  • FIG. 9B shows a table 160 ′′ that provides region economic values, i.e., total RVs, and sizes of “combinations of ELists”, which are formed from the ELists 1 ′- 6 ′ shown in FIG. 8B.
  • the region values and sizes are found by combining RVs, e.g., by summing, and summing sizes, respectively, for the ELists 1 ′- 6 ′ belonging to the combinations.
  • the table 160 ′′ lists the “combinations of ELists” in order of increasing retention value up to a value of 50, where 50 is the largest TPP of assets currently under consideration for propagation.
  • the propagation service module 66 produces a table listing “combinations of ELists” for the media cluster 40 .
  • the table lists “combinations of ELists” whose total RVs are smaller than the maximum TPP of any asset being currently considered for propagation to the target cluster 40 .
  • the propagation service module 66 searches the media cluster's table of ELists for combinations of ELists whose combined RVs are smaller than the maximum TPP of any asset considered for propagation to the target media cluster 40 . Limiting searches to this upper bound on RVs reduces the number of “combinations of ELists” for which the propagation service module 66 needs to determine a total RV or region economic value.
  • FIG. 10 is a flow chart for a process 168 that decides whether a target media cluster 40 , 40 ′, 40 ′′ has space for a replica of a new selected asset not presently on the cluster 40 , 40 ′, 40 ′′. From the table listing “combinations of ELists”, the process 168 selects the entries whose RVs are smaller than the TPP of the new selected asset (step 170 ). From these entries, the propagation service module 66 searches for a “combination of ELists” that occupies space large enough to store the replicas of elements of the selected new asset that are not already on the media cluster (step 172 ).
  • the propagation service module 66 selects the “combination of ELists” whose RV is smallest. Among the combinations of ELists with the smallest RVs, the propagation service module 66 selects the combination having the smallest data size, i.e., this combination of ELists defines the most appropriate storage region to delete. The data size must be sufficient to store replicas of elements of the selected new asset not already on the target media cluster 40 , 40 ′, 40 ′′. If the sought “combination of ELists” exists, the propagation service module 66 copies replicas of the elements of the new selected asset that are not already on the media cluster into the region previously occupied by the sought combination of ELists (step 174 ).
  • After performing the copying, the propagation service module 66 updates the RVs of replicas of assets and asset elements on the media cluster 40 , 40 ′, 40 ′′ and the table of ELists (step 176 ). If the sought combination of ELists does not exist, the propagation service module 66 does not copy missing elements of the new selected asset to the target media cluster 40 , 40 ′, 40 ′′ (step 178 ).
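  • A Python sketch of building and searching such a table, assuming RVs and sizes combine by summing as stated above; the enumeration is exponential in general, but the RV cutoff keeps the table small in practice:

      from itertools import combinations

      def elist_combinations(elists, max_tpp):
          # elists: list of (rv, size) pairs; keep only "combinations of
          # ELists" whose combined RV stays below the largest TPP currently
          # under consideration, sorted by RV and then size (as in FIG. 9B)
          table = []
          for r in range(1, len(elists) + 1):
              for combo in combinations(elists, r):
                  rv = sum(rv_ for rv_, _ in combo)
                  if rv < max_tpp:
                      size = sum(size_ for _, size_ in combo)
                      table.append((rv, size, combo))
          table.sort(key=lambda row: (row[0], row[1]))
          return table

      def region_for(table, tpp, missing_size):
          # process 168: the first qualifying entry of the sorted table is
          # the most appropriate region (smallest RV, then smallest size)
          for rv, size, combo in table:
              if rv < tpp and size >= missing_size:
                  return combo
          return None
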
  • the TPP for replicating an asset is a sum of a global propagation priority (GPP) and a local propagation priority (LPP).
  • the GPP is independent of target media cluster 40 , 40 ′, 40 ′′.
  • the LPP depends on the target media cluster 40 , 40 ′, 40 ′′.
  • Each GPP is a sum of several components, which may take values from 0 to 100. Each component may separately vary in time thereby changing the probability of propagating a replica of the asset to new media clusters 40 , 40 ′, 40 ′′.
  • FIG. 11 shows a flow chart for a process 180 that initially defines and subsequently updates the GPP of a new asset.
  • the process 180 assigns the new asset to a usage class of similar assets (step 182 ). Membership in usage classes may be based on an asset's genre, e.g., a subject classification such as newly released movies or sports events, or may be based on other criteria such as anticipated popularity.
  • the process 180 assigns an initial GPP to the new asset based, in part, on the usage class of which the asset is a member (step 184 ).
  • Media clusters 40 , 40 ′, 40 ′′ having replicas of assets report usage data to the connection management module 68 , which reports the data to the propagation service module 66 . Using the data, the propagation service module 66 recalculates the GPP of the asset.
  • because the updated GPP partially determines to which media clusters 40 , 40 ′, 40 ′′ an asset propagates, the distribution of the asset on the media clusters 40 , 40 ′, 40 ′′ changes in response to usage data fed back to the propagation service module 66 .
  • the GPP is adjusted to reflect, in part, relative viewer requests for the asset as compared to viewer requests for other assets.
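  • A hedged Python sketch of this part of process 180; the patent does not give an update formula, so the relative-demand scaling below is purely an assumption:

      class GlobalPriority:
          def __init__(self, usage_class_contribution, other_components=0.0):
              # the usage class supplies part of the initial GPP (step 184)
              self.value = min(100.0, usage_class_contribution + other_components)

          def update(self, asset_requests, total_requests):
              # reflect demand for this asset relative to all assets,
              # scaled into the 0-100 range used for GPP components
              share = asset_requests / max(total_requests, 1)
              self.value = max(0.0, min(100.0, 100.0 * share))
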
  • the propagation service module 66 uses usage data to update the value of the usage pattern of the usage class of which the asset is a member (step 190 ).
  • the viewer usage data may be determined from values of counters 162 , shown in FIG. 3, associated with individual assets.
  • the counters 162 record demands for individual ones of the assets. The usage data accumulated by these counters 162 is described below.
  • One of the counters 162 measures use of each asset during specific times of the week.
  • a set of accumulators is maintained to indicate use of each asset during every 2-hour interval of the week. For example, one of these accumulators may store data indicating that the Tuesday night news is most requested on Wednesday morning.
  • the accumulator corresponding to the current time of week is one of the counters 162 .
  • the usage data for a usage class, as described above, for a new asset is maintained by an analogous set of counters and accumulators.
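  • A sketch of such a set of accumulators in Python; the 2-hour granularity gives 84 slots per week:

      SLOTS = 7 * 24 // 2  # one accumulator per 2-hour interval of the week

      class TimeOfWeekUsage:
          # lets the system learn, e.g., that the Tuesday night news is
          # most requested on Wednesday morning
          def __init__(self):
              self.slots = [0] * SLOTS

          def record(self, weekday, hour):  # weekday: 0 = Monday
              self.slots[(weekday * 24 + hour) // 2] += 1
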
  • Usage data is accumulated on short-term viewer demand, medium-term viewer demand, total number of viewer requests, and last-request-time for each asset.
  • the counters 162 (FIG. 3) accumulate usage data, which are associated with individual replicas of assets on media clusters 40 , 40 ′, 40 ′′.
  • Another of the counters 162 measures the “short-term demand” for an associated asset by counting each viewer request for the asset.
  • the propagation service module 66 may decrement this counter 162 by a fixed amount every few hours or perform another counter correction to make the counter's total count indicative of the number of “recent” requests. The count is never decremented to negative values. If the accumulated count is high, the propagation priority, i.e., the value of putting another replica of the asset onto a media cluster 40 , is high, and the propagation service module 66 increases the asset's TPP.
  • the medium-term demand for each asset is measured by seven of the counters 162 , which accumulate numbers of demands for the asset over weeklong periods. The period of each counter ends on a different day. For each counter, the count from the present week is compared to the count from the same counter for the last week. A decrease in this count indicates a declining interest for the asset, and the propagation service module 66 reduces the GPP of the asset in response to such decreases.
  • Other counters 162 measure the total number of requests and time of the last request for each asset. These counts track popularities of assets. The total number of requests may be used to update the GPP of the asset to generally reflect its popularity. The lengths of time since last request for different assets are compared to determine relative asset popularities.
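  • The remaining counters might look like the following Python sketch; the decay handling and week rollover are assumptions, since the patent describes only their intent:

      class AssetUsageCounters:
          def __init__(self):
              self.short_term = 0
              self.this_week = [0] * 7  # one week-long window per ending day
              self.last_week = [0] * 7
              self.total = 0
              self.last_request = None

          def record_request(self, now):
              self.short_term += 1
              self.total += 1
              self.last_request = now
              for i in range(7):
                  self.this_week[i] += 1

          def decay_short_term(self, amount):
              # periodic correction so the count reflects "recent" requests;
              # the count is never decremented below zero
              self.short_term = max(0, self.short_term - amount)

          def close_week(self, weekday):
              # the window ending today is compared against the same window
              # a week earlier; a decrease signals declining interest and a
              # lower GPP for the asset
              declining = self.this_week[weekday] < self.last_week[weekday]
              self.last_week[weekday] = self.this_week[weekday]
              self.this_week[weekday] = 0
              return declining
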
  • the GPP of an asset may depend on other components.
  • One component measures whether an asset is a rapid mover, e.g., an asset whose usage changes suddenly.
  • Another component raises the GPP of assets for which only one replica exists. This favors generation of a second replica, which is valuable to the system for avoiding failures.
  • Another component raises the GPP of assets whose one or more replicas are only accessible through heavily loaded delivery networks. This favors adding a second replica, which can offset unavailability of the asset caused by other video streaming traffic.
  • the LPP assigned to a replica of an asset is also a sum of several components, but the components depend on properties of a server's local environment. Each of the components may take a value in a preselected range, e.g., 0 to 100.
  • the components contributing to LPP cause the propagation service module 66 to distribute replicas of assets to media clusters 40 in a manner that accounts, in part, for local viewing preferences.
  • One component of LPPs depends on the classification of an asset, e.g., genre and language. This component causes distributions of replicas of assets to accord with local preferences of viewers and may be updated by historical viewing data. Counters 162 of the propagation service module 66 accumulate numbers of viewer requests for various classes of assets for use in determining the value of this component.
  • Another component of LPPs depends on whether multiple replicas of the asset are stored on the set of media clusters 40 , 40 ′, 40 ′′. This component is high for clusters 40 , 40 ′, 40 ′′ not storing a replica of the asset if a replica of the asset is only on one cluster 40 , 40 ′, 40 ′′. This component stimulates the propagation service module 66 to replicate each asset present on one media cluster 40 , 40 ′, 40 ′′ onto a second media cluster 40 , 40 ′, 40 ′′. The second replica helps to avoid delivery failures caused by hardware failures.
  • Another component of LPPs depends on activity levels of media clusters 40 , increasing the LPP for replicas of new assets located only on other media clusters 40 , 40 ′, 40 ′′ that are heavily loaded.
  • the component causes the propagation service module 66 to copy a replica of the asset to a less busy media cluster 40 , 40 ′, 40 ′′ if current replicas of the asset are on media clusters 40 operating at near capacity.
  • the propagation service module 66 uses delivery pathway data from the connection manager 68 to determine whether media clusters 40 , 40 ′, 40 ′′ are operating at near capacity.
  • Another component of LPPs depends on activity levels of delivery pathways. This component causes the propagation service module 66 to copy a replica of an asset onto a media cluster 40 that is connected to a node group 20 , 20 ′, 20 ′′, 20 ′′′ by a less burdened delivery pathway if the presently usable delivery pathways are near capacity.
  • the loading of each delivery pathway is determined by the connection manager 68 , which provides an abstracted view of the delivery pathways to the propagation service module 66 .
  • Another component of LPPs causes the propagation service module 66 to copy a replica of an asset to a new media cluster 40 , 40 ′, 40 ′′ if present replicas are inaccessible to some users.
  • the inaccessibility may result from the absence of delivery pathways between the clusters 40 , 40 ′, 40 ′′ presently storing a replica of the asset and the node groups 20 , 20 ′, 20 ′′, 20 ′′′ without access.
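  • Summing clamped components could look like this Python sketch; the component names and values are illustrative, not taken from the patent:

      def local_propagation_priority(components):
          # LPP is a sum of components, each confined to a preselected
          # range (0 to 100 here)
          return sum(max(0.0, min(100.0, v)) for v in components.values())

      lpp = local_propagation_priority({
          "local_classification_fit": 40.0,  # genre/language preferences
          "single_replica_bonus": 100.0,     # only one replica exists
          "source_cluster_load": 25.0,       # current replicas are busy
          "pathway_load": 10.0,              # usable pathways near capacity
          "inaccessibility": 0.0})           # some node groups lack access
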
  • other historical usage data is used to set the values of the GPPs and LPPs of potential propagations.
  • the propagation of viewing assets to media clusters 40 , 40 ′, 40 ′′ is calculated to increase the total economic value of the assets to the system 10 , e.g., by increasing viewer payments for movie rentals.
  • the propagation priorities assigned to replicas of assets are updated to follow the changing demands.
  • Asset propagation evolves the distribution of replicas of the assets on the media clusters 40 , 40 ′, 40 ′′ in response to the updates to GPP and LPP.
  • changes in viewing preferences automatically induce an evolution in the distribution of viewing assets that, at least partially, follows the viewing preferences.
  • the retention values (RVs) assigned to replicas of assets and to replicas of asset elements, which are both stored on the media clusters 40 , 40 ′, 40 ′′, are calculated and updated through processes analogous to the processes used to determine the TPP.
  • the RV may be a sum of a global value and a local value.
  • the global value indicates an economic value associated with retaining the present set of replicas of the asset on the entire collection of media clusters 40 , 40 ′, 40 ′′.
  • the local value indicates an economic value associated with retaining the replica of the asset on a particular associated media cluster 40 , 40 ′, 40 ′′.
  • Both the local and global values may be sums of components of the same types as the components contributing to LPP and GPP, but coefficients may vary.
  • the global and local values may depend on collected usage data and be assigned to usage classes that contribute to defining initial values.

Abstract

A process propagates viewing assets on a system of video storages. The process includes copying a missing portion of a replica of a selected viewing asset onto a target video server. The act of copying is responsive to determining that a priority to propagate the selected asset to the target server is higher than a retention value of a replica of one or more viewing assets stored on the target server.

Description

    TECHNICAL FIELD
  • This invention relates to interactive television. [0001]
  • BACKGROUND
  • Interactive television systems provide viewers with network access to video servers that offer a large plurality of viewing selections. To make a viewing selection, a viewer looks at a menu transmitted by the system and selects a viewing asset. The viewer issues a request for the selected asset through a network that connects his or her television to the interactive television system. In response to receiving the viewer's request, the interactive television system uses the network to stream the requested asset from one of the video servers to the viewer's television. The collection of video data objects and related data objects such as posters, descriptions, and preview objects, may together form a complete viewing asset. [0002]
  • In an interactive television system, the selection of available viewing assets is preferably large. Furthermore, the viewing assets themselves often include large video data objects. The desire to offer large selections to viewers means that such systems need very substantial data storage resources for viewing assets. [0003]
  • An interactive television system may store a large amount of asset data on an array of servers. Typically, only a subset of the servers is accessible to a single viewer, because one server cannot serve every viewer. In such a system, an individual asset may have to reside on several servers so that different viewers can access the asset. [0004]
  • The collection of accessible assets may also change over time. Changes to the collection of assets may respond to asset popularity shifts and/or viewing population changes. [0005]
  • SUMMARY
  • In one aspect, the invention features a process of propagating viewing assets on a system of video storages. [0006]
  • The process includes copying a missing portion of a replica of a selected viewing asset onto a target video server. The act of copying is responsive to determining that a priority to propagate the selected asset to the target server is higher than a retention value of a replica of one or more viewing assets stored on the target server. [0007]
  • In some embodiments, the act of copying includes writing the missing portion of the replica of the selected asset onto a storage region of the target video server that previously stored a portion of the replica of one or more viewing assets. The copying may include reading the missing portion from video servers that serve viewers. [0008]
  • In some embodiments, the act of selecting a portion of the replica of one or more viewing assets is responsive to the replica of one or more viewing assets having a data size at least as large as a data size of the missing portion of the selected asset. [0009]
  • In some embodiments, the process also includes assigning priorities to propagate to a plurality of viewing assets, ranking the viewing assets according to the assigned priorities, and choosing the selected asset for copying in response to the selected asset being ranked above a preselected minimum rank. [0010]
  • In a second aspect, the invention features a process for propagating digital viewing assets to video servers. The process includes assigning to each of a plurality of digital viewing assets a priority to propagate the asset onto video servers, ranking the assets based on the assigned priorities, and propagating one of the assets to one or more selected video servers. The act of propagating is responsive to the one of the assets having a preselected minimum ranking. [0011]
  • In some embodiments, the act of assigning includes assigning a viewing asset to a usage class. The usage class provides a contribution to initial values of the priorities to propagate assets assigned to the class. [0012]
  • In some embodiments, the process further includes accumulating usage data on individual assets stored on the video servers and updating the priorities to propagate based on the usage data. The usage data may include numbers of viewer requests during predetermined time periods and differences between numbers of viewer requests during earlier and later predetermined periods. [0013]
  • In a third aspect, the invention features a process for propagating viewing assets onto a video storage. The process includes assigning propagation priorities to viewing assets, constructing a table of element deletion lists for a target video storage, and selecting a group of element deletion lists from the table. The group has a data size at least as large as a data size of a portion of a replica of another asset not stored on the target storage. The process also includes copying the portion of a replica of the other asset onto the target video storage in response to the propagation priority of the other asset being larger than a retention value of the group. [0014]
  • In some embodiments, the act of copying includes writing the portion onto a region of the target video storage previously storing the group. [0015]
  • In some embodiments, the act of selecting a group includes constructing a table listing sets of element deletion lists with lower retention value than the priority of the other asset. The act of selecting includes picking one of the lists having a data size at least as large as the portion of the replica of the other asset. [0016]
  • In a fourth aspect, the invention features a process of distributing viewing assets to viewers. The process includes assigning priorities to assets, selecting a video server, and copying one of the assets onto the video server. The priorities are priorities for distributing the associated assets to video servers accessible to viewers. The act of copying is responsive to determining that the priority associated with the one of the assets is greater than a retention value associated with a set of viewing assets having replicas on the video server. The replicas occupy enough space to store the one of the assets. [0017]
  • In some embodiments, the copying includes searching for one or more sets of replicas of asset elements to delete from a table of element deletion lists. [0018]
  • In some embodiments, the process further includes accumulating data on usage of individual ones of the assets. The act of updating the retention values is then based at least in part on the accumulated data. [0019]
  • In a fifth aspect, the invention features an interactive television system. The system includes a network or bus, a plurality of video servers to store digital replicas of viewing assets for viewers, and a control unit connected to the video servers. The video servers are connected by the network or bus. The control unit orders copying of a missing portion of a replica of a selected asset to one of the video servers if a priority to propagate the selected asset onto the one of the video servers is higher than a value of retaining a replica of one or more other assets already stored on that server. [0020]
  • In some embodiments, the system also includes a plurality of distribution networks to provide channels for delivering viewing assets to viewer televisions. Each distribution network connects to a portion of the video servers. [0021]
  • In a sixth aspect, the invention features a process for propagating digital viewing assets onto video servers. The process includes propagating a plurality of viewing assets onto video servers based on priorities to propagate, accumulating usage data on individual assets stored on the video servers, and updating the priorities based on the usage data. The priorities provide a ranking of the assets. [0022]
  • In some embodiments, the process includes assigning a viewing asset to a usage class that provides a portion of an initial value for the priorities to propagate the assets assigned to the class. The process may also include calculating the priority to propagate a selected one of the assets onto one of the video servers. The calculation may be based on a global priority to propagate the selected one of the assets and a local priority to propagate a replica of the selected one of the assets onto the one of the video servers. The global priority may be based in part on a counter value that measures usage of the selected one of the assets. The local priority may be based in part on a bandwidth for streaming the selected one of the assets from the one of the video servers to a set of viewers. [0023]
  • In various aspects, the invention features a data storage media storing a computer executable program of instructions for performing one or more of the above-mentioned processes. [0024]
  • Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.[0025]
  • DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram of an interactive television system; [0026]
  • FIG. 2 illustrates asset delivery pathways of the media clusters shown in FIG. 1; [0027]
  • FIG. 3 is a high-level block diagram of software processes that manage and control the interactive television system of FIG. 1; [0028]
  • FIG. 4 illustrates interactions between a propagation service process and other processes of FIG. 3; [0029]
  • FIG. 5A is a flow chart for a process that ranks viewing assets and evaluates retention values of replicas of asset elements and entries in element deletion lists; [0030]
  • FIG. 5B is a flow chart for a process that decides whether to propagate a replica of an asset to a media cluster; [0031]
  • FIG. 6 is a flow chart illustrating a process that calculates total propagation priorities of replicas of assets; [0032]
  • FIG. 7 is a flow chart illustrating a process that determines whether to copy a replica of a viewing asset onto a media cluster; [0033]
  • FIG. 8A illustrates relations between replicas of viewing assets and replicas of asset elements stored on a media cluster; [0034]
  • FIG. 8B illustrates ELists for the replicas of viewing assets and asset elements shown in FIG. 8A; [0035]
  • FIG. 8C illustrates replicas of assets, replicas of asset elements, and ELists that remain on the media cluster of FIG. 8A after deletion of the asset elements of one EList; [0036]
  • FIG. 8D is a table enumerating the ELists remaining in FIG. 8C; [0037]
  • FIG. 9A is a table showing the ELists shown in FIG. 8B; [0038]
  • FIG. 9B is a table that enumerates “combinations of ELists” from FIG. 9A with retention values below 50; [0039]
  • FIG. 10 is a flow chart illustrating a process that uses ELists to free space for propagating replicas of new assets; and [0040]
  • FIG. 11 is a flow chart illustrating a process that initially defines and subsequently updates a new asset's global propagation priority. [0041]
  • Like reference symbols in the drawings indicate like elements.[0042]
  • DETAILED DESCRIPTION
  • Interactive Television System [0043]
  • FIG. 1 shows a networked system 10 that provides interactive television to subscribing viewers. The system 10 includes a set of control units 16. The control units 16 connect to each other through a multi-channel communications bus 12. In some embodiments, the bus 12 may be a network. The bus 12 also couples a plurality of media clusters 40, 40′, 40″, which store replicas of viewing assets for viewers. The media clusters 40, 40′, 40″ couple to node groups of local viewers 20, 20′, 20″, 20″′ through hybrid fiber coaxial (HFC) networks 22, which function as broadband or multi-channel broadcast networks for the local node groups 20, 20′, 20″, 20″′. The HFC networks 22 carry viewing assets from the media clusters 40, 40′, 40″ to interactive viewer televisions 24 and carry viewing status data and viewing requests from the televisions 24 up to the media clusters 40, 40′, 40″ and control units 16. [0044]
  • The arrangement of control units 16 and media clusters 40, 40′, 40″ may serve a large number of viewers. For example, each media cluster 40, 40′, 40″ may serve from about ten thousand to fifty thousand local viewers. The connectivity between individual media clusters 40, 40′, 40″ and individual node groups 20, 20′, 20″, 20″′ may differ from cluster to cluster. For example, the media cluster 40 couples to node groups 20 and 20′ while the media cluster 40″ couples to node groups 20″ and 20″′. Thus, different viewers may be served by different subsets of the media clusters 40, 40′, 40″. [0045]
  • Each interactive viewer television 24 includes a set top box 26 that connects a normal television 25 to the local HFC network 22 and provides an interface for communicating with a portion of the media clusters 40, 40′, 40″ and control units 16. The set top boxes 26 receive viewing assets from the associated HFC networks 22, decode the received assets, and display the decoded assets on the normal television 25. In some embodiments, the set top boxes 26 may be integrated into the televisions 25. The set top boxes 26 also receive menus of available viewing assets, display the menus on the normal televisions 25, and transmit viewer requests for viewing assets and streaming-control commands to the control units 16. The streaming-control commands implemented by the set top boxes 26 may include stop, pause, fast-forward, and reverse. [0046]
  • The viewing assets are sequences of encoded digital files for video, text, audio, graphic, and/or interactive control applications. Each file of the sequence, for a viewing asset, will be referred to as an asset element. The displayed viewing assets may, for example, be movies, newscasts, shopping emissions or interfaces, posters, or audio presentations. [0047]
  • Each control unit 16 includes a computer 28 and a data storage media 30, e.g., a hard drive or compact disk, for storing software processes executable by the computer. The control units 16 manage viewing assets on the media clusters 40, 40′, 40″ and control delivery of these assets to viewers. The management of assets includes propagating assets among the media clusters 40, 40′, 40″ and accumulating asset usage data to ensure that the propagation of assets to the media clusters 40, 40′, 40″ anticipates viewer demand. Controlling asset delivery includes receiving viewing requests from individual interactive televisions 24 and assigning asset delivery pathways from the media clusters 40, 40′, 40″ to the node groups 20, 20′, 20″, 20″′ corresponding to requesting viewers. [0048]
  • Each media cluster 40, 40′, 40″ stores replicas of viewing assets that are currently available to the node groups 20, 20′, 20″, 20″′ connected to the media cluster 40, 40′, 40″. The selection of viewing assets varies from media cluster to media cluster 40, 40′, 40″. The media clusters 40, 40′, 40″ stream viewing assets to associated local node groups 20, 20′, 20″, 20″′ in response to control commands received from the control units 16. The control units 16 send such control commands in response to receiving requests for viewing assets from the various node groups 20, 20′, 20″, 20″′. [0049]
  • Each media cluster 40, 40′, 40″ has space for storing a limited number of replicas of viewing assets. The media clusters 40, 40′, 40″ store assets for meeting present and near-future viewing demands. To handle changing viewing demands, the control units 16 regularly update the asset selection on the media clusters 40, 40′, 40″ by copying replicas of new viewing assets to the media clusters 40, 40′, 40″ and/or by copying viewing assets between different media clusters 40, 40′, 40″. To propagate an asset, a control unit 16 first copies a replica of a new asset to a preselected one of the media clusters 40, 40′, 40″ and then orders cluster-to-cluster copying to propagate the asset to other clusters 40, 40′, 40″. The control units 16 update the viewing asset selection on the media clusters 40, 40′, 40″ to maximize the economic value that the entire asset collection provides to the system 10, as is explained below. [0050]
  • Though the system 10 of FIG. 1 is not hierarchical, other embodiments may use hierarchical organizations of media clusters and/or control units, e.g., in master-slave relationships. In those embodiments, master servers control slave servers and provide for larger video storages. Several hierarchical organizations of media clusters or video servers are described in U.S. Pat. No. 5,862,312 and U.S. patent application Ser. No. 09/293,011, filed Apr. 16, 1999, which are both incorporated by reference herein. [0051]
  • FIG. 2 shows media clusters 40, 40′, 40″ in more detail. Each media cluster 40, 40′, 40″ has several video data servers 42, 42′, 42″ that locally interconnect through a network or bus 34, 34′, 34″. The servers 42, 42′, 42″ of the same cluster 40, 40′, 40″ share a data storage 36, 36′, 36″, which may be physically lumped or spread over the individual local servers 42, 42′, 42″. The servers 42, 42′, 42″ deliver viewing assets from the cluster video storages 36, 36′, 36″ to node groups 20, 20′, 20″, 20″′ connected to the associated media cluster 40, 40′, 40″. The video data storages 36, 36′, 36″ store replicas of the viewing assets, which the media cluster 40, 40′, 40″ can deliver to local node groups 20, 20′, 20″, 20″′. The selection of replicas of assets stored on different clusters 40, 40′, 40″ may differ so that different media clusters 40, 40′, 40″ do not generally provide identical viewing selections to the locally connected node groups 20, 20′, 20″, 20″′. [0052]
  • FIG. 2 also shows some of the delivery pathways between various media clusters 40, 40′, 40″ and local node groups 20, 20′, 20″, 20″′. Each delivery pathway includes one of the servers 42, 42′, 42″, an output card of the server, a quadrature amplitude modulator (QAM) 50, a combiner 52, 52′, 52″, 52″′, and an HFC 22 that connects to the destination node group 20, 20′, 20″, 20″′. The servers 42, 42′, 42″ have one or more output cards, which produce streams of digital data packets for transporting viewing assets to the node groups 20, 20′, 20″, 20″′. The output streams are received by QAMs 50 that connect to the output cards. Each QAM 50 reads headers of received packets and retransmits the packets towards the node group 20, 20′, 20″, 20″′ served by the QAM 50 if that node group 20, 20′, 20″, 20″′ is a destination of the packet. The retransmitted packets are received by combiners 52, 52′, 52″, 52″′, which send broadband transmissions from several QAMs 50 to the associated node groups 20, 20′, 20″, 20″′ via the associated HFC 22. [0053]
  • Referring again to FIG. 1, the different media clusters 40, 40′, 40″ may have different delivery pathways to the node groups 20, 20′, 20″, 20″′. The control of delivery of viewing content over these delivery pathways and the management of replicas of assets stored on the media clusters 40, 40′, 40″ are both performed by the control units 16. The control units 16 execute processes that perform these functions and are able to manage in excess of ten thousand assets and a variety of control application types. An individual control unit 16 may perform the above functions for some or all of the media clusters 40, 40′, 40″. [0054]
  • Processes Controlling Assets [0055]
  • Assets are the smallest viewable objects that can be requested by or streamed to viewers. Replicas of assets can be activated or deactivated on the media clusters 40, 40′, 40″. An asset may include several elements, e.g., consecutive portions of a movie, a movie poster, and a movie trailer. The elements of an asset are individual files and are the smallest data objects that can be copied to or deleted from a media cluster 40, 40′, 40″. Physical copies of assets and asset elements on a particular media cluster 40, 40′, 40″ are referred to as replicas of assets and replicas of asset elements, respectively. [0056]
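  • The asset/element relationships just described can be summarized with a small data model. The sketch below is an editorial illustration only, not code from the patent; the names Element, Asset, and MediaCluster and the Python representation are assumptions.

      # Minimal sketch of the asset/element model; all identifiers are
      # illustrative assumptions, not names used by the patent.
      from dataclasses import dataclass, field

      @dataclass(frozen=True)
      class Element:
          """Smallest copyable or deletable object: one file of an asset."""
          name: str
          size: int  # data size in bytes

      @dataclass(frozen=True)
      class Asset:
          """Smallest viewable object; playable only if every element is present."""
          name: str
          elements: tuple  # Elements, e.g., movie parts, poster, trailer

      @dataclass
      class MediaCluster:
          """Stores replicas of elements; replicas of assets may share them."""
          name: str
          stored: set = field(default_factory=set)

          def can_play(self, asset):
              # A replica of an asset exists only if all its elements do.
              return all(e in self.stored for e in asset.elements)

          def missing_elements(self, asset):
              # Only elements absent from the cluster are copied during propagation.
              return [e for e in asset.elements if e not in self.stored]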
  • FIG. 3 is a block diagram showing software processes executed by the computers 28 of the control units 16 to manage and control viewing assets and asset usage in the interactive television system 10 shown in FIG. 1. On each control unit 16, the processes include modules 62, 64, 66, 68, 70, 72, 74, 76, which perform asset management, directory service, propagation services, connection management, streaming services, movies-on-demand (MOD) control application services, MOD billing services, and program guide services. In some embodiments, other processes may provide control application services and billing services for content types, such as news-on-demand (NOD), interactive shopping, and interactive games (not shown). On each media cluster 40, 40′, 40″, resident processes include an instance of a media cluster agent 78. On the set top boxes 26, resident processes include one or more application agents 80, e.g., a MOD application agent. [0057]
  • FIG. 4 illustrates functional relationships between the software process modules 62, 64, 66, 68, 70, 78 that manage and control assets. These relationships are further described below. [0058]
  • The asset management module 62 provides an interface for receiving new viewing assets into the control unit 16. The interface may, e.g., support automated downloads of viewing assets from a distribution cable (not shown) and/or manual uploads of viewing assets under control of a user operating a graphical user interface. For each newly received asset, the asset management module 62 creates an automated work queue 63 that controls the life cycle of the associated asset. The work queues 63 control the life cycle transitions performed by the propagation service (PS) module 66. An asset's life cycle may include states such as received; encoded; stored on some media clusters 40, 40′, 40″; activated; deactivated; and deleted. [0059]
  • The asset management module 62 accepts several types of data objects including encoded data files, e.g., encoded according to the Moving Picture Experts Group (MPEG) standards, nonencoded data files, executable applications, and metadata associated with other assets. The received data assets may be for video, audio, text, graphics, or interactive applications. [0060]
  • The directory service module 64 provides assets with filenames arranged in hierarchical namespaces. The directory service module 64 keeps information about the element composition of assets and metadata associated with assets. [0061]
  • In various embodiments, the control applications may include movies on demand (MOD), television on demand (TVOD), news on demand (NOD), interactive shopping, and others. [0062]
  • The propagation service module 66 controls copying of assets to and deleting of assets from individual media clusters 40, 40′, 40″. A media cluster 40, 40′, 40″ needs a replica of each element of an asset to be able to play the asset to viewers. But, replicas of different assets may share replicas of some asset elements stored on the same media cluster 40, 40′, 40″. [0063]
  • The propagation service module 66 orders copying of new assets to a preselected one of the media clusters, e.g., cluster 40. The propagation service module 66 also orders copying of the asset replica to other ones of the media clusters, e.g., clusters 40′, 40″, to meet anticipated user demand for the assets, e.g., demands that are upcoming in the next few hours. The propagation service module 66 also provides location information on active replicas of assets to the other process modules 64, 68, 70. [0064]
  • The connection manager module 68 selects pathways for streaming viewing assets from media clusters 40, 40′, 40″ storing replicas of the viewing assets to viewers requesting the viewing assets. To optimize streaming, the connection manager module 68 uses an abstract representation of each potential delivery pathway. The representations indicate throughputs and bottlenecks along each pathway. The connection manager module 68 selects pathways with the highest available throughputs, i.e., the least restrictive bottlenecks, as the pathways for delivering assets to requesting viewers. [0065]
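  • The pathway choice just described reduces to a max-min computation. The sketch below is an editorial illustration, not the patent's implementation; representing a pathway as a list of per-hop spare bandwidths is an assumption.

      # Sketch: pick the delivery pathway with the most available throughput,
      # where a pathway's usable throughput is capped by its tightest bottleneck.
      def select_pathway(pathways):
          """pathways: iterable of lists of per-hop spare bandwidths (assumed)."""
          def available(path):
              return min(path)  # the least restrictive bottleneck governs
          return max(pathways, key=available, default=None)

  For example, select_pathway([[10, 4, 8], [6, 6, 6]]) returns [6, 6, 6], whose bottleneck (6) beats the first pathway's bottleneck (4).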
  • The connection manager module 68 also provides the abstract representation of delivery pathways to the propagation service module 66. This representation indicates available total bandwidths for delivering various viewing assets to local node groups 20. The propagation service module 66 uses this representation to determine when the available bandwidth for delivering an asset to viewers is so diminished that an additional replica of the asset is needed on another media cluster 40, 40′, 40″. The connection manager module 68 provides the representations of delivery pathways between media clusters 40, 40′, 40″ and node groups 20, 20′, 20″, 20″′ to other ones of the software modules. [0066]
  • The connection manager module 68 is also an interface that receives requests for assets from viewer televisions 24 and set top boxes 26. [0067]
  • The streaming service module 70 provides application-independent streaming services to the connection management module 68, control application service modules 72, and media cluster agents 78. The provided services include stream creation, operation, and tear-down of delivery pathways. The streaming service module 70 also has interfaces for controlling media cluster agents 78 that reside on the individual media clusters 40, 40′, 40″. [0068]
  • The media cluster agents 78 copy new replicas of asset elements to and delete old replicas of asset elements from the associated media clusters 40, 40′, 40″ in response to commands or orders from the propagation service module 66. [0069]
  • The MOD application service module 72 resides on the control unit 16 and controls processing of viewer requests to purchase movies and other on-demand video assets. The MOD application service module 72 interacts with an application database 82 and the MOD billing service module 74 to check viewer credit status and to bill virtual video rentals. The MOD application service module 72 can suspend or resume asset streaming to viewers, monitor viewing progress, and record viewer activities in the database 82. [0070]
  • The MOD application client 80 resides on each set top box 26 and provides a viewer with a system navigation interface for requesting a programming guide and requesting assets for viewing. The interface transmits a viewer's requests to one of the control units 16. [0071]
  • The program guide service module 76 provides program guide files to the set top boxes 26, each of which in turn displays a program guide on the attached normal television 25. Viewers can request assets in the program guide for viewing. [0072]
  • For each media cluster 40, 40′, 40″ serviced, the propagation service module 66 generates and regularly updates a table of element deletion lists (ELists). In the table, each EList is identified by a “selected element” belonging to the EList. Each EList indicates a set of replicas of elements that can also be deleted, if the identified selected element is deleted, without incurring additional loss of retention value (RV) from the media cluster 40, 40′, 40″ storing the replicas. [0073]
  • Management of Replicas of Assets on Media Clusters [0074]
  • The propagation service module 66 controls propagations of viewing assets to and among the media clusters 40, 40′, 40″. The propagation service module 66 propagates assets to increase the economic value of the entire collection of assets available to viewers. The economic value of propagating a particular asset to a particular media cluster 40, 40′, 40″ is rated by a total propagation priority (TPP). TPPs enable comparisons of the economic values of potential propagations of assets to particular media clusters 40, 40′, 40″ of the interactive television system 10. [0075]
  • The asset propagation process includes a process 100 that ranks potential propagations and a process 110 that selects which asset propagations to perform. [0076]
  • Referring to FIG. 5A, a flow chart for the ranking process 100 is shown. The process 100 evaluates the TPP of potential asset propagations (step 102). A potential asset propagation identifies an asset and a target media cluster 40, 40′, 40″ to which the identified asset can be propagated. The target cluster 40, 40′, 40″ does not already store a replica of the asset. Using the TPPs, the process 100 ranks the set of potential asset propagations in a list (step 104). Potential asset propagations with higher TPPs are ranked higher and correspond to propagations predicted to provide larger increases in the economic value of the entire collection of replicas of assets stored on media clusters 40, 40′, 40″. [0077]
  • The ranking process 100 also assigns a retention value (RV) to each replica of an asset (step 106). The assigned RVs depend both on the asset and on the media cluster 40, 40′, 40″. The retention value, RV, indicates the value to the entire system 10 of keeping the associated replica of the asset on the associated media cluster 40, 40′, 40″. From the RVs of replicas of assets, the process 100 calculates the RVs of element deletion lists (step 108). Element deletion lists, which are described below, are groups of replicas of asset elements that can be deleted together. The ranking process 100 is repeated at regular intervals. [0078]
  • Referring to FIG. 5B, a flow chart for a propagation selection process 110 is shown. The process 110 selects the highest ranked potential asset propagation that remains on the ranking list (step 112). The selected potential propagation has the largest TPP among potential asset propagations that have not already been processed. The process determines whether the associated target media cluster 40, 40′, 40″ of the selected potential asset propagation has a suitable storage region for a replica of the asset (step 114). A suitable storage region is storage space that is large enough to store a replica of any elements of the asset not already on the target media cluster 40, 40′, 40″, i.e., any missing elements, and that has a total RV that is smaller than the TPP of the selected asset. If a suitable storage region exists, the process 110 selects the most appropriate region; orders copying of replicas of the missing elements of the selected asset from another media cluster 40, 40′, 40″ onto the most appropriate region; and then updates RVs, ELists, and combinations of ELists of the target cluster 40, 40′, 40″ (step 116). The most appropriate region has the smallest total RV and, among such regions, the smallest size. The copying replaces existing replicas in the most appropriate region with missing replicas of the asset being propagated. After considering the selected propagation, the process 110 loops back (118) to select the remaining potential asset propagation on the ranking list having the next highest TPP. [0079]
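  • Taken together, processes 100 and 110 amount to a greedy loop over candidate propagations ordered by TPP. The sketch below is an editorial illustration of that control flow, not the patent's code; the candidate tuples and the injected find_region and copy_missing callables are assumptions (a possible body for find_region is sketched in the Element Deletion Lists section below), and missing_elements reuses the MediaCluster sketch above.

      # Sketch of ranking process 100 plus selection process 110.
      def propagation_round(candidates, find_region, copy_missing):
          """candidates: iterable of (tpp, asset, cluster) tuples; find_region
          and copy_missing stand in for steps 114 and 116 (assumed helpers)."""
          # Process 100: rank candidate propagations by total propagation priority.
          for tpp, asset, cluster in sorted(candidates, key=lambda c: c[0], reverse=True):
              missing = cluster.missing_elements(asset)
              need = sum(e.size for e in missing)
              # Step 114: a suitable region must fit the missing elements and
              # have a total retention value smaller than the candidate's TPP.
              region = find_region(cluster, max_rv=tpp, min_size=need)
              if region is not None:
                  # Step 116: overwrite the region with the missing elements,
                  # then refresh RVs, ELists, and combinations of ELists.
                  copy_missing(asset, missing, region, cluster)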
  • FIG. 6 is a flow chart for a process 120 that calculates TPPs of potential asset propagations. The propagation service module 66 determines a global propagation priority (GPP) for a viewing asset, which is available for copying onto the media clusters 40, 40′, 40″ (step 122). The GPP is a time-dependent number, e.g., in the range of 0 to 100, that expresses the economic value of making a new replica of the associated asset available to viewers. The process 120 also determines a local propagation priority (LPP) for copying a replica of the asset onto a particular target media cluster 40, 40′, 40″ (step 124). The determination of an LPP is performed separately for each target media cluster on which the asset is not already stored. Finally, the process 120 adds the GPP and LPP to obtain the TPP associated with the asset and the particular target media cluster 40, 40′, 40″ (step 126). [0080]
  • Replication of an asset to a media cluster 40, 40′, 40″ involves copying replicas of the elements of the asset, which are not already present, onto the cluster's video data storage 36, 36′, 36″. On the same media cluster 40, 40′, 40″, replicas of different assets can share replicas of asset elements. Thus, replicas of asset elements already on a media cluster are not recopied onto the cluster during propagation of the asset to the cluster. Copying entails pulling elements of the asset, which are not already on the cluster, from the video data storage 36, 36′, 36″ of another media cluster 40, 40′, 40″ and writing the pulled elements to the video data storage 36, 36′, 36″ of the target media cluster 40, 40′, 40″. After deciding to propagate an asset to a media cluster 40, 40′, 40″, the propagation service module 66 updates RVs of the replicas of assets and asset elements on the target cluster 40, 40′, 40″. [0081]
  • Referring to FIG. 7, a process 140 for determining whether to propagate a selected asset to a target media cluster 40, 40′, 40″ is shown. The propagation service module 66 calculates a TPP for a replica of the selected asset on the target media cluster 40, 40′, 40″ (step 142). The TPP is the sum of the asset's GPP for propagating a new replica of the asset on the system 10 and the LPP for having a replica of the selected asset on the particular media cluster 40, 40′, 40″. [0082]
  • The propagation service module 66 selects a list of regions, e.g., combinations of ELists, of the target media cluster 40, 40′, 40″ that have smaller RVs than the TPP for the selected asset (step 144). The propagation service module 66 determines whether any regions on the list have a size sufficient to store replicas of the elements of the selected asset that are not already stored on the media cluster 40, 40′, 40″, i.e., replicas of the missing elements (step 146). If at least one such region exists, the propagation service module selects the most appropriate one of such regions (step 147). The most appropriate region is a region from the list with the smallest RV and, among regions with the smallest RV, the smallest size sufficient to store the missing elements. The propagation service module 66 replaces data in the selected region with replicas of the missing elements of the selected asset (step 148). During copying of the missing asset elements to the target cluster 40, 40′, 40″, the propagation service module 66 protects both source and destination replicas from deletion. [0083]
  • To copy the asset elements, the propagation service module 66 identifies a video data storage 36, 36′, 36″ of another media cluster 40, 40′, 40″ to act as a source for the elements being copied. The propagation service module 66 also fixes a minimum transfer rate for the elements being copied and protects the source and target from being overwritten during copying. [0084]
  • After ordering the replication of the selected asset, the propagation service module 66 also updates the RVs of replicas of asset elements remaining on the target media cluster 40, 40′, 40″ (step 150). Any replicas of asset elements not belonging to a full replica of an asset are updated to have RVs with low values, e.g., the value zero. Replicas of these asset elements will be the first elements removed to provide space for new replicas of assets. [0085]
  • If none of the listed regions has sufficient size to store the replicas of missing elements of the selected asset, the propagation service module 66 does not copy replicas of the missing asset elements to the target media cluster 40 (step 152). [0086]
  • Element Deletion Lists [0087]
  • FIG. 8A is a snapshot showing relationships between data objects stored on the video data storage 36 of one media cluster 40 shown in FIG. 1. The data objects include replicas of asset elements A-H and replicas of assets 1-5, which are composed of the replicas of elements A-H. Each replica of an asset 1-5 has an associated RV, i.e., a number determining the asset replica's value with respect to being deleted. Each replica of an element A-H has an associated data size and an RV inherited from the assets to which the element belongs. [0088]
  • The replicas of elements A-H that belong to an asset are streamed from the media cluster 40 to a viewer in response to a viewing request. Thus, the absence of a replica of an element A-H belonging to the replica of the requested asset results in the asset being not streamable and, thus, unusable for viewers. The deletion of the replica of any element A-H results in each replica of an asset 1-5, to which the deleted replica of the element A-H belonged, ceasing to exist. For this reason, the RV of a replica of an element is itself equal to a preselected combination of the RVs of the replicas of assets 1-5 to which the element A-H belongs, e.g., a sum of the RVs of replicas of such assets. For example, the shared element I has an RV of 30, which is the sum of the RVs of the replicas of assets 1 and 2 to which the replica of element I belongs. [0089]
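  • In code, the inheritance rule just described is a one-line aggregation. The following sketch is illustrative only; the dictionary shapes are assumptions, and the example numbers simply echo the element-I example above (asset RVs of 10 and 20 are made up so that the sum is 30).

      # An element's RV is the combined RV (here, a sum) of every asset
      # replica that would cease to exist if the element were deleted.
      def element_rv(element, asset_elements, asset_rv):
          """asset_elements: asset -> set of elements; asset_rv: asset -> RV."""
          return sum(rv for a, rv in asset_rv.items() if element in asset_elements[a])

      # Echoing the text: element "I" is shared by assets 1 and 2.
      asset_elements = {1: {"I", "A"}, 2: {"I", "B"}}
      asset_rv = {1: 10, 2: 20}  # made-up values summing to 30
      assert element_rv("I", asset_elements, asset_rv) == 30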
  • To free space on the video data storage 36, a replica of a selected element A-H can be deleted or overwritten. Each replica of an element A-H also defines an element deletion list (EList). An EList includes the additional replicas of elements A-H that can be deleted along with the selected replica of an element A-H without making additional replicas of assets of the media cluster 40 disappear. The EList of a selected element includes replicas of all elements A-H that belong “only” to assets to which the selected replica of an element A-H belongs. For example, the EList for the selected replica of element H includes the replicas of elements H and G. The replica of element G belongs to this EList, because it belongs only to the replicas of assets 4, 5 to which the selected replica of element H also belongs. Deletion of all replicas of elements A-H of an EList does not produce more loss of RV for the media cluster 40 than the RV loss produced by only deleting the selected replica of element A-H defining the EList. On the other hand, deleting all the replicas of elements in the EList of a selected replica of an element often liberates more space for writing new replicas of assets to the media cluster 40. [0090]
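  • The membership rule for an EList can be computed directly from the sharing relationships. The function below is an editorial sketch under assumed data shapes, not the patent's implementation; the asset contents in the example are made up beyond what the element-H example states (asset 3 holding element F is hypothetical).

      def elist_for(selected, asset_elements):
          """EList of a selected element: every stored element that belongs
          only to assets that would already be lost by deleting the selected
          element. asset_elements maps asset -> set of elements (assumed)."""
          # Assets that cease to exist once the selected element is deleted.
          doomed = {a for a, elems in asset_elements.items() if selected in elems}
          stored = set().union(*asset_elements.values())
          elist = set()
          for e in stored:
              owners = {a for a, elems in asset_elements.items() if e in elems}
              # e may go too if every asset it belongs to is doomed anyway.
              if owners <= doomed:
                  elist.add(e)
          return elist

      # Echoing the text: G belongs only to assets 4 and 5, which both
      # contain H, so the EList for H is {G, H}.
      assert elist_for("H", {3: {"F"}, 4: {"G", "H"}, 5: {"G", "H"}}) == {"G", "H"}

  The EList's RV is then the combined RV of the doomed assets, and its size is the sum of the member elements' data sizes, matching the RV/size labels described for FIG. 8B below.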
  • FIG. 8B illustrates the ELists 1′-6′ for the replicas of elements A-H and replicas of assets 1-5 shown in FIG. 8A. For each EList 1′-6′, an RV and a size can be defined. FIG. 8B shows the RV and size of each EList by labels RV/size. The RV of an EList 1′-6′ is the preselected combination of the RVs of the replicas of assets 1-5 to which the EList's selected replica of an element A-H belongs. The size of an EList 1′-6′ is a sum of the data sizes of the replicas of elements A-H that belong to the EList 1′-6′. [0091]
  • FIG. 9A is a table 160 whose entries characterize the ELists 1′-6′ of the video data storage 36, shown in FIGS. 8A-8B. Each entry of table 160 provides an RV, a size, and a list of replicas of elements A-I for one of the ELists 1′-6′. The last column 162 of the table 160 indicates which replicas of assets 1-5 will cease to exist on the associated media cluster 40 if the EList 1′-6′ associated with the entry is deleted. [0092]
  • FIG. 8C shows the data objects remaining on the video data storage 36 after deletion of EList 3′, shown in table 160. After the deletion, remaining ELists 2″, 3″ have forms that differ from the forms of any ELists 1′-6′ prior to the deletion. Table 160′ of FIG. 8D characterizes the ELists 1′, 2″, 3″, 4′ that remain after the above-described deletion. [0093]
  • Elements from several ELists can be deleted together to free regions of the video data storage for replicas of new asset elements. FIG. 9B shows a table 160″ that provides region economic values, i.e., total RVs, and sizes of “combinations of ELists”, which are formed from the ELists 1′-6′ shown in FIG. 8B. The region values and sizes are found by combining the RVs, e.g., by summing, and summing the sizes, respectively, of the ELists 1′-6′ belonging to the combinations. The table 160″ lists the “combinations of ELists” in order of increasing retention value up to a value of 50, where 50 is the largest TPP of assets currently under consideration for propagation. [0094]
  • From the ELists, the propagation service module 66 produces a table listing “combinations of ELists” for the media cluster 40. The table lists “combinations of ELists” whose total RVs are smaller than the maximum TPP of any asset being currently considered for propagation to the target cluster 40. To create the table, the propagation service module 66 searches the media cluster's table of ELists for combinations of ELists whose combined RVs are smaller than the maximum TPP of any asset considered for propagation to the target media cluster 40. Limiting searches to this upper bound on RVs reduces the number of “combinations of ELists” for which the propagation service module 66 needs to determine a total RV or region economic value. [0095]
  • FIG. 10 is a flow chart for a process 168 that decides whether a target media cluster 40, 40′, 40″ has space for a replica of a new selected asset not presently on the cluster 40, 40′, 40″. From the table listing “combinations of ELists”, the process 168 selects the entries whose RVs are smaller than the TPP of the new selected asset (step 170). From these entries, the propagation service module 66 searches for a “combination of ELists” that occupies space large enough to store the replicas of elements of the selected new asset that are not already on the media cluster (step 172). If one or more such combinations exist, the propagation service module 66 selects the “combination of ELists” whose RV is smallest. Among the combinations of ELists with the smallest RVs, the propagation service module 66 selects the combination having the smallest data size, i.e., this combination of ELists defines the most appropriate storage region to delete. The data size must be sufficient to store replicas of elements of the selected new asset not already on the target media cluster 40, 40′, 40″. If the sought “combination of ELists” exists, the propagation service module 66 copies the elements of the new selected asset, not already on the media cluster, to the region previously occupied by the sought combination of ELists (step 174). After performing the copying, the propagation service module 66 updates the RVs of replicas of assets and asset elements on the media cluster 40, 40′, 40″ and the table of ELists (step 176). If the sought combination of ELists does not exist, the propagation service module 66 does not copy missing elements of the new selected asset to the target media cluster 40, 40′, 40″ (step 178). [0096]
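  • Process 168's search can be phrased as a bounded subset-selection problem. The sketch below is an editorial illustration: exhaustive enumeration is used purely for clarity, overlaps between ELists are ignored, and the EListEntry shape is an assumption. It is also one plausible body for the find_region helper assumed in the earlier propagation_round sketch.

      from itertools import combinations
      from typing import FrozenSet, NamedTuple

      class EListEntry(NamedTuple):
          rv: float                 # retention value lost if the EList is deleted
          size: int                 # bytes freed by deleting the EList
          elements: FrozenSet[str]  # member elements

      def find_region(elists, tpp, need_size):
          """Steps 170-172: among combinations of ELists whose total RV is
          below the asset's TPP, pick the smallest-RV combination (breaking
          ties by smallest size) that still frees enough space."""
          best_key, best = None, None
          for r in range(1, len(elists) + 1):
              for combo in combinations(elists, r):
                  rv = sum(el.rv for el in combo)
                  if rv >= tpp:
                      continue  # only regions cheaper than the TPP qualify
                  size = sum(el.size for el in combo)
                  if size < need_size:
                      continue  # must fit the asset's missing elements
                  key = (rv, size)
                  if best_key is None or key < best_key:
                      best_key, best = key, combo
          return best  # None when no suitable combination exists (step 178)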
  • Global and Local Propagation Priorities [0097]
  • The TPP for replicating an asset is a sum of a global propagation priority (GPP) and a local propagation priority (LPP). The GPP is independent of the target media cluster 40, 40′, 40″. The LPP depends on the target media cluster 40, 40′, 40″. [0098]
  • Each GPP is a sum of several components, which may take values from 0 to 100. Each component may separately vary in time, thereby changing the probability of propagating a replica of the asset to new media clusters 40, 40′, 40″. [0099]
  • FIG. 11 shows a flow chart for a process 180 that initially defines and subsequently updates the GPP of a new asset. When a new asset first becomes active in the system 10, the process 180 assigns the new asset to a usage class of similar assets (step 182). Membership in usage classes may be based on an asset's genre, e.g., a subject classification such as newly released movies or sports events, or may be based on other criteria such as anticipated popularity. The process 180 assigns an initial GPP to the new asset based, in part, on the usage class of which the asset is a member (step 184). [0100]
  • After being assigned a GPP, replicas of the new asset may be propagated to new media clusters 40, 40′, 40″ based on the TPP, i.e., the TPP=GPP+LPP (step 186). While the asset is available to viewers, the associated GPP is updated over time to match the actual viewer usage (step 188). Media clusters 40, 40′, 40″ having replicas of assets report usage data to the connection management module 68, which reports the data to the propagation service module 66. Using the data, the propagation service module 66 recalculates the GPP of the asset. [0101]
  • Since the updated GPP partially determines to which media clusters 40, 40′, 40″ an asset propagates, the distribution of the asset on the media clusters 40, 40′, 40″ changes in response to usage data fed back to the propagation service module 66. Gradually, the GPP is adjusted to reflect, in part, relative viewer requests for the asset as compared to viewer requests for other assets. [0102]
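  • The GPP life cycle of process 180 can be sketched as two small functions: one for the class-seeded initial value (step 184) and one for the usage-driven update (step 188). The even split, the exponential-update form, and the rate below are illustrative assumptions; the patent does not specify these formulas.

      def initial_gpp(class_component, asset_component):
          """Step 184: part of the initial GPP comes from the asset's usage
          class; the 50/50 blend is an assumption."""
          return 0.5 * class_component + 0.5 * asset_component

      def update_gpp(gpp, asset_requests, class_requests, rate=0.1):
          """Step 188: nudge the GPP toward the asset's demand relative to
          its usage class, keeping the result in the 0-100 range."""
          target = min(100.0, 100.0 * asset_requests / max(class_requests, 1))
          return max(0.0, min(100.0, gpp + rate * (target - gpp)))

  Under this sketch, a class-seeded GPP drifts upward round by round while an asset out-draws its usage class, and decays back as requests fall off, which is the feedback behavior described above.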
  • The propagation service module 66 uses the usage data to update the usage pattern of the usage class of which the asset is a member (step 190). [0103]
  • The viewer usage data may be determined from values of counters 162, shown in FIG. 3, associated with individual assets. The counters 162 record demands for individual ones of the assets. The usage data accumulated by these counters 162 is described below. [0104]
  • One of the counters 162 measures use of each asset during specific times of the week. A set of accumulators is maintained to indicate use of each asset during every 2-hour interval of the week. For example, one of these counters may store data indicating that the Tuesday night news is most requested on Wednesday morning. The accumulator corresponding to the current time of week is one of the counters 162. The usage data for the usage class of a new asset, as described above, is maintained by an analogous set of counters and accumulators. [0105]
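  • A week divides into 84 two-hour intervals, so the accumulator set described above is naturally indexed by a slot number. The sketch below is an editorial illustration; aligning weeks with time.gmtime's Monday-based weekday is an implementation assumption.

      import time
      from collections import defaultdict

      SLOTS_PER_WEEK = 7 * 24 // 2  # 84 two-hour intervals in a week

      def slot_of_week(t=None):
          """Map a timestamp to its two-hour slot within the week (0..83)."""
          tm = time.gmtime(time.time() if t is None else t)
          return (tm.tm_wday * 24 + tm.tm_hour) // 2

      # One accumulator per (asset, slot); the slot for "now" plays the role
      # of the current counter described in the text.
      accumulators = defaultdict(lambda: [0] * SLOTS_PER_WEEK)

      def record_request(asset_id, t=None):
          accumulators[asset_id][slot_of_week(t)] += 1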
  • Usage data is accumulated on short-term viewer demand, medium-term viewer demand, total number of viewer requests, and last-request time for each asset. The counters 162 (FIG. 3) accumulate usage data, which are associated with individual replicas of assets on media clusters 40, 40′, 40″. [0106]
  • Another of the counters 162 measures the “short-term demand” for an associated asset by counting each viewer request for the asset. The propagation service module 66 may decrement this counter 162 by a fixed amount every few hours or perform another counter correction to make the counter's total count indicative of the number of “recent” requests. The count is never decremented to negative values. If the accumulated count is high, the propagation priority, i.e., the value of putting another replica of the asset onto a media cluster 40, is high, and the propagation service module 66 increases the asset's TPP. [0107]
  • The medium-term demand for each asset is measured by seven of the counters 162, which accumulate numbers of demands for the asset over weeklong periods. The period of each counter ends on a different day. For each counter, the count from the present week is compared to the count from the same counter for the last week. A decrease in this count indicates a declining interest in the asset, and the propagation service module 66 reduces the GPP of the asset in response to such decreases. [0108]
  • Other counters 162 measure the total number of requests and the time of the last request for each asset. These counts track popularities of assets. The total number of requests may be used to update the GPP of the asset to generally reflect its popularity. The lengths of time since the last request for different assets are compared to determine relative asset popularities. [0109]
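  • The short-term, medium-term, and popularity counters above can be collected into one per-asset record. The class below is an editorial sketch: the decay amount and interval, the week-rollover handling (omitted here), and all field names are assumptions.

      import time

      class AssetUsage:
          """Per-asset usage counters as described in the text (sketch)."""
          def __init__(self):
              self.short_term = 0          # decayed count of recent requests
              self.weekly = [0] * 7        # seven week-long windows, staggered by a day
              self.prior_weekly = [0] * 7  # the same windows, previous period
              self.total = 0               # lifetime request count
              self.last_request = None     # time of the most recent request

          def on_request(self, now=None):
              now = time.time() if now is None else now
              self.short_term += 1
              for i in range(7):
                  self.weekly[i] += 1
              self.total += 1
              self.last_request = now

          def decay_short_term(self, amount=1):
              # Periodic correction so the count reflects only recent demand;
              # never decremented below zero, per the text.
              self.short_term = max(0, self.short_term - amount)

          def weekly_trend(self, i):
              # Negative values indicate declining interest; the propagation
              # service reduces the asset's GPP in response.
              return self.weekly[i] - self.prior_weekly[i]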
  • The GPP of an asset may depend on other components. One component measures whether an asset is a rapid mover, e.g., an asset whose usage changes suddenly. Another component raises the GPP of assets for which only one replica exists. This favors generation of a second replica, which is valuable to the system for avoiding failures. Another component raises the GPP of assets whose one or more replicas are only accessible through heavily loaded delivery networks. This favors adding a second replica, which can offset unavailability of the asset caused by other video streaming traffic. [0110]
  • The LPP assigned to a replica of an asset is also a sum of several components, but the components depend on properties of a server's local environment. Each of the components may take a value in a preselected range, e.g., 0 to 100. The components contributing to LPP cause the propagation service module 66 to distribute replicas of assets to media clusters 40 in a manner that accounts, in part, for local viewing preferences. [0111]
  • One component of LPPs depends on the classification of an asset, e.g., genre and language. This component causes distributions of replicas of assets to accord with local preferences of viewers and may be updated by historical viewing data. Counters 162 of the propagation service module 66 accumulate numbers of viewer requests for various classes of assets for use in determining the value of this component. [0112]
  • Another component of LPPs depends on whether multiple replicas of the asset are stored on the set of media clusters 40, 40′, 40″. This component is high for clusters 40, 40′, 40″ not storing a replica of the asset if a replica of the asset is only on one cluster 40, 40′, 40″. This component stimulates the propagation service module 66 to replicate each asset present on one media cluster 40, 40′, 40″ onto a second media cluster 40, 40′, 40″. The second replica helps to avoid delivery failures caused by hardware failures. [0113]
  • Another component of LPPs depends on activity levels of media clusters 40, 40′, 40″, increasing the LPP for replicas of new assets located only on other media clusters 40, 40′, 40″ that are heavily loaded. The component causes the propagation service module 66 to copy a replica of the asset to a less busy media cluster 40, 40′, 40″ if current replicas of the asset are on media clusters 40 operating at near capacity. The propagation service module 66 uses delivery pathway data from the connection manager 68 to determine whether media clusters 40, 40′, 40″ are operating at near capacity. [0114]
  • Another component of LPPs depends on activity levels of delivery pathways. This component causes the propagation service module 66 to copy a replica of an asset onto a media cluster 40 that is connected to a node group 20, 20′, 20″, 20″′ by a less burdened delivery pathway if the presently usable delivery pathways are near capacity. The loading of each delivery pathway is determined by the connection manager 68, which provides an abstracted view of the delivery pathways to the propagation service module 66. [0115]
  • Another component of LPPs causes the propagation service module 66 to copy a replica of an asset to a new media cluster 40, 40′, 40″ if present replicas are inaccessible to some users. The inaccessibility may result from the absence of delivery pathways between the clusters 40, 40′, 40″ presently storing a replica of the asset and the node groups 20, 20′, 20″, 20″′ without access. [0116]
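  • Because each LPP component is confined to a preselected range and the LPP is their sum, the calculation reduces to a clamped accumulation. The sketch below is an editorial illustration; the component names and example scores are made up, and real scores would come from the counters and pathway data described above.

      def local_propagation_priority(components):
          """LPP for one (asset, target cluster) pair: the sum of its
          components, each clamped to the assumed 0-100 range."""
          return sum(min(100.0, max(0.0, v)) for v in components.values())

      # Example scores for one candidate propagation (values are made up):
      lpp = local_propagation_priority({
          "genre_language_match": 60,   # asset classification vs. local preferences
          "single_replica_bonus": 100,  # only one replica exists system-wide
          "source_cluster_load": 40,    # existing replicas sit on busy clusters
          "pathway_load": 25,           # usable delivery pathways near capacity
          "inaccessible_viewers": 0,    # every node group can already reach a replica
      })
      tpp = 35 + lpp  # the total priority adds the asset's GPP (35 is made up)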
  • In various embodiments, other historical usage data is used to set the values of the GPPs and LPPs of potential propagations. The propagation of viewing assets to media clusters 40, 40′, 40″ is calculated to increase the total economic value of the assets to the system 10, e.g., by increasing viewer payments for movie rentals. As viewer demands change, the propagation priorities assigned to replicas of assets are updated to follow the changing demands. Asset propagation evolves the distribution of replicas of the assets on the media clusters 40, 40′, 40″ in response to the updates to GPP and LPP. Thus, changes in viewing preferences automatically induce an evolution in the distribution of viewing assets that, at least partially, follows the viewing preferences. [0117]
  • The retention values (RVs) assigned to replicas of assets and to replicas of asset elements, which are both stored on the media clusters 40, 40′, 40″, are calculated and updated through processes analogous to the processes used to determine the TPP. The RV may be a sum of a global value and a local value. The global value indicates an economic value associated with retaining the present set of replicas of the asset on the entire collection of media clusters 40, 40′, 40″. The local value indicates an economic value associated with retaining the replica of the asset on a particular associated media cluster 40, 40′, 40″. Both the local and global values may be sums of components of the same types as the components contributing to LPP and GPP, but the coefficients may vary. The global and local values may depend on collected usage data and be assigned to usage classes that contribute to defining initial values. [0118]
  • Other additions, subtractions, and modifications of the described embodiments may be apparent to one practiced in this field.[0119]

Claims (55)

What is claimed is:
1. A process of propagating viewing assets to a system of video servers, comprising:
copying a missing portion of a replica of a selected viewing asset to a target video server in response to determining that a priority to propagate the selected asset to the target server is higher than a retention value of a replica of one or more viewing assets stored on the target server.
2. The process of claim 1, wherein the copying writes the missing portion of the replica of the selected asset onto a storage region of the target video server previously storing a portion of the replica of one or more viewing assets.
3. The process of claim 1, further comprising:
selecting a portion of the replica of one or more viewing assets in response to the replica of one or more viewing assets having a data size at least as large as a data size of the missing portion of the selected asset.
4. The process of claim 1, wherein the copying the missing portion of the replica of a selected asset includes copying the missing portion from one or more video servers.
5. The process of claim 1, further comprising:
assigning priorities to propagate to a plurality of viewing assets;
ranking the viewing assets according to the assigned priorities; and
selecting the selected asset in response to the selected asset having more than a preselected minimum rank.
6. The process of claim 5, wherein the assigning includes determining the propagation priorities based at least in part on global priorities to propagate associated ones of the assets to target video servers.
7. The process of claim 5, wherein the assigning includes determining local priorities to have replicas of associated assets on particular video servers, the local priorities depending on the states of the particular video servers.
8. The process of claim 5, wherein the portion of the replica of one or more viewing assets consists of replicas of asset elements belonging to one or more ELists.
9. The process of claim 3, further comprising:
updating retention values of replicas of viewing assets remaining on the target server in response to the copying.
10. The process of claim 1, wherein the viewing assets include video files for at least one of movies, news emissions, and shopping emissions.
11. The process of claim 1, wherein the replica of one or more viewing assets includes a replica of an asset element shared by replicas of two assets on the target server.
12. A process for propagating digital viewing assets to video servers, comprising:
assigning to each of a plurality of digital viewing assets a priority to propagate the asset onto video servers;
ranking the assets based on the assigned priorities; and
propagating one of the assets to one or more selected video servers in response to the one of the assets having a preselected minimum ranking.
13. The process of claim 12, wherein the assigning includes:
assigning a viewing asset to a usage class, the usage class providing a portion of an initial value for priorities to propagate assets assigned to the class.
14. The process of claim 13, further comprising:
accumulating usage data on individual assets stored on the video servers; and
updating the priorities to propagate the assets based on the usage data.
15. The process of claim 13, wherein the viewing assets include one of encoded digital video assets and encoded digital audio assets.
16. The process of claim 14, wherein the usage data includes numbers of viewer requests during predetermined periods and differences between numbers of viewer requests during earlier and later predetermined periods.
17. The process of claim 14, further comprising:
updating the priority to propagate a particular asset in the usage class based on a difference between the usage level of the usage class and a usage level of the particular asset determined from the accumulated usage data.
18. The process of claim 13, further comprising:
calculating the priority to propagate the one of the assets onto a particular video server from a global priority to propagate the one of the assets and a local priority to propagate a replica of the asset on the one of the selected video servers.
19. The process of claim 13, further comprising:
streaming a replica of the copied one of the assets from the particular video server to a television of a viewer in response to receiving a request to view the asset from the viewer.
20. A process of propagating viewing assets to a video storage, comprising:
assigning propagation priorities to viewing assets;
constructing a table of element deletion lists for a target video storage;
selecting a group of element deletion lists from the table, the group having a data size at least as large as a data size of a portion of a replica of another asset not stored on the target storage; and
copying the portion of the replica of the other asset onto the target video storage in response to the propagation priority of the other asset being larger than a retention value of the group.
21. The process of claim 20, wherein the copying writes the portion onto a region of the target video storage previously storing the group.
22. The process of claim 20, wherein the selecting a group includes constructing a table listing sets of element deletion lists with lower retention value than the propagation priority of the another asset.
23. The process of claim 22, wherein the selecting includes picking one of the lists having a data size at least as large as the portion of the replica of the another asset.
24. The process of claim 20, further comprising:
updating the table of element deletion lists in response to performing the copying.
25. The process of claim 20, wherein each element deletion list includes a set of replicas of asset elements that are shared by the same assets.
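Claims 20-25 organize the deletable side of the comparison into element deletion lists: each list groups replicas of asset elements shared by the same set of assets (claim 25), so a whole list can be dropped without breaking any asset that survives. A hedged sketch of the selection step, with the field names assumed:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EList:
        elements: frozenset   # ids of element replicas shared by the same assets
        size: int             # total bytes those replicas occupy
        retention: float      # value of keeping the group

    def select_group(table, portion_size, propagation_priority):
        """Pick element deletion lists whose combined size covers the incoming
        portion and whose retention values all sit below the incoming asset's
        propagation priority (cf. claims 20, 22, and 23)."""
        # Claim 22: restrict the table to lists cheaper than the incoming asset.
        eligible = [e for e in table if e.retention < propagation_priority]
        eligible.sort(key=lambda e: e.retention)   # drop the cheapest first
        group, covered = [], 0
        for e in eligible:
            group.append(e)
            covered += e.size
            if covered >= portion_size:
                return group
        return None   # no adequate group exists, so the portion is not copied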
26. A process of distributing viewing assets to viewers, comprising:
assigning priorities to assets, the priorities indicating priorities for distributing the associated assets to video servers accessible to viewers;
selecting a video server; and
copying one of the assets onto the video server in response to determining that the priority associated with the one of the assets is greater than a retention value associated with a set of replicas of viewing assets stored on the video server, the replicas occupying enough space to store the one of the assets.
27. The process of claim 26, wherein the copying includes searching for one or more sets of replicas of asset elements to delete from a table of element deletion lists.
28. The process of claim 26, further comprising:
updating the retention values in response to anticipated changes in viewer request levels for assets.
29. The process of claim 28, further comprising:
accumulating data on usage of individual ones of the assets, the updating based at least in part on the accumulated data.
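Claims 28-29 let retention values drift with both measured usage and anticipated changes in viewer demand. A toy update rule, with the decay constant and the additive blend chosen purely for the example:

    def update_retention(current, observed_requests, anticipated_delta, decay=0.9):
        """Decay the old value, then credit observed usage and any anticipated
        rise or fall in request levels (say, a scheduled promotion or an
        expiring license window)."""
        return decay * current + observed_requests + anticipated_delta

    # A replica at retention 20.0 with 5 requests this period and an expected
    # surge worth +3 request-units next period:
    print(update_retention(20.0, 5, 3.0))   # -> 26.0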
30. An interactive television system, comprising:
one of a network and a bus;
a plurality of video servers to store digital replicas of viewing assets for viewers, the video servers being connected by the one of a network and a bus; and
a control unit connected to the video servers and configured to control copying of a missing portion of a replica of a selected asset to one of the video servers in response to a priority to propagate the selected asset onto the one of the video servers being higher than a value of retaining a replica of one or more other assets already stored on the one of the video servers.
31. The system of claim 30, wherein the control unit is further configured to record usage data for the assets stored on each of the video servers.
32. The system of claim 30, further comprising:
a plurality of distribution networks to provide channels for delivering viewing assets to viewer televisions, each distribution network connected to a portion of the video servers.
33. The system of claim 30, wherein the control unit is configured to accumulate usage data on viewing assets from the video servers.
34. A program storage media storing computer executable instructions for propagating viewing assets onto video storages, the instructions to cause the computer to:
order copying of a missing portion of a replica of a selected viewing asset to a target video server in response to determining that a priority to propagate the selected asset to the target server is higher than a retention value of a replica of one or more viewing assets stored on the target server.
35. The media of claim 34, the instructions further causing the computer to:
select a portion of the replica of one or more viewing assets in response to the replica of one or more viewing assets having a data size at least as large as a data size of the missing portion of the selected asset.
36. The media of claim 34, wherein the instructions to order copying of the missing portion of the replica of a selected asset cause the computer to control copying of the missing portion from one or more video servers.
37. The media of claim 34, the instructions further causing the computer to:
assign priorities to propagate to a plurality of viewing assets;
rank the viewing assets according to the assigned priorities; and
select the selected asset in response to the selected asset having more than a preselected minimum rank.
38. The media of claim 37, wherein the instructions to assign cause the computer to:
determine the propagation priorities based at least in part on global priorities to propagate associated ones of the assets to target video servers.
39. The media of claim 37, wherein the portion of the replica of one or more viewing assets consists of replicas of asset elements belonging to one or more ELists.
40. The media of claim 35, the instructions further causing the computer to:
update retention values of replicas of viewing assets remaining on the target server in response to the copying.
41. A program storage media storing executable instructions for propagating digital viewing assets onto video servers, the instructions causing a computer to:
assign to each of a plurality of digital viewing assets a priority to propagate the asset onto video servers;
rank the assets based on the assigned priorities; and
order propagation of one of the assets to one or more selected video servers in response to the one of the assets having a preselected minimum ranking.
42. The media of claim 41, wherein the instructions to assign cause the computer to:
assign a viewing asset to a usage class, the usage class providing a portion of an initial value for priorities to propagate assets assigned to the class.
43. The media of claim 42, the instructions further causing the computer to:
accumulate usage data on individual assets stored on the video servers; and
update the priorities to propagate the assets based on the usage data.
44. The media of claim 43, wherein the usage data includes numbers of viewer requests during predetermined periods and differences between numbers of viewer requests during earlier and later predetermined periods.
45. The media of claim 43, the instructions further causing the computer to:
update the priority to propagate a particular asset in the usage class based on a difference between the usage level of the usage class and a usage level of the particular asset determined from the accumulated usage data.
46. The media of claim 42, the instructions further causing the computer to:
calculate the priority to propagate the one of the assets onto a particular video server from a global priority to propagate the one of the assets and a local priority to propagate a replica of the asset on the one of the selected video servers.
47. A program storage media storing executable instructions for propagating viewing assets to a video storage, the instructions causing a computer to:
assign propagation priorities to viewing assets;
construct a table of element deletion lists for a target video storage;
select a group of element deletion lists from the table, the group having a data size at least as large as a data size of a portion of a replica of another asset not stored on the target storage; and
order copying of the portion of the replica of the another asset onto the target video storage in response to the propagation priority of the another asset being larger than a retention value of the group.
48. The media of claim 47, wherein the instructions to select a group cause the computer to construct a table listing sets of element deletion lists with lower retention value than the propagation priority of the another asset.
49. The media of claim 48, wherein the instructions to select cause the computer to pick one of the lists having a data size at least as large as the portion of the replica of the another asset.
50. The media of claim 47, wherein each element deletion list includes a set of replicas of asset elements that are shared by the same assets.
51. A process for propagating digital viewing assets to video servers, comprising:
propagating a plurality of viewing assets to video servers based on priorities to propagate, the priorities providing a ranking of the assets;
accumulating usage data on individual ones of the assets stored on the video servers; and
updating the priorities based on the usage data.
52. The process of claim 51, further comprising:
assigning a viewing asset to a usage class, the usage class providing a portion of an initial value for the priorities to propagate for the assets assigned to the class.
53. The process of claim 52, further comprising:
calculating the priority to propagate a selected one of the assets onto one of the video servers from a global priority to propagate the selected one of the assets and a local priority to propagate a replica of the selected one of the assets onto the one of the video servers.
54. The process of claim 53, wherein the global priority is based in part on a counter value, the counter value measuring usage of the selected one of the assets.
55. The process of claim 53, wherein the local priority is based in part on a bandwidth for streaming the selected one of the assets from the one of the video servers to a set of viewers.
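Claims 53-55 split the per-server priority into a global term driven by a usage counter and a local term driven by the bandwidth available for streaming the asset to its viewers. The claims do not fix how the two combine; the weighted sum below is an assumption for illustration:

    def global_priority(usage_counter, base=10.0):
        # Claim 54: a counter measuring usage of the asset feeds the global term.
        return base + float(usage_counter)

    def local_priority(free_stream_bandwidth_mbps, asset_rate_mbps):
        # Claim 55: based in part on the bandwidth for streaming the asset
        # from this server to a set of viewers.
        if asset_rate_mbps <= 0:
            return 0.0
        return free_stream_bandwidth_mbps / asset_rate_mbps

    def propagation_priority(usage_counter, free_mbps, rate_mbps,
                             w_global=0.7, w_local=0.3):
        return (w_global * global_priority(usage_counter)
                + w_local * local_priority(free_mbps, rate_mbps))

    # 42 recorded requests, 300 Mb/s of spare streaming capacity, 4 Mb/s streams:
    print(propagation_priority(42, 300.0, 4.0))   # 0.7*52 + 0.3*75 = 58.9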
US09/896,562 2000-04-12 2001-06-29 Content propagation in interactive television Abandoned US20020059394A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/896,562 US20020059394A1 (en) 2000-04-12 2001-06-29 Content propagation in interactive television

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/547,474 US7278153B1 (en) 2000-04-12 2000-04-12 Content propagation in interactive television
US09/896,562 US20020059394A1 (en) 2000-04-12 2001-06-29 Content propagation in interactive television

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/547,474 Continuation US7278153B1 (en) 2000-04-12 2000-04-12 Content propagation in interactive television

Publications (1)

Publication Number Publication Date
US20020059394A1 true US20020059394A1 (en) 2002-05-16

Family

ID=24184771

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/547,474 Expired - Lifetime US7278153B1 (en) 2000-04-12 2000-04-12 Content propagation in interactive television
US09/896,562 Abandoned US20020059394A1 (en) 2000-04-12 2001-06-29 Content propagation in interactive television

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/547,474 Expired - Lifetime US7278153B1 (en) 2000-04-12 2000-04-12 Content propagation in interactive television

Country Status (9)

Country Link
US (2) US7278153B1 (en)
EP (1) EP1275242A2 (en)
JP (1) JP2003533138A (en)
CN (1) CN1423894A (en)
AU (1) AU2001251564A1 (en)
CA (1) CA2405820A1 (en)
HK (1) HK1053930A1 (en)
IL (1) IL152145A0 (en)
WO (1) WO2001086945A2 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003223378A (en) * 2002-01-29 2003-08-08 Fujitsu Ltd Contents delivery network service method and system
US8782687B2 (en) 2003-04-30 2014-07-15 At&T Intellectual Property I, Lp Multi-platform digital television
US7590997B2 (en) 2004-07-30 2009-09-15 Broadband Itv, Inc. System and method for managing, converting and displaying video content on a video-on-demand platform, including ads used for drill-down navigation and consumer-generated classified ads
US7631336B2 (en) 2004-07-30 2009-12-08 Broadband Itv, Inc. Method for converting, navigating and displaying video content uploaded from the internet to a digital TV video-on-demand platform
US11259059B2 (en) 2004-07-30 2022-02-22 Broadband Itv, Inc. System for addressing on-demand TV program content on TV services platform of a digital TV services provider
US20060174270A1 (en) * 2005-02-02 2006-08-03 United Video Properties, Inc. Systems and methods for providing approximated information in an interactive television program guide
US9247208B2 (en) * 2006-07-25 2016-01-26 At&T Intellectual Property I, Lp Adaptive video-server reconfiguration for self-optimizing multi-tier IPTV networks
US9654833B2 (en) 2007-06-26 2017-05-16 Broadband Itv, Inc. Dynamic adjustment of electronic program guide displays based on viewer preferences for minimizing navigation in VOD program selection
US11570521B2 (en) 2007-06-26 2023-01-31 Broadband Itv, Inc. Dynamic adjustment of electronic program guide displays based on viewer preferences for minimizing navigation in VOD program selection
CN101557619B * 2008-04-09 2011-06-22 华为技术有限公司 Method, terminal and system for cell reselection
EP2234397A1 (en) * 2009-03-24 2010-09-29 Thomson Licensing Methods for delivering and receiving interactive multimedia data attached to an audio video content
US8970669B2 (en) * 2009-09-30 2015-03-03 Rovi Guides, Inc. Systems and methods for generating a three-dimensional media guidance application
US20110137727A1 (en) * 2009-12-07 2011-06-09 Rovi Technologies Corporation Systems and methods for determining proximity of media objects in a 3d media environment
US9197712B2 (en) 2012-11-30 2015-11-24 At&T Intellectual Property I, L.P. Multi-stage batching of content distribution in a media distribution system
TWI524756B (en) * 2013-11-05 2016-03-01 財團法人工業技術研究院 Method and device operable to store video and audio data
CN105307010B * 2015-11-14 2018-01-26 华中科技大学 Video uploading system and method for a cloud live video streaming platform
US10972761B2 (en) * 2018-12-26 2021-04-06 Purdue Research Foundation Minimizing stall duration tail probability in over-the-top streaming systems
US11395021B2 (en) * 2020-03-23 2022-07-19 Rovi Guides, Inc. Systems and methods for managing storage of media content item

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5029104A (en) * 1989-02-21 1991-07-02 International Business Machines Corporation Prestaging objects in a distributed environment
US5220516A (en) * 1989-02-21 1993-06-15 International Business Machines Corp. Asynchronous staging of objects between computer systems in cooperative processing systems
US5351075A (en) * 1990-03-20 1994-09-27 Frederick Herz Home video club television broadcasting system
US5251297A (en) * 1990-10-10 1993-10-05 Fuji Xerox Co., Ltd. Picture image processing system for entering batches of original documents to provide corresponding picture image datafiles
US5172413A (en) * 1990-12-20 1992-12-15 Sasktel Secure hierarchial video delivery system and method
JP3140621B2 (en) * 1993-09-28 2001-03-05 株式会社日立製作所 Distributed file system
JPH07175868A (en) * 1993-10-15 1995-07-14 Internatl Business Mach Corp <Ibm> Method and system for output of digital information to medium
US5473362A (en) * 1993-11-30 1995-12-05 Microsoft Corporation Video on demand system comprising stripped data across plural storable devices with time multiplex scheduling
US5537585A (en) * 1994-02-25 1996-07-16 Avail Systems Corporation Data storage management for network interconnected processors
US5544327A (en) * 1994-03-01 1996-08-06 International Business Machines Corporation Load balancing in video-on-demand servers by allocating buffer to streams with successively larger buffer requirements until the buffer requirements of a stream can not be satisfied
US5586264A (en) * 1994-09-08 1996-12-17 Ibm Corporation Video optimized media streamer with cache management
US5619247A (en) * 1995-02-24 1997-04-08 Smart Vcr Limited Partnership Stored program pay-per-play
DE19514616A1 (en) * 1995-04-25 1996-10-31 Sel Alcatel Ag Communication system with hierarchical server structure
US5991811A (en) * 1995-09-04 1999-11-23 Kabushiki Kaisha Toshiba Information transmission system utilizing both real-time data transmitted in a normal-in-time direction and in a retrospective-in-time direction
US5991306A (en) * 1996-08-26 1999-11-23 Microsoft Corporation Pull based, intelligent caching system and method for delivering data over a network
US6378130B1 (en) * 1997-10-20 2002-04-23 Time Warner Entertainment Company Media server interconnect architecture
US6415373B1 (en) * 1997-12-24 2002-07-02 Avid Technology, Inc. Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner
EP0940984B1 (en) * 1998-03-03 2007-06-27 Matsushita Electric Industrial Co., Ltd. Terminal device for multimedia data
US6763523B1 (en) * 1998-04-03 2004-07-13 Avid Technology, Inc. Intelligent transfer of multimedia data files from an editing system to a playback device
US6898762B2 (en) * 1998-08-21 2005-05-24 United Video Properties, Inc. Client-server electronic program guide
US20040210932A1 (en) * 1998-11-05 2004-10-21 Toshiaki Mori Program preselecting/recording apparatus for searching an electronic program guide for programs according to predetermined search criteria
US6973662B1 (en) * 1999-10-13 2005-12-06 Starz Entertainment Group Llc Method for providing programming distribution

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5253275A (en) * 1991-01-07 1993-10-12 H. Lee Browne Audio and video transmission and receiving system
US5550577A (en) * 1993-05-19 1996-08-27 Alcatel N.V. Video on demand network, including a central video server and distributed video servers with random access read/write memories
US5568181A (en) * 1993-12-09 1996-10-22 International Business Machines Corporation Multimedia distribution over wide area networks
US5557317A (en) * 1994-05-20 1996-09-17 Nec Corporation Video-on-demand system with program relocation center
US5652613A (en) * 1995-06-07 1997-07-29 Lazarus; David Beryl Intelligent electronic program guide memory management system and method
US5815662A (en) * 1995-08-15 1998-09-29 Ong; Lance Predictive memory caching for media-on-demand systems
US5940594A (en) * 1996-05-31 1999-08-17 International Business Machines Corp. Distributed storage management system having a cache server and method therefor
US5920700A (en) * 1996-09-06 1999-07-06 Time Warner Cable System for managing the addition/deletion of media assets within a network based on usage and media asset metadata
US5875300A (en) * 1997-01-30 1999-02-23 Matsushita Electric Industrial Co., Ltd. Cell loss reduction in a video server with ATM backbone network
US6473902B1 (en) * 1997-04-04 2002-10-29 Sony Corporation Method and apparatus for transmitting programs
US6124877A (en) * 1997-12-08 2000-09-26 Soundview Technologies, Inc. System for monitoring and reporting viewing of television programming
US6064980A (en) * 1998-03-17 2000-05-16 Amazon.Com, Inc. System and methods for collaborative recommendations
US6530082B1 (en) * 1998-04-30 2003-03-04 Wink Communications, Inc. Configurable monitoring of program viewership and usage of interactive applications
US6295092B1 (en) * 1998-07-30 2001-09-25 Cbs Corporation System for analyzing television programs

Cited By (81)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050257247A1 (en) * 1998-10-28 2005-11-17 Bea Systems, Inc. System and method for maintaining security in a distributed computer network
US8108542B2 (en) 2000-03-21 2012-01-31 Intel Corporation Method and apparatus to determine broadcast content and scheduling in a broadcast system
US8839298B2 (en) 2000-03-21 2014-09-16 Intel Corporation Method and apparatus to determine broadcast content and scheduling in a broadcast system
US7962573B2 (en) 2000-03-21 2011-06-14 Intel Corporation Method and apparatus to determine broadcast content and scheduling in a broadcast system
US20110093475A1 (en) * 2000-03-21 2011-04-21 Connelly Jay H Method and apparatus to determine broadcast content and scheduling in a broadcast system
US7444593B1 (en) * 2000-10-04 2008-10-28 Apple Inc. Disk space management and clip remainder during edit operations
US20030217333A1 (en) * 2001-04-16 2003-11-20 Greg Smith System and method for rules-based web scenarios and campaigns
US20020199205A1 (en) * 2001-06-25 2002-12-26 Narad Networks, Inc Method and apparatus for delivering consumer entertainment services using virtual devices accessed over a high-speed quality-of-service-enabled communications network
US20030005438A1 (en) * 2001-06-29 2003-01-02 Crinon Regis J. Tailoring a broadcast schedule based on storage area and consumer information
US8943540B2 (en) 2001-09-28 2015-01-27 Intel Corporation Method and apparatus to provide a personalized channel
US20050187986A1 (en) * 2001-10-24 2005-08-25 Bea Systems, Inc. Data synchronization
US20050187978A1 (en) * 2001-10-24 2005-08-25 Bea Systems, Inc. System and method for portal rendering
US20030145275A1 (en) * 2001-10-24 2003-07-31 Shelly Qian System and method for portal rendering
US20030117437A1 (en) * 2001-10-24 2003-06-26 Cook Thomas A. Portal administration tool
US20030115292A1 (en) * 2001-10-24 2003-06-19 Griffin Philip B. System and method for delegated administration
US20030110448A1 (en) * 2001-10-24 2003-06-12 John Haut System and method for portal page layout
US20030135553A1 (en) * 2002-01-11 2003-07-17 Ramesh Pendakur Content-based caching and routing of content using subscription information from downstream nodes
US20030204856A1 (en) * 2002-04-30 2003-10-30 Buxton Mark J. Distributed server video-on-demand system
US7725560B2 (en) 2002-05-01 2010-05-25 Bea Systems Inc. Web service-enabled portlet wizard
US9027063B2 (en) * 2002-11-27 2015-05-05 Deluxe Digital Distribution Inc. Video-on-demand (VOD) management system and methods
US20040103120A1 (en) * 2002-11-27 2004-05-27 Ascent Media Group, Inc. Video-on-demand (VOD) management system and methods
US20040143850A1 (en) * 2003-01-16 2004-07-22 Pierre Costa Video Content distribution architecture
US7992189B2 (en) 2003-02-14 2011-08-02 Oracle International Corporation System and method for hierarchical role-based entitlements
US20100037290A1 (en) * 2003-02-14 2010-02-11 Oracle International Corporation System and method for hierarchical role-based entitlements
US7404201B2 (en) * 2003-02-14 2008-07-22 Hitachi, Ltd. Data distribution server
US20040162905A1 (en) * 2003-02-14 2004-08-19 Griffin Philip B. Method for role and resource policy management optimization
US7653930B2 (en) 2003-02-14 2010-01-26 Bea Systems, Inc. Method for role and resource policy management optimization
US20040230996A1 (en) * 2003-02-14 2004-11-18 Hitachi, Ltd. Data distribution server
US8831966B2 (en) 2003-02-14 2014-09-09 Oracle International Corporation Method for delegated administration
US20080320022A1 (en) * 2003-02-20 2008-12-25 Oracle International Corporation Federated Management of Content Repositories
US20040167868A1 (en) * 2003-02-20 2004-08-26 Bea Systems, Inc. System and method for a virtual content repository
US20040167880A1 (en) * 2003-02-20 2004-08-26 Bea Systems, Inc. System and method for searching a virtual repository content
US7840614B2 (en) 2003-02-20 2010-11-23 Bea Systems, Inc. Virtual content repository application program interface
US8099779B2 (en) 2003-02-20 2012-01-17 Oracle International Corporation Federated management of content repositories
US20040167920A1 (en) * 2003-02-20 2004-08-26 Bea Systems, Inc. Virtual repository content model
US7810036B2 (en) 2003-02-28 2010-10-05 Bea Systems, Inc. Systems and methods for personalizing a portal
US20040230557A1 (en) * 2003-02-28 2004-11-18 Bales Christopher E. Systems and methods for context-sensitive editing
US20040230917A1 (en) * 2003-02-28 2004-11-18 Bales Christopher E. Systems and methods for navigating a graphical hierarchy
US8832758B2 (en) * 2003-03-17 2014-09-09 Qwest Communications International Inc. Methods and systems for providing video on demand
US20040187160A1 (en) * 2003-03-17 2004-09-23 Qwest Communications International Inc. Methods and systems for providing video on demand
US20050188295A1 (en) * 2004-02-25 2005-08-25 Loren Konkus Systems and methods for an extensible administration tool
US7774601B2 (en) 2004-04-06 2010-08-10 Bea Systems, Inc. Method for delegated administration
US7246138B2 (en) * 2004-04-13 2007-07-17 Bea Systems, Inc. System and method for content lifecycles in a virtual content repository that integrates a plurality of content repositories
US20050251512A1 (en) * 2004-04-13 2005-11-10 Bea Systems, Inc. System and method for searching a virtual content repository
US7236990B2 (en) * 2004-04-13 2007-06-26 Bea Systems, Inc. System and method for information lifecycle workflow integration
US7236975B2 2004-04-13 2007-06-26 Bea Systems, Inc. System and method for controlling access to a node in a virtual content repository that integrates a plurality of content repositories
US7236989B2 (en) * 2004-04-13 2007-06-26 Bea Systems, Inc. System and method for providing lifecycles for custom content in a virtual content repository
US20050228807A1 (en) * 2004-04-13 2005-10-13 Bea Systems, Inc. System and method for content lifecycles
AU2005234070B2 (en) * 2004-04-13 2009-02-05 Oracle International Corporation System and method for a virtual content repository
US20050228784A1 (en) * 2004-04-13 2005-10-13 Bea Systems, Inc. System and method for batch operations in a virtual content repository
US7162504B2 (en) * 2004-04-13 2007-01-09 Bea Systems, Inc. System and method for providing content services to a repository
US20060028252A1 (en) * 2004-04-13 2006-02-09 Bea Systems, Inc. System and method for content type management
US20050234849A1 (en) * 2004-04-13 2005-10-20 Bea Systems, Inc. System and method for content lifecycles
US20050234942A1 (en) * 2004-04-13 2005-10-20 Bea Systems, Inc. System and method for content and schema lifecycles
US20050228816A1 (en) * 2004-04-13 2005-10-13 Bea Systems, Inc. System and method for content type versions
US20050251505A1 (en) * 2004-04-13 2005-11-10 Bea Systems, Inc. System and method for information lifecycle workflow integration
US20050251504A1 (en) * 2004-04-13 2005-11-10 Bea Systems, Inc. System and method for custom content lifecycles
US20050251506A1 (en) * 2004-04-13 2005-11-10 Bea Systems, Inc. System and method for providing content services to a repository
US7240076B2 (en) * 2004-04-13 2007-07-03 Bea Systems, Inc. System and method for providing a lifecycle for information in a virtual content repository
US8434118B2 (en) 2004-05-27 2013-04-30 Time Warner Cable Enterprises Llc Playlist menu navigation
US20050278761A1 (en) * 2004-05-27 2005-12-15 Gonder Thomas L Playlist menu navigation
US20050278760A1 (en) * 2004-06-01 2005-12-15 Don Dewar Method and system for controlling streaming in an on-demand server
WO2005119492A3 (en) * 2004-06-01 2007-05-10 Broadbus Technologies Inc Method and system for controlling streaming in an on-demand server
US20070073744A1 (en) * 2005-09-26 2007-03-29 Bea Systems, Inc. System and method for providing link property types for content management
US20070073661A1 (en) * 2005-09-26 2007-03-29 Bea Systems, Inc. System and method for providing nested types for content management
US8316025B2 (en) 2005-09-26 2012-11-20 Oracle International Corporation System and method for providing SPI extensions for content management system
US7953734B2 (en) 2005-09-26 2011-05-31 Oracle International Corporation System and method for providing SPI extensions for content management system
US7752205B2 (en) 2005-09-26 2010-07-06 Bea Systems, Inc. Method and system for interacting with a virtual content repository
US7917537B2 (en) 2005-09-26 2011-03-29 Oracle International Corporation System and method for providing link property types for content management
US7818344B2 (en) 2005-09-26 2010-10-19 Bea Systems, Inc. System and method for providing nested types for content management
US20070261088A1 (en) * 2006-04-20 2007-11-08 Sbc Knowledge Ventures, L.P. Rules-based content management
US8209729B2 (en) * 2006-04-20 2012-06-26 At&T Intellectual Property I, Lp Rules-based content management
US9247209B2 (en) 2006-04-20 2016-01-26 At&T Intellectual Property I, Lp Rules-based content management
US9661388B2 (en) 2006-04-20 2017-05-23 At&T Intellectual Property I, L.P. Rules-based content management
US9877078B2 (en) 2006-04-20 2018-01-23 At&T Intellectual Property I, L.P. Rules-based content management
US10206006B2 (en) 2006-04-20 2019-02-12 At&T Intellectual Property I, L.P. Rules-based content management
US8463852B2 (en) 2006-10-06 2013-06-11 Oracle International Corporation Groupware portlets for integrating a portal with groupware systems
US9723343B2 (en) 2010-11-29 2017-08-01 At&T Intellectual Property I, L.P. Content placement
US11317134B1 (en) * 2014-09-11 2022-04-26 Swfy, Llc System and method for dynamically switching among sources of video content
US20180192100A1 (en) * 2015-09-10 2018-07-05 Sony Corporation Av server system and av server
US10887636B2 (en) * 2015-09-10 2021-01-05 Sony Corporation AV server system and AV server

Also Published As

Publication number Publication date
HK1053930A1 (en) 2003-11-07
JP2003533138A (en) 2003-11-05
IL152145A0 (en) 2003-05-29
WO2001086945B1 (en) 2002-03-28
AU2001251564A1 (en) 2001-11-20
US7278153B1 (en) 2007-10-02
EP1275242A2 (en) 2003-01-15
CA2405820A1 (en) 2001-11-15
WO2001086945A2 (en) 2001-11-15
CN1423894A (en) 2003-06-11
WO2001086945A3 (en) 2002-02-28

Similar Documents

Publication Publication Date Title
US7278153B1 (en) Content propagation in interactive television
US10848816B2 (en) Updating content libraries by transmitting release data
JP4934650B2 (en) Instant media on demand
JP4843195B2 (en) Method, program, apparatus, and system for distributing content using multi-stage distribution system
CA2841216C (en) Modular storage server architecture with dynamic data management
US5790935A (en) Virtual on-demand digital information delivery system and method
JP5160657B2 (en) A process for scalable and reliable transfer of multiple high-bandwidth data streams between computer systems and multiple storage devices and multiple applications
US8739231B2 (en) System and method for distributed video-on-demand
US8683534B2 (en) Method and apparatus for hierarchical distribution of video content for an interactive information distribution system
US7831989B1 (en) Intelligent asset management in a cable services system
US20010014975A1 (en) Transmitting viewable data objects
US7797440B2 (en) Method and system for managing objects distributed in a network
US20070283449A1 (en) Controlled content release system and method
US20020143791A1 (en) Content deployment system, method and network
JP2003167813A (en) Stream data storing and distributing method and system
Ghose et al. Scheduling video streams in video-on-demand systems: A survey
JP2005517314A (en) Method and apparatus for delivering content using a multi-stage delivery system
WO2009079142A1 (en) Indicating program popularity
JP2003263359A (en) Content management device and content management program
JP5560545B2 (en) Distribution system and distribution method
JP2003009116A (en) Video distribution system, video distribution equipment, video distribution method, recording medium and program
Candan et al. An event-based model for continuous media data on heterogeneous disk servers
JP4011858B2 (en) Digital broadcast receiving apparatus and control method thereof
Doğanata et al. A video server cost/performance estimator tool
Venkatesh et al. The Use of Media Characteristics and User Behavior for the Design of Multimedia Servers

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION