WO2006047133A2 - System and method for predictive streaming - Google Patents

System and method for predictive streaming

Info

Publication number
WO2006047133A2
Authority
WO
WIPO (PCT)
Prior art keywords
block
request
streaming
data associated
data
Prior art date
Application number
PCT/US2005/037351
Other languages
French (fr)
Other versions
WO2006047133A3 (en)
Inventor
Jeffrey De Vries
Original Assignee
Stream Theory, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Stream Theory, Inc. filed Critical Stream Theory, Inc.
Priority to JP2007537956A priority Critical patent/JP2008518508A/en
Priority to EP05808446A priority patent/EP1836581A2/en
Publication of WO2006047133A2 publication Critical patent/WO2006047133A2/en
Publication of WO2006047133A3 publication Critical patent/WO2006047133A3/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8166Monomedia components thereof involving executable data, e.g. software
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/957Browsing optimisation, e.g. caching or content distillation
    • G06F16/9577Optimising the visualization of content, e.g. distillation of HTML documents
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4351Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reassembling additional data, e.g. rebuilding an executable program from recovered modules

Definitions

  • Software streaming involves downloading small pieces of files as the pieces are needed by the program being streamed. These small pieces may be referred to as blocks.
  • a streaming client sends requests for blocks, as they are needed, to a streaming server, which sends back streaming data associated with the requested block. Sending a request and receiving the streaming data may cause delays that can slow down the streamed program.
  • a technique for predictive streaming involves receiving a request for a first block of a streaming application, checking a block request database, predicting a second block request based on the block request database, and sending, in response to the request, data associated with the first block and data associated with the second block.
  • a block may be an arbitrarily large portion of a streaming application.
  • the block request database includes probabilities or means for determining probabilities that the second block will be requested given that the first block has been requested. One or more factors may be considered when determining the probability of a request for the second block.
  • Data sent in response to the request may include data associated with the first block and data associated with the second block.
  • the data associated with the first block and the data associated with the second block may not be analogous.
  • the data associated with the first block is responsive to a block request for the first block, while the data associated with the second block is data sufficient to facilitate making a request for the second block.
  • the technique may further include predicting block requests based on the block request database, then sending data associated with the block requests.
  • the data associated with the second block includes data sufficient to render a request for the second block unnecessary.
  • the technique further includes piggybacking the data associated with the second block on a reply to the request for the first block.
  • the technique further includes logging the request for the first block and updating the block request database to incorporate data associated with the logged request.
  • the technique further includes setting an aggressiveness parameter, wherein the data associated with the second block is sent when a probability of the second block request is higher than the aggressiveness parameter.
  • a system constructed according to the technique may include a processor, a block request database that includes predictive parameters, a prediction engine that is configured to check the block request database and predict a second block request for a second block based upon a first block request for a first block and predictive parameters associated with the first block, and a streaming server.
  • the streaming server may be configured to obtain the prediction about the second block request from the prediction engine, include data associated with the second block in a response to the first block request, in addition to data associated with the first block, and send the response in reply to the first block request.
  • the data associated with the second block is sufficient to identify the second block so as to facilitate making the second block request.
  • the data associated with the second block includes data sufficient to render the second block request unnecessary.
  • the streaming server is further configured to piggyback the data associated with the second block on a reply to the first block request.
  • the system includes a request log, wherein the streaming server is further configured to log the first block request in the request log.
  • the prediction engine may be further configured to update the block request database according to the request log.
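  • A minimal sketch of this server-side flow in Python follows: log the request, consult the block request database, and piggyback predicted block IDs whose probability exceeds a cut-off threshold onto the reply. All names (handle_block_request, lookup_block_data, the dictionary database) are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the server behavior described above; names are assumptions.

def lookup_block_data(block_id):
    # Stub: a real streaming server would read the block from the
    # streaming application's files.
    return b"<data for block %d>" % block_id

def handle_block_request(block_id, block_request_db, request_log, cutoff=0.5):
    request_log.append(block_id)  # log the first block request
    predictions = block_request_db.get(block_id, [])
    predicted_ids = [b for b, p in predictions if p > cutoff]
    return {
        "block_id": block_id,
        "data": lookup_block_data(block_id),   # data for the first block
        "predicted_block_ids": predicted_ids,  # piggybacked identifying data
    }

# Example: after a request for block 8, block 3 (p = 0.8) is predicted.
db = {8: [(3, 0.8), (6, 0.2)]}
print(handle_block_request(8, db, request_log=[]))
```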
  • FIG. 1 depicts a networked system for use in an exemplary embodiment;
  • FIG. 2 depicts a computer system for use in the system of FIG. 1;
  • FIG. 3 depicts a portion of the computer system of FIG. 2 and components of the system of FIG. 1;
  • FIGS. 4 and 5 depict flowcharts of exemplary methods for predictive streaming according to embodiments
  • FIG. 6 depicts a conceptual view of a system 600 for providing streaming data and identifying a predicted block in response to a block request according to an embodiment
  • FIGS. 7A to 7C and 8A to 8B depict exemplary request logs and block request databases according to embodiments
  • FIGS. 9 to 12 depict exemplary methods according to alternative embodiments
  • FIG. 13 depicts an exemplary request log and block request database according to an embodiment.
  • Parametric predictive streaming involves maintaining parameters (parametric) to predict (predictive) which blocks will be requested by or served to a streaming client (streaming). Parametric predictive streaming can improve pipe saturation with large reads or facilitate rapid provision of sequential small reads. Parametric predictive streaming may, in an exemplary embodiment, also be adaptive. Parametric predictive adaptive streaming involves changing parameters over time as access patterns are learned.
  • The following description of FIGS. 1-3 is intended to provide an overview of computer hardware and other operating components suitable for performing the methods of the invention described herein, but is not intended to limit the applicable environments. Similarly, the computer hardware and other operating components may be suitable as part of the apparatuses of the invention described herein.
  • the invention can be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like.
  • the invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • Fig. 1 depicts a networked system 100 that includes several computer systems coupled together through a network 102, such as the Internet.
  • the term "Internet” as used hexein refers to a network of networks which uses certain protocols, such as the TCP/IP protocol, and possibly other protocols such as the hypertext transfer protocol (HTTP) for hypertext markup language (HTML) documents that make up the World Wide Web (the web).
  • the web server 104 is typically at least one computer system which operates as a server computer system and is configured to operate with the protocols of the world wide web and is coupled to the Internet.
  • the web server system 104 can be a conventional server computer system.
  • the web server 104 can be part of an ISP which provides access to the Internet for client systems.
  • the web server 104 is shown coupled to the server computer system 106 which itself is coupled to web content 108, which can be considered a form of a media database. While two computer systems 104 and 106 are shown in Fig. 1, the web server system 104 and the server computer system 106 can be one computer system having different software components providing the web server functionality and the server functionality provided by the server computer system 106, which will be described further below.
  • Access to the network 102 is typically provided by Internet service providers (ISPs), such as the ISPs 110 and 116.
  • Users on client systems, such as client computer systems 112, 118, 122, and 126, obtain access to the Internet through the ISPs 110 and 116.
  • Access to the Internet allows users of the client computer systems to exchange information, receive and send e-mails, and view documents, such as documents which have been prepared in the HTML format.
  • These documents are often provided by web servers, such as web server 104, which are referred to as being "on" the Internet.
  • web servers are provided by the ISPs, such as ISP 110, although a computer system can be set up and connected to the Internet without that system also being an ISP.
  • Client computer systems 112, 118, 122, and 126 can each, with the appropriate web browsing software, view HTML pages provided by the web server 104.
  • the ISP 110 provides Internet connectivity to the client computer system 112 through the modem interface 114, which can be considered part of the client computer system 112.
  • the client computer system can be a personal computer system, a network computer, a web TV system, or other computer system. While Fig. 1 shows the modem interface 114 generically as a "modem," the interface can be an analog modem, ISDN modem, cable modem, satellite transmission interface (e.g. "direct PC"), or other interface for coupling a computer system to other computer systems.
  • the ISP 116 provides Internet connectivity for client systems 118, 122, and 126, although as shown in Fig. 1, the connections are not the same for these three computer systems.
  • Client computer system 118 is coupled through a modem interface 120 while client computer systems 122 and 126 are part of a LAN 130.
  • Client computer systems 122 and 126 are coupled to the LAN 130 through network interfaces 124 and 128, which can be Ethernet or other network interfaces.
  • the LAN 130 is also coupled to a gateway computer system 132 which can provide firewall and other Internet-related services for the local area network.
  • This gateway computer system 132 is coupled to the ISP 116 to provide Internet connectivity to the client computer systems 122 and 126.
  • the gateway computer system 132 can be a conventional server computer system.
  • FIG. 2 depicts a computer system 140 for use in the system 100 (FIG. 1).
  • the computer system 140 may be a conventional computer system that can be used as a client computer system or a server computer system or as a web server system. Such a computer system can be used to perform many of the functions of an Internet service provider, such as ISP 110 (FIG. 1).
  • the computer system 140 includes a computer 142, I/O devices 144, and a display device 146.
  • the computer 142 includes a processor 148, a communications interface 150, memory 152, display controller 154, non-volatile storage 156, and I/O controller 158.
  • the computer system 140 may be coupled to or include the I/O devices 144 and display device 146.
  • the computer 142 interfaces to external systems through the communications interface 150, which may include a modem or network interface. It will be appreciated that the communications interface 150 can be considered to be part of the computer system 140 or a part of the computer 142.
  • the communications interface can be an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g. "direct PC"), or other interfaces for coupling a computer system to other computer systems.
  • the processor 148 may be, for example, a conventional microprocessor such as an Intel Pentium microprocessor or Motorola PowerPC microprocessor.
  • the memory 152 is coupled to the processor 148 by a bus 160.
  • the memory 152 can be dynamic random access memory (DRAM) and can also include static RAM (SRAM).
  • the bus 160 couples the processor 148 to the memory 152, also to the non-volatile storage 156, to the display controller 154, and to the I/O controller 158.
  • the I/O devices 144 can include a keyboard, disk drives, printers, a scanner, and other input and output devices, including a mouse or other pointing device.
  • the display controller 154 may control in the conventional manner a display on the display device 146, which can be, for example, a cathode ray tube (CRT) or liquid crystal display (LCD).
  • the non-volatile storage 156 is often a magnetic hard disk, an optical disk, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory 152 during execution of software in the computer 142.
  • "machine-readable medium" or "computer-readable medium" includes any type of storage device that is accessible by the processor 148 and also encompasses a carrier wave that encodes a data signal.
  • the computer system 140 is one example of many possible computer systems which have different architectures.
  • For example, personal computers based on an Intel microprocessor often have multiple buses, one of which can be an I/O bus for the peripherals and one that directly connects the processor 148 and the memory 152 (often referred to as a memory bus).
  • the buses are connected together through bridge components that perform any necessary translation due to differing bus protocols.
  • Network computers are another type of computer system that can be used with the present invention.
  • Network computers do not usually include a hard disk or other mass storage, and the executable programs are loaded from a network connection into the memory 152 for execution by the processor 148.
  • a Web TV system which is known in the art, is also considered to be a computer system according to the present invention, but it may lack some of the features shown in FIG. 2, such as certain input or output devices.
  • a typical computer system will usually include at least a processor, memory, and a bus coupling the memory to the processor.
  • the computer system 140 is controlled by operating system software which includes a file management system, such as a disk operating system, which is part of the operating system software.
  • One example of an operating system software with its associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Washington, and their associated file management systems.
  • Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system.
  • the file management system is typically stored in the non-volatile storage 156 and causes the processor 148 to execute the various acts required by the operating system to input and output data and to store data in memory, including storing files on the non-volatile storage 156.
  • the present invention also relates to apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
  • FIG. 3 depicts a portion of the computer system 140 (FIG. 2) and components of the system 100 (FIG. 1).
  • FIG. 3 depicts the computer system 140, a network 162, and a streaming client 164.
  • the network 162 could be a global information network, such as the Internet, a local or wide area network (LAN or WAN), or some other intranet or network.
  • the network 102 (FIG. 1) could include the network 162.
  • the LAN 130 (FIG. 1) could include the network 162.
  • the computer system 140 may be physically or wirelessly coupled to the streaming client 164.
  • the streaming client 164 may be coupled to and accessible through the network 162.
  • the streaming client 164 could be a software, firmware, or hardware module.
  • the streaming client 164 could include some combination of software, firmware, or hardware components.
  • the streaming client 164 may be part of a computer system, such as the computer system 140 (FIG. 2).
  • the streaming client 164 may include a processor, a memory, and a bus that couples the processor to the memory.
  • the streaming client 164, or a computer system associated with the streaming client 164, may make use of certain programs when executing a streaming application.
  • a streaming application may be intended for use with a specific version of DirectX™, Acrobat™, or QuickTime™, which typically is installed prior to executing the streaming application.
  • the computer system 140 includes a processor 166, a memory 168, and a bus 170 that couples the processor 166 to the memory 168.
  • the memory 168 may include both volatile memory, such as DRAM or SRAM, and non-volatile memory, such as magnetic or optical storage.
  • the memory 168 may also include, for example, environment variables.
  • the processor 166 executes code in the memory 168.
  • the memory 168 includes a streaming server 172, a block request database 174, a prediction engine 176, and one or more streaming applications 178.
  • Some or all of the programs, files, or data of the computer system 140 could be served as Web content, such as by the server computer 106 (FIG. 1).
  • the programs, files, or data could be part of a server computer on a LAN or WAN, such as the server computer 134 (FIG. 1).
  • the streaming server 172 may be configured to serve data associated with a block of a streaming application, such as one of the streaming applications 178, in response to a request for the block.
  • the block request database 174 may include data associated with block requests received from one or more streaming clients, such as the streaming client 164.
  • the block request database 174 may include a block request log, which is updated with each or with a subset of block requests.
  • the block request database 174 may be client-specific or generally used by all or a subset of all streaming clients from which block requests are received.
  • the block request database 174 may be application-specific or generally associated with all or a subset of all of the streaming applications 178.
  • the block request database 174 may include a block request history that has been derived from data associated with block requests received over time, from some original default values, or from data input by, for example, a user or software agent that administers the computer system 140.
  • the block request database 174 may include parameters derived from individual block requests, a block request log, or a block request history.
  • the prediction engine 176 is configured to predict one or more blocks, if any, the streaming client 164 will request.
  • the term "engine,” as used herein, generally refers to any combination of software, firmware, hardware, or other component that is used to effect a purpose.
  • the prediction engine 176 may include an aggressiveness parameter (not shown). If the aggressiveness is high, the prediction engine 176 is more likely to make a prediction than if the aggressiveness is low.
  • the aggressiveness parameter is associated with a probability threshold.
  • the probability threshold may be a cut-off probability that results in predictions being made for blocks that are determined to have a probability of being requested that exceeds the cut-off probability.
  • the prediction engine 176 may predict any block with a probability of being requested that is higher than the probability threshold.
  • the aggressiveness parameter is low (i.e., the probability threshold is low) when aggressiveness is high. Nevertheless, for the purposes of linguistic clarity, hereinafter, it is assumed that the aggressiveness parameter is high when aggressiveness is high. For example, if the aggressiveness parameter is associated with a threshold probability, x, then the value of the aggressiveness parameter may be thought of as 1-x, which means the aggressiveness parameter is high when aggressiveness is high. This is for the purposes of linguistic clarity, and should not be construed as a limitation as to how the aggressiveness parameter is implemented. It should also be noted that the aggressiveness parameter is not limited to a threshold probability.
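  • As an illustration of this convention (one possible encoding, not a required implementation), a probability threshold x can be exposed as an aggressiveness parameter 1-x:

```python
# Illustrative sketch of the 1-x convention above: high aggressiveness
# corresponds to a low probability threshold.

def should_predict(block_probability, aggressiveness):
    threshold = 1.0 - aggressiveness  # high aggressiveness -> low threshold
    return block_probability > threshold

print(should_predict(0.6, aggressiveness=0.5))  # True: 0.6 > 0.5
print(should_predict(0.6, aggressiveness=0.3))  # False: 0.6 <= 0.7
```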
  • the streaming server 172 may communicate the prediction to the streaming client 164.
  • the prediction may include identifying data for one or more blocks, each of which met aggressiveness criteria, such as by having a predicted probability of being requested that is higher than a probability threshold. Identifying data, as used herein, is data sufficient to enable a streaming client to make a request for one or more blocks that are associated with the identifying data.
  • the identifying data may, for example, include a block ID associated with a block. This identifying data may facilitate predictive requests for blocks before the blocks are actually needed at the streaming client 164.
  • the streaming client 164 may determine whether to request the block associated with streaming data.
  • the prediction engine 176 may give a weight to a number of blocks. For example, the prediction engine 176 may predict a first block that is more likely to be requested than a second block (a probability parameter). The streaming client 164 may first request the blocks with, for example, the highest probability of being requested. As another example, the prediction engine 176 may give greater weight to a first block over a second block if the prediction engine 176 predicts the first block will be requested sooner than the second block (a temporal parameter). As another example, the prediction engine 176 may give greater weight to a first block over a second block if the first block is larger than the second block (a size parameter). The prediction engine 176 may weigh parameters associated with the streaming client 164, as well.
  • the aggressiveness parameter may be set lower.
  • the prediction engine 176 may place more weight on temporal or size parameters.
  • the streaming client 164 may manage block requests according to local conditions, while the prediction engine 176 acts the same for all or a subset of all streaming clients.
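  • A sketch of how the probability, temporal, and size parameters described above might be combined when ranking predicted blocks follows; the scoring formula is an assumption, since the embodiment does not prescribe one.

```python
# Illustrative ranking of predicted blocks by combining probability,
# temporal, and size parameters. The weighting formula is assumed.

def rank_predicted_blocks(predictions):
    # predictions: list of (block_id, probability, seconds_until_needed, size_bytes)
    def score(entry):
        block_id, probability, eta_seconds, size = entry
        # Sooner-needed and larger blocks get more weight.
        return probability * (1.0 / (1.0 + eta_seconds)) * size
    return sorted(predictions, key=score, reverse=True)

preds = [(3, 0.5, 2.0, 4096), (6, 0.5, 600.0, 4096)]
print([b for b, *_ in rank_predicted_blocks(preds)])  # [3, 6]
```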
  • the streaming client 164 sends over the network 162 to the streaming server 172 a request for a block associated with a streaming application of the streaming applications 178.
  • the streaming server 172, or some other component (not shown) of the computer system 140, logs the request.
  • the block request database 174 is treated as a block request log.
  • the block request database 174 could be derived from a block request log or from individual or groups of block requests that are not recorded in a log.
  • the block request database 174 includes parameters derived from previous block requests, or from default or initial settings, that aid in predicting subsequent block requests from the streaming client 164.
  • the prediction engine 176 uses data from the block request database 174 to predict a subsequent block request.
  • the subsequent block request may be for, for example, a block from the same streaming application as the initial block request, from a DLL, from a data file, from a different executable file, or from any other file or application.
  • the streaming server 172 serves streaming data associated with the requested block and identifying data associated with the predicted block to the streaming client 164.
  • the streaming client 164 then decides whether to request the identified predicted block. For example, the streaming client 164 may already have data for the block associated with the identifying data in a local cache. If the streaming server 172 sends the data again, that is a waste of bandwidth and may slow down the execution of the streaming application.
  • the streaming client 164 can, for example, check the local cache first, and request the streaming data if it is not already available in the local cache.
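  • This client-side decision can be sketched as follows (assumed names; a minimal illustration of checking the local cache before making a predictive request):

```python
# Sketch: request an identified predicted block only if it is not
# already cached locally, since re-sending wastes bandwidth.

def request_predicted_blocks(predicted_block_ids, local_cache, send_request):
    for block_id in predicted_block_ids:
        if block_id in local_cache:
            continue  # already cached; skip the request
        send_request(block_id)  # predictive request before the block is needed

request_predicted_blocks([3, 6], local_cache={3: b"..."},
                         send_request=lambda b: print("requesting block", b))
```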
  • the streaming server 172 serves streaming data associated with the requested block and identifying data associated with one or more predicted blocks. The streaming client 164 then decides which of the identified blocks to request and, for example, in what order.
  • if the prediction engine 176 is correct in its prediction, the streaming client 164 will eventually request a predicted block in due course even if the streaming client 164 does not request the predicted block in response to receiving the identifying data.
  • FIG. 4 depicts a flowchart of an exemplary method for predictive streaming according to an embodiment.
  • the flowchart starts at module 180 with requesting a first block.
  • the request may originate from a streaming client.
  • the flowchart continues at module 182 with receiving the request for the first block.
  • the receipt may be at a streaming server.
  • the streaming server may log the request.
  • the flowchart continues at module 184 with checking a block request database.
  • the server may check the block request database.
  • the block request database may include the logged requests.
  • the flowchart continues at module 186 with predicting a second block request based on the block request database.
  • the server may use a prediction engine to predict the second block request.
  • the prediction may be based upon block requests for the first block followed by block requests for the second block from a streaming client.
  • the flowchart continues at module 188 with piggybacking a second block ID on a reply that includes block data associated with the first block.
  • the flowchart continues at module 190 with receiving the first block data and the second block ID.
  • the flowchart continues at module 192 with determining whether to request the second block based on local factors.
  • the flowchart continues at module 194 with requesting the second block. It is assumed for the purposes of example that local factors merit the request for the second block. For example, if streaming data associated with the second block is in a local disk cache, the second block may not be requested.
  • the flowchart continues at module 196 with receiving the request for the second block.
  • the flowchart continues at module 198 with sending a reply that includes block data associated with the second block.
  • the flowchart ends at module 200 with receiving the second block data. It should be noted that, though the flowchart may be thought of as depicting sequential small reads, as opposed to depicting building large blocks, the blocks could very well be large. This method and other methods are depicted as serially arranged modules. However, modules of the methods may be reordered or arranged for parallel execution as appropriate.
  • when a streaming server sends data associated with a predicted block (second block) request along with data associated with a requested block (first block), the first and second block data may be thought of as a "large block" if they are queued for sending back-to-back. However, in order to queue the first and second blocks back-to-back, the streaming server would have to receive the request for the second block and queue the second block data before the streaming server was finished sending the first block data. When data is sent as a large block, the streaming server can have nearly continuous output.
  • a block may be made large initially.
  • a large block may include data associated with an entire level.
  • data associated with the entire level is returned as continuous output from the streaming server.
  • large blocks can be built "on the fly" based on a first block request and predicted block requests. For example, if the streaming server sends identifying data for multiple predicted block requests piggybacked on a reply to a first block request, along with streaming data associated with the first block, the streaming client can consecutively request some or all of the multiple predicted blocks. If the requests are made in relatively rapid succession, the streaming server may queue streaming data associated with two predicted blocks more rapidly than the streaming server sends streaming data associated with one predicted block. In this way, the streaming server maintains a full queue and, accordingly, can keep the pipe saturated.
  • FIG. 5 depicts a flowchart of an exemplary method for predictive streaming according to an embodiment.
  • the flowchart starts at module 202 with requesting a first block.
  • the flowchart continues at module 204 with receiving the request for the first block.
  • the streaming server may log the request.
  • the flowchart continues at module 206 with predicting block requests based on the first block request.
  • the server may use a prediction engine to predict the block requests.
  • the prediction engine may make use of a block request database that includes parameters derived from, for example, prior block requests for the first block.
  • the flowchart continues at module 208 with sending streaming data associated with the first block and identifying data for the predicted blocks.
  • the streaming server may piggyback the identifying data on a reply to the request for the first block.
  • the flowchart continues at module 210 with receiving the streaming data and the identifying data.
  • the streaming client may receive the data.
  • the flowchart continues at module 212 with requesting one or more predicted blocks in succession.
  • the streaming client may determine, based upon local factors, which of the blocks associated with the identifying data are to be requested.
  • the streaming client may or may not order the predictive block requests according to factors associated with the blocks. For example, the streaming client may make predictive block requests in the order of probability. The probability for each block may be sent along with the identifying data or derived locally.
  • the flowchart continues at module 214 with receiving the predictive block requests.
  • the flowchart continues at module 216 with saturating the output pipe with streaming data associated with the multiple blocks.
  • the streaming server may saturate the output pipe by queuing streaming data as fast as or faster than the streaming data is sent to the streaming client.
  • the streaming server may maintain output pipe saturation even if it queues streaming data more slowly than the streaming data is sent to the streaming client, as long as the queue remains non-empty.
  • the streaming server may or may not maintain output pipe saturation for an entire streaming session.
  • when the streaming server saturates the pipe, it is presumed that the streaming server does not maintain saturation throughout the entire streaming session, though total saturation may be possible.
  • when the streaming server saturates the pipe, the pipe is not necessarily perfectly saturated.
  • Streaming data is not necessarily sent in a perfectly continuous stream.
  • a continuous stream of data means data that is sent from a non-empty queue.
  • if the queue is empty for a period of time, first and second streams of data sent before and after the period of time are not referred to as continuous.
  • the flowchart ends at module 218 with receiving the streaming data.
  • the streaming client may receive the streaming data. Though the streaming server saturates the pipe, the streaming client may or may not receive a continuous stream of data, depending on various factors that are well-understood in the art of data transmission, such as network delays, and are, therefore, not described herein.
  • a streaming client may rapidly request blocks in a strictly sequential manner. For example, if a sequence of blocks is associated with a video clip, the blocks are reasonably likely to be served, in order, one after the other.
  • a streaming server may recognize a sequential pattern of block requests, either because the streaming server is provided with the pattern, or because the streaming server notices the pattern from block requests it receives over time.
  • the streaming server may send the pattern to the streaming client in response to a block request that has been found to normally precede block requests for blocks that are identified in the pattern.
  • the streaming client may predictively request additional blocks in anticipation of the streaming application needing them.
  • the streaming client may make these requests in relatively rapid succession, since each request can be made using the pattern the streaming client received from the streaming server.
  • the streaming client may or may not wait for a reply to each request.
  • Predictively requested blocks may be stored in a local cache until they are needed.
  • the parameters that control recognition of the pattern, as well as how aggressive the read-ahead schedule should be, can be independently specified at the file, file extension, directory, and application level. In addition, it can be specified that some blocks are predictively downloaded as soon as a file is opened (even before seeing a read) and that the open call itself should wait until the initial blocks have been downloaded.
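  • One way such per-level settings could be represented is sketched below; the keys and values are hypothetical, since the embodiment does not specify a configuration format.

```python
# Hypothetical per-level read-ahead settings, illustrating the levels
# named above (file, file extension, directory, application). All keys
# and values are assumptions for illustration only.

READ_AHEAD_CONFIG = {
    "application": {"aggressiveness": 0.5},
    "directory": {"textures/": {"aggressiveness": 0.8}},
    "extension": {".dll": {"prefetch_on_open": True,        # download on file open
                           "block_open_until_done": True}}, # open waits for blocks
    "file": {"intro.dat": {"aggressiveness": 0.9}},
}

print(READ_AHEAD_CONFIG["extension"][".dll"])
```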
  • the pattern includes identifying data for each block represented in the pattern.
  • the patterns may be included in a block request database.
  • the streaming server may provide the streaming client with one or more patterns as a pattern database.
  • the streaming server may or may not provide the pattern database when the streaming client first requests streaming of a streaming application associated with the pattern database.
  • the streaming client may use the pattern database to predict which blocks to request based on block requests it intends to make. For example, if a first block request is associated with blocks in a pattern in the pattern database, the streaming client requests the first block and each of the blocks in the pattern in succession.
  • the pattern database may be included in a block request database.
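  • A minimal sketch of this client-side use of a pattern database follows (request_with_patterns, pattern_db, and send_request are assumed names):

```python
# Sketch: when a block the client intends to request heads a known
# pattern, request the whole pattern in rapid succession.

def request_with_patterns(first_block_id, pattern_db, send_request):
    send_request(first_block_id)
    for block_id in pattern_db.get(first_block_id, []):
        send_request(block_id)  # successive predictive requests

# Example: blocks 11, 12, 13 always follow block 10 (e.g., a video clip).
pattern_db = {10: [11, 12, 13]}
request_with_patterns(10, pattern_db, lambda b: print("requesting block", b))
```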
  • FIG. 6 depicts a conceptual view of a system 600 for providing block data and identifying a predicted block in response to a block request according to an embodiment.
  • the system 600 includes an input node 218, a streaming server 220, a request log 222, a prediction engine 224, a block request database 226, a streaming application 228, and an output node 230.
  • the input node 218 may be an interface for connecting the system 600 to other computer systems, or a logical node within the system 600 for providing block requests to the streaming server 220.
  • the input node 218 may or may not be treated as part of the streaming server 220.
  • the streaming server 220 may include software, firmware, hardware, or a combination thereof.
  • the streaming server 220 may include one or more dedicated processors or share one or more processors (not shown) with other local or remote components.
  • the request log 222 may be a log that records incoming traffic, as is well-known in the art of computer networking, or a dedicated log that only records block requests.
  • the request log 222 may be associated with the streaming application 228 or multiple streaming applications (not shown).
  • the request log 222 may be associated with one or more streaming clients (not shown), either individually, in the aggregate, or in some combination.
  • the request log may be dedicated to the streaming server 220, or associated with multiple streaming servers (not shown).
  • the request log may be local or remote with respect to the streaming server 220.
  • the prediction engine 224 may include software, firmware, hardware, or a combination thereof.
  • the prediction engine may include one or more dedicated processors or share one or more processors (not shown) with other local components, such as the streaming server 220, or remote components (not shown).
  • the prediction engine 224 may or may not be treated as part of the streaming server 220.
  • the block request database 226 may include software, firmware, hardware, or a combination thereof.
  • the block request database 226 may or may not be treated as part of the request log 222, part of the prediction engine 224, or part of the streaming server 220.
  • in an embodiment, the block request database 226 includes the request log 222.
  • the block request database 226 includes block request parameters.
  • the parameters may be stored in memory as constants or variables, or may act as or be environment variables.
  • the parameters may be locally or remotely available.
  • the parameters may be input manually, input automatically, or derived.
  • the parameters may or may not change over time depending upon inputs from a user or agent, block requests, or other factors.
  • the streaming application 228 is any application that is available for streaming.
  • the streaming application 228 may or may not be prepared for streaming in advance. While only the streaming application 228 is illustrated in FIG. 6, multiple streaming applications could be included in the system 600. The multiple streaming applications could be discretely managed (e. g., by allowing only predictions of blocks from the same streaming application as a requested block) or as a collection of blocks (e.g., a predicted block could be from a streaming application that is different from that of the requested block). The streaming applications could be local or remote.
  • the output node 230 may be an interface for connecting the system 600 to other computer systems, or a logical node within the system 600 for sending block requests from the streaming server 220. The output node 230 may or may not be treated as part of the streaming server 220. The output node 230 and the input node 218 may or may not be treated as components of an input/output node.
  • the input node 218 receives a block request from a streaming client (not shown).
  • the input node 218 provides the block request to the streaming server 220.
  • the streaming server 220 logs the block request in the request log 222.
  • the streaming server obtains a prediction from the prediction engine 224.
  • the logging of the request and the obtaining of a prediction need not occur in any particular order.
  • the streaming server 220 could obtain a prediction from the prediction engine 224 prior to (or without consideration of) the logged block request.
  • the prediction engine 224 checks the request log 222, performs calculations to represent data in the request log 222 parametrically, and updates the parameters of the block request database 226.
  • the prediction engine 224 checks the parameters of the block request database 226 in order to make a prediction as to subsequent block requests from the streaming client. The checking of the request log and checking of the parameters need not occur in any particular order. For example, the prediction engine 224 could check the parameters prior to checking the request log and updating the parameters.
  • the prediction engine 224 could check the request log and update the parameters as part of a routine updating procedure, when instructed to update parameters by a user or agent of the system 600, or in response to some other stimulus, such as an access to the streaming application 228.
  • the checking of the log request and updating of the parameters may be thought of as a separate, and only indirectly related, procedure vis-a-vis the checking of parameters to make a prediction.
  • the prediction may be in the form of one or more block IDs, identifying data for one or more blocks, a pattern, or any other data that can be used by a streaming client to determine what blocks should be requested predictively.
  • when the streaming server 220 obtains the prediction, the streaming server 220 optionally accesses the requested block of the streaming application 228. Obtaining the prediction and accessing the requested block need not occur in any particular order and could overlap.
  • the prediction provided to the streaming server 220 may or may not be modified by the streaming server 220.
  • the prediction may include data that is used to "look up" identifying data.
  • the portion of the streaming server 220 that is used to look up identifying data may be referred to as part of the prediction engine 224.
  • Access to the requested block of the streaming application 228 is optional because the streaming server 220 could simply return an identifier of the predicted block without sending the predicted block itself. Indeed, it may be more desirable to send only a prediction because the recipient of the prediction (e.g., a streaming client) may already have received the block. If the recipient already received the block, and the block remains cached, there is probably no reason to send the block again. Accordingly, the streaming client, upon receiving the prediction, would not request the predicted block again.
  • the streaming server 220 can be referred to as obtaining identifying data from the prediction engine 224, where the identifying data can be used by, for example, the streaming client when making one or more predictive block requests.
  • the streaming server 220 provides data associated with the requested block, such as streaming data, and the prediction, such as identifying data, to the output node 230.
  • the data is sent from the output node 230 to, for example, a streaming client (not shown).
  • the prediction is piggybacked on the reply that includes, for example, the streaming data.
  • the prediction could be sent separately, either before, at approximately the same time as, or after the reply that includes, for example, the streaming data.
  • the streaming server 220 could access a predicted block of the streaming application 228 and send a reply that includes streaming data associated with the predicted block as part of the reply that includes the requested block data.
  • the streaming server 220 could send streaming data associated with the predicted block separately, before, at the same time as, or after sending the reply that includes the requested block data.
  • a prediction may or may not always be provided by the prediction engine 224. If no prediction is provided to the streaming server 220, the streaming server may simply provide streaming data associated with the requested block. In an exemplary embodiment, the prediction engine 224 may fail to provide a prediction if it does not have sufficient data to make a prediction. Alternatively, the prediction engine 224 may provide a prediction only if it meets a certain probability. For example, an aggressiveness parameter may be set to a cut-off threshold of, e.g., 0.5. If predictive certainty for a block does not meet or exceed the cut-off threshold, the prediction engine 224 may not provide the prediction for the block to the streaming server 220.
  • FIGS. 7A to 7C are intended to help illustrate how to predict a block request based on the requested block.
  • FIGS. 7A to 7C depict exemplary block request logs and parameters derived therefrom according to embodiments.
  • FIGS. 7A to 7C are not intended to illustrate preferred embodiments because there are many different items of data that could be recorded or omitted in the request log and many different parameters that could be derived from selected items of data.
  • FIG. 7A depicts an exemplary request log 232 and a block request database 234.
  • the request log 232 includes a listing of blocks in the order the blocks were requested from a streaming client.
  • the request log 232 is assumed to be associated with a single streaming client.
  • the request log 232 may or may not be associated with a single streaming application.
  • the block request database 234 includes parameters derived from the request log 232.
  • the parameters are derived only from entries in the request log 232. In alternative embodiments, the parameters could include default values or otherwise rely on data that is not included in the request log 232.
  • the request log 232 is assumed to include only the values shown in FIG. 7A.
  • a block request (the current block request) is considered for each of the blocks 3 to 8.
  • a block request for block 5 (the second block request) immediately follows the request for block 3 (the first block request). Since the second block request follows the request for block 3, a prediction can be made about whether the current block request (for block 3) will be followed by a request for block 5. Since, for the purposes of example, the parameters are derived only from the request log 232 (and no other data is considered), it might be assumed that a request for block 5 is 100% certain.
  • the predicted block parameters array for block 3 is [(5, 1.0)]. This can be interpreted to mean, following a request for block 3, the probability of a request for block 5 is 1.0. Of course, this is based on a small data sample and is, therefore, subject to a very large error. However, over time the probability may become more accurate.
  • Current block request for block 4: As illustrated in the request log 232, a block request for block 4 has not been made. Accordingly, no prediction can be made.
  • Current block request for block 5: For reasons similar to those given with respect to the request for block 3, the predicted block parameters array for block 5 is [(8, 1.0)].
  • a block request for block 8 has been made twice before.
  • the block request following a request for block 8 was 3 one time and 6 the other time. It may be assumed, for exemplary purposes, that the request for block 3 or 6 is equally probable.
  • the predicted block parameters array for block 8 is [(3, 0.5), (6, 0.5)]. This can be interpreted to mean, following a request for block 8, the probability of a request for block 3 is 0.5 and the probability of a request for block 6 is 0.5. Since the requests for blocks 3 and 6 are considered equally probable, data associated with both block 3 and block 6 may be included in a reply. Alternatively, neither may be used. Also, an aggressiveness parameter may be set that requires a higher than 0.5 probability in order to be predicted.
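  • The derivation of these predicted block parameters arrays can be sketched as follows. The request log contents are assumed (the figure is not reproduced here) and are chosen to be consistent with the arrays described above.

```python
# Sketch of deriving predicted block parameters arrays from a request
# log by counting which block immediately follows which.
from collections import Counter, defaultdict

def derive_parameters(request_log):
    successors = defaultdict(Counter)
    for first, second in zip(request_log, request_log[1:]):
        successors[first][second] += 1  # second immediately follows first
    return {block: [(b, n / sum(c.values())) for b, n in c.items()]
            for block, c in successors.items()}

log = [3, 5, 8, 3, 5, 8, 6]  # assumed request order
print(derive_parameters(log))
# {3: [(5, 1.0)], 5: [(8, 1.0)], 8: [(3, 0.5), (6, 0.5)]}
```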
  • a higher-order predictor (e.g., a first-order predictor, as described later) may also be used when making predictions.
  • the returned list of predicted blocks may be constructed using all n-order predictors, and the best N predicted blocks, regardless of which n-order predictor it came from, may be returned in ranked order. If, for example, the first-order predictor gave a probability of 1.0 to predicted block 3, but the zero-order predictor gave a probability of 0.5, then block 3 would be included with a probability of 1.0, the max of the probabilities for that particular block.
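  • A sketch of that combination rule follows, with assumed predictor outputs: keep the max probability per block across all n-order predictors and return the best N in ranked order.

```python
# Sketch: combine candidates from all n-order predictors, keep the max
# probability per block, return the best N ranked by probability.

def best_predictions(predictor_outputs, n_best):
    # predictor_outputs: list of {block_id: probability}, one per predictor
    combined = {}
    for output in predictor_outputs:
        for block_id, p in output.items():
            combined[block_id] = max(p, combined.get(block_id, 0.0))
    ranked = sorted(combined.items(), key=lambda item: item[1], reverse=True)
    return ranked[:n_best]

zero_order = {3: 0.5, 6: 0.2}
first_order = {3: 1.0}  # first-order gives block 3 probability 1.0
print(best_predictions([zero_order, first_order], n_best=2))
# [(3, 1.0), (6, 0.2)] -- block 3 keeps the max of its probabilities
```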
  • FIG. 7B depicts an exemplary request log 236 and a block request database 238 according to another embodiment.
  • the request log 236 includes a listing of blocks in the order the blocks were requested from a streaming client, plus a time field that represents what time the blocks were sent by a streaming client. In an alternative, the time field may represent what time the blocks were received, logged, or otherwise managed.
  • the block request database 238 includes a predicted block parameters array with a time field, which represents the time difference between when a first block request and a second block request were sent.
  • the time field is 6:18:04, which is intended to mean 6 hours, 18 minutes, and 4 seconds. This is the difference between when block 3 and block 5 were sent, as shown in the request log 236.
  • the predicted block parameters array for block 8 is [(3, 0.5, 0:05:20), (6, 0.5, 5:10:15)]. This can be interpreted to mean that blocks 3 and 6 are equally likely to be requested following a request for block 8. However, since there is a time entry, weight can be given to the entry with the lower associated time difference.
  • weight may be given to block 3 (requested about 5 minutes after block 8) over block 6 (requested about 5 hours after block 8). Indeed, in this example, since the time between block 6 and block 8 is so long, a prediction engine could choose to assume that there really isn't a predictive relationship between blocks 6 and 8.
  • time-related predictive parameters may be ignored if they are too high.
  • the threshold value over which a time difference would result in a block request being ignored may be referred to as a temporal aggressiveness parameter.
  • if, for example, a temporal aggressiveness parameter is 1 hour, the predicted block parameters array for block 8 could be rewritten as [(3, 1.0, 0:05:20)]. That is, the values related to block 6 are ignored, since block 6 was received more than 1 hour after block 8, according to the request log 236.
  • the predicted block parameters array for block 8 could be rewritten as [(3, 0.5, 0:05:20)], which is basically the same, but the probability is not recalculated when the values related to block 6 are ignored.
  • the predicted block parameters array for block 3 could be rewritten as [ ], since there are no block requests within 1 hour (the temporal aggressiveness threshold, in this example) of the block request for block 3.
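  • A sketch of this temporal filter follows, showing both the recalculated and non-recalculated variants described above (names and representation are assumptions):

```python
# Sketch: drop entries whose time difference exceeds the temporal
# aggressiveness threshold; optionally recalculate probabilities.

def filter_by_time(entries, max_seconds, renormalize=False):
    # entries: list of (block_id, probability, seconds_between_requests)
    kept = [e for e in entries if e[2] <= max_seconds]
    if renormalize and kept:
        total = sum(p for _, p, _ in kept)
        kept = [(b, p / total, t) for b, p, t in kept]
    return kept

block8 = [(3, 0.5, 5 * 60 + 20), (6, 0.5, 5 * 3600 + 10 * 60 + 15)]
print(filter_by_time(block8, max_seconds=3600))                    # [(3, 0.5, 320)]
print(filter_by_time(block8, max_seconds=3600, renormalize=True))  # [(3, 1.0, 320)]
```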
  • a prediction engine could keep chaining predicted blocks back through the prediction engine to get subsequent predicted blocks, under the assumption that the "first round" predicted blocks were correct.
  • the probabilities of subsequent predicted blocks may be multiplied by the probability of the original predicted block to accurately get a predicted probability for the secondary (ternary, etc.) blocks. This could continue until the probabilities fell below an aggressiveness threshold.
  • a limit may be placed on the max number of predictions returned. It should be noted that, in the case of a long chain of blocks that follow each other with probability 1.0, the first request may return a list of all subsequent blocks in the chain, which a streaming client can then "blast request" to keep the pipeline at the streaming server full.
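  • This chaining can be sketched as follows (assumed names): a walk that multiplies probabilities along the chain and stops at the aggressiveness threshold or the prediction limit.

```python
# Sketch: chain predicted blocks back through the predictor, multiplying
# probabilities, until they fall below the threshold or a max count.

def chain_predictions(block_id, db, threshold=0.5, max_predictions=10):
    results, frontier = [], [(block_id, 1.0)]
    while frontier and len(results) < max_predictions:
        current, p_current = frontier.pop(0)
        for nxt, p in db.get(current, []):
            p_chained = p_current * p  # probability of the whole chain
            if p_chained >= threshold:
                results.append((nxt, p_chained))
                frontier.append((nxt, p_chained))
    return results[:max_predictions]

# A chain 1 -> 2 -> 3 with probability 1.0 returns the whole chain at once.
db = {1: [(2, 1.0)], 2: [(3, 1.0)], 3: []}
print(chain_predictions(1, db))  # [(2, 1.0), (3, 1.0)]
```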
  • FIG. 7C depicts an exemplary request log 240 and a block request database 242 according to another embodiment.
  • FIG. 7C is similar to FIG. 7B, but entries in the request log 240 include a streaming client associated with a block request.
  • the streaming client is assumed to have requested the block with which the streaming client is associated.
  • each entry in the request log 240 is associated with a given streaming client (for example, client 1).
  • client 1 requests block 3 after requesting block 8, and client 2 requests block 3 after requesting block 8. Using this principle, predictive parameters can adapt over time based on the block requests of various clients.
  • the principles for calculating predictive parameters are the same as described with reference to FIG. 7B, but predictive accuracy can be improved by considering the block requests of multiple clients, as shown in the block request database 242 of FIG. 7C.
  • the times are not actually recorded as depicted in FIG. 7C. Rather, the times are used as a filter to decide whether one block truly follows another block in terms of predictive value.
  • the entry for block 3 may be more similar to the example entry 242-1, which has a value of [(4, 1.0, 0:07:05)], than what is depicted, for illustrative purposes, in the block request database 242.
  • the block request database 242 does not include any predictive data for block 5.
  • the entry for block 4 may be the empty set, and the entry for block 8 may be similar to the example entry 242-2. This alternative, where the probabilities are omitted for rejected blocks, may be more desirable since it omits data that is probably irrelevant for predictive purposes.
  • FIGS. 8A and 8B depict request logs and block request databases for use with zero-order and first-order predictions.
  • a streaming server builds up the zero-, first-, or higher-order probabilities over time based on which blocks have been requested while streaming an application.
  • FIG. 8A depicts a request log 244 and a block request database 246.
  • Each block associated with a streaming application has a probability of being requested when streaming the streaming application that is based on how many times the block is requested when streaming the streaming application, compared to the total number of times the streaming application has been streamed.
  • For the purposes of example, one instance, from beginning to end, of streaming a streaming application may be referred to as a session. In a given session, one or more blocks, in one or more combinations, may be requested.
  • a session may be thought of in terms of the block requests made over the course of streaming a streaming application.
  • Session 1 consists of three block requests: 7, 8, and 3;
  • Session 2 consists of three block requests: 8, 3, and 4;
  • Session 3 consists of three block requests: 5, 8, and 6;
  • Session 4 consists of two block requests: 8 and 3;
  • Session 5 consists of two block requests: 8 and 3.
  • a probability associated with a block request may be determined by counting the number of sessions in which a block is requested and dividing that count by the total number of sessions (in this case, five).
  • Block 3, which is included in Sessions 1, 2, 4, and 5, has a zero-order probability of 0.8.
  • Blocks 4, 5, 6, and 7 each have a zero-order probability of 0.2 because they are included only in Sessions 2, 3, 3, and 1, respectively.
  • because Block 8 is included in each of Sessions 1-5, Block 8 has a zero-order probability of 1.0.
  • probabilities are constructed over multiple sessions by multiple users to obtain composite probabilities; a sketch of the zero-order calculation follows.
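A minimal Python sketch of the zero-order calculation, using the five example sessions above (the session contents are from the example; the representation is an illustrative assumption):

    sessions = [
        {7, 8, 3},   # Session 1
        {8, 3, 4},   # Session 2
        {5, 8, 6},   # Session 3
        {8, 3},      # Session 4
        {8, 3},      # Session 5
    ]

    def zero_order(sessions):
        """Zero-order probability: sessions containing a block / total sessions."""
        counts = {}
        for session in sessions:
            for block in session:
                counts[block] = counts.get(block, 0) + 1
        return {block: count / len(sessions) for block, count in counts.items()}

    probs = zero_order(sessions)
    print(probs[3], probs[4], probs[8])   # 0.8 0.2 1.0, matching the text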
  • with a first-order predictor, once the first block request for one of the rooms of, for example, a game is seen, the subsequent blocks for that room will have first-order probabilities near 100%, and so the multiple blocks for that room can be predictively downloaded.
  • FIG. 8B is intended to illustrate first order probabilities.
  • FIG. 8B depicts a request log 248 and a block request database 250.
  • a first block request is used as context.
  • the probability of a second block request may be calculated as the probability, based upon previous sessions, that the second block request follows (or occurs during the same session as) the first block request. Since the probability of the second block request is based on one block of context, it may be referred to as a first-order probability.
  • First-order predictions may or may not be more accurate than zero-order predictions, and both zero- and first-order probabilities may be used, individually or in combination, when making a block request prediction.
  • the block request database 250 is organized to show the first-order probability for a block, given the context of another block.
  • the block request database 250 is not intended to illustrate a data structure, but rather to simply illustrate, for exemplary purposes, first-order probability.
  • the first-order probability of a block request (for the block in the Block column) is depicted for each context block.
  • the first-order probability of a block request for Block 3 is 1.0 with Block 4 as context, 0 with Block 5 as context, 0 with Block 6 as context, 1.0 with Block 7 as context, and 0.8 with Block 8 as context.
  • as shown in the request log 248, when blocks 5 or 6 are requested (Session 3), block 3 is not requested. Accordingly, the first-order probability for block 3 with either block 5 or 6 as context is 0. When blocks 4 or 7 have been requested (Sessions 2 and 1, respectively), block 3 is also requested. Accordingly, the first-order probability for block 3 with block 4 (or block 7) as context is 1.0. When block 8 has been requested (all five sessions), block 3 is also requested 4 out of 5 times. Accordingly, the first-order probability for block 3 with block 8 as context is 0.8. First-order probabilities for each block can be derived in a similar manner. The probabilities are shown in the block request database 250.
  • Second- and higher-order probabilities can be determined in a similar manner to that described with reference to FIG. 8B, but with multiple block requests as context. A sketch of the first-order calculation follows.
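A minimal Python sketch of the first-order calculation over the same style of session data; the function and representation are illustrative assumptions:

    # Sessions from FIG. 8A, repeated from the zero-order sketch above.
    sessions = [{7, 8, 3}, {8, 3, 4}, {5, 8, 6}, {8, 3}, {8, 3}]

    def first_order(sessions, block, context):
        """P(block | context): the fraction of sessions containing `context`
        that also contain `block`."""
        context_sessions = [s for s in sessions if context in s]
        if not context_sessions:
            return 0.0
        return sum(1 for s in context_sessions if block in s) / len(context_sessions)

    # Reproducing the FIG. 8B values for block 3:
    print(first_order(sessions, 3, 4))   # 1.0 (block 4 as context)
    print(first_order(sessions, 3, 5))   # 0.0 (block 5 as context)
    print(first_order(sessions, 3, 8))   # 0.8 (block 8 as context)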
  • a streaming server can use the zero- or higher-order probabilities to determine which blocks have probabilities that exceed a "predictive download aggressiveness" parameter and piggyback those block IDs, but not the block data itself, onto the block data returned to a streaming client.
  • the client can then decide which blocks it will predictively download, based on local factors. Local factors may include client system load, contents of the local disk cache, bandwidth, memory, or other factors. It should be noted that in certain embodiments, it may not be possible to incorporate all incoming block requests into the request log because block requests are being "filtered" by the local disk cache. This may result in the request log including only those block requests for blocks that were not in the local disk cache, which will throw the predictors off. A sketch of the piggybacking and the client-side filtering appears below.
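A minimal Python sketch of piggybacking predicted block IDs (not data) onto a reply, and of the client filtering those IDs against a local factor such as its disk cache; `predict`, `read_block`, the reply fields, and the cache representation are all illustrative assumptions:

    def build_reply(block_id, predict, threshold, read_block):
        """Serve the requested block's data and piggyback predicted IDs only."""
        predicted_ids = [b for b, p in predict(block_id) if p > threshold]
        return {"block": block_id,
                "data": read_block(block_id),      # data for the requested block
                "predicted_ids": predicted_ids}    # IDs only; no block data

    def blocks_to_fetch(reply, local_cache):
        """Client side: request only predicted blocks not already cached."""
        return [b for b in reply["predicted_ids"] if b not in local_cache]

    # Example: blocks 4 (p=0.9) and 9 (p=0.3) are predicted to follow block 3.
    reply = build_reply(3, lambda b: [(4, 0.9), (9, 0.3)],
                        threshold=0.5, read_block=lambda b: b"...")
    print(blocks_to_fetch(reply, local_cache={4}))   # [] -- block 4 is cached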
  • a streaming server may indicate to a streaming client that the server is interested in collecting prediction data.
  • the streaming client would send a separate data stream with complete block request statistics to the streaming server.
  • the separate data stream may include all actual block requests made by the application, regardless of whether that request was successfully predicted or the requested block is stored in a local cache of the streaming client. A sketch of such a reporter follows.
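A minimal Python sketch, under assumed message and transport names, of the client side of such a statistics stream; nothing here is taken from the source beyond the behavior described above:

    class StatsReporter:
        """Client-side reporter for the separate statistics stream."""
        def __init__(self, send):
            self.send = send      # transport callable (e.g., a socket write); assumed
            self.enabled = False

        def on_server_notice(self, notice):
            # The server indicated an interest in collecting prediction data.
            self.enabled = notice.get("collect_prediction_data", False)

        def on_block_request(self, block_id, served_from_cache):
            # Report every actual request, even local disk cache hits, so
            # cache filtering does not hide requests from the predictors.
            if self.enabled:
                self.send({"type": "block_request_stat",
                           "block": block_id,
                           "cache_hit": served_from_cache})

    reporter = StatsReporter(send=print)
    reporter.on_server_notice({"collect_prediction_data": True})
    reporter.on_block_request(3, served_from_cache=True)   # reported despite the hit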
  • FIG. 9 depicts a flowchart of an exemplary method for updating predictive parameters.
  • the flowchart starts at module 252 with informing a streaming client of an interest in collecting prediction data. This request may or may not be included in a token file. The request may be sent to a streaming client when the streaming client requests streaming of an application. After module 252, the flowchart continues along two paths (254-1 and 254-2). The modules 254 may occur simultaneously, or one may occur before the other.
  • the flowchart continues at module 254-1 with receiving block requests.
  • the streaming server may receive the block requests from the streaming client.
  • the flowchart continues at module 256-1 with sending data associated with the block requests, including predictions, if any.
  • the flowchart continues at decision point 258-1, where it is determined whether the session is over. If the session is not over, the flowchart continues from module 254-1 for another block request. Otherwise, if the session is over, the flowchart ends for modules 254-1 to 258-1.
  • module 254-2 receives block request statistics, including block requests for blocks stored in a local disk cache.
  • Module 254-2 may or may not begin after module 258-1 ends.
  • Module 254-2 may or may not continue after module 258- 1 ends.
  • the flowchart continues at module 256-2 with updating predictive parameters using the block request statistics.
  • the predictive parameters may be used at module 256-1 to provide predictions.
  • the flowchart continues at decision point 258-2, where it is determined whether the session is over. If the session is not over, the flowchart continues from module 254-2 for more block request statistics. Otherwise, if the session is over, the flowchart ends for modules 254-2 to 258-2.
  • a streaming client may indicate to a streaming server that the streaming client is interested in receiving predictive block data IDs. Then the streaming server would piggyback the IDs, as described previously.
  • FIG. 10 depicts a flowchart of an exemplary method for providing predictions to a streaming client. The flowchart begins at module 260 with receiving notice that a streaming client is interested in receiving predictive block data IDs. The streaming client may send the notice when requesting streaming of an application from a streaming server. The flowchart continues at module 262 with receiving a block request. The streaming server may receive the block request from the streaming client. The flowchart continues at decision point 264 where it is determined whether a prediction is available.
  • a prediction is available (264-Y)
  • the flowchart continues at module 266 with piggybacking one or more predicted block IDs on a reply to the block request, and at module 268 with sending the reply to the block request.
  • a prediction is not available (264-N)
  • the flowchart continues at module 268 with sending the reply to the block request (with no predicted block ID). In either case, the flowchart continues at decision point 270, where it is determined whether the session is over. If the session is not over (270-N), then the flowchart continues at module 262, as described previously. If, on the other hand, the session is over (270-Y), then the flowchart ends.
  • the streaming server may send the block request database to the client along with an initial token file so that the client can do all of the predictive calculations.
  • FIG. 11 depicts a flowchart of an exemplary method for providing predictive capabilities to a streaming client.
  • the flowchart begins at module 272 with sending a block request database to a streaming client.
  • the streaming client may or may not have requested the block request database from a streaming server.
  • the block request database may be sent in response to a request for streaming of an application.
  • the flowchart continues at module 274 with receiving block requests, including predictive block requests, from the streaming client. Since the streaming client has the block request database, the streaming client is able to make predictions about which blocks it should request in advance.
  • the streaming server need not piggyback IDs in this case, since the streaming client makes the decision and requests the blocks directly.
  • the flowchart ends at module 276 with sending data associated with the block requests to the streaming client.
  • the streaming client may or may not send an alternate data stream with actual block request patterns to the streaming server.
  • Modules 274 and 276 may occur intermittently or simultaneously.
  • FIG. 12 depicts a flowchart of an exemplary method for receiving predictive data from a streaming client.
  • the flowchart begins at module 278 with receiving block requests.
  • the block requests may be from a streaming client.
  • the flowchart continues at module 280 with sending data associated with the block requests, including predictions, if any, to the streaming client.
  • the flowchart continues at decision point 282, where it is determined whether the session is over. If the session is not over (282-N), then the flowchart continues at module 278, as described previously. If, on the other hand, the session is over (282-Y), then the flowchart continues at module 284 with receiving predictive data from the streaming client.
  • the predictive data may include a request log, a block request history, or one or more parameters derived from block request data.
  • the flowchart ends at module 286 with updating a block request database using the predictive data. In this way, the streaming server can adapt the block request database in response to each session. A sketch of such an update follows.
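A minimal Python sketch of folding a client-reported session into a simplified zero-order block request database; the field names and layout are illustrative assumptions, not the database format from the figures:

    def update_block_request_database(db, session_blocks):
        """Fold one session's reported block requests into the database."""
        db["sessions"] = db.get("sessions", 0) + 1
        counts = db.setdefault("counts", {})
        for block in set(session_blocks):     # count each block once per session
            counts[block] = counts.get(block, 0) + 1

    def zero_order_probability(db, block):
        return db.get("counts", {}).get(block, 0) / max(db.get("sessions", 0), 1)

    db = {}
    update_block_request_database(db, [7, 8, 3])   # e.g., Session 1 of FIG. 8A
    update_block_request_database(db, [8, 3, 4])
    print(zero_order_probability(db, 8))           # 1.0 after two sessions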
  • a streaming server can maintain a block request database that keeps track of "runs." Runs are sets of blocks for which the block requests occur closely spaced in time. For example, if each block in a sequence is requested within, say, 0.5 second of the preceding block request in the sequence, the sequence of blocks may be referred to as a run. Runs can be used to efficiently utilize memory resources by recording probabilities on the run level instead of per block. Runs can also be used to reduce the amount of predictive download data. For example, for a level-based game, a user may download a first block, then the rest of the game will be predictively downloaded for each level, since the probabilities of downloading each level are, for the purposes of this example, nearly 100%.
  • predictive downloads can be "shut off" until the start of blocks for the second level begins. For example, if the blocks at the beginning of the run, which trigger or signal the run, are detected, the block IDs for the rest of the run may be sent to the streaming client, which then chain-requests them. However, those subsequent block requests do not trigger any further run. So, the predictive downloads cause the blocks in the middle of a run not to act as a triggering prefix of another run; no further predictions are made from the middle of a run.
  • FIG. 13 is intended to illustrate maintaining a block request database that includes runs.
  • FIG. 13 depicts a request log 288 and a block request database 290. Parameters associated with blocks and runs are omitted from the block request database 290 so as to more clearly focus on the point being illustrated.
  • the omitted parameters could be any parameters derived from the request log for the purpose of facilitating the prediction of future block requests (e.g., first-order probabilities).
  • the request log 288 includes two sessions, Session 1 and Session 2.
  • in Session 1, a series of blocks is received in succession.
  • a run parameter (not shown) is set to one second. If blocks are received within one second of one another, they may be considered part of a run.
  • the blocks 17-25 are received within one second of one another, and the blocks 15 and 16 are received more than one second from each of the blocks 17-25.
  • the blocks 17-25, which are a run, can be grouped together, as illustrated in block request database 290-1.
  • in Session 2, a different series of blocks is received in succession.
  • the blocks 17-19 and 26-30 are received within one second of one another so, in this example, they can be considered a run.
  • the run can be grouped together, as illustrated in block request database 290-2.
  • each block of the run may be downloaded in succession without requiring individual predictions.
  • each successive block of a run may be given an effective probability of 1.0, which guarantees favorable predictive treatment regardless of the aggressiveness threshold (assuming the threshold allows for some prediction).
  • a streaming client may receive an identifier for the run and request successive blocks in the run, using the identifier to identify the successive blocks. A sketch of run detection follows.
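A minimal Python sketch of grouping timestamped block requests into runs, using the one-second run parameter of the FIG. 13 example; the request-log layout is an illustrative assumption:

    def detect_runs(requests, gap=1.0):
        """requests: list of (timestamp_seconds, block_id) pairs, in order.
        A request within `gap` seconds of the previous one extends the run."""
        runs, current = [], []
        last_time = None
        for t, block in requests:
            if last_time is not None and t - last_time > gap:
                runs.append(current)      # gap exceeded; close the current run
                current = []
            current.append(block)
            last_time = t
        if current:
            runs.append(current)
        return runs

    # Blocks 17-20 arrive within a second of one another; 15 and 16 do not.
    log = [(0.0, 15), (2.5, 16), (5.0, 17), (5.3, 18), (5.9, 19), (6.4, 20)]
    print(detect_runs(log, gap=1.0))   # [[15], [16], [17, 18, 19, 20]]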

Abstract

A technique for predictive streaming involves receiving a request for a block associated with a streaming application and serving data associated with the block. A block request database is checked to predict what block is likely to be requested next based upon prior block request data. The predicted block may be identified when serving the data associated with the requested block. A system developed according to the technique includes a streaming server, a block request database, and a prediction engine that uses the block request database to predict block requests. The streaming server provides data associated with the predicted block request.

Description

SYSTEM AND METHOD FOR PREDICTIVE STREAMING
BACKGROUND
Software streaming involves downloading small pieces of files as the pieces are needed by the program being streamed. These small pieces may be referred to as blocks. A streaming client sends requests for blocks, as they are needed, up to a streaming server, which sends back streaming data that is associated with the requested block. Sending a request and receiving the streaming data may cause delays that can slow down the streamed program.
There are many problems associated with streaming software that it would be advantageous to negate, work around, or reduce. For example, predicting blocks for streaming has not been satisfactorily addressed.
SUMMARY
A technique for predictive streaming involves receiving a request for a first block of a streaming application, checking a block request database, predicting a second block request based on the block request database, and sending, in response to the request, data associated with the first block and data associated with the second block. A block may be an arbitrarily large portion of a streaming application. The block request database includes probabilities or means for determining probabilities that the second block will be requested given that the first block has been requested. One or more factors may be considered when determining the probability of a request for the second block. Data sent in response to the request may include data associated with the first block and data associated with the second block. The data associated with the first block and the data associated with the second block may not be analogous. In an embodiment, the data associated with the first block is responsive to a block request for the first block, while the data associated with the second block is data sufficient to facilitate making a request for the second block.
In an embodiment, the technique may further include predicting block requests based on the block request database, then sending data associated with the block requests. In another embodiment, the data associated with the second block includes data sufficient to render a request for the second block unnecessary. In another embodiment, the technique further includes piggybacking the data associated with the second block on a reply to the request for the first block. In another embodiment, the technique further includes logging the request for the first block and updating the block request database to incorporate data associated with the logged request. In another embodiment, the technique further includes setting an aggressiveness parameter, wherein the data associated with the second block is sent when a probability of the second block request is higher than the aggressiveness parameter.
A system constructed according to the technique may include a processor, a block request database that includes predictive parameters, a prediction engine that is configured to check the block request database and predict a second block request for a second block based upon a first block request for a first block and predictive parameters associated with the first block, and a streaming server. The streaming server may be configured to obtain the prediction about the second block request from the prediction engine, include data associated with the second block in a response to the first block request, in addition to data associated with the first block, and send the response in reply to the first block request.
In an embodiment, the data associated with the second block is sufficient to identify the second block so as to facilitate making the second block request. In another embodiment, the data associated with the second block includes data sufficient to render the second block request unnecessary. In another embodiment, the streaming server is further configured to piggyback the data associated with the second block on a reply to the first block request. In another embodiment, the system includes a request log, wherein the streaming server is further configured to log the first block request in the request log. The prediction engine may be further configured to update the block request database according to the request log.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts a networked system for use in an exemplary embodiment;
FIG. 2 depicts a computer system for use in the system of FIG. 1;
FIG. 3 depicts a portion of the computer system of FIG. 2 and components of the system of FIG. 1;
FIGS. 4 and 5 depict flowcharts of exemplary methods for predictive streaming according to embodiments;
FIG. 6 depicts a conceptual view of a system 600 for providing streaming data and identifying a predicted block in response to a block request according to an embodiment;
FIGS. 7A to 7C and 8A to 8B depict exemplary request logs and block request databases according to embodiments;
FIGS. 9 to 12 depict exemplary methods according to alternative embodiments;
FIG. 13 depicts an exemplary request log and block request database according to an embodiment.
DETAILED DESCRIPTION
Parametric predictive streaming involves maintaining parameters (parametric) to predict (predictive) which blocks will be requested by or served to a streaming client (streaming). Parametric predictive streaming can improve pipe saturation with large reads or facilitate rapid provision of sequential small reads. Parametric predictive streaming may, in an exemplary embodiment, also be adaptive. Parametric predictive adaptive streaming involves changing parameters over time as access patterns are learned.
The following description of FIGS. 1-3 is intended to provide an overview of computer hardware and other operating components suitable for performing the methods of the invention described herein, but is not intended to limit the applicable environments. Similarly, the computer hardware and other operating components may be suitable as part of the apparatuses of the invention described herein. The invention can be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
Fig. 1 depicts a networked system 100 that includes several computer systems coupled together through a network 102, such as the Internet. The term "Internet" as used herein refers to a network of networks which uses certain protocols, such as the TCP/IP protocol, and possibly other protocols such as the hypertext transfer protocol (HTTP) for hypertext markup language (HTML) documents that make up the World Wide Web (the web). The physical connections of the Internet and the protocols and communication procedures of the Internet are well known to those of skill in the art.
The web server 104 is typically at least one computer system which operates as a server computer system and is configured to operate with the protocols of the world wide web and is coupled to the Internet. The web server system 104 can be a conventional server computer system. Optionally, the web server 104 can be part of an ISP which provides access to the Internet for client systems. The web server 104 is shown coupled to the server computer system 106 which itself is coupled to web content 108, which can be considered a form of a media database. While two computer systems 104 and 106 are shown in Fig. 1, the web server system 104 and the server computer system 106 can be one computer system having different software components providing the web server functionality and the server functionality provided by the server computer system 106, which will be described further below.
Access to the network 102 is typically provided by Internet service providers (ISPs), such as the ISPs 110 and 116. Users on client systems, such as client computer systems 112, 118, 122, and 126, obtain access to the Internet through the ISPs 110 and 116. Access to the Internet allows users of the client computer systems to exchange information, receive and send e-mails, and view documents, such as documents which have been prepared in the HTML format. These documents are often provided by web servers, such as web server 104, which are referred to as being "on" the Internet. Often these web servers are provided by the ISPs, such as ISP 110, although a computer system can be set up and connected to the Internet without that system also being an ISP.
Client computer systems 112, 118, 122, and 126 can each, with the appropriate web browsing software, view HTML pages provided by the web server 104. The ISP 110 provides Internet connectivity to the client computer system 112 through the modem interface 114, which can be considered part of the client computer system 112. The client computer system can be a personal computer system, a network computer, a web TV system, or other computer system. While Fig. 1 shows the modem interface 114 generically as a "modem," the interface can be an analog modem, isdn modem, cable modem, satellite transmission interface (e.g. "direct PC"), or other interface for coupling a computer system to other computer systems.
Similar to the ISP 110, the ISP 116 provides Internet connectivity for client systems 118, 122, and 126, although as shown in Fig. 1, the connections are not the same for these three computer systems. Client computer system 118 is coupled through a modem interface 120 while client computer systems 122 and 126 are part of a LAN 130.
Client computer systems 122 and 126 are coupled to the LAN 130 through network interfaces 124 and 128, which can be ethernet network or other network interfaces. The LAN 130 is also coupled to a gateway computer system 132 which can provide firewall and other Internet-related services for the local area network. This gateway computer system 132 is coupled to the ISP 116 to provide Internet connectivity to the client computer systems 122 and 126. The gateway computer system 132 can be a conventional server computer system.
Alternatively, a server computer system 134 can be directly coupled to the LAN 130 through a network interface 136 to provide files 138 and other services to the clients 122 and 126, without the need to connect to the Internet through the gateway system 132.

FIG. 2 depicts a computer system 140 for use in the system 100 (FIG. 1). The computer system 140 may be a conventional computer system that can be used as a client computer system or a server computer system or as a web server system. Such a computer system can be used to perform many of the functions of an Internet service provider, such as ISP 110 (FIG. 1). The computer system 140 includes a computer 142, I/O devices 144, and a display device 146. The computer 142 includes a processor 148, a communications interface 150, memory 152, display controller 154, non-volatile storage 156, and I/O controller 158. The computer system 140 may be coupled to or include the I/O devices 144 and display device 146.
The computer 142 interfaces to external systems through the communications interface 150, which may include a modem or network interface. It will be appreciated that the communications interface 150 can be considered to be part of the computer system 140 or a part of the computer 142. The communications interface can be an analog modem, isdn modem, cable modem, token ring interface, satellite transmission interface (e.g. "direct PC"), or other interfaces for coupling a computer system to other computer systems.
The processor 148 may be, for example, a conventional microprocessor such as an Intel Pentium microprocessor or Motorola power PC microprocessor. The memory 152 is coupled to the processor 148 by a bus 160. The memory 152 can be dynamic random access memory (DRAM) and can also include static RAM (SRAM). The bus 160 couples the processor 148 to the memory 152, also to the non-volatile storage 156, to the display controller 154, and to the I/O controller 158.
The I/O devices 144 can include a keyboard, disk drives, printers, a scanner, and other input and output devices, including a mouse or other pointing device. The display controller 154 may control in the conventional manner a display on the display device 146, which can be, for example, a cathode ray tube (CRT) or liquid crystal display (LCD). The display controller 154 and the I/O controller 158 can be implemented with conventional well known technology.
The non-volatile storage 156 is often a magnetic hard disk, an optical disk, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory 152 during execution of software in the computer 142. One of skill in the art will immediately recognize that the terms "machine-readable medium" or "computer-readable medium" include any type of storage device that is accessible by the processor 148 and also encompass a carrier wave that encodes a data signal. The computer system 140 is one example of many possible computer systems which have different architectures. For example, personal computers based on an Intel microprocessor often have multiple buses, one of which can be an I/O bus for the peripherals and one that directly connects the processor 148 and the memory 152 (often referred to as a memory bus). The buses are connected together through bridge components that perform any necessary translation due to differing bus protocols.
Network computers are another type of computer system that can be used with the present invention. Network computers do not usually include a hard disk or other mass storage, and the executable programs are loaded from a network connection into the memory 152 for execution by the processor 148. A Web TV system, which is known in the art, is also considered to be a computer system according to the present invention, but it may lack some of the features shown in FIG. 2, such as certain input or output devices. A typical computer system will usually include at least a processor, memory, and a bus coupling the memory to the processor.
In addition, the computer system 140 is controlled by operating system software which includes a file management system, such as a disk operating system, which is part of the operating system software. One example of an operating system software with its associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Washington, and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system. The file management system is typically stored in the non- volatile storage 156 and causes the processor 148 to execute the various acts required by the operating system to input and output data and to store data in memory, including storing files on the non- volatile storage 156.
Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention, in some embodiments, also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language, and various embodiments may thus be implemented using a variety of programming languages.
FIG. 3 depicts a portion of the computer system 140 (FIG. 2) and components of the system 100 (FIG. 1). FIG. 3 depicts the computer system 140, a network 162, and a streaming client 164. The network 162 could be a global information network, such as the Internet, a local or wide area network (LAN or WAN), or some other intranet or network. For example, the network 102 (FIG. 1) could include the network 162. Alternatively, the LAN 130 (FIG. 1) could include the network 162. In another alternative, the computer system 140 may be physically or wirelessly coupled to the streaming client 164.
The streaming client 164 may be coupled to and accessible through the network 162. The streaming client 164 could be a software, firmware, or hardware module. Alternatively, the streaming client 164 could include some combination of software, firmware, or hardware components. The streaming client 164 may be part of a computer system, such as the computer system 140 (FIG. 2). The streaming client 164 may include a processor, a memory, and a bus that couples the processor to the memory. The streaming client 164, or a computer system associated with the streaming client 164, may make use of certain programs when executing a streaming application. For example, a streaming application may be intended for use with a specific version of DirectX™, Acrobat™, or QuickTime™, which typically is installed prior to executing the streaming application.
The computer system 140 includes a processor 166, a memory 168, and a bus 170 that couples the processor 166 to the memory 168. The memory 168 may include both volatile memory, such as DRAM or SRAM, and non-volatile memory, such as magnetic or optical storage. The memory 168 may also include, for example, environment variables. The processor 166 executes code in the memory 168. The memory 168 includes a streaming server 172, a block request database 174, a prediction engine 176, and one or more streaming applications 178. Some or all of the programs, files, or data of the computer system 140 could be served as Web content, such as by the server computer 106 (FIG. 1). The programs, files, or data could be part of a server computer on a LAN or WAN, such as the server computer 134 (FIG. 1).
The streaming server 172 may be configured to serve data associated with a block of a streaming application, such as one of the streaming applications 178, in response to a request for the block. The block request database 174 may include data associated with block requests received from one or more streaming clients, such as the streaming client 164. The block request database 174 may include a block request log, which is updated with each or with a subset of block requests. The block request database 174 may be client-specific or generally used by all or a subset of all streaming clients from which block requests are received. The block request database 174 may be application-specific or generally associated with all or a subset of all of the streaming applications 178. The block request database 174 may include a block request history that has been derived from data associated with block requests received over time, from some original default values, or from data input by, for example, a user or software agent that administers the computer system 140. The block request database 174 may include parameters derived from individual block requests, a block request log, or a block request history.
The prediction engine 176 is configured to predict one or more blocks, if any, the streaming client 164 will request. The term "engine," as used herein, generally refers to any combination of software, firmware, hardware, or other component that is used to effect a purpose. The prediction engine 176 may include an aggressiveness parameter (not shown). If the aggressiveness is high, the prediction engine 176 is more likely to make a prediction than if the aggressiveness is low. In an exemplary embodiment, the aggressiveness parameter is associated with a probability threshold. The probability threshold may be a cut-off probability that results in predictions being made for blocks that are determined to have a probability of being requested that exceeds the cut-off probability. In another exemplary embodiment, the prediction engine 176 may predict any block with a probability of being requested that is higher than the probability threshold. In these examples, it should be noted that the aggressiveness parameter is low (i.e., the probability threshold is low) when aggressiveness is high. Nevertheless, for the purposes of linguistic clarity, hereinafter, it is assumed that the aggressiveness parameter is high when aggressiveness is high. For example, if the aggressiveness parameter is associated with a threshold probability, x, then the value of the aggressiveness parameter may be thought of as 1-x, which means the aggressiveness parameter is high when aggressiveness is high. This is for the purposes of linguistic clarity, and should not be construed as a limitation as to how the aggressiveness parameter is implemented. It should also be noted that the aggressiveness parameter is not limited to a threshold probability.
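A tiny Python sketch of that linguistic convention, assuming the aggressiveness value is expressed as 1 - x for an underlying probability threshold x; the names are illustrative:

    def should_predict(probability, aggressiveness):
        """High aggressiveness (near 1.0) means a low cut-off probability."""
        threshold = 1.0 - aggressiveness   # recover the cut-off probability x
        return probability > threshold

    print(should_predict(0.6, 0.9))   # True: aggressive (threshold 0.1)
    print(should_predict(0.6, 0.1))   # False: conservative (threshold 0.9)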
When the prediction engine 176 makes a prediction, the streaming server 172 may communicate the prediction to the streaming client 164. The prediction may include identifying data for one or more blocks, each of which met aggressiveness criteria, such as by having a predicted probability of being requested that is higher than a probability threshold. Identifying data, as used herein, is data sufficient to enable a streaming client to make a request for one or more blocks that are associated with the identifying data. The identifying data may, for example, include a block ID associated with a block. This identifying data may facilitate predictive requests for blocks before the blocks are actually needed at the streaming client 164. In an exemplary embodiment wherein the identifying data is provided to a streaming client 164, the streaming client 164 may determine whether to request the block associated with streaming data. This may help prevent downloading streaming data that the streaming client 164 doesn't really want or need. For example, if the streaming server 172 predicts that a streaming client 164 will need streaming data associated with a first block, and sends streaming data associated with the first block, the streaming client 164 is unable to choose whether to receive the streaming data based on, for example, local factors. On the other hand, if the streaming client 164 receives identifying data, the streaming client 164 may determine whether it actually wants the streaming data. In an alternative embodiment, the streaming server 172 may sometimes serve streaming data associated with a predicted block right away, possibly without even receiving a request from the streaming client 164 for the predicted block.
The prediction engine 176 may give a weight to a number of blocks. For example, the prediction engine 176 may predict a first block that is more likely to be requested than a second block (a probability parameter). The streaming client 164 may first request the blocks with, for example, the highest probability of being requested. As another example, the prediction engine 176 may give greater weight to a first block over a second block if the prediction engine 176 predicts the first block will be requested sooner than the second block (a temporal parameter). As another example, the prediction engine 176 may give greater weight to a first block over a second block if the first block is larger than the second block (a size parameter). The prediction engine 176 may weigh parameters associated with the streaming client 164, as well. For example, if the streaming client 164 has a limited buffer size, the aggressiveness parameter may be set lower. As another example, if the download bandwidth is low, the prediction engine 176 may place more weight on temporal or size parameters. Alternatively, the streaming client 164 may manage block requests according to local conditions, while the prediction engine 176 acts the same for all or a subset of all streaming clients.
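A sketch of how such weights might be combined to order predicted blocks; the coefficients and field names are invented for illustration and are not taken from the source:

    def rank_predicted_blocks(blocks, low_bandwidth=False):
        """blocks: dicts with 'id', 'probability', 'expected_delay' (seconds
        until the block is likely needed), and 'size' (bytes)."""
        temporal_weight = 2.0 if low_bandwidth else 1.0   # weigh time more when slow
        def weight(b):
            return (b["probability"]                               # probability parameter
                    - temporal_weight * 0.01 * b["expected_delay"] # temporal parameter
                    + 1e-6 * b["size"])                            # size parameter
        return sorted(blocks, key=weight, reverse=True)

    ranked = rank_predicted_blocks([
        {"id": 4, "probability": 0.9, "expected_delay": 10, "size": 4096},
        {"id": 9, "probability": 0.7, "expected_delay": 1, "size": 65536},
    ])
    print([b["id"] for b in ranked])   # [4, 9] with these illustrative weights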
In operation, the streaming client 164 sends over the network 162 to the streaming server 172 a request for a block associated with a streaming application of the streaming applications 178. The streaming server 172, or some other component (not shown) of the computer system 140, logs the request. For the purposes of example only, the block request database 174 is treated as a block request log. However, in various embodiments, the block request database 174 could be derived from a block request log or from individual or groups of block requests that are not recorded in a log. In an exemplary embodiment, the block request database 174 includes parameters derived from previous block requests, or from default or initial settings, that aid in predicting subsequent block requests from the streaming client 164. The prediction engine 176 uses data from the block request database 174 to predict a subsequent block request. The subsequent block request may be for, for example, a block from the same streaming application as the initial block request, from a DLL, from a data file, from a different executable file, or from any other file or application. The streaming server 172 serves streaming data associated with the requested block and identifying data associated with the predicted block to the streaming client 164. The streaming client 164 then decides whether to request the identified predicted block. For example, the streaming client 164 may already have data for the block associated with the identifying data in a local cache. If the streaming server 172 sends the data again, that is a waste of bandwidth and may slow down the execution of the streaming application. Since the streaming server 172 sends identifying data instead of streaming data, the streaming client 164 can, for example, check the local cache first, and request the streaming data if it is not already available in the local cache. In another embodiment, the streaming server 172 serves streaming data associated with the requested block and identifying data associated with one or more predicted blocks. The streaming client 164 then decides which of the identified blocks to request and, for example, in what order. Naturally, if the prediction engine 176 is correct in its prediction, the streaming client 164 will eventually request a predicted block in due course even if the streaming client 164 does not request the predicted block in response to receiving the identifying data.
FIG. 4 depicts a flowchart of an exemplary method for predictive streaming according to an embodiment. The flowchart starts at module 180 with requesting a first block. The request may originate from a streaming client. The flowchart continues at module 182 with receiving the request for the first block. The receipt may be at a streaming server. The streaming server may log the request. The flowchart continues at module 184 with checking a block request database. The server may check the block request database. The block request database may include the logged requests. The flowchart continues at module 186 with predicting a second block request based on the block request database. The server may use a prediction engine to predict the second block request. The prediction may be based upon block requests for the first block followed by block requests for the second block from a streaming client. The flowchart continues at module 188 with piggybacking a second block ID on a reply that includes block data associated with the first block. The flowchart continues at module 190 with receiving the first block data and the second block ID. The flowchart continues at module 192 with determining whether to request the second block based on local factors. The flowchart continues at module 194 with requesting the second block. It is assumed for the purposes of example that local factors merit the request for the second block. For example, if streaming data associated with the second block is in a local disk cache, the second block may not be requested. The flowchart continues at module 196 with receiving the request for the second block. The flowchart continues at module 198 with sending a reply that includes block data associated with the second block. The flowchart ends at module 200 with receiving the second block data. It should be noted that, though the flowchart may be thought of as depicting sequential small reads, as opposed to depicting building large blocks, the blocks could very well be large. This method and other methods are depicted as serially arranged modules. However, modules of the methods may be reordered, or arranged for parallel execution as appropriate.
When a streaming server sends data associated with a predicted block (second block) request along with data associated with a requested block (first block), the first and second block data may be thought of as a "large block" if they are queued for sending back-to-back. However, in order to queue the first and second blocks back-to-back, the streaming server would need to receive the request for the second block and queue the second block data before the streaming server was finished sending the first block data. When data is sent as a large block, the streaming server can have nearly continuous output. Large blocks can help to more fully utilize available bandwidth, since large data requests can fully "saturate the pipe." This is in contrast to sequential requests for blocks, which may not saturate the pipe because of the pause between sending first block streaming data and receiving a request for and sending second block streaming data. Fully saturating the pipe can improve performance.
In some cases, a block may be made large initially. For example, if a streaming application is for a level-based game, a large block may include data associated with an entire level. When a request for the block is received, data associated with the entire level is returned as continuous output from the streaming server. Alternatively, large blocks can be built "on the fly" based on a first block request and predicted block requests. For example, if the streaming server sends identifying data for multiple predicted block requests piggybacked on a reply to a first block request, along with streaming data associated with the first block, the streaming client can consecutively request some or all of the multiple predicted blocks. If the requests are made in relatively rapid succession, the streaming server may queue streaming data associated with two predicted blocks more rapidly than the streaming server sends streaming data associated with one predicted block. In this way, the streaming server maintains a full queue and, accordingly, can keep the pipe saturated.
FIG. 5 depicts a flowchart of an exemplary method for predictive streaming according to an embodiment. The flowchart starts at module 202 with requesting a first block. The flowchart continues at module 204 with receiving the request for the first block:. The streaming server may log the request. The flowchart continues at module 206 with predicting block requests based on the first block request. The server may use a prediction engine to predict the block requests. The prediction engine may make use of a block request database that includes parameters derived from, for example, prior block requests for the first block. The flowchart continues at module 208 with sending streaming data associated with the first block and identifying data for the predicted blocks. The streaming server may piggyback the identifying data on a reply to the request for the first block. The flowchart continues at module 210 with receiving the streaming data and the identifying data. The streaming client may receive the data. The flowchart continues at module 212 with requesting one or more predicted blocks in succession. The streaming client may determine, based upon local factors, which of the blocks associated with the identifying data are to be requested. The streaming client may or may not order the predictive block requests according to factors associated with the blocks. For example, the streaming client may make predictive block requests in the order of probability. The probability for each block may be sent along with the identifying data or derived locally. The flowchart continues at module 214 with receiving the predictive block requests. The flowchart continues at module 216 with saturating the output pipe with streaming data associated with the multiple blocks. The streaming server may saturate the output pipe by queuing streaming data as fast as or faster than the streaming data is sent to the streaming client. The streaming server may maintain output pipe saturation even if queuing streaming data slower than streaming data is sent to the streaming client as long as the queue remains non-empty. The streaming server may or may not maintain output pipe saturation for an entire streaming session. When the streaming server saturates the pipe, it is presumed that the streaming server does not maintain saturation throughout the entire streaming session, though total saturation may be possible. When the streaming server saturates the pipe, the pipe is not necessarily perfectly saturated. Streaming data is not necessarily sent in a perfectly continuous stream. A continuous stream of data, as used herein, means data that is sent from a non-empty queue. If there is a period of time during which the output queue is empty, first and second streams of data sent before and after the period of time are not referred to as continuous. The flowchart ends at module 218 with receiving the streaming data. The streaming client may receive the streaming data. Though the streaming server saturates the pipe, the streaming client may or may not receive a continuous stream of data, depending on various factors that are well-understood in the art of data transmission, such as network delays, and are, therefore, not described herein.
At times, a streaming client may rapidly request blocks in a strictly sequential manner. For example, if a sequence of blocks is associated with a video clip, the blocks are reasonably likely to be served, in order, one after the other. A streaming server may recognize a sequential pattern of block requests, either because the streaming server is provided with the pattern, or because the streaming server notices the pattern from block requests it receives over time. The streaming server may send the pattern to the streaming client in response to a block request that has been found to normally precede block requests for blocks that are identified in the pattern. Using the pattern, the streaming client may predictively request additional blocks in anticipation of the streaming application needing them. The streaming client may make these requests in relatively rapid succession, since each request can be made using the pattern the streaming client received from the streaming server. This may result in output pipe saturation at the streaming server. The streaming client may or may not wait for a reply to each request. Predictively requested blocks may be stored in a local cache until they are needed. The parameters that control recognition of the pattern, as well as how aggressive the read-ahead schedule should be, can be independently specified at the file, file extension, directory, and application level. In addition, it can be specified that some blocks are predictively downloaded as soon as a file is opened (even before seeing a read) and that the open call itself should wait until the initial blocks have been downloaded. In an exemplary embodiment, the pattern includes identifying data for each block represented in the pattern. The patterns may be included in a block request database.
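A minimal Python sketch of a client chain-requesting the blocks of such a pattern in rapid succession without waiting for each reply; `request_block` and the cache representation are illustrative assumptions:

    def chain_request(pattern, request_block, local_cache):
        """pattern: ordered block IDs that normally follow the trigger block."""
        for block_id in pattern:
            if block_id not in local_cache:   # skip blocks already cached locally
                request_block(block_id)       # fire-and-forget; replies arrive later

    chain_request([17, 18, 19, 20], request_block=print, local_cache={18})
    # requests 17, 19, and 20 in rapid succession; 18 is served from the cache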
In an alternative embodiment, the streaming server may provide the streaming client with one or more patterns as a pattern database. The streaming server may or may not provide the pattern database when the streaming client first requests streaming of a streaming application associated with the pattern database. The streaming client may use the pattern database to predict which blocks to request based on block requests it intends to make. For example, if a first block request is associated with blocks in a pattern in the pattern database, the streaming client requests the first block and each of the blocks in the pattern in succession. The pattern database may be included in a block request database.

FIG. 6 depicts a conceptual view of a system 600 for providing block data and identifying a predicted block in response to a block request according to an embodiment. The system 600 includes an input node 218, a streaming server 220, a request log 222, a prediction engine 224, a block request database 226, a streaming application 228, and an output node 230. The input node 218 may be an interface for connecting the system 600 to other computer systems, or a logical node within the system 600 for providing block requests to the streaming server 220. The input node 218 may or may not be treated as part of the streaming server 220. The streaming server 220 may include software, firmware, hardware, or a combination thereof. The streaming server 220 may include one or more dedicated processors or share one or more processors (not shown) with other local or remote components. The request log 222 may be a log that records incoming traffic, as is well-known in the art of computer networking, or a dedicated log that only records block requests. The request log 222 may be associated with the streaming application 228 or multiple streaming applications (not shown). The request log 222 may be associated with one or more streaming clients (not shown), either individually, in the aggregate, or in some combination. The request log may be dedicated to the streaming server 220, or associated with multiple streaming servers (not shown). The request log may be local or remote with respect to the streaming server 220. The prediction engine 224 may include software, firmware, hardware, or a combination thereof. The prediction engine may include one or more dedicated processors or share one or more processors (not shown) with other local components, such as the streaming server 220, or remote components (not shown). The prediction engine 224 may or may not be treated as part of the streaming server 220. The block request database 226 may include software, firmware, hardware, or a combination thereof. The block request database 226 may or may not be treated as part of the request log 222, part of the prediction engine 224, or part of the streaming server 220. In an alternative embodiment, the block request database 226 includes the request log 222. The block request database 226 includes block request parameters. The parameters may be stored in memory as constants or variables, or may act as or be environment variables. The parameters may be locally or remotely available. The parameters may be input manually, input automatically, or derived. The parameters may or may not change over time depending upon inputs from a user or agent, block requests, or other factors. The streaming application 228 is any application that is available for streaming. The streaming application 228 may or may not be prepared for streaming in advance.
While only the streaming application 228 is illustrated in FIG. 6, multiple streaming applications could be included in the system 600. The multiple streaming applications could be discretely managed (e.g., by allowing only predictions of blocks from the same streaming application as a requested block) or as a collection of blocks (e.g., a predicted block could be from a streaming application that is different from that of the requested block). The streaming applications could be local or remote. The output node 230 may be an interface for connecting the system 600 to other computer systems, or a logical node within the system 600 for sending block requests from the streaming server 220. The output node 230 may or may not be treated as part of the streaming server 220. The output node 230 and the input node 218 may or may not be treated as components of an input/output node.
In operation, the input node 218 receives a block request from a streaming client (not shown). The input node 218 provides the block request to the streaming server 220. The streaming server 220 logs the block request in the request log 222. The streaming server obtains a prediction from the prediction engine 224. The logging of the request and the obtaining of a prediction need not occur in any particular order. For example, the streaming server 220 could obtain a prediction from the prediction engine 224 prior to (or without consideration of) the logged block request.
The prediction engine 224 checks the request log 222, performs calculations to represent data in the request log 222 parametrically, and updates the parameters of the block request database 226. The prediction engine 224 checks the parameters of the block request database 226 in order to make a prediction as to subsequent block requests from the streaming client. The checking of the request log and checking of the parameters need not occur in any particular order. For example, the prediction engine 224 could check the parameters prior to checking the request log and updating the parameters. The prediction engine 224 could check the request log and update the parameters as part of a routine updating procedure, when instructed to update parameters by a user or agent of the system 600, or in response to some other stimulus, such as an access to the streaming application 228. Accordingly, the checking of the request log and updating of the parameters may be thought of as a separate, and only indirectly related, procedure vis-a-vis the checking of parameters to make a prediction. The prediction may be in the form of one or more block IDs, identifying data for one or more blocks, a pattern, or any other data that can be used by a streaming client to determine what blocks should be requested predictively. When the streaming server 220 obtains the prediction, the streaming server 220 optionally accesses the requested block of the streaming application 228. Obtaining the prediction and accessing the requested block need not occur in any particular order and could overlap. The prediction provided to the streaming server 220 may or may not be modified by the streaming server 220. For example, the prediction may include data that is used to "look up" identifying data. In this case, the portion of the streaming server 220 that is used to look up identifying data may be referred to as part of the prediction engine 224.
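For illustration only, the decoupling described above can be sketched in Python. The class name, method names, and the successor-frequency parametrization are hypothetical choices, not part of the specification; the sketch merely shows how the log-parametrization step and the prediction step can run as separate procedures over a shared set of parameters:

```python
from collections import Counter, defaultdict

class PredictionEngine:
    """Minimal sketch: parameter updates and predictions are separate,
    only indirectly related procedures over shared parameters."""

    def __init__(self):
        # Successor-frequency parameters derived from the request log
        # (one hypothetical parametrization of the block request database).
        self._counts = defaultdict(Counter)

    def update_from_log(self, request_log):
        # Routine (or externally triggered) update: fold the ordered
        # request log into the parameters.
        for first, second in zip(request_log, request_log[1:]):
            self._counts[first][second] += 1

    def predict(self, current_block):
        # Reads the parameters only; may run before or after any given
        # update, including without considering the latest logged request.
        followers = self._counts.get(current_block)
        if not followers:
            return []  # no prediction can be made
        total = sum(followers.values())
        return [(block, n / total) for block, n in sorted(followers.items())]
```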
Access to the requested block of the streaming application 228 is optional because the streaming server 220 could simply return an identifier of the predicted block without sending the predicted block itself. Indeed, it may be more desirable to send only a prediction because the recipient of the prediction (e.g., a streaming client) may already have received the block. If the recipient already received the block, and the block remains cached, there is probably no reason to send the block again. Accordingly, the streaming client, upon receiving the prediction, would not request the predicted block again. The streaming server 220 can be referred to as obtaining identifying data from the prediction engine 224, where the identifying data can be used by, for example, the streaming client when making one or more predictive block requests.
The streaming server 220 provides data associated with the requested block, such as streaming data, and the prediction, such as identifying data, to the output node 230. The data is sent from the output node 230 to, for example, a streaming client (not shown). In an exemplary embodiment, the prediction is piggy-backed on the reply that includes, for example, the streaming data. In another exemplary embodiment, the prediction could be sent separately, either before, at approximately the same time as, or after the reply that includes, for example, the streaming data. In an alternative embodiment, the streaming server 220 could access a predicted block of the streaming application 228 and send streaming data associated with the predicted block as part of the reply that includes the requested block data. Or, the streaming server 220 could send streaming data associated with the predicted block separately, before, at the same time as, or after sending the reply that includes the requested block data.
A prediction may or may not always be provided by the prediction engine 224. If no prediction is provided to the streaming server 220, the streaming server may simply provide streaming data associated with the requested block. In an exemplary embodiment, the prediction engine 224 may fail to provide a prediction if it does not have sufficient data to make a prediction. Alternatively, the prediction engine 224 may provide a prediction only if the prediction meets a certain probability threshold. For example, an aggressiveness parameter may be set to a cut-off threshold of, e.g., 0.5. If the predictive certainty for a block does not meet or exceed the cut-off threshold, the prediction engine 224 may not provide the prediction for the block to the streaming server 220.
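A minimal sketch of such a cut-off, assuming predictions are represented as (block ID, probability) pairs (a hypothetical representation, not prescribed by the specification):

```python
def filter_predictions(predictions, cutoff=0.5):
    """Keep only predictions whose certainty meets or exceeds the
    aggressiveness cut-off threshold."""
    return [(block, p) for block, p in predictions if p >= cutoff]

print(filter_predictions([(3, 0.5), (6, 0.3)]))  # [(3, 0.5)]
print(filter_predictions([]))  # [] -- no prediction is provided at all
```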
FIGS. 7A to 7C are intended to help illustrate how to predict a block request based on the requested block. FIGS. 7A to 7C depict exemplary block request logs and parameters derived therefrom according to embodiments. FIGS. 7A to 7C are not intended to illustrate preferred embodiments because there are many different items of data that could be recorded or omitted in the request log and many different parameters that could be derived from selected items of data.
FIG. 7A depicts an exemplary request log 232 and a block request database 234. In this example, the request log 232 includes a listing of blocks in the order the blocks were requested from a streaming client. For exemplary purposes, the request log 232 is assumed to be associated with a single streaming client. The request log 232 may or may not be associated with a single streaming application. The block request database 234 includes parameters derived from the request log 232. For exemplary purposes, the parameters are derived only from entries in the request log 232. In alternative embodiments, the parameters could include default values or otherwise rely on data that is not included in the request log 232. For the purposes of example, the request log 232 is assumed to include only the values shown in FIG. 7A. For the purposes of example, a block request (the current block request) is considered for each of the blocks 3 to 8.
Current block request for block 3: As illustrated in the request log 232, a block request for block 5 (the second block request) immediately follows the request for block 3 (the first block request). Since the second block request follows the request for block 3, a prediction can be made about whether the current block request (for block 3) will be followed by a request for block 5. Since, for the purposes of example, the parameters are derived only from the request log 232 (and no other data is considered), it might be assumed that a request for block 5 is 100% certain. The predicted block parameters array for block 3 is [(5, 1.0)]. This can be interpreted to mean, following a request for block 3, the probability of a request for block 5 is 1.0. Of course, this is based on a small data sample and is, therefore, subject to a very large error. However, over time the probability may become more accurate.
Current block request for block 4: As illustrated in the request log 232, a block request for block 4 has not been made. Accordingly, no prediction can be made. Current block request for block 5: For reasons similar to those given with respect to the request for block 3, the predicted block parameters array for block 5 is [(8, 1.0)].
Current block request for block 6: As illustrated in the request log 232, a block request for block 6 has been made before, but it was the last block requested (none follow). Accordingly, no prediction can be made.
Current block request for block 7: For reasons similar to those given with respect to the request for block 3, the predicted block parameters array for block 7 is [(8, 1.0)].
Current block request for block 8: As illustrated in the request log 232, a block request for block 8 has been made twice before. The block request following a request for block 8 was for block 3 one time and for block 6 the other time. It may be assumed, for exemplary purposes, that a request for block 3 or 6 is equally probable. Accordingly, the predicted block parameters array for block 8 is [(3, 0.5), (6, 0.5)]. This can be interpreted to mean, following a request for block 8, the probability of a request for block 3 is 0.5 and the probability of a request for block 6 is 0.5. Since the requests for blocks 3 and 6 are considered equally probable, data associated with both block 3 and block 6 may be included in a reply. Alternatively, neither may be used. Also, an aggressiveness parameter may be set that requires a probability higher than 0.5 before a block is predicted.
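The walk-through above can be reproduced with a short Python sketch. The request ordering 7, 8, 3, 5, 8, 6 is one sequence consistent with the description of the request log 232 (the figure itself is not reproduced here), and the function name is a hypothetical choice:

```python
from collections import Counter, defaultdict

def predicted_block_parameters(request_log):
    """Derive a predicted block parameters array for each block from the
    order of requests in the log (immediate successors only)."""
    counts = defaultdict(Counter)
    for first, second in zip(request_log, request_log[1:]):
        counts[first][second] += 1
    return {
        block: [(b, n / sum(followers.values())) for b, n in sorted(followers.items())]
        for block, followers in counts.items()
    }

# One request ordering consistent with the walk-through above.
print(predicted_block_parameters([7, 8, 3, 5, 8, 6]))
# block 3 -> [(5, 1.0)]; block 5 -> [(8, 1.0)]; block 7 -> [(8, 1.0)];
# block 8 -> [(3, 0.5), (6, 0.5)]; blocks 4 and 6 get no entry.
```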
Other factors could be considered in "breaking a tie" between equally probable block predictions, such as blocks 3 and 6, which are predicted to follow block 8 in FIG. 7A. A higher-order predictor (e.g., a first-order predictor, as described later) could break the tie depending on whether block 7 was requested prior to block 8 (implying that block 3 is the better choice), or block 5 was requested prior to block 8 (implying that block 6 is the better choice). The returned list of predicted blocks may be constructed using all n-order predictors, and the best N predicted blocks, regardless of which n-order predictor each came from, may be returned in ranked order. If, for example, the first-order predictor gave a probability of 1.0 to predicted block 3, but the zero-order predictor gave a probability of 0.5, then block 3 would be included with a probability of 1.0, the max of the probabilities for that particular block.
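A minimal sketch of this merging rule, assuming each n-order predictor supplies (block ID, probability) pairs (a hypothetical representation):

```python
def merge_predictions(predictions_by_order, max_blocks):
    """Merge candidate lists from all n-order predictors, keeping the max
    probability seen for each block, and return the best max_blocks
    predictions in ranked order."""
    best = {}
    for candidates in predictions_by_order:
        for block, p in candidates:
            best[block] = max(p, best.get(block, 0.0))
    ranked = sorted(best.items(), key=lambda item: item[1], reverse=True)
    return ranked[:max_blocks]

zero_order = [(3, 0.5), (6, 0.5)]
first_order = [(3, 1.0)]  # context: block 7 was requested before block 8
print(merge_predictions([zero_order, first_order], max_blocks=4))
# [(3, 1.0), (6, 0.5)] -- block 3 carries the max of its probabilities
```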
In this example, a prediction was made as to which block was requested following a current block request, though in alternative examples, predictions could be made as to the next requested block, the one after, any other subsequent block request, or some combination thereof. Also, the predicted block parameters are represented in an array, but any data structure that captures the relevant information would be acceptable.
FIG. 7B depicts an exemplary request log 236 and a block request database 238 according to another embodiment. In this example, the request log 236 includes a listing of blocks in the order the blocks were requested from a streaming client, plus a time field that represents what time the blocks were sent by a streaming client. In an alternative, the time field may represent what time the blocks were received, logged, or otherwise managed. The block request database 238 includes a predicted block parameters array with a time field, which represents the time difference between when a first block request and a second block request were sent. For example, for the predicted block parameters array for block 3, the time field is 6:18:04, which is intended to mean 6 hours, 18 minutes, and 4 seconds. This is the difference between when block 3 and block 5 were sent, as shown in the request log 236. As another example, the predicted block parameters array for block 8 is [(3, 0.5, 0:05:20), (6, 0.5, 5:10:15)]. This can be interpreted to mean that blocks 3 and 6 are equally likely to be requested following a request for block 8. However, since there is a time entry, weight can be given to the entry with the lower associated time difference. Since, with respect to a streaming application, 5 hours is a long time, weight may be given to block 3 (requested about 5 minutes after block 8) over block 6 (requested about 5 hours after block 8). Indeed, in this example, since the time between block 6 and block 8 is so long, a prediction engine could choose to assume that there really isn't a predictive relationship between blocks 6 and 8.
In an embodiment, time-related predictive parameters may be ignored if they are too high. The threshold value over which a time difference would result in a block request being ignored may be referred to as a temporal aggressiveness parameter. For example, if a temporal aggressiveness parameter is 1 hour, then the predicted block parameters array for block 8 could be rewritten as [(3, 1.0, 0:05:20)]. That is, the values related to block 6 are ignored since block 6 was received more than 1 hour after block 8, according to the request log 236. Alternatively, the predicted block parameters array for block 8 could be rewritten as [(3, 0.5, 0:05:20)], which is basically the same, but the probability is not recalculated when the values related to block 6 are ignored. Similarly, the predicted block parameters array for block 3 could be rewritten as [ ], since there are no block requests within 1 hour (the temporal aggressiveness threshold, in this example) of the block request for block 3.
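A sketch of the variant that drops entries without recalculating probabilities; the (block, probability, seconds) triples below encode the 0:05:20 (320 s) and 5:10:15 (18615 s) differences discussed above, and the representation is a hypothetical choice:

```python
def apply_temporal_aggressiveness(parameters, threshold_seconds):
    """Ignore predicted-block entries whose time difference exceeds the
    temporal aggressiveness parameter; probabilities are left as-is."""
    return [(b, p, dt) for b, p, dt in parameters if dt <= threshold_seconds]

# Entry for block 8: 0:05:20 = 320 s for block 3, 5:10:15 = 18615 s for block 6.
block_8_parameters = [(3, 0.5, 320), (6, 0.5, 18615)]
print(apply_temporal_aggressiveness(block_8_parameters, threshold_seconds=3600))
# [(3, 0.5, 320)] -- block 6 is ignored under a 1 hour threshold
```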
In an alternative embodiment, a prediction engine could keep chaining predicted blocks back through the prediction engine to get subsequent predicted blocks, under the assumption that the "first round" predicted blocks were correct. The probabilities of subsequent predicted blocks may be multiplied by the probability of the original predicted block to get an accurate predicted probability for the secondary (ternary, etc.) blocks. This could continue until the probabilities fell below an aggressiveness threshold. A limit may be placed on the maximum number of predictions returned. It should be noted that in the case of a long chain of blocks that follow each other with probability 1.0, the first request may return a list of all subsequent blocks in the chain, which a streaming client can then "blast request" to keep the pipeline at the streaming server full.
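A sketch of such chaining, under the assumptions that the database maps a block to (block, probability) pairs and that a prediction is kept while its multiplied probability stays at or above the threshold (names and representation hypothetical):

```python
def chain_predictions(database, start_block, threshold, max_predictions):
    """Chain predicted blocks back through the predictor, multiplying
    probabilities along the chain, until they fall below the
    aggressiveness threshold; cap the number of predictions returned."""
    results, seen = [], {start_block}
    frontier = [(start_block, 1.0)]
    while frontier:
        block, p = frontier.pop(0)
        for nxt, q in database.get(block, []):
            combined = p * q  # probability for the secondary (ternary, ...) block
            if combined >= threshold and nxt not in seen:
                seen.add(nxt)
                results.append((nxt, combined))
                frontier.append((nxt, combined))
    return results[:max_predictions]

# A chain of probability-1.0 links: one request for block 3 returns the
# whole chain, which a client can then "blast request".
database = {3: [(5, 1.0)], 5: [(8, 1.0)], 8: [(3, 0.5), (6, 0.5)]}
print(chain_predictions(database, 3, threshold=0.4, max_predictions=8))
# [(5, 1.0), (8, 1.0), (6, 0.5)]
```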
FIG. 7C depicts an exemplary request log 240 and a block request database 242 according to another embodiment. FIG. 7C is similar to FIG. 7B, but entries in the request log 240 include a streaming client associated with a block request. In this example, the streaming client is assumed to have requested the block with which the streaming client is associated. For the purposes of determining the predicted block parameters, an entry for a given streaming client, for example client 1, is compared to other entries of the same streaming client. It may make no difference whether, for example, client 2 requests a block some time after client 1. However, it may make a difference if client 1 requests block 3 after requesting block 8 and client 2 requests block 3 after requesting block 8. Using this principle, predictive parameters can adapt over time based on the block requests of various clients. The principles for calculating predictive parameters are the same as described with reference to FIG. 7B, but predictive accuracy can be improved by considering the block requests of multiple clients, as shown in the block request database 242 of FIG. 7C.
In an embodiment, the times are not actually recorded, as depicted in FIG. 7C. Rather, the times are used as a filter to decide whether one block truly follows another block in terms of predictive value. In such an embodiment, the entry for block 3 may be more similar to the example entry 242-1, which has a value of [(4, 1.0, 0:07:05)], than is depicted, for illustrative purposes, in the block request database 242. In other words, in this alternative, the block request database 242 does not include any predictive data for block 5. Similarly, the entry for block 4 may be the empty set, and the entry for block 8 may be similar to the example entry 242-2. This alternative, where the probabilities are omitted for rejected blocks, may be more desirable since it omits data that is probably irrelevant for predictive purposes.
FIGS. 8A and 8B depict request logs and block request databases for use with zero-order and first-order predictions. In an exemplary embodiment, a streaming server builds up the zero-, first-, or higher-order probabilities over time based on which blocks have been requested while streaming an application. FIG. 8A depicts a request log 244 and a block request database 246. Each block associated with a streaming application has a probability of being requested when streaming the streaming application that is based on how many times the block is requested when streaming the streaming application, compared to the total number of times the streaming application has been streamed. For the purposes of example, one instance, from beginning to end, of streaming a streaming application may be referred to as a session. In a given session, one or more blocks, in one or more combinations, may be requested.
A session may be thought of in terms of the block requests made over the course of streaming a streaming application. For example, in the five sessions depicted in the request log 244 of FIG. 8A, Session 1 consists of three block requests: 7, 8, and 3; Session 2 consists of three block requests: 8, 3, and 4; Session 3 consists of three block requests: 5, 8, and 6; Session 4 consists of two block requests: 8 and 3; and Session 5 consists of two block requests: 8 and 3. Considering only these five sessions, a probability associated with a block request may be determined by counting the number of sessions in which the block is requested and dividing that count by the total number of sessions (in this case, five). For example, as shown in the block request database 246, Block 3, which is included in Sessions 1, 2, 4, and 5, has a zero-order probability of 0.8. Blocks 4, 5, 6, and 7 each have a zero-order probability of 0.2 because they are included only in Session 2, 3, 3, and 1, respectively. Since Block 8 is included in each of the Sessions 1-5, Block 8 has a zero-order probability of 1.0. In an embodiment, probabilities are constructed over multiple sessions by multiple users to obtain composite probabilities.
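The zero-order computation just described reduces to counting session membership. A minimal sketch reproducing the five-session example (function name hypothetical):

```python
from collections import Counter

def zero_order_probabilities(sessions):
    """Zero-order probability of a block: the number of sessions in which
    the block is requested, divided by the total number of sessions."""
    counts = Counter()
    for session in sessions:
        for block in set(session):  # count a block once per session
            counts[block] += 1
    return {block: n / len(sessions) for block, n in sorted(counts.items())}

sessions = [[7, 8, 3], [8, 3, 4], [5, 8, 6], [8, 3], [8, 3]]
print(zero_order_probabilities(sessions))
# {3: 0.8, 4: 0.2, 5: 0.2, 6: 0.2, 7: 0.2, 8: 1.0}
```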
In some embodiments, it may be desirable to utilize a higher-order probability. For example, if the streaming application is a game program that starts in a room with four doors that lead to four different rooms, each of which is associated with multiple block requests, the blocks associated with each of the four different rooms may be equally likely to be requested (about 25% each). Even after a door is selected, the zero-order probability of each of the blocks associated with each of the doors (even those not taken) may remain about 25%. By using a first-order predictor, once the first block request for one of the rooms is seen, the subsequent blocks for that room will have first-order probabilities near 100%, and so the multiple blocks for that room can be predictively downloaded.
FIG. 8B is intended to illustrate first-order probabilities. FIG. 8B depicts a request log 248 and a block request database 250. When determining first-order probabilities, a first block request is used as context. In other words, if a first block request is made, the probability of a second block request may be calculated as the probability, based upon previous sessions, that the second block request follows (or occurs during the same session as) the first block request. Since the probability of the second block request is based on one block of context, the probability may be referred to as a first-order probability. First-order predictions may or may not be more accurate than zero-order predictions, and both zero- and first-order probabilities may be used, individually or in combination, when making a block request prediction.
Given the request log 248, the probability that a second block will be requested following a first block request can be determined for one or more sessions. The block request database 250 is organized to show the first-order probability for a block, given the context of another block. The block request database 250 is not intended to illustrate a data structure, but rather to simply illustrate, for exemplary purposes, first-order probability. The probability of a block request (for the block in the Block column) is depicted under the first-order probability for each block. For example, the first-order probability of a block request for Block 3 is 1.0 with Block 4 as context, 0 with Block 5 as context, 0 with Block 6 as context, 1.0 with Block 7 as context, and 0.8 with Block 8 as context. These probabilities are derived as follows. As is shown in the request log 248, when blocks 5 or 6 are requested (Session 3), block 3 is not requested. Accordingly, the first-order probability for block 3 with either block 5 or 6 as context is 0. When blocks 4 or 7 have been requested (Sessions 2 and 1, respectively), block 3 is also requested. Accordingly, the first-order probability for block 3 with block 4 or block 7 as context is 1.0. When block 8 has been requested (all five sessions), block 3 is also requested 4 out of 5 times. Accordingly, the first-order probability for block 3 with block 8 as context is 0.8. First-order probabilities for each block can be derived in a similar manner. The probabilities are shown in the block request database 250.
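A sketch of the first-order computation, using same-session co-occurrence as the "follows (or occurs during the same session as)" relation described above; it reproduces the probabilities for Block 3 (function name hypothetical):

```python
def first_order_probability(sessions, target, context):
    """P(target is requested | context was requested in the same session)."""
    with_context = [s for s in sessions if context in s]
    if not with_context:
        return None  # context block was never requested
    hits = sum(1 for s in with_context if target in s)
    return hits / len(with_context)

sessions = [[7, 8, 3], [8, 3, 4], [5, 8, 6], [8, 3], [8, 3]]
for context in (4, 5, 6, 7, 8):
    print(context, first_order_probability(sessions, target=3, context=context))
# 4 -> 1.0, 5 -> 0.0, 6 -> 0.0, 7 -> 1.0, 8 -> 0.8
```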
Second- and higher-order probabilities can be determined in a similar manner to that described with reference to FIG. 8B, but with multiple block requests as context.
In an exemplary embodiment, a streaming server can use the zero- or higher-order probabilities to determine which blocks have probabilities that exceed a "predictive download aggressiveness" parameter and piggyback those block IDs, but not the block data itself, onto the block data returned to a streaming client. The client can then decide which blocks it will predictively download, based on local factors. Local factors may include client system load, contents of the local disk cache, bandwidth, memory, or other factors. It should be noted that in certain embodiments, it may not be possible to incorporate all incoming block requests into the request log because block requests are being "filtered" by the local disk cache. This may result in the request log including only those block requests for blocks that weren't in the local disk cache, which will throw the predictors off. Accordingly, in an embodiment, a streaming server may indicate to a streaming client that the server is interested in collecting prediction data. In this case, the streaming client would send a separate data stream with complete block request statistics to the streaming server. In this example, the separate data stream may include all actual block requests made by the application, regardless of whether that request was successfully predicted or is stored in a local cache of the streaming client.
FIG. 9 depicts a flowchart of an exemplary method for updating predictive parameters. The flowchart starts at module 252 with informing a streaming client of an interest in collecting prediction data. This request may or may not be included in a token file. The request may be sent to a streaming client when the streaming client requests streaming of an application. After module 252, the flowchart continues along two paths (254-1 and 254-2). The modules 254 may occur simultaneously, or one may occur before the other.
The flowchart continues at module 254-1 with receiving block requests. The streaming server may receive the block requests from the streaming client. The flowchart continues at module 256-1 with sending data associated with the block requests, including predictions, if any. The flowchart continues at decision point 258-1, where it is determined whether the session is over. If the session is not over, the flowchart continues from module 254-1 for another block request. Otherwise, if the session is over, the flowchart ends for modules 254-1 to 258-1.
The flowchart continues at module 254-2 with receiving block request statistics, including block requests for blocks stored in a local disk cache. Module 254-2 may or may not begin after module 258-1 ends. Module 254-2 may or may not continue after module 258-1 ends. The flowchart continues at module 256-2 with updating predictive parameters using the block request statistics. The predictive parameters may be used at module 256-1 to provide predictions. The flowchart continues at decision point 258-2, where it is determined whether the session is over. If the session is not over, the flowchart continues from module 254-2 for more block request statistics. Otherwise, if the session is over, the flowchart ends for modules 254-2 to 258-2.
In another embodiment, a streaming client may indicate to a streaming server that the streaming client is interested in receiving predictive block data IDs. Then the streaming server would piggyback the IDs, as described previously. FIG. 10 depicts a flowchart of an exemplary method for providing predictions to a streaming client. The flowchart begins at module 260 with receiving notice that a streaming client is interested in receiving predictive block data IDs. The streaming client may send the notice when requesting streaming of an application from a streaming server. The flowchart continues at module 262 with receiving a block request. The streaming server may receive the block request from the streaming client. The flowchart continues at decision point 264 where it is determined whether a prediction is available. If a prediction is available (264-Y), then the flowchart continues at module 266 with piggybacking one or more predicted block IDs on a reply to the block request, and at module 268 with sending the reply to the block request. If, on the other hand, a prediction is not available (264-N), then the flowchart continues at module 268 with sending the reply to the block request (with no predicted block ID). In either case, the flowchart continues at decision point 270, where it is determined whether the session is over. If the session is not over (270-N), then the flowchart continues at module 262, as described previously. If, on the other hand, the session is over (270-Y), then the flowchart ends.
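For illustration, a sketch of the reply-building step of FIG. 10, with a hypothetical dict-based reply representation:

```python
def build_reply(requested_block_data, predicted_ids):
    """Reply to a block request; predicted block IDs, if any, are
    piggybacked on the same reply (module 266); otherwise the reply
    carries only the requested block's data (module 268)."""
    reply = {"block_data": requested_block_data}
    if predicted_ids:
        reply["predicted_block_ids"] = predicted_ids
    return reply

print(build_reply(b"...block 8 bytes...", predicted_ids=[3]))
print(build_reply(b"...block 6 bytes...", predicted_ids=[]))
```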
In another embodiment, the streaming server may send the block request database to the client along with an initial token file so that the client can do all of the predictive calculations. FIG. 11 depicts a flowchart of an exemplary method for providing predictive capabilities to a streaming client. The flowchart begins at module 272 with sending a block request database to a streaming client. The streaming client may or may not have requested the block request database from a streaming server. The block request database may be sent in response to a request for streaming of an application. The flowchart continues at module 274 with receiving block requests, including predictive block requests, from the streaming client. Since the streaming client has the block request database, the streaming client is able to make predictions about which blocks it should request in advance. The streaming server need not piggyback IDs in this case, since the streaming client makes the decision and requests the blocks directly. The flowchart ends at module 276 with sending data associated with the block requests to the streaming client. The streaming client may or may not send an alternate data stream with actual block request patterns to the streaming server. Modules 274 and 276 may occur intermittently or simultaneously.
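A sketch of the client-side decision under this embodiment: the client holds the block request database locally and expands its own requests, so the server need not piggyback IDs. The function name and the dict-based database are hypothetical:

```python
def expand_request(block_request_database, block_id, aggressiveness=0.5):
    """Given a block the client is about to request, add predictive
    requests for blocks whose probability exceeds the aggressiveness
    parameter. The client may further filter on local factors such as
    cache contents, bandwidth, or memory."""
    predicted = block_request_database.get(block_id, [])
    extra = [b for b, p in predicted if p > aggressiveness]
    return [block_id] + extra

database = {8: [(3, 0.8), (6, 0.2)]}  # received with the initial token file
print(expand_request(database, 8))    # [8, 3] -- block 3 requested predictively
```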
In another embodiment, the streaming client may collect predictive data locally and then send the data to the streaming server at the end of the session. FIG. 12 depicts a flowchart of an exemplary method for receiving predictive data from a streaming client. The flowchart begins at module 278 with receiving block requests. The block requests may be from a streaming client. The flowchart continues at module 280 with sending data associated with the block requests, including predictions, if any, to the streaming client. The flowchart continues at decision point 282, where it is determined whether the session is over. If the session is not over (282-N), then the flowchart continues at module 278, as described previously. If, on the other hand, the session is over (282-Y), then the flowchart continues at module 284 with receiving predictive data from the streaming client. The predictive data may include a request log, a block request history, or one or more parameters derived from block request data. The flowchart ends at module 286 with updating a block request database using the predictive data. In this way, the streaming server can adapt the block request database in response to each session.
In an alternative embodiment, a streaming server can maintain a block request database that keeps track of "runs." Runs are sets of blocks for which the block requests occur closely spaced in time. For example, if each block in a sequence of blocks is requested within, say, 0.5 seconds of the preceding block request in the sequence, the sequence of blocks may be referred to as a run. Runs can be used to efficiently utilize memory resources by recording probabilities on the run level instead of per block. Runs can also be used to reduce the amount of predictive download data. For example, for a level-based game, a user may download a first block, then the rest of the game will be predictively downloaded for each level, since the probabilities of downloading each level are, for the purposes of this example, nearly 100%.
By keeping track of runs, after the first level has been downloaded, predictive downloads can be "shut off" until the blocks for the second level begin. For example, if the blocks at the beginning of the run, which trigger or signal the run, are detected, the block IDs for the rest of the run may be sent to the streaming client, which then chain-requests them. However, those subsequent block requests do not trigger any further run. So, the predictive downloads cause the blocks in the middle of a run to not act as a triggering prefix of another run; no further predictions are made from the middle of a run.
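A sketch of this triggering rule, assuming runs are stored as ordered lists of block IDs (a hypothetical representation):

```python
def run_predictions(runs, requested_block):
    """Only the block at the start of a run triggers the run; requests
    for blocks in the middle of a run yield no further predictions."""
    for run in runs:
        if requested_block == run[0]:
            return run[1:]  # IDs for the rest of the run, to be chain-requested
    return []               # mid-run or unknown block: predictions stay "shut off"

runs = [[17, 18, 19], [20, 21, 22, 23, 24, 25]]  # e.g., the split runs of FIG. 13
print(run_predictions(runs, 17))  # [18, 19]
print(run_predictions(runs, 18))  # [] -- middle of a run triggers nothing
```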
FIG. 13 is intended to illustrate maintaining a block request database that includes runs. FIG. 13 depicts a request log 288 and a block request database 290. Parameters associated with blocks and runs are omitted from the block request database 290 so as to more clearly focus on the point being illustrated. The omitted parameters could be any parameters derived from the request log for the purpose of facilitating the prediction of future block requests (e.g., first-order probabilities). The request log 288 includes two sessions, Session 1 and Session 2. In Session 1, a series of blocks are received in succession. For the purposes of example, a run parameter (not shown) is set to one second. If blocks are received within one second of one another, they may be considered part of a run. In the example of FIG. 13, the blocks 17-25 are received within one second of one another, and the blocks 15 and 16 are received more than one second from each of the blocks 17-25. When parameters are derived for inclusion in the block request database 290, the blocks 17-25, which are a run, can be grouped together, as illustrated in block request database 290-1. In Session 2, a different series of blocks are received in succession. The blocks 17-19 and 26-30 are received within one second of one another so, in this example, they can be considered a run. The run can be grouped together, as illustrated in block request database 290-2. When the block request database 290-1 and block request database 290-2 are combined, since the run 17-25 is no longer certain (i.e., an alternative run could be 17-19, 26-30), the run must be broken into two different runs, as depicted in the block request database 290. In this way, fewer data entries are required than one for each requested block, since blocks can be grouped as a run.
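A sketch of grouping a session's requests into runs under a one-second run parameter. The timestamps below are invented for illustration; only the spacing (blocks 15 and 16 far apart, blocks 17-25 in rapid succession) follows the Session 1 description:

```python
def detect_runs(timed_requests, run_spacing=1.0):
    """Group block requests into runs: maximal sequences in which each
    request arrives within run_spacing seconds of its predecessor."""
    groups = [[timed_requests[0][0]]]
    for (_, prev_t), (block, t) in zip(timed_requests, timed_requests[1:]):
        if t - prev_t <= run_spacing:
            groups[-1].append(block)
        else:
            groups.append([block])
    return [g for g in groups if len(g) > 1]  # single blocks are not runs

# Session 1 of FIG. 13 with assumed timestamps in seconds.
session_1 = [(15, 0.0), (16, 3.0), (17, 6.0), (18, 6.4), (19, 6.9), (20, 7.3),
             (21, 7.8), (22, 8.2), (23, 8.6), (24, 9.0), (25, 9.4)]
print(detect_runs(session_1))  # [[17, 18, 19, 20, 21, 22, 23, 24, 25]]
```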
When a run is stored in the block request database, each block of the run may be downloaded in succession without requiring individual predictions. In an alternative embodiment, each successive block of a run may be given an effective probability of 1.0, which guarantees favorable predictive treatment regardless of the aggressiveness threshold (assuming the threshold allows for some prediction). In yet another alternative, a streaming client may receive an identifier for the run and request successive blocks in the run, using the identifier to identify the successive blocks.
While this invention has been described in terms of certain embodiments, it will be appreciated by those skilled in the art that certain modifications, permutations and equivalents thereof are within the inventive scope of the present invention. It is therefore intended that the following appended claims include all such modifications, permutations and equivalents as fall within the true spirit and scope of the present invention; the invention is limited only by the claims.

Claims

What is claimed is:
1. A method, comprising:
receiving a request for a first block of a streaming application;
checking a block request database;
predicting a second block request based on the block request database;
sending, in response to the request, data associated with the first block and data associated with the second block.
2. The method of claim 1, further comprising predicting a plurality of block requests based on the block request database, wherein said sending further includes sending data associated with the plurality of block requests.
3. The method of claim 1, wherein the data associated with the second block is sufficient to identify the second block so as to facilitate making a request for the second block.
4. The method of claim 1, wherein the data associated with the second block includes data sufficient to render a request for the second block unnecessary.
5. The method of claim 1, further comprising piggybacking the data associated with the second block on a reply to the request for the first block.
6. The method of claim 1, further comprising: logging the request for the first block; and updating the block request database to incorporate data associated with the logged request.
7. The method of claim 6, wherein the logged request is a first logged request, further comprising setting a temporal aggressiveness parameter, wherein the block request database is updated to incorporate data associated with the first logged request to the extent that an association with a second logged request is made if a difference between a receive time associated with the first logged request and a receive time associated with the second logged request is less than the temporal aggressiveness parameter.
8. The method of claim 1, further comprising setting an aggressiveness parameter, wherein the data associated with the second block is sent when a probability of the second block request is higher than the aggressiveness parameter.
9. A system comprising:
a means for receiving a request for a first block of a streaming application;
a means for checking a block request database;
a means for predicting a second block request based on the block request database;
a means for sending, in response to the request, data associated with the first block and data associated with the second block.
10. The system of claim 9, wherein the data associated with the second block is sufficient to identify the second block so as to facilitate making a request for the second block.
11. The system of claim 9, wherein the data associated with the second block includes data sufficient to render a request for the second block unnecessary.
12. The system of claim 9, further comprising a means for piggybacking the data associated with the second block on a reply to the request for the first block.
13. The system of claim 9, further comprising a means for logging the request for the first block.
14. The system of claim 13, further comprising a means for updating the block request database to incorporate data associated with the logged request.
15. A system comprising:
a processor;
a block request database that includes predictive parameters;
a prediction engine, coupled to the processor and the block request database, that is configured to check the block request database and predict a second block request for a second block based upon a first block request for a first block and predictive parameters associated with the first block;
a streaming server, coupled to the prediction engine, that is configured to:
obtain the prediction about the second block request from the prediction engine,
include data associated with the second block in a response to the first block request, in addition to data associated with the first block, and
send the response in reply to the first block request.
16. The system of claim 15, wherein the data associated with the second block is sufficient to identify the second block so as to facilitate making the second block request.
17. The system of claim 15, wherein the data associated with the second block includes data sufficient to render the second block request unnecessary.
18. The system of claim 15, wherein the streaming server is further configured to piggyback the data associated with the second block on a reply to the first block request.
19. The system of claim 15, further comprising a request log, wherein the streaming server is further configured to log the first block request in the request log.
20. The system of claim 19, wherein the prediction engine is further configured to update the block request database according to the request log.
PCT/US2005/037351 2004-10-22 2005-10-17 System and method for predictive streaming WO2006047133A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2007537956A JP2008518508A (en) 2004-10-22 2005-10-17 System and method for predictive streaming
EP05808446A EP1836581A2 (en) 2004-10-22 2005-10-17 System and method for predictive streaming

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US62117804P 2004-10-22 2004-10-22
US60/621,178 2004-10-22
US10/988,014 2004-11-12
US10/988,014 US7240162B2 (en) 2004-10-22 2004-11-12 System and method for predictive streaming

Publications (2)

Publication Number Publication Date
WO2006047133A2 true WO2006047133A2 (en) 2006-05-04
WO2006047133A3 WO2006047133A3 (en) 2007-05-03

Family

ID=36228229

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/037351 WO2006047133A2 (en) 2004-10-22 2005-10-17 System and method for predictive streaming

Country Status (4)

Country Link
US (1) US7240162B2 (en)
EP (1) EP1836581A2 (en)
JP (1) JP2008518508A (en)
WO (1) WO2006047133A2 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008030431A2 (en) * 2006-09-05 2008-03-13 Edda Technology, Inc. System and method for processing function/data on demand over network
GB2446832A (en) * 2007-02-23 2008-08-27 Quantel Ltd A file server system
US8261345B2 (en) 2006-10-23 2012-09-04 Endeavors Technologies, Inc. Rule-based application access management
US8359591B2 (en) 2004-11-13 2013-01-22 Streamtheory, Inc. Streaming from a media device
US8527706B2 (en) 2005-03-23 2013-09-03 Numecent Holdings, Inc. Opportunistic block transmission with time constraints
US8831995B2 (en) 2000-11-06 2014-09-09 Numecent Holdings, Inc. Optimized server for streamed applications
US9094480B2 (en) 1997-06-16 2015-07-28 Numecent Holdings, Inc. Software streaming system and method
US9654548B2 (en) 2000-11-06 2017-05-16 Numecent Holdings, Inc. Intelligent network streaming and execution system for conventionally coded applications
US10445210B2 (en) 2007-11-07 2019-10-15 Numecent Holdings, Inc. Deriving component statistics for a stream enabled application

Families Citing this family (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020087883A1 (en) * 2000-11-06 2002-07-04 Curt Wohlgemuth Anti-piracy system for remotely served computer applications
US20020083183A1 (en) * 2000-11-06 2002-06-27 Sanjay Pujare Conventionally coded application conversion system for streamed delivery and execution
US20060048136A1 (en) * 2004-08-25 2006-03-02 Vries Jeff D Interception-based resource detection system
US20060136389A1 (en) * 2004-12-22 2006-06-22 Cover Clay H System and method for invocation of streaming application
US7827141B2 (en) * 2005-03-10 2010-11-02 Oracle International Corporation Dynamically sizing buffers to optimal size in network layers when supporting data transfers related to database applications
US20060218165A1 (en) * 2005-03-23 2006-09-28 Vries Jeffrey De Explicit overlay integration rules
US9716609B2 (en) * 2005-03-23 2017-07-25 Numecent Holdings, Inc. System and method for tracking changes to files in streaming applications
US8888592B1 (en) 2009-06-01 2014-11-18 Sony Computer Entertainment America Llc Voice overlay
JP5061619B2 (en) * 2007-01-24 2012-10-31 日本電気株式会社 Resource securing method, relay device, distribution system, and program
TWI339522B (en) * 2007-02-27 2011-03-21 Nat Univ Tsing Hua Generation method of remote objects with network streaming ability and system thereof
US20090138876A1 (en) 2007-11-22 2009-05-28 Hsuan-Yeh Chang Method and system for delivering application packages based on user demands
US8968087B1 (en) 2009-06-01 2015-03-03 Sony Computer Entertainment America Llc Video game overlay
US8613673B2 (en) 2008-12-15 2013-12-24 Sony Computer Entertainment America Llc Intelligent game loading
US8147339B1 (en) 2007-12-15 2012-04-03 Gaikai Inc. Systems and methods of serving game video
US8095679B1 (en) * 2008-03-19 2012-01-10 Symantec Corporation Predictive transmission of content for application streaming and network file systems
US8434093B2 (en) 2008-08-07 2013-04-30 Code Systems Corporation Method and system for virtualization of software applications
US8776038B2 (en) 2008-08-07 2014-07-08 Code Systems Corporation Method and system for configuration of virtualized software applications
AU2009319665B2 (en) 2008-11-26 2015-08-20 Calgary Scientific Inc. Method and system for providing remote access to a state of an application program
US8926435B2 (en) 2008-12-15 2015-01-06 Sony Computer Entertainment America Llc Dual-mode program execution
US10055105B2 (en) 2009-02-03 2018-08-21 Calgary Scientific Inc. Method and system for enabling interaction with a plurality of applications using a single user interface
SG173483A1 (en) * 2009-02-03 2011-09-29 Calgary Scient Inc Method and system for enabling interaction with a plurality of applications using a single user interface
JP2010182270A (en) * 2009-02-09 2010-08-19 Toshiba Corp Mobile electronic apparatus and data management method in mobile electronic apparatus
US8506402B2 (en) 2009-06-01 2013-08-13 Sony Computer Entertainment America Llc Game execution environments
US8782323B2 (en) * 2009-10-30 2014-07-15 International Business Machines Corporation Data storage management using a distributed cache scheme
US8954958B2 (en) 2010-01-11 2015-02-10 Code Systems Corporation Method of configuring a virtual application
US8959183B2 (en) * 2010-01-27 2015-02-17 Code Systems Corporation System for downloading and executing a virtual application
US9104517B2 (en) 2010-01-27 2015-08-11 Code Systems Corporation System for downloading and executing a virtual application
US9229748B2 (en) 2010-01-29 2016-01-05 Code Systems Corporation Method and system for improving startup performance and interoperability of a virtual application
US8763009B2 (en) 2010-04-17 2014-06-24 Code Systems Corporation Method of hosting a first application in a second application
US9218359B2 (en) 2010-07-02 2015-12-22 Code Systems Corporation Method and system for profiling virtual application resource utilization patterns by executing virtualized application
US8560331B1 (en) 2010-08-02 2013-10-15 Sony Computer Entertainment America Llc Audio acceleration
KR102003007B1 (en) 2010-09-13 2019-07-23 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 A Method and System of Providing a Computer Game at a Computer Game System Including a Video Server and a Game Server
KR102126910B1 (en) 2010-09-13 2020-06-25 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 Add-on Management
US9021015B2 (en) 2010-10-18 2015-04-28 Code Systems Corporation Method and system for publishing virtual applications to a web server
US9209976B2 (en) 2010-10-29 2015-12-08 Code Systems Corporation Method and system for restricting execution of virtual applications to a managed process environment
US9021537B2 (en) * 2010-12-09 2015-04-28 Netflix, Inc. Pre-buffering audio streams
US9043782B2 (en) * 2010-12-28 2015-05-26 Microsoft Technology Licensing, Llc Predictive software streaming
US9741084B2 (en) 2011-01-04 2017-08-22 Calgary Scientific Inc. Method and system for providing remote access to data for display on a mobile device
CA2734860A1 (en) 2011-03-21 2012-09-21 Calgary Scientific Inc. Method and system for providing a state model of an application program
WO2012146985A2 (en) 2011-04-28 2012-11-01 Approxy Inc. Ltd. Adaptive cloud-based application streaming
US8676938B2 (en) 2011-06-28 2014-03-18 Numecent Holdings, Inc. Local streaming proxy server
WO2013024342A1 (en) 2011-08-15 2013-02-21 Calgary Scientific Inc. Method for flow control and for reliable communication in a collaborative environment
CA2844871C (en) 2011-08-15 2021-02-02 Calgary Scientific Inc. Non-invasive remote access to an application program
WO2013046015A1 (en) 2011-09-30 2013-04-04 Calgary Scientific Inc. Uncoupled application extensions including interactive digital surface layer for collaborative remote application sharing and annotating
KR101593344B1 (en) * 2011-11-10 2016-02-18 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 Speculative rendering using historical player data
US8886752B2 (en) * 2011-11-21 2014-11-11 Sony Computer Entertainment America System and method for optimizing transfers of downloadable content
CN104040946B (en) 2011-11-23 2017-07-14 卡尔加里科学公司 For shared and meeting the method and system of the remote application that cooperates
US9313083B2 (en) 2011-12-09 2016-04-12 Empire Technology Development Llc Predictive caching of game content data
WO2013109984A1 (en) 2012-01-18 2013-07-25 Numecent Holdings, Inc. Application streaming and execution for localized clients
US9602581B2 (en) 2012-03-02 2017-03-21 Calgary Scientific Inc. Remote control of an application using dynamic-linked library (DLL) injection
US8396983B1 (en) 2012-03-13 2013-03-12 Google Inc. Predictive adaptive media streaming
US8407747B1 (en) * 2012-03-13 2013-03-26 Google Inc. Adaptive trick play streaming
US9485304B2 (en) * 2012-04-30 2016-11-01 Numecent Holdings, Inc. Asset streaming and delivery
US9729673B2 (en) 2012-06-21 2017-08-08 Calgary Scientific Inc. Method and system for providing synchronized views of multiple applications for display on a remote computing device
WO2014043277A2 (en) 2012-09-11 2014-03-20 Numecent Holdings Ltd. Application streaming using pixel streaming
US20140173070A1 (en) * 2012-12-13 2014-06-19 Microsoft Corporation Updating of digital content buffering order
US9977596B2 (en) 2012-12-27 2018-05-22 Dropbox, Inc. Predictive models of file access patterns by application and file type
WO2014108207A1 (en) * 2013-01-11 2014-07-17 Telefonaktiebolaget L M Ericsson (Publ) Technique for operating client and server devices in a broadcast communication network
US9661048B2 (en) 2013-01-18 2017-05-23 Numecent Holding, Inc. Asset streaming and delivery
CA2931762C (en) 2013-11-29 2020-09-22 Calgary Scientific Inc. Method for providing a connection of a client to an unmanaged service in a client-server remote access system
US9537971B2 (en) * 2015-01-29 2017-01-03 Huawei Technologies Co., Ltd. Systems, devices and methods for distributed content pre-fetching in mobile communication networks
US10015264B2 (en) 2015-01-30 2018-07-03 Calgary Scientific Inc. Generalized proxy architecture to provide remote access to an application framework
AU2016210974A1 (en) 2015-01-30 2017-07-27 Calgary Scientific Inc. Highly scalable, fault tolerant remote access architecture and method of connecting thereto
US20220212100A1 (en) * 2021-01-04 2022-07-07 Microsoft Technology Licensing, Llc Systems and methods for streaming interactive applications

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6816950B2 (en) * 2002-05-08 2004-11-09 Lsi Logic Corporation Method and apparatus for upgrading disk drive firmware in a RAID storage system
US6891740B2 (en) * 2003-08-29 2005-05-10 Hitachi Global Storage Technologies Netherlands B.V. Method for speculative streaming data from a disk drive

Family Cites Families (123)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5109413A (en) 1986-11-05 1992-04-28 International Business Machines Corporation Manipulating rights-to-execute in connection with a software copy protection mechanism
US4796220A (en) 1986-12-15 1989-01-03 Pride Software Development Corp. Method of controlling the copying of software
US5063500A (en) 1988-09-29 1991-11-05 Ibm Corp. System for executing segments of application program concurrently/serially on different/same virtual machine
US5701427A (en) 1989-09-19 1997-12-23 Digital Equipment Corp. Information transfer arrangement for distributed computer system
US5210850A (en) 1990-06-15 1993-05-11 Compaq Computer Corporation Memory address space determination using programmable limit registers with single-ended comparators
US5293556A (en) 1991-07-29 1994-03-08 Storage Technology Corporation Knowledge based field replaceable unit management
AU3944793A (en) 1992-03-31 1993-11-08 Aggregate Computing, Inc. An integrated remote execution system for a heterogenous computer network environment
EP0728333A1 (en) 1993-11-09 1996-08-28 Arcada Software Data backup and restore system for a computer network
US5495411A (en) 1993-12-22 1996-02-27 Ananda; Mohan Secure software rental system using continuous asynchronous password verification
CA2140850C (en) 1994-02-24 1999-09-21 Howard Paul Katseff Networked system for display of multimedia presentations
US5666293A (en) 1994-05-27 1997-09-09 Bell Atlantic Network Services, Inc. Downloading operating system software through a broadcast channel
US5696965A (en) 1994-11-03 1997-12-09 Intel Corporation Electronic information appraisal agent
US5715403A (en) 1994-11-23 1998-02-03 Xerox Corporation System for controlling the distribution and use of digital works having attached usage rights where the usage rights are defined by a usage rights grammar
US6282712B1 (en) 1995-03-10 2001-08-28 Microsoft Corporation Automatic software installation on heterogeneous networked computer systems
US5805809A (en) 1995-04-26 1998-09-08 Shiva Corporation Installable performance accelerator for maintaining a local cache storing data residing on a server computer
US5724571A (en) 1995-07-07 1998-03-03 Sun Microsystems, Inc. Method and apparatus for generating query responses in a computer-based document retrieval system
US5706440A (en) 1995-08-23 1998-01-06 International Business Machines Corporation Method and system for determining hub topology of an ethernet LAN segment
US5809144A (en) 1995-08-24 1998-09-15 Carnegie Mellon University Method and apparatus for purchasing and delivering digital goods over a network
US6047323A (en) 1995-10-19 2000-04-04 Hewlett-Packard Company Creation and migration of distributed streams in clusters of networked computers
US5778395A (en) 1995-10-23 1998-07-07 Stac, Inc. System for backing up files from disk volumes on multiple nodes of a computer network
US5948062A (en) 1995-10-27 1999-09-07 Emc Corporation Network file server using a cached disk array storing a network file directory including file locking information and data mover computers each having file system software for shared read-write file access
US5933603A (en) 1995-10-27 1999-08-03 Emc Corporation Video file server maintaining sliding windows of a video data set in random access memories of stream server computers for immediate video-on-demand service beginning at any specified location
US5764906A (en) 1995-11-07 1998-06-09 Netword Llc Universal electronic resource denotation, request and delivery system
US5909545A (en) 1996-01-19 1999-06-01 Tridia Corporation Method and system for on demand downloading of module to enable remote control of an application program over a network
JPH09231156A (en) 1996-02-28 1997-09-05 Nec Corp Remote execution device with program receiving function
US5838910A (en) 1996-03-14 1998-11-17 Domenikos; Steven D. Systems and methods for executing application programs from a memory device linked to a server at an internet site
US5752259A (en) * 1996-03-26 1998-05-12 Advanced Micro Devices, Inc. Instruction cache configured to provide instructions to a microprocessor having a clock cycle time less than a cache access time of said instruction cache
US5764910A (en) 1996-04-02 1998-06-09 National Semiconductor Corporation Method and apparatus for encoding and using network resource locators
US5903892A (en) 1996-05-24 1999-05-11 Magnifi, Inc. Indexing of media content on a network
US6018619A (en) 1996-05-24 2000-01-25 Microsoft Corporation Method, system and apparatus for client-side usage tracking of information server systems
US6151643A (en) 1996-06-07 2000-11-21 Networks Associates, Inc. Automatic updating of diverse software products on multiple client computer systems by downloading scanning application to client computer and generating software list on client computer
US5943424A (en) 1996-06-17 1999-08-24 Hewlett-Packard Company System, method and article of manufacture for processing a plurality of transactions from a single initiation point on a multichannel, extensible, flexible architecture
US6014686A (en) 1996-06-21 2000-01-11 Telcordia Technologies, Inc. Apparatus and methods for highly available directory services in the distributed computing environment
US6138271A (en) 1996-06-26 2000-10-24 Rockwell Technologies, Llc Operating system for embedded computers
US5835722A (en) 1996-06-27 1998-11-10 Logon Data Corporation System to control content and prohibit certain interactive attempts by a person using a personal computer
US5903732A (en) 1996-07-03 1999-05-11 Hewlett-Packard Company Trusted gateway agent for web server programs
US6061738A (en) 1997-06-27 2000-05-09 D&I Systems, Inc. Method and system for accessing information on a network using message aliasing functions having shadow callback functions
US6038610A (en) 1996-07-17 2000-03-14 Microsoft Corporation Storage of sitemaps at server sites for holding information regarding content
US5881232A (en) 1996-07-23 1999-03-09 International Business Machines Corporation Generic SQL query agent
EP0853788A1 (en) 1996-08-08 1998-07-22 Agranat Systems, Inc. Embedded web server
US5878425A (en) 1996-08-21 1999-03-02 International Business Machines Corp. Intuitive technique for visually creating resource files
US6601103B1 (en) 1996-08-22 2003-07-29 Intel Corporation Method and apparatus for providing personalized supplemental programming
US5991306A (en) 1996-08-26 1999-11-23 Microsoft Corporation Pull based, intelligent caching system and method for delivering data over a network
KR100487012B1 (en) 1996-09-11 2005-06-16 마츠시타 덴끼 산교 가부시키가이샤 Program reception/execution apparatus which can start execution of program even when only part of program is received, and program transmitter for it
US6226665B1 (en) 1996-09-19 2001-05-01 Microsoft Corporation Application execution environment for a small device with partial program loading by a resident operating system
US6085186A (en) 1996-09-20 2000-07-04 Netbot, Inc. Method and system using information written in a wrapper description language to execute query on a network
US6028925A (en) 1996-09-23 2000-02-22 Rockwell International Corp. Telephonic switching system, telephonic switch and method for servicing telephone calls using virtual memory spaces
US5911043A (en) 1996-10-01 1999-06-08 Baker & Botts, L.L.P. System and method for computer-based rating of information retrieved from a computer network
IL119486A0 (en) 1996-10-24 1997-01-10 Fortress U & T Ltd Apparatus and methods for collecting value
US5923885A (en) 1996-10-31 1999-07-13 Sun Microsystems, Inc. Acquisition and operation of remotely loaded software using applet modification of browser software
US6347398B1 (en) 1996-12-12 2002-02-12 Microsoft Corporation Automatic software downloading from a computer network
US5953506A (en) 1996-12-17 1999-09-14 Adaptive Media Technologies Method and apparatus that provides a scalable media delivery system
US5963944A (en) 1996-12-30 1999-10-05 Intel Corporation System and method for distributing and indexing computerized documents using independent agents
US6099408A (en) 1996-12-31 2000-08-08 Walker Digital, Llc Method and apparatus for securing electronic games
US5949877A (en) 1997-01-30 1999-09-07 Intel Corporation Content protection for transmission systems
US5903721A (en) 1997-03-13 1999-05-11 cha!Technologies Services, Inc. Method and system for secure online transaction processing
US6278992B1 (en) 1997-03-19 2001-08-21 John Andrew Curtis Search engine using indexing method for storing and retrieving data
US5948065A (en) 1997-03-28 1999-09-07 International Business Machines Corporation System for managing processor resources in a multisystem environment in order to provide smooth real-time data streams while enabling other types of applications to be processed concurrently
US6108420A (en) 1997-04-10 2000-08-22 Channelware Inc. Method and system for networked installation of uniquely customized, authenticable, and traceable software application
US5895454A (en) 1997-04-17 1999-04-20 Harrington; Juliette Integrated interface for vendor/product oriented internet websites
US5892915A (en) 1997-04-25 1999-04-06 Emc Corporation System having client sending edit commands to server during transmission of continuous media from one clip in play list for editing the play list
US5987454A (en) 1997-06-09 1999-11-16 Hobbs; Allen Method and apparatus for selectively augmenting retrieved text, numbers, maps, charts, still pictures and/or graphics, moving pictures and/or graphics and audio information from a network resource
US6453334B1 (en) * 1997-06-16 2002-09-17 Streamtheory, Inc. Method and apparatus to allow remotely located computer programs and/or data to be accessed on a local computer in a secure, time-limited manner, with persistent caching
CA2209549C (en) 1997-07-02 2000-05-02 IBM Canada Limited - IBM Canada Limitée Method and apparatus for loading data into a database in a multiprocessor environment
US5905868A (en) 1997-07-22 1999-05-18 Ncr Corporation Client/server distribution of performance monitoring data
US5933822A (en) 1997-07-22 1999-08-03 Microsoft Corporation Apparatus and methods for an information retrieval system that employs natural language processing of search results to improve overall precision
US5960411A (en) 1997-09-12 1999-09-28 Amazon.Com, Inc. Method and system for placing a purchase order via a communications network
US6101482A (en) 1997-09-15 2000-08-08 International Business Machines Corporation Universal web shopping cart and method of on-line transaction processing
US6192408B1 (en) 1997-09-26 2001-02-20 Emc Corporation Network file server sharing local caches of file access information in data processors assigned to respective file systems
US6085193A (en) 1997-09-29 2000-07-04 International Business Machines Corporation Method and system for dynamically prefetching information via a server hierarchy
US6192398B1 (en) 1997-10-17 2001-02-20 International Business Machines Corporation Remote/shared browser cache
US6253234B1 (en) 1997-10-17 2001-06-26 International Business Machines Corporation Shared web page caching at browsers for an intranet
US6026166A (en) 1997-10-20 2000-02-15 Cryptoworx Corporation Digitally certifying a user identity and a computer system in combination
US6219693B1 (en) 1997-11-04 2001-04-17 Adaptec, Inc. File array storage architecture having file system distributed across a data processing platform
US6094649A (en) 1997-12-22 2000-07-25 Partnet, Inc. Keyword searches of structured databases
US6415373B1 (en) 1997-12-24 2002-07-02 Avid Technology, Inc. Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner
US6298356B1 (en) 1998-01-16 2001-10-02 Aspect Communications Corp. Methods and apparatus for enabling dynamic resource collaboration
US6301629B1 (en) * 1998-03-03 2001-10-09 Alliance Semiconductor Corporation High speed/low speed interface with prediction cache
US6601110B2 (en) 1998-03-17 2003-07-29 Sun Microsystems, Inc. System and method for translating file-level operations in a non-door-based operating system to door invocations on a door server
US6185608B1 (en) 1998-06-12 2001-02-06 International Business Machines Corporation Caching dynamic web pages
US6330561B1 (en) 1998-06-26 2001-12-11 At&T Corp. Method and apparatus for improving end to end performance of a data network
US6587857B1 (en) 1998-06-30 2003-07-01 Citicorp Development Center, Inc. System and method for warehousing and retrieving data
US6154878A (en) 1998-07-21 2000-11-28 Hewlett-Packard Company System and method for on-line replacement of software
US6418555B2 (en) 1998-07-21 2002-07-09 Intel Corporation Automatic upgrade of software
US20020138640A1 (en) * 1998-07-22 2002-09-26 Uri Raz Apparatus and method for improving the delivery of software applications and associated data in web-based systems
US20010037400A1 (en) * 1998-07-22 2001-11-01 Uri Raz Method and system for decreasing the user-perceived system response time in web-based systems
US20010044850A1 (en) * 1998-07-22 2001-11-22 Uri Raz Method and apparatus for determining the order of streaming modules
US6574618B2 (en) 1998-07-22 2003-06-03 Appstream, Inc. Method and system for executing network streamed application
US6311221B1 (en) * 1998-07-22 2001-10-30 Appstream Inc. Streaming modules
US7197570B2 (en) * 1998-07-22 2007-03-27 Appstream Inc. System and method to send predicted application streamlets to a client device
US6510462B2 (en) 1998-09-01 2003-01-21 Nielsen Media Research, Inc. Collection of images in Web use reporting system
US6356946B1 (en) 1998-09-02 2002-03-12 Sybase Inc. System and method for serializing Java objects in a tubular data stream
US6622171B2 (en) 1998-09-15 2003-09-16 Microsoft Corporation Multimedia timeline modification in networked client/server systems
US6370686B1 (en) 1998-09-21 2002-04-09 Microsoft Corporation Method for categorizing and installing selected software components
US6418554B1 (en) 1998-09-21 2002-07-09 Microsoft Corporation Software implementation installer mechanism
US7225264B2 (en) * 1998-11-16 2007-05-29 Softricity, Inc. Systems and methods for delivering content over a computer network
US6763370B1 (en) * 1998-11-16 2004-07-13 Softricity, Inc. Method and apparatus for content protection in a secure content delivery system
US6374402B1 (en) 1998-11-16 2002-04-16 Into Networks, Inc. Method and apparatus for installation abstraction in a secure content delivery system
US6510466B1 (en) 1998-12-14 2003-01-21 International Business Machines Corporation Methods, systems and computer program products for centralized management of application programs on a network
US6584507B1 (en) 1999-03-02 2003-06-24 Cisco Technology, Inc. Linking external applications to a network management system
US7370071B2 (en) * 2000-03-17 2008-05-06 Microsoft Corporation Method for serving third party software applications from servers to client computers
US6938096B1 (en) * 1999-04-12 2005-08-30 Softricity, Inc. Method and system for remote networking using port proxying by detecting if the designated port on a client computer is blocked, then encapsulating the communications in a different format and redirecting to an open port
US6636961B1 (en) 1999-07-09 2003-10-21 International Business Machines Corporation System and method for configuring personal systems
US6510458B1 (en) 1999-07-15 2003-01-21 International Business Machines Corporation Blocking saves to web browser cache based on content rating
US6711619B1 (en) * 1999-12-15 2004-03-23 Hewlett-Packard Development Company, L.P. Method, system, and apparatus for distributing and using computer-based applications over a network
US6779179B1 (en) * 2000-03-20 2004-08-17 Exent Technologies, Inc. Registry emulation
US6598125B2 (en) 2000-05-25 2003-07-22 Exent Technologies, Ltd Method for caching information between work sessions
US6622137B1 (en) 2000-08-14 2003-09-16 Formula Telecom Solutions Ltd. System and method for business decision implementation in a billing environment using decision operation trees
US7051315B2 (en) * 2000-09-26 2006-05-23 Appstream, Inc. Network streaming of multi-application program code
US6757894B2 (en) * 2000-09-26 2004-06-29 Appstream, Inc. Preprocessed applications suitable for network streaming applications and method for producing same
US6918113B2 (en) * 2000-11-06 2005-07-12 Endeavors Technology, Inc. Client installation and execution system for streamed applications
US6959320B2 (en) * 2000-11-06 2005-10-25 Endeavors Technology, Inc. Client-side performance optimization system for streamed applications
US8831995B2 (en) * 2000-11-06 2014-09-09 Numecent Holdings, Inc. Optimized server for streamed applications
US20020083183A1 (en) * 2000-11-06 2002-06-27 Sanjay Pujare Conventionally coded application conversion system for streamed delivery and execution
US20020087883A1 (en) * 2000-11-06 2002-07-04 Curt Wohlgemuth Anti-piracy system for remotely served computer applications
US7043524B2 (en) * 2000-11-06 2006-05-09 Omnishift Technologies, Inc. Network caching system for streamed applications
US7062567B2 (en) * 2000-11-06 2006-06-13 Endeavors Technology, Inc. Intelligent network streaming and execution system for conventionally coded applications
US7028305B2 (en) * 2001-05-16 2006-04-11 Softricity, Inc. Operating system abstraction and protection layer
US7093077B2 (en) * 2001-11-30 2006-08-15 Intel Corporation Method and apparatus for next-line prefetching from a predicted memory address
US7735057B2 (en) * 2003-05-16 2010-06-08 Symantec Corporation Method and apparatus for packaging and streaming installation software
US20060048136A1 (en) * 2004-08-25 2006-03-02 Vries Jeff D Interception-based resource detection system
JP2008527468A (en) * 2004-11-13 2008-07-24 Stream Theory, Inc. Hybrid local/remote streaming
US20060136389A1 (en) * 2004-12-22 2006-06-22 Cover Clay H System and method for invocation of streaming application

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6816950B2 (en) * 2002-05-08 2004-11-09 Lsi Logic Corporation Method and apparatus for upgrading disk drive firmware in a RAID storage system
US6891740B2 (en) * 2003-08-29 2005-05-10 Hitachi Global Storage Technologies Netherlands B.V. Method for speculative streaming data from a disk drive

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9094480B2 (en) 1997-06-16 2015-07-28 Numecent Holdings, Inc. Software streaming system and method
US8831995B2 (en) 2000-11-06 2014-09-09 Numecent Holdings, Inc. Optimized server for streamed applications
US9654548B2 (en) 2000-11-06 2017-05-16 Numecent Holdings, Inc. Intelligent network streaming and execution system for conventionally coded applications
US8359591B2 (en) 2004-11-13 2013-01-22 Streamtheory, Inc. Streaming from a media device
US8527706B2 (en) 2005-03-23 2013-09-03 Numecent Holdings, Inc. Opportunistic block transmission with time constraints
US11121928B2 (en) 2005-03-23 2021-09-14 Numecent Holdings, Inc. Opportunistic block transmission with time constraints
US10587473B2 (en) 2005-03-23 2020-03-10 Numecent Holdings, Inc. Opportunistic block transmission with time constraints
US9781007B2 (en) 2005-03-23 2017-10-03 Numecent Holdings, Inc. Opportunistic block transmission with time constraints
WO2008030431A3 (en) * 2006-09-05 2008-07-10 Edda Technology Inc System and method for processing function/data on demand over network
WO2008030431A2 (en) * 2006-09-05 2008-03-13 Edda Technology, Inc. System and method for processing function/data on demand over network
US7877441B2 (en) 2006-09-05 2011-01-25 Edda Technology, Inc. System and method for delivering function/data by decomposing into task oriented analytical components
US9699194B2 (en) 2006-10-23 2017-07-04 Numecent Holdings, Inc. Rule-based application access management
US8261345B2 (en) 2006-10-23 2012-09-04 Endeavors Technologies, Inc. Rule-based application access management
US9825957B2 (en) 2006-10-23 2017-11-21 Numecent Holdings, Inc. Rule-based application access management
US10057268B2 (en) 2006-10-23 2018-08-21 Numecent Holdings, Inc. Rule-based application access management
US10356100B2 (en) 2006-10-23 2019-07-16 Numecent Holdings, Inc. Rule-based application access management
US9054963B2 (en) 2006-10-23 2015-06-09 Numecent Holdings, Inc. Rule-based application access management
US9054962B2 (en) 2006-10-23 2015-06-09 Numecent Holdings, Inc. Rule-based application access management
US11451548B2 (en) 2006-10-23 2022-09-20 Numecent Holdings, Inc. Rule-based application access management
GB2446832A (en) * 2007-02-23 2008-08-27 Quantel Ltd A file server system
US10445210B2 (en) 2007-11-07 2019-10-15 Numecent Holdings, Inc. Deriving component statistics for a stream enabled application
US11119884B2 (en) 2007-11-07 2021-09-14 Numecent Holdings, Inc. Deriving component statistics for a stream enabled application
US11740992B2 (en) 2007-11-07 2023-08-29 Numecent Holdings, Inc. Deriving component statistics for a stream enabled application

Also Published As

Publication number Publication date
US7240162B2 (en) 2007-07-03
WO2006047133A3 (en) 2007-05-03
JP2008518508A (en) 2008-05-29
EP1836581A2 (en) 2007-09-26
US20060106770A1 (en) 2006-05-18

Similar Documents

Publication Title
US7240162B2 (en) System and method for predictive streaming
US6088803A (en) System for virus-checking network data during download to a client device
KR100869421B1 (en) Splicing persistent connections
EP2636268B1 (en) Optimization of resource polling intervals to satisfy mobile device requests
US7107406B2 (en) Method of prefetching reference objects using weight values of referrer objects
US6038601A (en) Method and apparatus for storing and delivering documents on the internet
US6574618B2 (en) Method and system for executing network streamed application
Klemm WebCompanion: A friendly client-side Web prefetching agent
US7287082B1 (en) System using idle connection metric indicating a value based on connection characteristic for performing connection drop sequence
US10931773B1 (en) Faster web browsing using HTTP over an aggregated TCP transport
US20020138640A1 (en) Apparatus and method for improving the delivery of software applications and associated data in web-based systems
EP2772041B1 (en) Connection cache method and system
US20140019577A1 (en) Intelligent edge caching
US20030236862A1 (en) Method and system for determining receipt of a delayed cookie in a client-server architecture
GB2499747A (en) Adjusting a polling interval for a first service based on a polling interval of a second service to align traffic received from distinct hosts
US8589477B2 (en) Content information display device, system, and method used for creating content list information based on a storage state of contents in a cache
US7149800B2 (en) Auditing computer systems components in a network
US20040059827A1 (en) System for controlling network flow by monitoring download bandwidth
WO2006102621A2 (en) System and method for tracking changes to files in streaming applications
EP2596658A1 (en) Aligning data transfer to optimize connections established for transmission over a wireless network
US7069326B1 (en) System and method for efficiently managing data transports
US6934761B1 (en) User level web server cache control of in-kernel http cache
Padmanabhan et al. Improving world wide web latency
WO2002010929A1 (en) System and method for serving compressed content over a computer network
EP2290557A2 (en) A method and system for retrieving a resource

Legal Events

Code Title Description
AK Designated states; kind code of ref document: A2; designated state(s): AE AG AL AM AT AU AZ BA BB BG BW BY BZ CA CH CN CO CR CU CZ DK DM DZ EC EE EG ES FI GB GD GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV LY MD MG MK MN MW MX MZ NA NG NO NZ OM PG PH PL PT RO RU SC SD SG SK SL SM SY TJ TM TN TR TT TZ UG US UZ VC VN YU ZA ZM
AL Designated countries for regional patents; kind code of ref document: A2; designated state(s): GM KE LS MW MZ NA SD SZ TZ UG ZM ZW AM AZ BY KG MD RU TJ TM AT BE BG CH CY DE DK EE ES FI FR GB GR HU IE IS IT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW MR NE SN TD TG
121 EP: the EPO has been informed by WIPO that EP was designated in this application
WWE WIPO information: entry into national phase; ref document number: 2007537956; country of ref document: JP
NENP Non-entry into the national phase; ref country code: DE
WWE WIPO information: entry into national phase; ref document number: 2005808446; country of ref document: EP
WWP WIPO information: published in national office; ref document number: 2005808446; country of ref document: EP