US6356996B1 - Cache fencing for interpretive environments - Google Patents

Cache fencing for interpretive environments

Info

Publication number
US6356996B1
US6356996B1 (application US09/118,262; US11826298A)
Authority
US
United States
Prior art keywords
cache
instruction
processor
instructions
virtual machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US09/118,262
Inventor
Phillip M. Adams
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle International Corp
Original Assignee
Novell Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Novell Inc
Priority to US09/118,262
Assigned to NOVELL, INC. (Assignors: ADAMS, PHILLIP M.)
Priority to US09/705,370 (US6408384B1)
Application granted
Publication of US6356996B1
Assigned to CPTN HOLDINGS LLC (Assignors: NOVELL, INC.)
Assigned to ORACLE INTERNATIONAL CORPORATION (Assignors: CPTN HOLDINGS LLC)
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3802Instruction prefetching
    • G06F9/3814Implementation provisions of instruction buffers, e.g. prefetch buffer; banks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0875Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes
    • G06F12/0879Burst mode
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0888Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using selective caching, e.g. bypass
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/126Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45504Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0862Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement

Definitions

  • the present invention relates to the use of processor caches. More particularly, the present invention is directed to apparatus and methods for programmatically controlling the access and duration of stay of selected executables within processor cache.
  • Operations executed by a processor of a computer proceed in a synchronization dictated by a system clock. Accordingly, one characteristic of a processor is its clock speed.
  • a clock speed may be 33 megahertz, indicating that 33 million cycles per second occur in the controlling clock.
  • a processor may execute one instruction per clock cycle, less than one instruction per clock cycle, or more than one instruction per clock cycle.
  • Multiple execution units, such as are contained in a Pentium™ processor, may be operated simultaneously. Accordingly, this simultaneous operation of multiple execution units (arithmetic logic units, or ALUs) may provide more than a single instruction execution during a single clock cycle.
  • processing proceeds according to a clock's speed. Operations occur only as the clock advances from cycle to cycle. That is, operations occur as the clock cycles.
  • multiple processors may exist. Each processor may have its own clock. Thus, an arithmetic logic unit (ALU) may have a clock operating at one speed, while a bus interface unit may operate at another speed. Likewise, a bus itself may have a bus controller that operates at its own clock speed.
  • clock speed of a central processing unit does not dictate the speed of any operation of a device not totally controlled by that processor.
  • these devices must all interface with one another. The slowest device will limit the performance of all interfacing elements. Moreover, each device must be placed in the state required to comply with a request passed between elements. Any device that requires another device to wait while some higher-priority activity occurs may delay an entire process.
  • a request for an instruction or data within a hard drive, or even a main, random-access memory, associated with a computer must negotiate across a main system bus.
  • a central processing unit has a clock operating at one speed.
  • the bus has a controller with a clock that may operate at another speed.
  • the memory device has a memory management unit that may operate at another speed.
  • a Pentium™ processor having a clock speed of 100 megahertz may be connected to peripheral devices or main memory by an industry standard architecture (ISA) bus.
  • the ISA bus has a specified clock speed of 8 megahertz.
  • the data may not be processed or delivered at a speed greater than that of the bus at 8 megahertz.
  • a bus typically gives low priority to the central processing unit. In order to avoid underruns and overruns, the input/output devices receive priority over the processor.
  • the 100 megahertz processor may be “put on hold” by the bus while other peripheral devices have their requests filled.
  • a hardware handshake must occur for any communication.
  • a handshake including a request and an acknowledgement, must occur in addition to a transfer of actual data or signals.
  • Handshake protocols may actually involve several, even many, clock counts for the request alone, the acknowledgement alone, and for passing the data itself.
  • a transmission may be interrupted by a transaction having a higher priority.
  • Hardware interfacing may greatly reduce or eliminate the benefits of a high-speed processor.
  • In general, processors benefit from maintaining as close to themselves as possible all instructions, data, and clock control. This proximity reduces the need for interfaces, the number of interfaces, the interface complexity, and thus the time required for compliance with any instruction or necessary execution. Thus, caches have been moved closer and closer to the processor.
  • Memory caches are common. Such a cache is created within a dedicated portion of a memory device. These are different, however, from caches dedicated to a processor.
  • the INTEL 386™ processor supports an optional external cache connected to the processor through a cache controller chip.
  • the INTEL 486™ contains an internal 8-kilobyte cache on the central processing unit itself. Within the chip containing the processor is integrated a cache. This cache serves both code and data accesses.
  • the 486™ also supports another cache (a level-2 cache, as opposed to the primary or level-1 cache just described above). Access to the level-2 cache is through an external cache controller chip, similar to that of the 386™. In each case, for both the 386™ and 486™ processors, the external cache controller is itself positioned on a side of the processor's internal bus (CPU bus) opposite that of the processor.
  • the Pentium™ processors contain a level-1 (primary) data cache as well as a level-1 code cache. Thus, code and data are segregated, cached separately.
  • the Pentium™ processors continue to support an external, level-2 cache across a CPU bus.
  • bus refers to the processor bus, rather than the system bus.
  • main system bus connects a processor to the main memory.
  • a cache has some fixed amount of memory.
  • a code cache will contain certain executable instructions, a data cache will contain data, and a non-segregated cache may contain both.
  • the memory of any type of cache is typically subdivided into cache lines. For example, a typical cache line may contain 32 bytes of information. Thus, a cache line contains a standard number of bytes in which may be stored a copy of certain information obtained from a main memory device.
  • Associated with each cache line is a tag.
  • the tag binds a physical address and a logical address corresponding to the contents of an associated cache line.
  • the physical and logical addresses contained in the tag associated with a cache line may correspond to a physical location in the main memory device, and a logical position within an application respectively.
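The mapping from an address to a cache line, tag, and byte offset can be illustrated with a minimal Python sketch (not part of the patent; the 32-byte line size comes from the example above, while the number of sets is an arbitrary assumption):

```python
LINE_SIZE = 32   # bytes per cache line, per the example above
NUM_SETS = 256   # hypothetical number of sets in the cache

def split_address(addr: int):
    """Split an address into (tag, set_index, offset).

    The offset selects a byte within a 32-byte line; the set index selects
    a line slot; the tag is what the cache controller compares to decide
    whether a request hits in the cache.
    """
    offset = addr % LINE_SIZE
    line_number = addr // LINE_SIZE
    set_index = line_number % NUM_SETS
    tag = line_number // NUM_SETS
    return tag, set_index, offset
```

Adjacent addresses within the same 32-byte span map to the same line and differ only in offset, which is why one fill satisfies many nearby requests.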
  • Caches associated with a processor are transparent, even hidden, with respect to a user and an application.
  • Each cache has an associated controller.
  • a cache controller effectively “short circuits” a request from a processor to a memory unit. That is, if a particular address is referenced, and that address exists in a tag associated with the contents of a cache line in a cache, the cache controller will fulfill the request for the instruction out of the cache line containing it. The request is thus fulfilled transparently to the processor.
  • the effect of a cache is to eliminate, as much as possible, communication through hardware interfaces as described above.
  • a cache may greatly improve the processing speed of applications running on processors.
  • Tags may also have associated therewith two numbers referred to as “use bits.”
  • the use bits may typically represent a simple count of use. This count may be useful to the cache controller in determining which cache lines are the least recently used (LRU). Accordingly, a cache controller may refer to the LRU count to determine which cache lines have been referenced the least number of times.
  • some cache controllers may churn a cache. That is, if an insignificant number of bits is contained in the LRU or use bits, then a counter may be improperly reset to zero due to count “wrap-around” during high use. Thus, highly-used cache lines may actually be swapped out, churning the cache and dramatically decreasing efficiency.
  • a cache controller has a general purpose function to service address requests generally.
  • a virtual machine may be implemented in some limited number of instructions.
  • a computer processor has an underlying native language in which the virtual machine instructions are written.
  • the virtual machine instructions will be requested repeatedly.
  • the virtual machine instructions are accessed relatively slowly if they are treated simply as another general purpose instruction being retrieved periodically into the cache.
  • interpreter instructions are generally a set of native code instructions that together implement an instruction of a high level language that has not been compiled or linked for use on the particular hardware platform of the processor on which the interpretive environment is operating.
  • an application written for the Java virtual machine can operate upon any platform that also has access to the Java virtual machine.
  • the Java virtual machine comprises separately executable modules or interpreter instructions that recognize the instructions of the Java language and translate on the fly the Java instructions into the native machine code of the processor for which the virtual machine is designed.
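The on-the-fly dispatch described above can be sketched as a toy stack-based interpreter (illustrative Python, not the actual Java virtual machine; the opcode names and program format are invented). Each "virtual machine instruction" is implemented by a small handler, selected by opcode:

```python
def run(program):
    """Interpret a list of (opcode, argument) pairs on a value stack."""
    stack = []
    handlers = {
        "PUSH": lambda arg: stack.append(arg),                       # push a constant
        "ADD":  lambda _: stack.append(stack.pop() + stack.pop()),   # pop two, push sum
        "MUL":  lambda _: stack.append(stack.pop() * stack.pop()),   # pop two, push product
    }
    for opcode, arg in program:
        handlers[opcode](arg)   # dispatch to the native implementation
    return stack[-1]
```

Because the same small set of handlers is executed over and over for every program, keeping them resident in the processor cache is what the patent's fencing scheme is after.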
  • interpretive instructions be created that each occupy a single line of cache memory.
  • the interpretive instructions are loaded into cache, and “pinned,” so that they are not purged or replaced. Typically this pinning is accomplished through privileged systems levels commands to the cache memory.
  • an apparatus and method are disclosed in one embodiment of the present invention as including a central processing unit (CPU) having an operably associated processor cache, preferably a level-1 cache.
  • the level-1 cache is closest to the actual processor in the CPU.
  • the cache may actually be integrated into the CPU.
  • the processor may be programmed to install a full set of virtual machine instructions (VMI) in the cache.
  • the contents of physical memory may then be “fenced” to keep from displacing the VMI set from cache, thereby eliminating the “misses” of the individual VMI interpreter instructions by the processor that significantly slow down virtual machines.
  • an apparatus and method in accordance with the invention may “programmatically control” the contents of the cache.
  • the cache may be loaded with a full set of virtual machine instructions, properly compiled or assembled, linked, and loaded.
  • the set may incorporate, in a length not to exceed a standardized, specified number of cache lines, the executable, machine-language implementation of each command or instruction provided in an interpretive environment.
  • the set, fit to the total available cache lines, may define a virtual machine (the entire interpreter).
  • the set may be pinned, after being loaded into a previously evacuated cache. Alternatively, the contents of physical memory other than the VMI set may be fenced from the cache.
  • Loading may be accomplished by running a simple application having no particular meaning, but containing all of the VMIs at least once. Knowing that the cache will respond as designed, one may thus load all of the native code segments implementing the VMIs automatically into the cache in the fastest mode possible, controlled by the cache controller. Yet, the entire process is prompted by programmatic instructions, knowingly applied.
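The loading technique in this passage, running a meaningless application that touches every VMI once, can be modeled as follows (a hypothetical sketch; `handlers` stands in for the native code segments and `cache` for the set of resident cache lines):

```python
def warm_cache(handlers, cache):
    """Mock application: execute every interpreter instruction once.

    The first execution of each handler misses in the cache, so the cache
    controller fills a line with that handler's code; afterward the whole
    instruction set is resident.
    """
    for opcode, handler in handlers.items():
        handler()           # first execution -> cache miss -> line filled
        cache.add(opcode)   # model: the handler's line is now resident
    return cache
```

The point of the mock application is that no special loading instruction is needed; the cache's normal miss handling does the work, in its fastest mode.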
  • a pin manager may be interposed or hooked into an operating system to pin and unpin the processor cache associated with a processor hosting a multi-tasking operating system.
  • a pin manager may perform several functions in sequence. It tests for the presence of an interpretive process as the next in line to be executed by a processor. If such is present, the pin manager disables interrupts, flushes the processor cache (preferably with write-back if a non-segregated cache, in order to save data changes), loads the processor cache (preferably by execution of a mock application containing all the instructions of the interpretive environment), disables the processor cache to effectively pin the processor cache to continue operating without being able to change its contents, and then re-enables the interrupts to continue normal operation of the processor.
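The pin manager's sequence of steps listed above can be sketched as ordered operations (a Python model only; `RecordingSystem` is a hypothetical stand-in for the hardware operations named in the text, not a real API):

```python
from types import SimpleNamespace

class RecordingSystem:
    """Records the order of the hardware operations the pin manager invokes."""
    def __init__(self):
        self.log = []
    def disable_interrupts(self): self.log.append("disable interrupts")
    def flush_cache(self, write_back): self.log.append("flush cache")
    def run_mock_application(self): self.log.append("load cache")
    def disable_cache(self): self.log.append("pin cache")
    def enable_interrupts(self): self.log.append("enable interrupts")

def pin_manager(next_process, system):
    """Run the pin-manager steps, in order, for an interpretive process."""
    if not next_process.is_interpretive:
        return False                     # nothing to do for a native process
    system.disable_interrupts()          # no change of control flow during loading
    system.flush_cache(write_back=True)  # save dirty lines, invalidate the rest
    system.run_mock_application()        # load every interpreter instruction
    system.disable_cache()               # "pin": contents can no longer change
    system.enable_interrupts()           # resume normal operation
    return True
```

The ordering matters: interrupts stay disabled across the flush and load so an interrupt service routine cannot slip into the cache before it is pinned.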
  • the pin manager may be adapted to achieve fencing as an alternative to disabling the processor cache. Fencing involves accessing information registers that control the paging of memory. These information registers typically include an “uncacheable” provision for preventing caching of a particular page. Under the present invention, all of the pages of physical memory are marked uncacheable, with the exception of those that contain the virtual machine interpreter instructions, which are left cacheable. A loading program is then called to load the interpretive instructions into cache memory. The virtual machine may be quickly swapped into and out of memory using fencing.
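The fencing scheme, every page uncacheable except those holding the interpreter, reduces to a per-page cacheability table (an illustrative sketch; a real x86 system would express this through page attributes and memory-type registers, not a Python dictionary):

```python
def fence_memory(num_pages, vmi_pages):
    """Return a page -> cacheable map modeling a fenced physical memory.

    Only the pages holding the virtual machine interpreter instructions
    remain cacheable; every other page is marked uncacheable, so nothing
    else can displace the interpreter from the cache.
    """
    return {page: (page in vmi_pages) for page in range(num_pages)}
```

Because only page attributes change, fencing can be turned on and off far faster than re-loading the cache, which is what makes swapping the virtual machine in and out quick.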
  • the invention may disable interrupts in order to eliminate all possibility of a change in control flow during “loading” of the cache with the desired contents. Otherwise, an interrupt from a hardware device may pre-empt current execution, loading an interrupt service routine into the processor cache.
  • the pin manager may then flush the processor cache.
  • a flush of a processor cache invalidates all of the contents of the cache lines in the cache.
  • Write-back saves the contents of altered (dirty) cache lines back to main memory.
  • the pin manager then loads the processor cache, preferably by running a mock application.
  • the mock application may introduce every desired code segment, each implementing an individual interpreter instruction into the cache.
  • the pin manager may re-enable the interrupts. Re-enablement returns the processor to normal operation.
  • the virtual machine interpreter instructions remain in cache so long as the contents of the rest of physical memory remain fenced.
  • FIG. 1 is a schematic block diagram of an apparatus in accordance with the invention
  • FIG. 2 is a schematic block diagram showing implementation details for one embodiment of the apparatus of FIG. 1;
  • FIG. 3 is a schematic block diagram of executable modules and data structures consistent with one implementation of an apparatus and method in accordance with the invention
  • FIG. 4 is a schematic block diagram of a method in accordance with the invention.
  • FIG. 5 is a schematic block diagram of registers used for addressing
  • FIG. 6 is a schematic block diagram of an operating system that may be executed by the processor of FIG. 1;
  • FIG. 7 is a schematic block diagram of processes occurring in a scheduler of FIG. 6, illustrating hooking a pin manager therein;
  • FIG. 8 is a schematic block diagram of an alternative representation of processes of FIG. 7 illustrating states of a process or thread executed by the processor in accordance with the scheduler;
  • FIG. 9 is a schematic block diagram of steps associated with a pin manager, generalizing the fast loading process of FIG. 4, and adapting it to a multi-tasking environment;
  • FIG. 10 is a schematic block diagram illustrating the use of paging within physical memory to achieve cache fencing
  • FIG. 11 is a schematic block diagram illustrating a page table entry used under one embodiment of cache fencing
  • FIG. 12 is a schematic block diagram illustrating physical memory and MTRRs associated with logical pages of physical memory.
  • FIG. 13 is a schematic block diagram of one embodiment of a method of cache fencing.
  • an apparatus 10 may include a node 11 (client 11 , computer 11 ) containing a processor 12 or CPU 12 .
  • the CPU 12 may be operably connected to a memory device 14 .
  • a memory device 14 may include one or more devices such as a hard drive or non-volatile storage device 16 , a read-only memory 18 (ROM) and a random access (and usually volatile) memory 20 (RAM).
  • the apparatus 10 may include an input device 22 for receiving inputs from a user or another device. Similarly, an output device 24 may be provided within the node 11 , or accessible within the apparatus 10 . A network card 26 (interface card) or port 28 may be provided for connecting to outside devices, such as the network 30 .
  • a bus 32 may operably interconnect the processor 12 , memory devices 14 , input devices 22 , output devices 24 , network card 26 and port 28 .
  • the bus 32 may be thought of as a data carrier.
  • the bus 32 may be embodied in numerous configurations. Wire, fiber optic line, wireless electromagnetic communications by visible light, infrared, and radio frequencies may likewise be implemented as appropriate for the bus 32 and the network 30 .
  • Input devices 22 may include one or more physical embodiments.
  • a keyboard 34 may be used for interaction with the user, as may a mouse 36 .
  • a touch screen 38 , a telephone 39 , or simply a telephone line 39 may be used for communication with other devices, with a user, or the like.
  • a scanner 40 may be used to receive graphical inputs which may or may not be translated to other character formats.
  • a hard drive 41 or other memory device 14 may be used as an input device whether resident within the node 11 or some other node 52 (e.g., 52 a , 52 b , etc.) on the network 30 , or from another network 50 .
  • Output devices 24 may likewise include one or more physical hardware units.
  • the port 28 may be used to accept inputs and send outputs from the node 11 .
  • a monitor 42 may provide outputs to a user for feedback during a process, or for assisting two-way communication between the processor 12 and a user.
  • a printer 44 or a hard drive 46 may be used for outputting information as output devices 24 .
  • a network 30 to which a node 11 connects may, in turn, be connected through a router 48 to another network 50 .
  • two nodes 11 , 52 may be on a network 30 , adjoining networks 30 , 50 , or may be separated by multiple routers 48 and multiple networks 50 as individual nodes 11 , 52 on an internetwork.
  • the individual nodes 52 (e.g. 52 a , 52 b , 52 c , 52 d ) may have various communication capabilities.
  • a minimum of logical capability may be available in any node 52 .
  • any of the individual nodes 52 a- 52 d may be referred to, as may all together, as a node 52 .
  • a network 30 may include one or more servers 54 .
  • Servers may be used to manage, store, communicate, transfer, access, update, and the like, any number of files for a network 30 .
  • a server 54 may be accessed by all nodes 11 , 52 on a network 30 .
  • other special functions, including communications, applications, and the like may be implemented by an individual server 54 or multiple servers 54 .
  • a node 11 may need to communicate over a network 30 with a server 54 , a router 48 , or nodes 52 .
  • a node 11 may need to communicate over another network ( 50 ) in an internetwork connection with some remote node 52 .
  • individual components 12 - 46 may need to communicate data with one another.
  • a communication link may exist, in general, between any pair of devices.
  • a processor 12 may include several internal elements. Connected to the bus 32 , a bus interface unit 56 handles the bus protocols enabling the processor 12 to communicate to other devices over the bus 32 . For example, the instructions or data received from a ROM 18 or data read from or written to the RAM 20 may pass through the bus interface unit 56 .
  • a level-1 cache 58 may be integrated into the processor 12 .
  • the level-1 cache 58 may be optionally subdivided into an instruction cache 60 and a data cache 62 .
  • a level-1 cache 58 is not required in a processor 12 . Moreover, segregation of the instruction cache 60 from the data cache 62 is not required. However, a level-1 cache 58 provides rapid access to instructions and data without resort to the main memory 18 , 20 (RAM 20 ). Thus, the processor 12 need not access (cross) the bus interface unit 56 to obtain cached instructions and data.
  • the external cache 64 is identified as a level-2 cache in FIG. 2 . Nevertheless, the level-2 cache 64 may be a level-1 cache if no level-1 cache 58 is present on the processor 12 directly. Similarly, the external cache 64 may or may not be segregated between an instruction cache 66 and a data cache 68 . Any suitable processor cache may be used.
  • Execution normally associated with a processor 12 , is actually most closely related to a fetch/decode unit 70 , an execute unit 72 , and a write-back unit 74 .
  • a cache controller associated with each cache 58 , 64 , is typically an inherent, integrated, hardware controller.
  • the cache controller may be thought of as control logic built into the cache hardware.
  • the level-1 cache 58 makes a determination whether or not the request can be satisfied by data or instructions identified with the logical address requested from cached data and instructions.
  • the level-2 cache 64 may respond to the request. If the desired item (data or instruction) is not present in either the level-1 cache 58 or the level-2 cache 64 , then the main memory 18 , 20 may respond with the desired item. Once the request has been fulfilled by the fastest unit 58 , 64 , 20 , 18 to respond with the desired item, the request is completed, and no other devices will respond.
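The fastest-responder behavior described above can be modeled as a lookup cascade (illustrative Python; dictionaries stand in for the level-1 cache, level-2 cache, and main memory):

```python
def fetch(addr, l1, l2, main_memory):
    """Fulfill a request from the fastest level holding the desired item.

    On a level-1 hit the request is satisfied immediately; on a miss the
    next level responds and the item is filled into the faster level(s),
    so repeated requests hit close to the processor.
    """
    if addr in l1:
        return l1[addr], "L1"
    if addr in l2:
        value = l2[addr]
        l1[addr] = value        # promote into level-1
        return value, "L2"
    value = main_memory[addr]
    l2[addr] = value            # fill both cache levels on the way back
    l1[addr] = value
    return value, "memory"
```

Once any level responds, the request is complete; no slower device need answer, which is the "short circuit" the cache controller performs transparently.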
  • Main memory may include the ROM 18 , the RAM 20 , or both. Nevertheless, many computers boot up using the contents of the ROM 18 and thereafter use the RAM 20 for temporary storage of data associated with applications and the operating system. Whenever “main memory” is mentioned, it is contemplated that it may include any combination of the ROM 18 and RAM 20 .
  • the fetch 71 a and decode 71 b are typically highly integrated, and perform in an overlapped fashion. Accordingly, a fetch/decode unit 70 is typical.
  • the decode unit 71 b may identify a current instruction to be executed. Identification may involve identification of what type of instruction, what type of addressing, what registers will be involved, and the like. The presence of the instruction in an instruction register, may itself stimulate execution on the next clock count.
  • an execute unit 72 may immediately process the instruction through low-level, control-loop hardware. For example, sequencers, registers, and arithmetic logic units may be included in an execute unit 72 .
  • the registers 76 are hidden from programmers and applications. Nevertheless, the hardware architecture of the processor 12 provides a hardware logic governing interaction between the units 70 , 72 , 74 and between the registers 76 and the units, 70 , 72 , 74 .
  • a write-back unit 74 may provide an output. Accordingly, the output may be passed to the bus interface unit 56 to be stored as appropriate. As a practical matter, a result may be stored in a cache 58 of a level-1 variety or in a level-2 cache 64. In either event, a write-back unit 74 will typically write through to the main memory 18 , 20 an image of the result.
  • Modern processors 12 , particularly the Pentium™ processors, use a technique called pipelining.
  • Pipelining passes an instruction through each of the fetch/decode/execute steps undergone by that instruction as quickly as possible. An individual instruction is not passed completely through all of its processing steps before the next instruction in order is begun.
  • a first instruction may be fetched, and on the next clock count another instruction may be fetched while the first instruction is being decoded.
  • a certain parallel, although slightly offset in time, processing occurs for instructions.
  • An advantage of a method and apparatus in accordance with the invention is that instructions may be more effectively pipelined. That is, prediction routines have been built into hardware in the Pentium™ class of processors 12 . However, prediction is problematic. Inasmuch as a branch may occur within approximately every five machine code instructions on average, the pipeline of instructions will be in error periodically. Depending on the sophistication of a prediction methodology, one or more instructions in a pipeline may be flushed after entering a pipeline at the fetch unit 71 a.
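The benefit of overlap, and the cost of such pipeline flushes, can be approximated with a simple cycle count (an idealized model, not actual Pentium™ timing; the three-stage depth and two-cycle flush penalty are assumptions):

```python
def pipeline_cycles(n_instructions, stages=3, mispredictions=0, flush_penalty=2):
    """Compare cycles for an ideal pipeline against unpipelined execution.

    An ideal pipeline completes one instruction per cycle after the first
    instruction drains through all stages; each mispredicted branch adds a
    flush penalty while the wrongly fetched instructions are discarded.
    """
    pipelined = stages + (n_instructions - 1) + mispredictions * flush_penalty
    unpipelined = stages * n_instructions
    return pipelined, unpipelined
```

With a branch roughly every five instructions, frequent mispredictions erode the pipelined advantage, which is why keeping the interpreter's short, predictable handlers cache-resident helps pipelining as well.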
  • FIG. 3 a virtual machine 90 or an instruction set 90 implementing a virtual machine 90 on a processor 12 is illustrated schematically. Relationships are illustrated for caching 80 or a cache system 80 for storing loaded and executable instructions 106 (e.g. 106 a ) corresponding to virtual machine instructions 91 (e.g. 91 a ) of a virtual machine 90 or virtual machine instruction set 90 .
  • a virtual machine 90 may be built upon any available programming environment. Such virtual machines 90 may sometimes be referred to as interpreters, or interpreted systems. Alternatively, virtual machines 90 are sometimes referred to as emulators, wherein a set of instructions 91 a-n may be hosted on a processor 12 of one type to mimic or emulate the functional characteristics of a processor 12 in a hardware device of any other type.
  • An application may be written to run on or in an environment created for a first hardware device. After the application is fully developed and operational, the application may then be “ported” to another machine. Porting may simply include writing a virtual machine 90 for the second hardware platform.
  • an application may be developed in the native language of a first machine, and a single set 90 of virtual machine instructions 91 a-n may be created to emulate the first machine on a second machine.
  • a virtual machine 90 is sometimes referred to as an emulation layer.
  • an emulation layer or virtual machine 90 may provide an environment so that an application may be platform-independent.
  • a JAVA interpreter for example, performs such a function.
  • An executable 82 loaded into main memory 18 , 20 contains the original images of the contents of the cache system 80 .
  • a building system 84 , that may be thought of as an apparatus, modules running on an apparatus, or a system of steps to be performed by an apparatus, is responsible for building contents to be loaded into the executable 82 .
  • a builder 86 may be tasked with building and loading an executable image 100 of a virtual machine 90 .
  • a builder 88 may build an executable image 130 of the instructions 106 implementing an application written in the virtual machine instructions 91 constituting the virtual machine 90 .
  • the executable 130 or executable image 130 may represent any application ready to be executed by the execute unit 72 of the processor 12 .
  • One embodiment of an executable 130 or an image 130 may be an application written specifically to prompt a high speed loading as described with respect to FIG. 4 below.
  • a virtual machine 90 or a set 90 of virtual machine instructions 91 a-n may contain an individual instruction (e.g. 91 a , 91 b , 91 n ) corresponding to each specific, unique function that must be accommodated by the virtual machine 90 .
  • the virtual machine instruction 91 n for example, provides the ability to terminate execution.
  • the builder 86 may include source code 90 , that is, virtual machine source code 90 .
  • the source code 90 may be assembled or compiled by an assembler 92 or compiler 92 , as appropriate.
  • the virtual machine may operate adequately, whether dependent on assembly or compilation.
  • the assembler 92 or compiler 92 operates for native code.
  • Native code may be thought of as code executable directly on a processor 12 in the apparatus 10 .
  • By native code is meant the processor-specific instructions 91 that may be executed directly by a processor 12 . By directly is not necessarily meant that the native code is always written in binary ones and zeros.
  • Native code 106 may be written in a language to be assembled 92 or compiled 92 into object code 94 and to be eventually linked 96 into an executable 100 loaded for execution. Executables 100 may then be loaded 99 into a memory device 20 , 18 for ready execution on or by an execute unit 72 of a processor 12 .
  • An executable 100 stored in a non-volatile storage device 16 may sometimes be referred to as an executable file. Once properly loaded 99 into the main memory 18 , 20 associated with a processor 12 , an executable 100 may be executed by a processor 12 .
  • the assembler 92 or compiler 92 provides object code 94 in native code instructions.
  • the object code 94 may be linked to library routines or the like by a linker 96 .
  • the linker 96 may provide all other supporting instructions necessary to run the object code 94 .
  • the linker 96 provides, as output, executable code 98 .
  • the executable code 98 will be run directly from main memory 18 , 20 as a loaded executable 100 .
  • a loader 99 may load the executable code 98 into main memory 18 , 20 as the loaded code 100 .
  • Code segments 106 a-n are written in native code.
  • the result is the desired output from the corresponding virtual machine instruction 91 a-n (e.g. 91 a , 91 b , 91 c , 91 n , respectively).
  • Virtual machine instructions 91 a-n identify every available function that may be performed by the virtual machine 90 .
  • the instructions 106 a-n illustrate segments 106 a-n : implementations in native code, executable by the hardware (the processor 12 ), that must produce the result associated with each individual virtual machine instruction 91 a-n.
  • Each of the code segments 106 a-n contains a FETCH instruction 108 , a DECODE instruction 110 , and a JUMP instruction 112 .
  • the instructions 108 - 112 promote pipelining.
  • the subjects of the respective instructions DECODE 110 , FETCH 108 , and JUMP 112 correspond to the very next instruction, the second next instruction, and the third next instruction, respectively, following the instruction 91 a-n being executed and corresponding to the code segment 106 a-n in question.
  • a virtual machine instruction set 90 should include a HALT instruction 91 n .
  • a virtual machine instruction 91 n within the virtual machine 90 will contain a segment 106 n of native code indicating to the processor 12 the fetching and decoding process for instructions used in all applications.
  • the last virtual machine instruction 91 a-n contained within a loaded application 130 is a HALT instruction 91 n ( 106 n ).
  • the loaded executable 100 may be stored in a block 114 separated by block boundaries 116 .
  • each block 114 contains 32 bytes of data.
  • the instruction set 90 or virtual machine 90 contains no more than 256 virtual machine instructions 91 a-n .
  • the code segments 106 a-n when compiled, linked, and loaded, may each be loaded by the loader 99 to begin at a block boundary 116 , in one currently preferred embodiment.
  • the number of blocks 114 and the size of each block 114 may be configured to correspond to a cache line 140 in the cache 60 .
  • an image of a code segment 106 a-n , compiled, linked, and loaded for each virtual machine instruction 91 a-n exists in a single cache line 140 .
  • every such virtual machine instruction 91 a-n and its native code segment 106 a-n has an addressable, tagged cache line 140 available in the 256 cache lines.
  • a builder 88 may build any virtual machine application 120 .
  • In FIG. 3 , the process of building an application 120 is illustrated.
  • a mock application may be constructed for the exclusive purposes of high-speed loading of the code segments 106 into the cache lines 140 .
  • virtual machine source language code 120 or source code 120 may be written to contain instructions 91 arranged in any particular order.
  • instructions 91 are used by a programmer in any suitable order to provide and execute an application 120 .
  • the source code 120 may simply contain each of the virtual machine instructions 91 in the virtual machine language.
  • the source code 120 may be assembled or compiled by an assembler 122 or compiler 122 depending on whether the language is an assembled or a compiled language.
  • the assembler 122 or compiler 122 generates (emits, outputs) virtual machine code.
  • the output of the assembler 122 or compiler 122 is object code 124 .
  • the object code 124 may be linked by a linker 126 to produce an executable code 128 .
  • the executable code 128 may be loaded by a loader 129 into main memory 18 , 20 as the loaded executable 130 .
  • the loaded executable 130 is still in virtual machine code. Thus, an application developed in the virtual machine language must be run on a virtual machine.
  • the virtual machine 90 is stored in the cache 60 .
  • the cache 60 may actually be thought of as any processor cache, but the closest cache to a processor 12 is capable of the fastest performance.
  • the loaded executable 130 is comprised of assembled or compiled, linked, and loaded, virtual machine instructions 132 .
  • a main memory device 20 is byte addressable.
  • Each of the virtual machine instructions 132 begins at an address 134 .
  • each virtual machine instruction 132 may be of any suitable length required.
  • a virtual machine address zero 135 may be identified by a pointer as the zero position in the virtual machine 130 .
  • Each subsequent address 134 may thus be identified as an offset from the virtual machine zero 135 .
  • a last instruction 136 should be effective to provide an exit from the loaded executable 130 .
  • loaded executables 130 are executed in the order they are stored in the memory device 20 .
  • the cache 60 has associated therewith a tag table 142 .
  • an appropriate tag line 144 exists (e.g. 144 a , 144 b , 144 c ).
  • each tag line 144 contains a logical address 146 corresponding to the address 134 of the cache line 140 in question.
  • a physical address 148 in a tag line 144 corresponds to an address 116 or block boundary 116 at which the code 114 is stored in the main memory 18 , 20 .
  • a control field 144 c may contain symbols or parameters identifying access rights, and the like for each cache line 140 .
  • a loaded executable 130 (application 130 ) has a logical address 134 associated with each virtual machine instruction 132 .
  • the logical address 134 associated with the beginning of an instruction 132 is bound by the tag table 142 to the physical address 116 of the corresponding code segment 106 , whose compiled, linked, and loaded image is stored at the respective cache line 140 . The associated tag line 144 thus binds the logical address 134 , 146 to the physical address 116 , 148 .
  • the method 160 locks or pins a cache after loading the native code implementation of individual virtual machine instructions into the cache.
  • a disable 162 may be executed by the processor to disable interrupts from being serviced.
  • the disable 162 provides temporary isolation for the cache 60 , enabling completion of the process 160 or method 160 .
  • the cache 60 is next flushed 164 , typically with write-back, which causes “dirty” cache data to be written back to main memory 18 , 20 .
  • the control field 150 may be a byte indicating that each cache line 140 is available.
  • the processor 12 need not thereafter execute the multiple steps to remove the contents of any cache line 140 in preparation for loading new contents.
  • the execute steps 166 correspond to execution by the processor 12 of individual instructions 132 in a loaded application 130 .
  • Upon fetching each instruction 132 for execution 166 , the processor 12 places a request for the instruction 132 next in order in the loaded application 130 .
  • the cache controller for the cache 60 first reviews the contents of the tag table 142 to determine whether or not the desired instruction is present in the cache 60 . Having been flushed, the cache 60 has no instructions initially. Accordingly, with each execute 166 , a new instruction 132 is loaded from the main memory 18 , 20 into the cache 60 at some appropriate cache line 140 . Immediately after loading into the cache 60 , each instruction 132 in order is executed by the processor 12 . However, at this point, any output is ignored. The execution 166 is simply a by-product of “fooling” the cache into loading all the instructions 132 as rapidly as possible, as pre-programmed into the hardware.
  • a loaded application 130 contains every instruction 132 required to form a complete set of instructions for a virtual machine.
  • the instructions 132 are actually code segments 106 implementing a virtual machine instruction 91 in the native code of the processor 12 . No output is needed from the initial application 130 run during the method 160 .
  • the virtual machine instruction set 100 is written so that each block 114 contains a single instruction 91 . Moreover, the instruction set 90 is written to occupy exactly the number of cache lines 140 available in the cache 60 .
  • an individual instruction 91 may occupy more than a single cache line 140 .
  • some caches may have a 16 byte line length.
  • a 32 byte length for an instruction 91 may require two cache lines 140 .
  • a number of cache lines 140 may correspond exactly to the number of blocks 114 required to hold all of the instructions 91 , such that each instruction 91 may be addressed by referring to a unique cache line 140 .
  • each cache line 140 contains a code segment 106 or native code segment 106 implementing a virtual machine instruction 91 .
  • Each cache line 140 contains the code segment 106 corresponding to a virtual machine instruction 91 in a cache 60 having a line length of 32 bytes.
  • a disable 168 may disable the cache 60 .
  • the effect of the disable 168 is to pin the contents of each cache line 140 .
  • Pinning indicates that the cache controller is disabled from replacing the contents of any cache line 140 .
  • the cache 60 continues to operate normally, otherwise.
  • the controller of the cache 60 will continue to refer to the tag table 142 to determine whether or not an address 146 , 148 requested is present.
  • every instruction 91 will be present in the cache 60 , if the instructions are designed in accordance with the invention.
  • the tag table 142 will always contain an entry for any address 146 , 148 representing a virtual machine instruction 91 , binding it to the associated code 106 .
  • unused cache lines 140 may be devoted to other code, loaded in a similar way, prior to pinning. Code may be selected according to recency of use, cost/benefit analysis of use, or cost/benefit analysis of retrieval from main memory 18 , 20 .
  • the cache 60 is used by way of example.
  • the virtual machine 90 will operate fastest by using the cache 60 closest to the fetch/decode unit 70 .
  • another cache 64 may be used.
  • everything describing the cache 60 may be applied to the cache 66 or the cache 64 so far as loading and pinning of the cache 60 are concerned.
  • the enable 170 may re-enable the interrupts so that the processor 12 may resume normal operations.
  • an efficient fetch/decode/JUMP algorithm may begin with an XOR of the contents of a register EAX 180 against itself.
  • the effect of the XOR is to zero out the contents of the EAX register 180 .
  • the contents of register EAX 180 may represent a pointer.
  • Next, a MOVE instruction (MOV) may be executed to fetch the next virtual machine instruction.
  • the register AL 186 is the lower eight bits of the AX register 182 .
  • the AX register 182 is the lower 16 bits of a 32 bit EAX register 180 .
  • the upper eight bits of the AX register 182 constitute the AH register 184 .
  • the AL 186 or lower register 186 thus receives the contents of a memory location corresponding to a current instruction 91 being pointed at by the contents of the EBX 190 register.
  • a SHIFT instruction may shift left by five bits (effectively a multiplication by a value of 32) the contents of the EAX register 180 . Since the EAX register 180 was zeroed out, and only the AL register 186 was filled, a shift left of the EAX register 180 multiplies its value by 32. This shift left is effectively a decoding of the instruction that was fetched by the MOVE instruction.
  • a JUMP instruction may be implemented to position EAX in the set of virtual machine instructions.
  • each virtual machine instruction 91 in the complete set 90 when loaded, is written within the same number of bytes (32 bytes for the native code segment implementing the virtual machine instruction).
  • the code segment 106 for each instruction 91 begins at a block boundary 116 and at the beginning of a cache line 140 .
  • a virtual machine instruction number multiplied by 32 will step through each of the native code segments 106 .
  • a JUMP to EAX constitutes a direct addressing of the native code segment 106 required to implement a particular virtual machine instruction 91 .
  • Cache technology is described in detail in Computer Architecture: A Quantitative Approach by John L. Hennessy and David A. Patterson, published in 1990 by Morgan Kaufmann Publishers, Inc. of San Mateo, Calif. (see Chapter 8).
  • any type of cache 60 may be used.
  • a two-way set associative cache 60 may be used.
  • a cache line 140 may contain some selected number of bytes, as determined by the hardware. Typical cache lines 140 have a length of 16 or 32 bytes. Likewise, each cache structure will have some number of addressable lines. An eight bit addressing scheme provides 256 cache lines in a cache.
  • Each byte of memory within a memory device 14 is directly addressable.
  • One common caching scheme for a direct mapped cache architecture may map a memory device 20 to cache lines 140 by block.
  • the memory's addressable space may be subdivided into blocks, each of the same size as a cache line. For example, an entire random access memory 20 may be subdivided into 32-byte blocks for potential caching.
  • a significant feature of a direct-mapped cache is that every block of memory within the source memory device 20 has a specific cache line 140 to which it will be cached any time it is cached.
  • the least significant bits in an address corresponding to a block within a memory device may be truncated to the same size as the address of a cache line 140 .
  • every block of memory 20 is assigned to a cache line 140 having the same least significant bit address.
  • Assignment of cache line 140 space to a particular block of memory 20 is made as needed according to some addressing scheme. Typical schemes may include random replacement. That is, a particular cache line 140 may simply be selected at random to receive an incoming block to be cached.
  • Alternative schemes may include a least-recently-used (LRU) algorithm.
  • a count of accesses may be maintained in association with each cache line 140 .
  • the cache line 140 that has been least recently accessed by the processor 12 may be selected to have its contents replaced by an incoming block from the memory device 20 .
  • a set-associative architecture subdivides an associative cache into some number of associative caches. For example, all the lines 140 of a cache 60 may typically be divided into groups of two, four, eight, or sixteen, called “ways.” Referring to the number of these ways or subcaches within the overall cache 60 , as n, this subdivision has created an n-way set-associative cache 60 .
  • Mapping of block-frame addresses from a main memory device 20 to a cache line 140 uses the associative principle. That is, each way includes an n th fraction of all the available cache lines 140 from the overall cache 60 . Each block from the main memory device 20 is mapped to one of the ways. However, that block may actually be sent to any of the cache lines 140 within an individual way according to some available scheme. Either the LRU or the random method may be used to place a block into an individual cache line 140 within a way.
  • a main memory address may be mapped to a way by a MODULO operation on the main memory address by the number of ways.
  • the MODULO result then provides the number of a “way” to which the memory block may be allocated.
  • An allocation algorithm may then allocate the memory block to a particular cache line 140 within an individual way.
  • Another cache may be used, with less effective results. Loading and pinning may also be done using test instructions, although this is more time-consuming. Instead of test instructions, the proposed method flushes the cache, then runs a simple application 130 containing every virtual machine instruction (VMI) 91 of a desired set 90 to be loaded. Before disabling the processor cache 60 , the method uses the cache's internal programming, built into the fundamental hardware architecture, to provide a high-speed load. Disabling permits access to the processor cache 60 , but not replacement, completing an effective pinning operation.
  • the closest cache to the processor is used as the processor cache 60 .
  • the level-1 code cache 60 may be used.
  • an external cache 64 or a level-1 integrated (not segregated between code and data) cache 58 may be used.
  • any cache 58 , 60 , 64 may be used, and the closest is preferred.
  • Pinning is particularly advantageous once an environment, or rather the executable instructions constituting an environment, have been programmed in a form that fits the entire instruction set into an individual processor cache 60 , with one instruction corresponding to one cache line 140 .
  • Benefits derived from this method of architecting and pinning the virtual machine are several.
  • no cache line 140 , during execution of a virtual machine 90 , need ever be reloaded from main memory 18 , 20 .
  • access times within memory devices 14 themselves vary.
  • a cache access time is an order of magnitude less than the access time for a main memory location. Reloading a cache line 140 is likewise a time-consuming operation.
  • every branch destination (the object of a JUMP) within the virtual machine 90 may be located at a fixed cache line position. Thus, no penalty is created for address generation within the cache 60 itself. Rather, each cache line 140 may be addressed directly as the address of the instruction 91 being requested.
  • a cache controller must manage an addressing algorithm that first searches for a requested reference within the cache. If the reference is not present, then the cache controller requests the reference from main memory over the bus 32 .
  • the address generation, management, and accessing functions of the cache controller are dramatically simplified since every desired address is known to be in the cache for all code references.
  • processors such as the Pentium™ series by Intel™ contain hardware supporting branch prediction. That is, when a branch operation is to be executed, the processor predicts the destination (the destination of a JUMP) to which the branch will transfer control. With a pinned cache containing the entire instruction set 90 of the virtual machine 90 , all branch destinations are known. Every instruction has a cache line 140 associated therewith which will never vary. Not only does this correspondence not vary within a single execution of the virtual machine, but it may actually be permanent for all loadings of the virtual machine.
  • a branch prediction table is typically updated along with cache line replacement operations. Since the cache lines 140 need never be replaced while the virtual machine is loaded into the cache, and pinned, the branch prediction table becomes static. Inasmuch as the prediction table becomes static, its entries do not change. Moreover, every referenced code instruction is guaranteed to be in the cache. Therefore, any benefits available to a branch prediction algorithm are virtually guaranteed for an apparatus and method operating in accordance with the invention. Flushes of the pipelined instructions now approach a theoretical minimum.
  • The Pentium™ processor contains two arithmetic logic units (ALUs), one supporting each of its ‘U’ and ‘V’ execution pipelines.
  • Typical optimal programming on Pentium™ processors may achieve 17 to 20 percent pairing between instructions.
  • By pairing is meant that instructions are being executed in both the ‘U’ and ‘V’ pipelines. Here, that occurs about 17 to 20 percent of the time in a Pentium™ processor.
  • a method and apparatus in accordance with the invention may routinely obtain 60 percent utilization of the ‘V’ (secondary) pipeline.
  • the selection and ordering of the virtual machine instructions have been implemented to optimize pairing of instructions through the pipelines.
  • a virtual machine application 120 may run in an interpretive environment 90 (the virtual machine 90 ) that is one among several native-code applications 218 , 220 (FIG. 6 ).
  • a small fraction of available processing time may be required for execution of native code 128 implementing a virtual machine application 120 . This time is fragmented across the entire time line of a processor 12 , shared by all multi-tasked processes.
  • a method 160 and apparatus 10 to pin a processor cache 60 for a user of a virtual machine 90 hosted on a computer 11 (individual) are taught previously herein. Pinning into individual cache lines 140 the code segments 106 implementing the individual instructions 91 of the virtual machine 90 dramatically improves the processing speed for virtual machine applications 120 (applications operating in the virtual machine environment 90 ).
  • When a virtual machine 90 is pinned, consuming the entire processor cache 58 of a multi-tasking operating system 214 , it eliminates the availability of the processor cache 64 to service other native-code applications 218 , 220 . In a multi-tasking environment, this may degrade performance significantly.
  • a virtual machine application 120 by its very presence, may degrade the operation of the entire panoply of applications 218 , 220 (including itself) being executed by the processor 12 .
  • the need is to load, pin, run, and then unpin rapidly and frequently for interpretive applications 120 in order to provide a faster execution of all applications 218 , 220 running. Otherwise, the pinned processor cache 60 will degrade performance of all native-code applications 218 , 220 . For example, in one test, multi-tasked, native-code applications 218 , 220 ran 3 to 5 times slower with a pinned processor code cache 60 .
  • a mock application 120 may serve to load all the VMI code segments 100 into the respective cache lines 140 .
  • a hooked pin manager 240 in a scheduler 228 , executing a scheduling process 230 in an operating system 214 may control persistence of the contents of a processor cache 60 .
  • Persistence may encompass the enabling of the processor cache and the interrupts.
  • By hooking is meant the process of altering the control flow of a base code, in order to include an added function not originally included in the base code. Hooks are often architected into base codes with the intention of permitting users to add customized segments of code at the hooks. Customized code might be added directly at the hook, or by a call or jump positioned as the hook within a base code.
  • a hook into the scheduler 228 need not be an architected hook.
  • the scheduler 228 may have a jump instruction added surgically into it, with a new “hooked” code segment placed at the destination of the jump, followed by the displaced code from where the jump was written in, and a return.
  • the scheduler 228 may be modified at some appropriate jump instruction, having an original destination, to jump to the destination at which is located a “hooked” code segment, such as a pin manager. Thereafter, the pin manager may, upon completion of its own execution, provide a jump instruction directing the processor 12 to the original destination of the “hooked” jump instruction.
  • In FIG. 6 , certain processes 212 , 214 , 216 or modes 212 , 214 , 216 are illustrated for an apparatus 10 with an associated processor 12 .
  • applications 218 , 220 in some number may be executing in a multi-tasking environment hosted by a processor 12 .
  • the applications 218 , 220 operate at a user level 212 or a user mode 212 . Accordingly, the applications 218 , 220 are “visible” to a user.
  • the operating system level 214 may also be referred to as kernel mode 214 .
  • the operating system (O/S) 214 is executed by the processor 12 to control resources associated with the computer 11 .
  • Resources may be thought of as hardware 10 as well as processes available to a computer 11 .
  • access to memory 18 , 20 , storage 16 , I/O devices 22 , 24 , peripheral devices 28 , and operating system services 222 are all controlled resources available in a computer system.
  • Functional features such as serving files, locking files or memory locations, locking processes into or out of execution, transfer of data, process synchronization through primitives, executing applications and other executables, may all be controlled as process resources by the operating system 214 .
  • Applications 218 , 220 at a user level 212 may communicate with a systems services module 222 or systems services 222 in an operating system 214 .
  • the system services 222 may provide for communication of a request from applications 218 , 220 and for eventual execution by the processor 12 of those tasks necessary to satisfy such requests.
  • a file system 224 may provide for addressing and accessing of files.
  • System services 222 may communicate with the file system 224 as necessary.
  • the file system 224 may communicate with a memory and device management module 226 .
  • Each of the modules 222 , 224 , 226 , 228 may be thought of as one or more executables within an operating system 214 for accomplishing the mission or responsibilities assigned according to some architecture of the operating system 214 . Whether or not a module exists as a single continuous group of executable lines of code is not relevant to the invention. Any suitable mechanism may be used to provide the functionality of the system services 222 , the file system 224 , the memory and device management 226 , and the scheduler 228 .
  • the memory and device management module 226 may control a memory management unit associated with a memory device 14 or the main memory 20 . Likewise, the device management function of the memory and device management module 226 may control access and operation of the processor 12 with respect to input devices 22 , output devices 24 , and other devices that may be connected peripherally through the port 28 .
  • the scheduler 228 provides for scheduling of the execution of the processor 12 . Accordingly, the scheduler 228 determines what processes or threads will be executed by the processor 12 .
  • the hardware level 216 may include any or all of the components of the computer 11 controlled by the operating system 214 .
  • the scheduler 228 may provide for execution of certain processes 160 (see FIG. 4 ), 230 (see FIG. 7 ), 250 (see FIG. 8 ), 290 (see FIG. 9 ).
  • the processes 250 represented in rectangular boxes may be executed by the processor 12 in advancing a particular thread, process, program, or application between various states 251 .
  • the scheduler 228 may give control of the processor 12 to the process 230 .
  • the process 230 may select 232 a process or thread having a highest priority among such processes or threads, and being in a ready state 258 .
  • a change 234 may follow the select 232 in order to convert the selected process or thread to a running state 268 .
  • a context switch 236 may be performed to support the selected process or thread.
  • a context switch may involve a setup of particular components in the hardware level 216 required to support a selected process or thread.
  • the selected process or thread may execute 238 .
  • the process or thread may not execute to completion with one continuous block of time in control of the processor 12 . Nevertheless, a selected process or thread may execute 238 until some change in the associated state 251 occurs, or until some allocated time expires.
  • the process 230 may have an interposed process 240 hooked into it.
  • the interposed process 240 may include a test 242 .
  • the test 242 may determine whether or not a selected process or thread is a native process or not.
  • a native process may operate in native code.
  • a non-native process may operate in some other environment such as an interpretive environment.
  • the test 242 may therefore determine whether a virtual machine 90 needs to be loaded into the processor cache 60 .
  • a load process 244 may execute with a selected process or thread.
  • the load process 244 may be implemented in any suitable manner.
  • the load 244 may use a fast load process 160 .
  • test instructions or any other mechanism may be used to perform a generic load process 290 .
  • a fast load process 160 requires substantially fewer instructions and less time in execution by the processor 12 .
  • the fast load process 160 takes advantage of the architecture of the hardware level 216 to load a processor cache 60 in the minimum amount of time.
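
The interposed test 242 and load 244 described in the preceding items can be sketched as a short simulation. All names below (`Process`, `fast_load_and_pin`, the `interpretive` flag) are illustrative assumptions, not code from the patent:

```python
# Hypothetical simulation of the interposed test 242: between the
# select step and the context switch, decide whether the selected
# process needs the virtual machine loaded into the processor cache.

class Process:
    def __init__(self, name, interpretive):
        self.name = name
        self.interpretive = interpretive  # runs under an interpreter?

load_log = []

def fast_load_and_pin():
    """Stands in for the fast load process 160."""
    load_log.append("fast_load")

def interposer(process):
    # Test 242: native processes skip the cache load entirely.
    if process.interpretive:
        fast_load_and_pin()   # load 244, before the context switch

interposer(Process("editor", interpretive=False))
interposer(Process("jvm-app", interpretive=True))
print(load_log)  # only the interpretive process triggered a load
```

The point of hooking the test this early is that a native process pays no cache-management cost at all; only interpretive processes incur the load.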
  • An initialize process 252 may create or initialize a selected process or thread. The selected process or thread will then be in an initialized state 254 .
  • the processor 12 , when time and resources become available, may queue 256 a process or thread into a ready state 258 . From the ready state 258 , a selection 250 may occur for a process or thread having a highest priority. The selection 250 may be thought of as corresponding to a select 232 .
  • a selection 250 may advance a process or thread selected to a standby state 262 . Nevertheless, priorities may shift. Thus, a preemption 264 may move a selected process or thread from a standby state 262 to a ready state 258 .
  • a context switch 266 may occur to dispatch a process or thread from a standby state 262 to a running state 268 .
  • a running state 268 indicates that a selected thread or process has control of the processor 12 and is executing.
  • One may think of the standby state 262 as existing between the selection 250 process and the context switch 266 process.
  • the select step 232 and the change step 234 of FIG. 7 may correspond to the selection 250 and context switch 266 , respectively.
  • an executing process or thread may move from a running state 268 to a terminated state 272 if completion 270 occurs. Execution completion 270 frequently occurs for any given process or thread, since the available quantum of time allocated for a running state 268 is often sufficient for completion 270 .
  • a requirement 276 for resources may arise. For example, the process or thread may need some input device 22 or output device 24 to perform an operation prior to continued processing. Accordingly, a requirement 276 may change a process or thread to a waiting state 278 .
  • the availability 280 of resources may thereafter advance a process or thread from a waiting 278 to a ready state 258 .
  • expiration of the quantum of time allocated to the running state 268 of a thread or process may cause a preemption 274 .
  • the preemption 274 step or procedure may return the thread or process to the ready state 258 to be cycled again by a selection 250 .
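
The states 251 and transitions enumerated above (FIG. 8) can be captured as a small table-driven sketch; the string labels are my own shorthand for the numbered states and steps, not the patent's implementation:

```python
# A minimal sketch of the thread states 251 and the transitions
# described in the text: queue 256, selection 250, preemption 264,
# context switch 266, completion 270, requirement 276,
# availability 280, and quantum-expiration preemption 274.

TRANSITIONS = {
    ("initialized", "queue"):        "ready",       # queue 256
    ("ready", "select"):             "standby",     # selection 250
    ("standby", "preempt"):          "ready",       # preemption 264
    ("standby", "context_switch"):   "running",     # context switch 266
    ("running", "complete"):         "terminated",  # completion 270
    ("running", "need_resource"):    "waiting",     # requirement 276
    ("waiting", "resource_ready"):   "ready",       # availability 280
    ("running", "quantum_expired"):  "ready",       # preemption 274
}

def step(state, event):
    return TRANSITIONS[(state, event)]

# A thread that waits on I/O once before finishing:
s = "initialized"
for ev in ["queue", "select", "context_switch", "need_resource",
           "resource_ready", "select", "context_switch", "complete"]:
    s = step(s, ev)
print(s)  # terminated
```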
  • a cache load and pin process 282 may precede a context switch 284 , corresponding to the context switch 266 for a native process.
  • the load 282 occurs only for interpretive processes as detected by the test 242 executed between the select step 232 (e.g. selection 250 ) and the change step 234 (e.g. context switch 284 ).
  • a context switch 266 , 284 may be thought of as operating on affected registers, such as by saving or loading context data, changing the map registers of the memory management unit, and the like, followed by changing the state of the processor 12 between one of the states 251 .
  • the load 282 may be completed by any suitable method. For example, notwithstanding their comparative slowness, test instructions may be used to fashion a load process 282 . Nevertheless, the process 160 (see FIG. 4) may properly be referred to as a fast load process 160 or a fast load 160 of a processor cache 60 .
  • the purpose of a load step 282 (driver 282 , pin manager 282 ) before a context switch 284 is to set up an environment (e.g. virtual machine 90 ) in which to execute an interpretive application 218 (see FIG. 6) such as a virtual machine application 120 (see FIG. 3 ).
  • a selection 250 of a native process or thread results in the immediate context switch 266 as the subject process or thread transitions from a standby state 262 to a running state 268 .
  • the processor cache 60 operates normally for any native process following the context switch 266 .
  • a dynamic load and pin process 282 such as the fast load 160 , may be executed very rapidly prior to a context switch 284 prior to placing an interpretive process or thread into a running state 268 .
  • a test 292 may determine whether or not a process resulting from a selection 260 is an interpretive process.
  • the test 292 may be hooked in any suitable location among the processes 250 .
  • a flag may be set to determine whether or not to activate or hook a load and pin process 282 in any procedure occurring between a standby state 262 and a running state 268 .
  • the interposer routine 240 (see FIG. 7) may be hooked into the select 232 (e.g. selection process 260 ) or the context switch process 266 .
  • the entire interposer routine 240 may be hooked as the cache load and pin process 282 in the context switch 284 , but before any substantive steps occur therein.
  • the context switch 284 may be different from the context switch 286 for a native process or thread.
  • the load 282 (processor cache 60 load and pin process 282 ) may be as illustrated in FIG. 9 .
  • a portion 290 of the load 282 may be replaced by the fast load 160 .
  • the disable 294 may correspond to a disable 162 and the re-enable 302 may correspond to the enable 170 of interrupts.
  • the flush 296 may correspond to the flush 164 described above.
  • the load instructions step 298 may or may not correspond to the execute 166 of the fast load 160 . Any suitable method may be used for the load 298 .
  • the example mentioned before, using test instructions, is completely tractable.
  • the fast load 160 , using execution of a mock application 120 architected to use every instruction 91 of a virtual machine 90 in order to load each of the native code segments 106 corresponding thereto, is simply the fastest currently contemplated method for a load 298 .
  • the disable 300 corresponds to a disable 168 .
  • the disable 300 specifically disables only the ability of a cache controller to change the contents of a cache line 140 in the processor cache 60 ; no other function is affected.
  • the processor cache 60 may operate normally following the re-enable 302 of interrupts.
  • the enable 304 of the processor cache 60 may not be required as a separate step in certain embodiments.
  • the re-enable 302 with only a limited disable 300 may fully enable 304 a processor cache 60 .
  • an extra enable step 304 may be required to return all the functionality to a processor cache 60 .
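
The load-and-pin sequence of FIG. 9 (disable 294, flush 296, load 298, disable 300, re-enable 302) can be simulated as an ordered trace. On real hardware these steps correspond to privileged operations (e.g. the CLI and WBINVD instructions and cache-control bits); this sketch only checks the ordering, which is the essential point: interrupts stay off for the entire flush/load/disable window so nothing can evict the virtual machine before cache fills are disabled. All names here are illustrative:

```python
# Simulated trace of the cache load-and-pin process 282 / fast load 160.

trace = []

def load_and_pin(vm_instructions):
    trace.append("disable_interrupts")   # disable 294 (CLI)
    trace.append("flush_cache")          # flush 296 (FLUSH / WBFLUSH)
    for insn in vm_instructions:         # load 298: execute every VM
        trace.append("touch:" + insn)    # instruction so its native code
                                         # segment is pulled into cache
    trace.append("disable_cache_fills")  # disable 300: the controller may
                                         # no longer replace cache lines
    trace.append("enable_interrupts")    # re-enable 302

load_and_pin(["ADD", "LOAD", "STORE"])
print(trace[0], trace[-2], trace[-1])
```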
  • by processor cache 60 is meant any of the caches 58 , 60 , 64 for use by the processor, although a segregated code cache 60 , closest to the processor, is one preferred embodiment.
  • the pin manager 240 , 282 may be added at an operating systems (O/S) level 214 as a driver 226 (see FIG. 6) or contained in a driver 282 recognized and allowed by the O/S to be loaded by the O/S.
  • This driver is at a systems level 214 of privilege.
  • one reason the pin manager is implemented as a driver is that doing so is a way to obtain systems-level privileges.
  • the O/S loads the driver 240 , 282 , and allows the driver 240 , 282 to initialize 252 , transferring control to an initialization routine 252 .
  • the driver 240 , 282 either hooks, or creates hooks to later hook into, the operating system 214 . It is important to note that the driver 240 , 282 is in control of the processor 12 , once loaded, and the O/S 214 has turned over control to the driver and its initialization routine 252 , until that control is returned.
  • Drivers 226 have a standard set of commands that may be executed. Drivers 226 also recognize certain commands receivable from the O/S 214 .
  • the pin manager 282 could not communicate with the processor cache 60 absent this systems privilege level, nor could it attach (hook) itself into the O/S 214 .
  • the pin manager 240 , 282 by being a driver 226 , fitting the device driver formats and protocols, may be recognized by the O/S 214 . This recognition is not available to an application 218 , 220 . With this recognition, the pin manager 240 , 282 (driver 240 , 282 ) is designated as privileged-level code and can therefore contain privileged-level instructions of the processor 12 .
  • Certain instructions may exist at multiple privilege levels. However, each such instruction is treated differently, according to the associated privilege level. For example, a MOVE instruction may mean the same in any context, but may only be able to access certain memory locations having corresponding, associated, privilege levels.
  • the interrupt disable 162 , 294 (CLI instruction), flush 164 , 296 (FLUSH or WBFLUSH), disable cache 168 , 300 , and enable cache 304 are privileged level instructions. They are available in the operating system environment 214 (privileged or kernel mode 214 ) to systems programmers writing operating systems 214 , device drivers 226 , and the like. So long as a user is authorized at the appropriate privilege level 214 , the instructions are directly executable. If a user is not at the required level 214 of privilege, then the processor 12 generates an “exception” to vector off to an operating system handler to determine what to do with an errant program using such instructions improperly.
  • BIOS Basic Input/Output System
  • by conventional wisdom, pinning the processor cache 60 is folly.
  • pinning the processor cache 60 is highly counter-intuitive.
  • here, conventional wisdom is superseded to good effect.
  • dynamic pinning 282 and “programmatic management” of a processor cache 60 reflect the exercise, at run time, of control of both cache contents and their duration in accordance with the individual needs determined for a specific program 218 , 220 .
  • a major benefit of dynamic pinning 298 of a processor cache 60 is an ability to manage the loading 298 and pinning 300 of a virtual machine 90 (VM, interpretive environment 90 ) in a processor cache 60 (e.g. level-1 code cache 60 ) in order to optimize the entire workload of a processor 12 . This also maximizes the speed of the virtual machine 90 when run.
  • VM virtual machine 90
  • interpretive environment 90 e.g. level-1 code cache 60
  • a processor cache 60 may be any cache adapted to store instructions executable by a processor.
  • the cache may be segregated or not segregated, to have a portion for instructions and a portion for data.
  • Perhaps the most significant feature of a processor cache 58 , 60 , 64 is the lack of direct programmatic addressing as part of the main memory address space.
  • the processor cache 58 , 60 , 64 is thus “hidden” from a programmer.
  • pre-programmed instructions associated with the architecture of a processor cache 58 , 60 , 64 determine what is loaded into each cache line 140 , when, and for how long. This is typically based on an LRU or pseudo-LRU replacement algorithm.
  • the instant invention relies on direct programmatic controls, and knowledge of the cache architecture to prompt the processor cache 60 to store a certain desired set of contents for use by a specified program. Thus, careful programmatic controls may obtain certain reflexive responses from the processor cache 60 and its internal cache controller, which responses are manipulated by a choice of programmatic actions.
  • Algorithmic management of a hardware cache on a processor 12 has never allowed “dynamic programmatic control” of a hidden cache.
  • the use of knowledge of the architected response of the cache hardware system 60 programmatically optimizes the processor cache 60 behavior, as the processor cache 60 responds to privileged programmatic commands at an operating system level 214 .
  • Cache fencing will be discussed in conjunction with certain memory management concepts implemented by Intel Corporation for their Pentium Processors. Nevertheless, one skilled in the art will readily recognize that the concepts discussed in terms of Intel's architecture also apply to other types of architectures, and the manner of implementing the present invention with other types of memory management architectures will be readily apparent.
  • the logical address 312 or pointers 312 may be constructed of a segment selector 314 and an offset 316 .
  • a global descriptor table 318 is pointed to by the value of the segment selector 314 .
  • the segment selector 314 points to a base address 319 of a segment descriptor 320 in the global descriptor table 318 .
  • the segment descriptor 320 in turn points to a linear address space 322 . Specifically, the segment descriptor 320 points to a base address 324 or segment base address 324 . The offset 316 in combination with the segment base address 324 point to a linear address 326 within the linear address space 322 .
  • the linear address 326 exists within a page 325 and within a segment 327 in the linear address space 322 .
  • a linear address space 322 may be thought of literally as a mathematical space addressable by virtue of the ability of a processor 12 to store a number corresponding to a maximum address. Addressing may be done in a flat mode with the linear address 326 directly accessible, or hierarchically, through segmentation 327 , paging 325 , 327 , or both.
  • a linear address 326 contains different component parts that may be separated or subdivided in order to navigate a memory device 14 such as random access memory 20 .
  • a linear address 326 includes a pointer 328 or page directory pointer 328 .
  • An offset 330 and a table pointer 332 form the remainder of the linear address 326 .
  • the pointer 328 identifies an entry 334 in a page directory 336 .
  • the entry 334 or page entry 334 points to a base address 335 .
  • the entry 334 or base address 335 in combination with the table pointer 332 or table entry pointer 332 , points to a page table entry 338 in a page 340 .
  • the page table entry 338 combined with the offset 330 from the linear address 326 , points to the physical address 342 in the physical address space 344 of a memory device 14 , 20 .
  • the combination of the base address 346 identified directly by the entry 338 in the page table 340 effectively leverages or multiplies the ability to address more physical address space 344 in terms of individual pages 347 and offsets 330 therein.
  • page 347 corresponds to the page address 325 . Nevertheless, the page address 325 or page address range 325 exists mathematically in a linear address space 322 .
  • the physical address space 344 is likewise a mathematical construct. However, for each page 347 , base address 346 , physical address 342 , and the like, an actual location in the memory device 14 corresponds to a value from the physical address space 344 .
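
The splitting of a linear address 326 into a page directory pointer 328, a table pointer 332, and an offset 330 can be illustrated for the 4 KB-page case. The 10/10/12-bit field widths follow Intel's documented paging scheme for the Pentium; the function name is an assumption:

```python
# Split a 32-bit linear address 326 into the three fields described
# above: a 10-bit page directory pointer 328, a 10-bit table
# pointer 332, and a 12-bit offset 330 within the page.

def split_linear_address(addr):
    directory = (addr >> 22) & 0x3FF   # bits 31..22: entry 334 in page directory 336
    table     = (addr >> 12) & 0x3FF   # bits 21..12: page table entry pointer 332
    offset    = addr & 0xFFF           # bits 11..0: byte within the page
    return directory, table, offset

d, t, off = split_linear_address(0x00403025)
print(hex(d), hex(t), hex(off))  # 0x1 0x3 0x25
```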
  • a page entry 338 from a page table 340 includes representations of attributes 348 corresponding to a page 347 .
  • a 32-bit physical address includes a page base address 349 along with other attributes 348 .
  • one of the attributes 348 is a cache disable flag 350 .
  • the cache disable flag 350 , also called the PCD register 350 , when set, disables the ability of a page 347 to be cached in cache memory 60 , 66 .
  • Other attributes 348 include an availability entry 352 , a global page entry 354 , a reserved bit 356 , a flag 358 identifying whether a page 347 has been written to and is thus dirty, and an access bit 360 identifying whether a page 347 has been accessed.
  • Other attributes 348 include a write-through bit 362 identifying information to write through a page 347 , while a user bit 364 or user/supervisor bit 364 may be set to provide privileges to system administrators.
  • a read/write bit 366 identifies whether permission to read, to write, or both is granted, and a bit 368 identifies the presence of a page 347 .
  • a page 347 may be disabled from being cached in a cache 60 by setting the cache disable flag 350 .
  • one alternative to pinning the cache 60 or other caches 64 , such as an instruction cache 66 comprises fencing.
  • the cache disable flags 350 corresponding to all pages 347 not included in the virtual machine 90 are set, thereby precluding all such pages 347 from being loaded into the cache 60 , 66 .
  • all pages 347 not storing portions of the virtual machine 90 may be fenced out of the caches 60 , 66 by a proper setting of the cache disabled flag 350 .
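
The fencing operation just described, setting the cache disable flag 350 for every page 347 outside the virtual machine 90 , might look like the following sketch. The bit positions follow Intel's published page-table-entry layout (PCD is bit 4); the helper names are illustrative assumptions:

```python
# Fencing via the cache disable flag 350: set the PCD bit in every
# page table entry 338 whose page does not hold virtual machine
# instructions 91.

PTE_PRESENT       = 1 << 0   # presence bit 368
PTE_READ_WRITE    = 1 << 1   # read/write bit 366
PTE_USER          = 1 << 2   # user/supervisor bit 364
PTE_WRITE_THROUGH = 1 << 3   # write-through bit 362
PTE_CACHE_DISABLE = 1 << 4   # cache disable flag 350 (PCD)

def fence(page_table, vm_pages):
    """Mark every page outside vm_pages as uncacheable."""
    return [pte | PTE_CACHE_DISABLE if i not in vm_pages else pte
            for i, pte in enumerate(page_table)]

table = [PTE_PRESENT | PTE_READ_WRITE] * 4
fenced = fence(table, vm_pages={2})    # page 2 holds the interpreter
print([bool(pte & PTE_CACHE_DISABLE) for pte in fenced])
# [True, True, False, True]
```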
  • interpreters 90 or interpretive environments 90 benefit when designed for and committed to a processor cache 60 , 66 . This is particularly true when a level-1 cache 60 is committed to the use of an interpretive environment 90 .
  • Studies made on an apparatus and method in accordance with the invention indicate that execution times of an interpretive environment 90 may be improved by an order of magnitude, and sometimes more, by virtue of pinning 250 the selected interpretive instructions 91 within the level-1 cache 60 .
  • cache pinning may be obtained under the present invention by manipulating the memory management unit (MMU) 225 or the paging unit 310 . Accordingly, regions of the physical address space 344 may be designated as cacheable (capable of being cached) or uncacheable. Manipulation of the cache disable flags 350 allows the pages containing interpreter instructions 91 to be marked as cacheable while all other pages 347 are marked as non-cacheable.
  • MMU memory management unit
  • the native code instructions 106 are segregated, further augmenting the underlying Harvard architecture that supports a split “I” (instruction) and “D” (data) cache. Since operating systems 214 (see FIG. 6) are required to support management of the memory 20 , system calls are present in virtually all operating systems 214 widely used and supported today. Thus, commands are readily accessible to set the cache disable flag 350 for all page table entries 338 not part of a virtual machine 90 .
  • heuristic pinning of a level-1 code cache may significantly improve performance of various operating environments 214 .
  • accessing a processor cache 60 can be cumbersome using test instructions.
  • a fast loading technique was described for improving the speed for loading a processor cache 60 , 66 without the use of test instructions.
  • cache fencing is similarly managed.
  • the INTEL x86 Pentium processors provide flexible paging by the use of memory type range registers (MTRRs 370 ).
  • the benefits of cache pinning 250 may be obtained for interpretive environments 90 without direct manipulation of the processor cache 60 , 66 . That is, without using test instructions.
  • each page 371 containing instructions 91 of the virtual machine 90 or interpreter 90 may be identified with the cache disable flag 350 as cacheable. All other code pages 371 , may be set as non-cacheable.
  • a memory type range register (MTRR) 370 contains a type register 372 , a start register 374 and a length register 376 . Accordingly, types may be identified as uncacheable, write-protected, write-combining, write-through, and write-back. Designation of a page 371 as uncacheable may be conducted through the MTRR and prevents the contents of that page 371 from having access to the cache 60 , 66 .
  • indicating, marking, or otherwise setting pages 371 associated with a virtual machine 90 as cacheable provides access to the cache by the virtual machine 90 under the direct management of the MMU 225 .
  • the start register 374 provides a base address 374
  • the length register 376 provides an offset as the outer boundary of a flexible page 371 identified by the memory type range register (MTRR) 370 .
  • MTRR memory type range register
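
The three MTRR 370 fields (type 372, start 374, length 376) suggest a simple classifier for whether a physical address is cacheable. The sketch below is a simulation under assumed names, not Intel's actual register encoding:

```python
# A sketch of memory type range registers 370: each has a type 372,
# a start 374, and a length 376. The classifier answers which memory
# type applies to a given physical address.

from collections import namedtuple

MTRR = namedtuple("MTRR", ["type", "start", "length"])

def memory_type(mtrrs, addr, default="uncacheable"):
    for r in mtrrs:
        if r.start <= addr < r.start + r.length:
            return r.type
    return default

mtrrs = [
    MTRR("write-back",  0x100000, 0x8000),   # VM instructions 91: cacheable
    MTRR("uncacheable", 0x108000, 0xF8000),  # everything else: fenced out
]
print(memory_type(mtrrs, 0x100010))  # write-back
print(memory_type(mtrrs, 0x200000))  # uncacheable
```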
  • page size manipulation may be used. Doing so reduces to just a few the number of pages for which cache disable flags 350 must be set.
  • page definitions may be prepared for the pages 371 that segregate the contents of virtual memory.
  • the interpreter instructions 91 , which together form the virtual machine 90 , are segregated from the rest of the operational data in the physical memory 20 . Since the MTRRs 370 are related to the memory management unit, access to them by a systems programmer is more readily available than is access to the processor's caches 60 , 66 .
  • a method of cache fencing 380 may include a save step 382 in which existing values of MTRRs 370 corresponding to existing pages 371 are saved.
  • the values of all start points 374 , lengths 376 , and types 372 of pages 371 may be saved to a memory device 14 .
  • new MTRRs are defined.
  • the new MTRRs 370 may change the page boundaries 377 to reduce the number of pages in the physical memory 20 .
  • a portion of physical memory 20 may be defined as a single contiguous page 371 containing an amount of memory 20 sufficient to store all of the interpretive instructions 91 associated with a virtual machine 90 .
  • the remainder of physical memory 344 , 20 may be partitioned into one or two flexible pages 371 by selectively setting the start registers 374 and the length registers 376 of the pages 377 . Even with fragmentation of files, some minimal number of pages 371 , in one embodiment, two contiguous 4 kilobyte pages, will include virtual machine instructions 91 . Consolidation or defragmentation of the virtual machine instructions 91 may produce a very compact, contiguous page 371 .
  • the type 372 corresponding to each MTRR 370 and associated page 371 may be set as cacheable or uncacheable, as appropriate.
  • a set cacheable step 386 is preferably applied to the MTRRs 370 of all the pages 346 corresponding to the interpreter 90 .
  • the pages 346 corresponding to the virtual machine 90 are contiguous.
  • a non-cacheable status or type 372 may be applied to all MTTRs 370 associated with pages 347 of physical memory 344 , 20 not associated with the virtual machine 90 .
  • an operate step 390 may simply operate the interpreter 90 as previously discussed.
  • a reload step 392 may be conducted to reinstate all saved, “old” memory type range registers (MTRRs) 370 .
  • MTRRs memory type range registers
  • with MTRRs 370 , contiguous locations for virtual machine instructions 91 within a single flexible page 371 b may require less physical space than that required for two fixed pages 347 .
  • the remainder of physical memory 20 , with the virtual machine instructions 91 contiguous to one another, may theoretically be divided into as few as one or two additional flexible pages 371 . Following the reload step 392 , a continue step 394 may return control of the processor 12 to any application that was present when the virtual machine 90 was engaged.
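
The complete method 380 (save 382, define new MTRRs, set cacheable 386, set uncacheable 388, operate 390, reload 392, continue 394) can be simulated end to end. The function name and the tuple encoding of an MTRR 370 are illustrative assumptions:

```python
# End-to-end simulation of the cache fencing method 380: save the
# old MTRRs 370, define new page boundaries typed cacheable or
# uncacheable, run the interpreter, then reload the saved registers.

def cache_fence_and_run(mtrrs, vm_start, vm_length, run_interpreter):
    saved = list(mtrrs)                              # save step 382
    mtrrs.clear()                                    # define new MTRRs:
    mtrrs.append(("write-back", vm_start, vm_length))  # VM page cacheable 386
    mtrrs.append(("uncacheable", 0, vm_start))         # set uncacheable 388
    mtrrs.append(("uncacheable", vm_start + vm_length,
                  2**32 - (vm_start + vm_length)))
    result = run_interpreter()                       # operate step 390
    mtrrs[:] = saved                                 # reload step 392
    return result                                    # continue step 394

mtrrs = [("write-back", 0, 2**32)]                   # "old" configuration
out = cache_fence_and_run(mtrrs, 0x100000, 0x2000, lambda: "done")
print(out, mtrrs)  # done [('write-back', 0, 4294967296)]
```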
  • Modules for conducting the method steps 382 through 392 may also be included under the present invention.
  • the modules may be defined in accordance with the functional steps conducted by the modules. Accordingly, for instance, the save “old” MTRRs step could be performed by a save “old” MTRRs module, the define new MTRRs step could be performed by a define new MTRRs module, and so forth.

Abstract

An apparatus and method for cache fencing allows programmatic control of the access and duration of stay of selected executables within processor cache. In one example, an instruction set implementing a virtual machine may store each instruction in a single cache line as a compiled, linked loaded image. After loading, cache fencing is conducted to prevent the cache from flushing the contents or replacing the contents of any cache line. Typically, in so doing, attributes associated with pages in physical memory are employed. The attributes include an “uncacheable” attribute flag, which is set for the entire contents of physical memory except that containing the selected executables which are intended to remain within cache memory. The attributes may also include page sizing attributes which are utilized to define pages that contain interpreter instructions and pages that do not contain interpreter instructions. The number of pages not containing interpretive instructions is minimized to streamline the operation of setting the uncacheable attribute flags. A fast load may flush the cache and run an application containing the entire instruction set. A pin manager may be hooked into a scheduler in a multi-tasking operating system to load the processor cache and fence and unfence physical memory as rapidly as needed. Thus, the processor cache may be available for general use, except when physical memory is pinned.

Description

This application claims benefit to U.S. application Ser. No. 60/079,185, filed Mar. 23, 1998.
BACKGROUND
1. The Field of the Invention
The present invention relates to the use of processor caches. More particularly, the present invention is directed to apparatus and methods for programmatically controlling the access and duration of stay of selected executables within processor cache.
2. The Background Art
Operations executed by a processor of a computer proceed in a synchronization dictated by a system clock. Accordingly, one characteristic of a processor is a clock speed. For example, a clock speed may be 33 megahertz, indicating that 33 million cycles per second occur in the controlling clock.
A processor may execute one instruction per clock cycle, less than one instruction per clock cycle, or more than one instruction per clock cycle. Multiple execution units, such as are contained in a Pentium™ processor, may be operated simultaneously. Accordingly, this simultaneous operation of multiple execution units, arithmetic logic units (ALU), may provide more than a single instruction execution during a single clock cycle.
In general, processing proceeds according to a clock's speed. Operations occur only as the clock advances from cycle to cycle. That is, operations occur as the clock cycles. In any computer, any number of processors may exist. Each processor may have its own clock. Thus, an arithmetic logic unit (ALU) may have a clock operating at one speed, while a bus interface unit may operate at another speed. Likewise, a bus itself may have a bus controller that operates at its own clock speed.
Whenever any operation occurs, a request for interaction is made by an element of a computer. Then, a transfer of information, setup of input/output devices, and setup of the state of any interfacing devices, must all occur.
Each controller of any hardware must operate within the speed or at the speed dictated by its clock. Thus, clock speed of a central processing unit does not dictate the speed of any operation of a device not totally controlled by that processor.
These devices must all interface with one another. The slowest speed will limit the performance of all interfacing elements. Moreover, each device must be placed in the state required to comply with a request passed between elements. Any device that requires another device to wait while some higher priority activity occurs, may delay an entire process.
For example, a request for an instruction or data within a hard drive, or even a main, random-access memory, associated with a computer, must negotiate across a main system bus. A central processing unit has a clock operating at one speed. The bus has a controller with a clock that may operate at another speed. The memory device has a memory management unit that may operate at another speed.
Further to the example, a Pentium™ processor having a clock speed of 100 megahertz may be connected to peripheral devices or main memory by an industry standard architecture (ISA) bus. The ISA bus has a specified clock speed of 8 megahertz. Thus, any time the Pentium™ processor operating at 100 megahertz requests data from the memory device, the request passes to the opposite side of the ISA bus. The data may not be processed or delivered at a speed greater than that of the bus at 8 megahertz. Moreover, a bus typically gives low priority to the central processing unit. In order to avoid underruns and overruns, the input/output devices receive priority over the processor. Thus, the 100 megahertz processor may be “put on hold” by the bus while other peripheral devices have their requests filled.
Any time a processor must access any device beyond its own hardware pins, the hardware interface to the computer outside the processor proper, the required task cannot be accomplished within one clock count of the processor. As a practical matter, a task is not usually completed in less than several clock counts of the processor. Due to other priorities and the speeds of other devices, as well as the need to adjust or obtain the state configurations of interfacing devices, many clock counts of a processor may occur before a task is completed as required.
Associated with every hardware interface between hardware components, elements, and the like (anything outside an individual integrated chip), a hardware handshake must occur for any communication. A handshake, including a request and an acknowledgement, must occur in addition to a transfer of actual data or signals. Handshake protocols may actually involve several, even many, clock counts for the request alone, the acknowledgement alone, and for passing the data itself. Moreover, a transmission may be interrupted by a transaction having a higher priority. Thus, communicating over hardware interfaces is relatively time consuming for any processor. Hardware interfacing may greatly reduce or eliminate the benefits of a high-speed processor.
To alleviate the need to communicate across hardware interfaces during routine processing, modern computer architectures have included processor caches. In general, processors benefit from maintaining as close to themselves as possible all instructions, data, and clock control. This proximity reduces the need for interfaces, the number of interfaces, the interface complexity, and thus, the time required for compliance with any instruction or necessary execution. Thus, caches have been moved closer and closer to the processor.
Memory caches are common. Such a cache is created within a dedicated portion of a memory device. These are different, however, from caches dedicated to a processor.
The INTEL 386™ processor contains an optional external cache connected to the processor through a cache controller chip. The INTEL 486™ contains an internal 8 kilobyte cache on the central processing unit itself. Within the chip containing the processor is integrated a cache. This cache is dedicated to both code and data accesses.
The 486™ also supports another cache (a level-2 cache, as opposed to the primary or level-1 cache just described above). Access to the level-2 cache is through an external cache controller chip, similar to that of the 386™. In each case, for both the 386™ and 486™ processors, the external cache controller is itself positioned on a side of the processor's internal bus (CPU bus) opposite that of the processor.
The Pentium™ processors contain a level-1 (primary) data cache as well as a level-1 code cache. Thus, code and data are segregated, cached separately. The Pentium™ processors continue to support an external, level-2 cache across a CPU bus.
One should understand that the expression “bus”, hereinabove, refers to the processor bus, rather than the system bus. For example, the main system bus connects a processor to the main memory. However, the cache controllers and caches on a processor, or external to the processor but simply located across a processor's internal bus interface unit, do not rely on the main system bus.
A cache has some fixed amount of memory. A code cache will contain certain executable instructions, a data cache will contain data, and a non-segregated cache may contain both. The memory of any type of cache is typically subdivided into cache lines. For example, a typical cache line may contain 32 bytes of information. Thus, a cache line contains a standard number of bytes in which space may be stored a copy of certain information obtained from a main memory device.
Associated with each cache line is a tag. The tag binds a physical address and a logical address corresponding to the contents of an associated cache line.
The physical and logical addresses contained in the tag associated with a cache line may correspond to a physical location in the main memory device, and a logical position within an application respectively.
Caches associated with a processor are transparent, even hidden, with respect to a user and an application. Each cache has an associated controller. In operation, a cache controller effectively “short circuits” a request from a processor to a memory unit. That is, if a particular address is referenced, and that address exists in a tag associated with the contents of a cache line in a cache, the cache controller will fulfill the request for the instruction out of the cache line containing it. The request is thus fulfilled transparently to the processor. However, the effect of a cache is to eliminate, as much as possible, communication through hardware interfaces as described above. Thus, a cache may greatly improve the processing speed of applications running on processors.
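The short-circuit behavior described above can be sketched in C. This is an illustrative model, not code from the patent: a direct-mapped cache of 256 thirty-two-byte lines in which a tag match satisfies a request immediately, while a miss falls through to (simulated) main memory and fills the line.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative model of a direct-mapped cache: 256 lines of 32 bytes.
 * The line count and size are chosen to match the examples in the text. */
#define LINES     256
#define LINE_SIZE 32

typedef struct {
    bool     valid;
    uint32_t tag;               /* high-order address bits */
    uint8_t  data[LINE_SIZE];
} cache_line_t;

typedef struct {
    cache_line_t line[LINES];
    unsigned hits, misses;
} cache_t;

/* Read one byte through the cache; `memory` stands in for main memory. */
uint8_t cache_read(cache_t *c, const uint8_t *memory, uint32_t addr)
{
    uint32_t offset = addr % LINE_SIZE;
    uint32_t index  = (addr / LINE_SIZE) % LINES;
    uint32_t tag    = addr / (LINE_SIZE * LINES);
    cache_line_t *l = &c->line[index];

    if (l->valid && l->tag == tag) {
        c->hits++;                  /* request "short circuited" by the cache */
    } else {
        c->misses++;                /* controller must go out to memory       */
        l->valid = true;
        l->tag   = tag;
        memcpy(l->data, memory + (addr - offset), LINE_SIZE);
    }
    return l->data[offset];
}
```

A second reference to any byte within an already-filled line is satisfied without touching main memory, which is the entire benefit the text describes.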
Tags may also have associated therewith two numbers referred to as “use bits.” The use bits may typically represent a simple count of use. This count may be useful to the cache controller in determining which cache lines are the least recently used (LRU). Accordingly, a cache controller may refer to the LRU count to determine which cache lines have been referenced the least number of times.
Incidentally, but significantly with respect to the invention, some cache controllers may churn a cache. That is, if an insufficient number of bits is allotted to the LRU or use bits, then a counter may be improperly reset to zero due to count “wrap-around” during high use. Thus, highly-used cache lines may actually be swapped out, churning the cache and dramatically decreasing efficiency.
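A small C sketch (illustrative only; the counter width and line count are invented) shows how count wrap-around can churn a cache: a heavily used line whose counter wraps to zero looks identical to an unused line, so a least-recently-used policy may pick it as the eviction victim.

```c
#include <stdint.h>

/* Hypothetical 2-bit use counter, as might be stored alongside a tag. */
#define USE_BITS 2
#define USE_MAX  ((1u << USE_BITS) - 1u)

/* Increment a use counter; overflow wraps back to zero. */
uint8_t bump_use_count(uint8_t count)
{
    return (uint8_t)((count + 1u) & USE_MAX);
}

/* Pick a victim line: the one with the smallest use count. */
int pick_victim(const uint8_t *use, int nlines)
{
    int victim = 0;
    for (int i = 1; i < nlines; i++)
        if (use[i] < use[victim])
            victim = i;
    return victim;
}
```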
Several difficulties exist with caches. A cache controller has a general purpose function to service address requests generally. For example, a virtual machine may be implemented in some limited number of instructions. In operating such a virtual machine, a computer processor has an underlying native language in which the virtual machine instructions are written. The virtual machine instructions will be requested repeatedly. The virtual machine instructions are accessed relatively slowly if they are treated simply as another general purpose instruction being retrieved periodically into the cache.
Many processors pipeline instructions. Two problems may occur with pipelining. The first is flushing a pipeline as a result of a branch. The other is stalling due to requested data not arriving within a next clock count in sequence. That is, whenever a cache “miss” occurs, a request has been made to the cache, but the cache cannot respond because the information is not resident. Misses may occur repeatedly over extensive numbers of clock counts while a cache controller accesses a main memory device to load the requested instructions or data. Misses decimate the efficiency of processors. Meanwhile, even with branch prediction methods, a pipeline may flush several instructions with a resulting loss of processing performance.
Cache Pinning
In a related application, the inventor has overcome many of the above problems. One manner of solving the above-discussed problems involves the use of processor cache. Interpretive environments, such as virtual machines, typically involve the use of a series of interpreter instructions. The interpreter instructions are generally a set of native code instructions that together implement an instruction of a high level language that has not been compiled or linked for use on the particular hardware platform of the processor on which the interpretive environment is operating.
Thus, in the case of a Java virtual machine, generic Java code can operate upon any platform that also has access to the Java virtual machine. The Java virtual machine comprises separately executable modules or interpreter instructions that recognize the instructions of the Java language and translate on the fly the Java instructions into the native machine code of the processor for which the virtual machine is designed.
The latency of execution of virtual machine instructions is one drawback that has prevented the virtual machine concept from gaining more widespread acceptance. Typically, when an interpretive instruction, such as an instruction in the Java language, is loaded into a microprocessor for execution, the processor must also locate and load the corresponding native-code interpreter instructions.
The inventor has proposed that interpretive instructions be created that each occupy a single line of cache memory. The interpretive instructions are loaded into cache, and “pinned,” so that they are not purged or replaced. Typically this pinning is accomplished through privileged system-level commands to the cache memory.
Several limitations arise that also need to be addressed. For instance, the use of system access may not be desirable. Additionally, this method makes no provision for use of the cache memory by input and output devices.
Accordingly, a need exists for an alternative to cache pinning for programmatically controlling the access and duration of stay of selected executables within a processor cache.
BRIEF SUMMARY AND OBJECTS OF THE INVENTION
In view of the foregoing, it is a primary object of the present invention to provide an alternative to pin management of an accelerator for increasing the execution speed of interpretive environments.
It is another object of the invention to provide programmatic control of persistence of executables stored in a processor code cache by the pin management alternative.
It is another object of the invention to provide a heuristic determination for the alternative to pinning the contents of a cache programmatically by a processor.
It is another object of the invention to provide such an alternative to cache pinning with which a virtual machine containing an instruction set sized to fit completely within a cache, can be maintained within a cache.
It is another object of the invention to provide such an alternative to cache pinning in which programmatic control is maintained over the content and persistence of the contents of a cache, particularly a code cache, and more particularly a level-1 code cache, especially a level-1 code cache integrated into a central processing unit.
It is another object of the invention to provide such an alternative to cache pinning that can be used with a method to accelerate execution of an interpretive environment by copying instructions of an instruction set into the code cache and pinning those instructions for the duration of the use by the processor of any instructions in the set, in order to increase the speed of processing the virtual machine instructions, eliminate cache misses, and optimize pipelining within the processor, while minimizing supporting calculations such as those for addressing and the like.
It is another object of the invention to provide such an alternative to cache pinning which can be used with heuristic determination of when to pin a cache, particularly a code cache, based on a cost function of some performance parameter, such as frequency of use, infrequency of use, size, and inconvenience of reloading a particular instruction to be cached.
Consistent with the foregoing objects, and in accordance with the invention as embodied and broadly described herein, an apparatus and method are disclosed in one embodiment of the present invention as including a central processing unit (CPU) having an operably associated processor cache, preferably a level-1 cache. The level-1 cache is closest to the actual processor in the CPU.
The cache may actually be integrated into the CPU. The processor may be programmed to install a full set of virtual machine instructions (VMI) in the cache. The contents of physical memory may then be “fenced” to keep from displacing the VMI set from cache, thereby eliminating the “misses” of the individual VMI interpreter instructions by the processor that significantly slows down virtual machines.
In one embodiment, an apparatus and method in accordance with the invention may “programmatically control” the contents of the cache. The cache may be loaded with a full set of virtual machine instructions, properly compiled or assembled, linked, and loaded.
The set may incorporate, in a length not to exceed a specified number of cache lines, the executable, machine-language implementation of each command or instruction provided in an interpretive environment. The set, fit to the total available cache lines, may define a virtual machine (the entire interpreter). The set may be pinned, after being loaded into a previously evacuated cache. Alternatively, the contents of physical memory other than the VMI set may be fenced from the cache.
Loading may be accomplished by running a simple application having no particular meaning, but containing all of the VMIs at least once. Knowing that the cache will respond as designed, one may thus load all of the native code segments implementing the VMIs automatically into the cache in the fastest mode possible, controlled by the cache controller. Yet, the entire process is prompted by programmatic instructions, knowingly applied.
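The mock-application idea can be sketched as follows. This is a hypothetical illustration in C: each virtual machine instruction is represented by a handler function, and running every handler once lets the ordinary fill mechanism of the code cache pull every native code segment into its cache line. The names and the counting demo handler are invented for illustration.

```c
#include <stddef.h>

#define NUM_VMI 256
typedef void (*vmi_handler_t)(void);

unsigned touched;                       /* counts handler executions (demo) */
void demo_handler(void) { touched++; }

vmi_handler_t vmi_table[NUM_VMI];       /* one native segment per VMI */

/* Run every handler exactly once, in order, so the cache controller pulls
 * each code segment into the code cache by its normal fill mechanism. */
void warm_cache_with_mock_application(void)
{
    for (size_t op = 0; op < NUM_VMI; op++)
        if (vmi_table[op])
            vmi_table[op]();
}

void init_demo_table(void)
{
    for (size_t op = 0; op < NUM_VMI; op++)
        vmi_table[op] = demo_handler;
}
```

In the actual invention the entries would be the native code segments themselves, each aligned to a cache-line boundary; here a single demo handler merely counts executions.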
This “programmatic control,” in lieu of general purpose control, of a cache, especially a code cache, may completely eliminate cache “misses.” This greatly enhances the effective operating speed of an interpreted or interpretive environment.
A pin manager may be interposed or hooked into an operating system to pin and unpin the processor cache associated with a processor hosting a multi-tasking operating system. A pin manager may perform several functions in sequence. It tests for the presence of an interpretive process as the next in line to be executed by a processor. If such is present, the pin manager disables interrupts, flushes the processor cache (preferably with write-back if a non-segregated cache, in order to save data changes), loads the processor cache (preferably by execution of a mock application containing all the instructions of the interpretive environment), disables the processor cache to effectively pin the processor cache to continue operating without being able to change its contents, and then re-enables the interrupts to continue normal operation of the processor.
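The sequence above can be modeled as a runnable simulation. On real hardware each step would be a privileged operation (for example, disabling interrupts with CLI, flushing with WBINVD, and disabling the cache through a control-register bit), none of which can execute in user mode; this sketch only records and checks the required ordering.

```c
#include <stdbool.h>
#include <stddef.h>

typedef enum {
    DISABLE_INTERRUPTS, FLUSH_CACHE, LOAD_CACHE,
    DISABLE_CACHE, ENABLE_INTERRUPTS
} pin_step_t;

pin_step_t pin_log[8];
size_t     pin_log_len;

static void record(pin_step_t s) { pin_log[pin_log_len++] = s; }

/* Simulated pin manager: acts only when the next scheduled process is an
 * interpretive one, then performs the five steps in the required order. */
bool pin_if_interpretive(bool next_process_is_interpretive)
{
    if (!next_process_is_interpretive)
        return false;               /* nothing to do for native processes    */
    record(DISABLE_INTERRUPTS);     /* no control-flow change while loading  */
    record(FLUSH_CACHE);            /* invalidate, with write-back of dirty lines */
    record(LOAD_CACHE);             /* execute the mock application          */
    record(DISABLE_CACHE);          /* contents can no longer be displaced   */
    record(ENABLE_INTERRUPTS);      /* resume normal operation               */
    return true;
}
```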
The pin manager may be adapted to achieve fencing as an alternative to disabling the processor cache. Fencing involves accessing information registers that control the paging of memory. These information registers typically include an “uncacheable” provision for preventing caching of a particular page. Under the present invention, all of the pages of physical memory are marked uncacheable, with the exception of those that contain the virtual machine interpreter instructions, which are left cacheable. A loading program is then called to load the interpretive instructions into cache memory. The virtual machine may be quickly swapped into and out of memory using fencing.
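A sketch of the fencing step, under the assumption of the x86 page-table-entry layout in which bit 4 (PCD, page cache disable) marks a page uncacheable. The page table here is a plain array; on real hardware these entries are walked by the memory management unit and may only be changed at kernel privilege.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* x86 page-table-entry bit 4: PCD, "page cache disable". */
#define PTE_PCD (1u << 4)

/* Mark every page uncacheable except the range [vmi_first, vmi_last]
 * that holds the virtual machine interpreter instructions. */
void fence_all_but_vmi(uint32_t *pte, size_t npages,
                       size_t vmi_first, size_t vmi_last)
{
    for (size_t p = 0; p < npages; p++) {
        if (p >= vmi_first && p <= vmi_last)
            pte[p] &= ~PTE_PCD;     /* VMI pages stay cacheable      */
        else
            pte[p] |= PTE_PCD;      /* everything else is fenced out */
    }
}

bool page_is_cacheable(uint32_t pte) { return (pte & PTE_PCD) == 0; }
```

Because only the VMI pages remain cacheable, ordinary cache fills can never displace the interpreter, which is the effect the text calls fencing.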
In so doing, the invention may disable interrupts in order to eliminate all possibility of a change in control flow during “loading” of the cache with the desired contents. Otherwise, an interrupt from a hardware device may pre-empt current execution, loading an interrupt service routine into the processor cache.
The pin manager may then flush the processor cache. A flush of a processor cache invalidates all of the contents of the cache lines in the cache. Write-back saves the contents of altered (dirty) cache lines back to main memory.
The pin manager then loads the processor cache, preferably by running a mock application. The mock application may introduce every desired code segment, each implementing an individual interpreter instruction into the cache.
Finally, the pin manager may re-enable the interrupts. Re-enablement returns the processor to normal operation. The virtual machine interpreter instructions remain in cache so long as the contents of the rest of physical memory remains fenced.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects and features of the present invention will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only typical embodiments of the invention and are, therefore, not to be considered limiting of its scope, the invention will be described with additional specificity and detail through use of the accompanying drawings in which:
FIG. 1 is a schematic block diagram of an apparatus in accordance with the invention;
FIG. 2 is a schematic block diagram showing implementation details for one embodiment of the apparatus of FIG. 1;
FIG. 3 is a schematic block diagram of executable modules and data structures consistent with one implementation of an apparatus and method in accordance with the invention;
FIG. 4 is a schematic block diagram of a method in accordance with the invention;
FIG. 5 is a schematic block diagram of registers used for addressing;
FIG. 6 is a schematic block diagram of an operating system that may be executed by the processor of FIG. 1;
FIG. 7 is a schematic block diagram of processes occurring in a scheduler of FIG. 6, illustrating hooking a pin manager therein;
FIG. 8 is a schematic block diagram of an alternative representation of processes of FIG. 7 illustrating states of a process or thread executed by the processor in accordance with the scheduler;
FIG. 9 is a schematic block diagram of steps associated with a pin manager, generalizing the fast loading process of FIG. 4, and adapting it to a multi-tasking environment;
FIG. 10 is a schematic block diagram illustrating the use of paging within physical memory to achieve cache fencing;
FIG. 11 is a schematic block diagram illustrating a page table entry used under one embodiment of cache fencing;
FIG. 12 is a schematic block diagram illustrating physical memory and MTRRs associated with logical pages of physical memory; and
FIG. 13 is a schematic block diagram of one embodiment of a method of cache fencing.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
It will be readily understood that the components of the present invention, as generally described and illustrated in the Figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the system and method of the present invention, as represented in FIGS. 1 through 9, is not intended to limit the scope of the invention, as claimed, but it is merely representative of the presently preferred embodiments of the invention.
The presently preferred embodiments of the invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout.
Those of ordinary skill in the art will, of course, appreciate that various modifications to the details illustrated in the schematic diagrams of FIGS. 1-13 may easily be made without departing from the essential characteristics of the invention. Thus, the following description is intended only as an example, and simply illustrates one presently preferred embodiment consistent with the invention as claimed herein.
Referring now to FIGS. 1-3, and more particularly, an apparatus 10 may include a node 11 (client 11, computer 11) containing a processor 12 or CPU 12. The CPU 12 may be operably connected to a memory device 14. A memory device 14 may include one or more devices such as a hard drive or non-volatile storage device 16, a read-only memory 18 (ROM) and a random access (and usually volatile) memory 20 (RAM).
The apparatus 10 may include an input device 22 for receiving inputs from a user or another device. Similarly, an output device 24 may be provided within the node 11, or accessible within the apparatus 10. A network card 26 (interface card) or port 28 may be provided for connecting to outside devices, such as the network 30.
Internally, a bus 32 (system bus 32) may operably interconnect the processor 12, memory devices 14, input devices 22, output devices 24, network card 26 and port 28. The bus 32 may be thought of as a data carrier. As such, the bus 32 may be embodied in numerous configurations. Wire, fiber optic line, wireless electromagnetic communications by visible light, infrared, and radio frequencies may likewise be implemented as appropriate for the bus 32 and the network 30.
Input devices 22 may include one or more physical embodiments. For example, a keyboard 34 may be used for interaction with the user, as may a mouse 36. A touch screen 38, a telephone 39, or simply a telephone line 39, may be used for communication with other devices, with a user, or the like. Similarly, a scanner 40 may be used to receive graphical inputs which may or may not be translated to other character formats. A hard drive 41 or other memory device 14 may be used as an input device whether resident within the node 11 or some other node 52 (e.g., 52 a, 52 b, etc.) on the network 30, or from another network 50.
Output devices 24 may likewise include one or more physical hardware units. For example, in general, the port 28 may be used to accept inputs and send outputs from the node 11. Nevertheless, a monitor 42 may provide outputs to a user for feedback during a process, or for assisting two-way communication between the processor 12 and a user. A printer 44 or a hard drive 46 may be used for outputting information as output devices 24.
In general, a network 30 to which a node 11 connects may, in turn, be connected through a router 48 to another network 50. In general, two nodes 11, 52 may be on a network 30, adjoining networks 30, 50, or may be separated by multiple routers 48 and multiple networks 50 as individual nodes 11, 52 on an internetwork. The individual nodes 52 (e.g. 52 a, 52 b, 52 c, 52 d) may have various communication capabilities.
In certain embodiments, a minimum of logical capability may be available in any node 52. Note that any of the individual nodes 52 a- 52 d may be referred to, as may all together, as a node 52.
A network 30 may include one or more servers 54. Servers may be used to manage, store, communicate, transfer, access, update, and the like, any number of files for a network 30. Typically, a server 54 may be accessed by all nodes 11, 52 on a network 30. Nevertheless, other special functions, including communications, applications, and the like may be implemented by an individual server 54 or multiple servers 54.
In general, a node 11 may need to communicate over a network 30 with a server 54, a router 48, or nodes 52. Similarly, a node 11 may need to communicate over another network (50) in an internetwork connection with some remote node 52. Likewise, individual components 12-46 may need to communicate data with one another. A communication link may exist, in general, between any pair of devices.
Referring now to FIG. 2, a processor 12 may include several internal elements. Connected to the bus 32, a bus interface unit 56 handles the bus protocols enabling the processor 12 to communicate to other devices over the bus 32. For example, the instructions or data received from a ROM 18 or data read from or written to the RAM 20 may pass through the bus interface unit 56.
In some processors, a processor cache (e.g. cache 58,64), such as a level-1 cache 58 may be integrated into the processor 12. In specific embodiments of processors 12, such as the Pentium™ and Pentium™ Pro processors, as well as the PowerPC™ by Motorola, the level-1 cache 58 may be optionally subdivided into an instruction cache 60 and a data cache 62.
A level-1 cache 58 is not required in a processor 12. Moreover, segregation of the instruction cache 60 from the data cache 62 is not required. However, a level-1 cache 58 provides rapid access to instructions and data without resort to the main memory 18, 20 (RAM 20). Thus, the processor 12 need not access (cross) the bus interface unit 56 to obtain cached instructions and data.
Certain processors 12 maintain an external cache 64. The external cache 64 is identified as a level-2 cache in FIG. 2. Nevertheless, the level-2 cache 64 may be a level-1 cache if no level-1 cache 58 is present on the processor 12 directly. Similarly, the external cache 64 may or may not be segregated between an instruction cache 66 and a data cache 68. Any suitable processor cache may be used.
Execution, normally associated with a processor 12, is actually most closely related to a fetch/decode unit 70, an execute unit 72, and a write-back unit 74. Likewise, associated with each cache 58, 64, is typically an inherent, integrated, hardware controller. The cache controller may be thought of as control logic built into the cache hardware.
When the fetch unit 71 a issues a request for an instruction, the request goes to the bus interface unit 56. The level-1 cache 58 makes a determination whether or not the request can be satisfied by data or instructions identified with the logical address requested from cached data and instructions.
If an instruction cannot be provided by the level-1 cache 58, the level-2 cache 64 may respond to the request. If the desired item (data or instruction) is not present in either the level-1 cache 58 or the level-2 cache 64, then the main memory 18, 20 may respond with the desired item. Once the request has been fulfilled by the fastest unit 58, 64, 20, 18 to respond with the desired item, the request is completed, and no other devices will respond.
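The “fastest unit responds” behavior can be illustrated with a toy cost model. The cycle counts below are invented round numbers for illustration, not measured figures for any of the processors named above.

```c
#include <stdbool.h>

/* Toy model of the lookup order: the level-1 cache answers if it can,
 * then the level-2 cache, then main memory. Costs are illustrative. */
typedef struct { bool l1_hit, l2_hit; } lookup_t;

int access_cost(lookup_t q)
{
    if (q.l1_hit) return 1;     /* satisfied inside the processor     */
    if (q.l2_hit) return 10;    /* across the CPU bus to the L2 cache */
    return 100;                 /* all the way out to main memory     */
}
```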
Main memory may include the ROM 18, the RAM 20, or both. Nevertheless, many computers boot up using the contents of the ROM 18 and thereafter use the RAM 20 for temporary storage of data associated with applications and the operating system. Whenever “main memory” is mentioned, it is contemplated that it may include any combination of the ROM 18 and RAM 20.
Once an instruction is retrieved for the fetch unit 71 a, the instruction is passed to the decode unit 71 b. The fetch 71 a and decode 71 b are typically highly integrated, and perform in an overlapped fashion. Accordingly, a fetch/decode unit 70 is typical.
As a practical matter, the decode unit 71 b may identify a current instruction to be executed. Identification may involve identification of what type of instruction, what type of addressing, what registers will be involved, and the like. The presence of the instruction in an instruction register may itself stimulate execution on the next clock count.
Once identification of an instruction is completed by the decode unit 71 b, an execute unit 72 may immediately process the instruction through low-level, control-loop hardware. For example, sequencers, registers, and arithmetic logic units may be included in an execute unit 72.
Each instruction as it is fetched, decoded, executed, and the like, may require interaction between an individual processing unit 70, 72, 74 and a register pool 76. The registers 76 (register pool 76) are hidden from programmers and applications. Nevertheless, the hardware architecture of the processor 12 provides a hardware logic governing interaction between the units 70, 72, 74 and between the registers 76 and the units 70, 72, 74.
Upon completion of execution of an instruction, a write-back unit 74 may provide an output. Accordingly, the output may be passed to the bus interface unit 56 to be stored as appropriate. As a practical matter, a result may be stored in a cache 58 of a level-1 variety or in a level-2 cache 64. In either event, a writeback unit 74 will typically write through to the main memory 18, 20 an image of the result.
Modern processors 12, particularly the Pentium™ processors, use a technique called pipelining. Pipelining passes an instruction through each of the fetch/decode/execute steps undergone by that instruction as quickly as possible. An individual instruction is not passed completely through all of its processing steps before the next instruction in order is begun.
For example, a first instruction may be fetched, and on the next clock count another instruction may be fetched while the first instruction is being decoded. Thus, a certain parallel, although slightly offset in time, processing occurs for instructions.
An advantage of a method and apparatus in accordance with the invention is that instructions may be more effectively pipelined. That is, prediction routines have been built into hardware in the Pentium™ class of processors 12. However, prediction is problematic. Inasmuch as a branch may occur within approximately every five machine code instructions on average, the pipeline of instructions will be in error periodically. Depending on the sophistication of a prediction methodology, one or more instructions in a pipeline may be flushed after entering a pipeline at the fetch unit 71 a.
Referring now to FIG. 3, a virtual machine 90 or an instruction set 90 implementing a virtual machine 90 on a processor 12 is illustrated schematically. Relationships are illustrated for caching 80 or a cache system 80 for storing loaded and executable instructions 106 (e.g. 106 a) corresponding to virtual machine instructions 91 (e.g. 91 a) of a virtual machine 90 or virtual machine instruction set 90.
A virtual machine 90 may be built upon any available programming environment. Such virtual machines 90 may sometimes be referred to as interpreters, or interpreted systems. Alternatively, virtual machines 90 are sometimes referred to as emulators, wherein a set of instructions 91 a-n may be hosted on a processor 12 of one type to mimic or emulate the functional characteristics of a processor 12 in a hardware device of any other type.
An application may be written to run on or in an environment created for a first hardware device. After the application is fully developed and operational, the application may then be “ported” to another machine. Porting may simply include writing a virtual machine 90 for the second hardware platform. Alternatively, an application may be developed in the native language of a first machine, and a single set 90 of virtual machine instructions 91 a-n may be created to emulate the first machine on a second machine. A virtual machine 90 is sometimes referred to as an emulation layer. Thus, an emulation layer or virtual machine 90 may provide an environment so that an application may be platform-independent. A JAVA interpreter, for example, performs such a function.
An executable 82 loaded into main memory 18, 20 contains the original images of the contents of the cache system 80. A building system 84 that may be thought of as an apparatus, modules running on an apparatus, or a system of steps to be performed by an apparatus, is responsible to build contents to be loaded into the executable 82.
A builder 86 may be tasked with building and loading an executable image 100 of a virtual machine 90. Similarly, a builder 88 may build an executable image 130 of the instructions 106 implementing an application written in the virtual machine instructions 91 constituting the virtual machine 90. In general, the executable 130 or executable image 130 may represent any application ready to be executed by the execute unit 72 of the processor 12. One embodiment of an executable 130 or an image 130 may be an application written specifically to prompt a high speed loading as described with respect to FIG. 4 below.
A virtual machine 90 or a set 90 of virtual machine instructions 91 a-n may contain an individual instruction (e.g. 91 a, 91 b, 91 n) corresponding to each specific, unique function that must be accommodated by the virtual machine 90. The virtual machine instruction 91 n, for example, provides the ability to terminate execution.
In FIG. 3, the builder 86 may include source code 90, that is, virtual machine source code 90. The source code 90 may be assembled or compiled by an assembler 92 or compiler 92, as appropriate. The virtual machine may operate adequately, whether dependent on assembly or compilation. The assembler 92 or compiler 92 operates for native code. Native code may be thought of as code executable directly on a processor 12 in the apparatus 10.
By native code is indicated the processor-specific instructions 91 that may be executed directly by a processor 12. By directly is not necessarily meant that the native code is always written in binary ones and zeros. Native code 106 may be written in a language to be assembled 92 or compiled 92 into object code 94 and to be eventually linked 96 into an executable 100 loaded for execution. Executables 100 may then be loaded 99 into a memory device 20, 18 for ready execution on or by an execute unit 72 of a processor 12. An executable 100 stored in a non-volatile storage device 16 may sometimes be referred to as an executable file. Once properly loaded 99 into the main memory 18, 20 associated with a processor 12, an executable 100 may be executed by a processor 12.
The assembler 92 or compiler 92 provides object code 94 in native code instructions. The object code 94 may be linked to library routines or the like by a linker 96. The linker 96 may provide all other supporting instructions necessary to run the object code 94. Thus, the linker 96 provides, as output, executable code 98. As a practical matter, the executable code 98 will be run directly from main memory 18, 20 as a loaded executable 100. Thus, a loader 99 may load the executable code 98 into main memory 18, 20 as the loaded code 100.
Code segments 106 a-n are written in native code. When any code segment 106 a-n (e.g. 106 a, 106 b, 106 c, 106 n) is executed, the result is the desired output from the corresponding virtual machine instruction 91 a-n (e.g. 91 a, 91 b, 91 c, 91 n, respectively). Virtual machine instructions 91 a-n identify every available function that may be performed by the virtual machine 90. The segments 106 a-n are implementations in native code, executable by the hardware (the processor 12), that must produce the result associated with each individual virtual machine instruction 91 a-n.
Each of the code segments 106 a-n contains a FETCH instruction 108, a DECODE instruction 110, and a JUMP instruction 112. The instructions 108-112 promote pipelining. Thus, the subjects of the respective instructions DECODE 110, FETCH 108, and JUMP 112 correspond to the very next instruction, the second next instruction, and the third next instruction, respectively, following an instruction 91 a-n being executed and corresponding to a code segment 106 a-n in question.
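The FETCH/DECODE/JUMP structure amounts to a dispatch loop. The following C sketch is illustrative (the opcodes and handlers are invented, not taken from the patent): each iteration fetches the next virtual machine opcode, decodes it through a table of native segments, and jumps to (here, calls) the selected segment, with a HALT segment ending execution.

```c
#include <stdint.h>
#include <stdbool.h>

/* Invented demonstration opcodes for a tiny stack machine. */
enum { OP_PUSH1 = 0, OP_ADD = 1, OP_HALT = 2, NUM_OPS = 3 };

typedef struct {
    const uint8_t *pc;          /* next virtual machine instruction */
    int  stack[16];
    int  sp;
    bool halted;
} vm_t;

typedef void (*segment_t)(vm_t *);

static void seg_push1(vm_t *vm) { vm->stack[vm->sp++] = 1; }
static void seg_add(vm_t *vm)   { vm->sp--; vm->stack[vm->sp - 1] += vm->stack[vm->sp]; }
static void seg_halt(vm_t *vm)  { vm->halted = true; }

static const segment_t segment[NUM_OPS] = { seg_push1, seg_add, seg_halt };

int vm_run(vm_t *vm, const uint8_t *code)
{
    vm->pc = code;
    while (!vm->halted) {
        uint8_t op   = *vm->pc++;    /* FETCH the next opcode           */
        segment_t s  = segment[op];  /* DECODE it through the table     */
        s(vm);                       /* JUMP to (call) the code segment */
    }
    return vm->stack[vm->sp - 1];    /* top of stack as the result      */
}
```

When every segment fits in one cache line, each iteration of this loop executes entirely out of the code cache, which is the acceleration the patent is after.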
A virtual machine instruction set 90 should include a HALT instruction 91 n. Thus, a virtual machine instruction 91 n within the virtual machine 90 will contain a segment 106 n of native code indicating to the processor 12 the fetching and decoding process for instructions used in all applications. The last virtual machine instruction 91 a-n contained within a loaded application 130 is a HALT instruction 91 n (106 n).
In FIG. 3, the loaded executable 100 may be stored in a block 114 separated by block boundaries 116. In the Pentium™ class of processors, each block 114 contains 32 bytes of data. The instruction set 90 or virtual machine 90 contains no more than 256 virtual machine instructions 91 a-n. Accordingly, the code segments 106 a-n, when compiled, linked, and loaded, may each be loaded by the loader 99 to begin at a block boundary 116, in one currently preferred embodiment. Thus, the number of blocks 114 and the size of each block 114 may be configured to correspond to a cache line 140 in the cache 60. Thus, an image of a code segment 106 a-n, compiled, linked, and loaded for each virtual machine instruction 91 a-n, exists in a single cache line 140. Likewise, every such virtual machine instruction 91 a-n and its native code segment 106 a-n has an addressable, tagged, cache line 140 available in the 256 cache lines.
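The arithmetic behind this layout can be checked directly: 256 instructions at one 32-byte cache line each occupy exactly 8192 bytes, the size of an 8-kilobyte level-1 code cache, and a segment that starts on a block boundary and does not exceed 32 bytes always stays within one line. The helper names below are illustrative.

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_VMI    256
#define LINE_BYTES 32

/* Byte offset of a segment's start when each segment begins on a
 * block (cache-line) boundary. */
size_t segment_offset(size_t vmi_index)
{
    return vmi_index * LINE_BYTES;
}

/* A segment occupies a single cache line iff it starts on a boundary
 * and is no longer than one line. */
bool fits_in_one_line(size_t offset, size_t length)
{
    return offset % LINE_BYTES == 0 && length <= LINE_BYTES;
}
```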
In addition to the builder 86, a builder 88 may build any virtual machine application 120. In FIG. 3, the process of building an application 120 is illustrated. For example, a mock application may be constructed for the exclusive purpose of high-speed loading of the code segments 106 into the cache lines 140. In the embodiment shown, virtual machine source language code 120 or source code 120 may be written to contain instructions 91 arranged in any particular order. In general, instructions 91 are used by a programmer in any suitable order to provide and execute an application 120.
In an embodiment of an apparatus and method in accordance with the invention, the source code 120 may simply contain each of the virtual machine instructions 91 in the virtual machine language. The source code 120 may be assembled or compiled by an assembler 122 or compiler 122 depending on whether the language is an assembled or a compiled language. The assembler 122 or compiler 122 generates (emits, outputs) virtual machine code. The output of the assembler 122 or compiler 122 is object code 124. The object code 124 may be linked by a linker 126 to produce an executable code 128. The executable code 128 may be loaded by a loader 129 into main memory 18, 20 as the loaded executable 130.
The loaded executable 130 is still in virtual machine code. Thus, an application developed in the virtual machine language must be run on a virtual machine. The virtual machine 90 is stored in the cache 60. The cache 60 may actually be any processor cache, but the closest cache to the processor 12 is capable of the fastest performance.
The loaded executable 130 is comprised of assembled or compiled, linked, and loaded, virtual machine instructions 132. A main memory device 20 is byte addressable. Each of the virtual machine instructions 132 begins at an address 134. Thus, each virtual machine instruction 132 may be of any suitable length required. Nevertheless, a virtual machine address zero 135 may be identified by a pointer as the zero position in the virtual machine 130. Each subsequent address 134 may thus be identified as an offset from the virtual machine zero 135. A last instruction 136 should be effective to provide an exit from the loaded executable 130. Typically, loaded executables 130 are executed in the order they are stored in the memory device 20.
The cache 60 has associated therewith a tag table 142. For each cache line 140, an appropriate tag line 144 exists (e.g. 144 a, 144 b, 144 c). Associated with each tag line 144 is a logical address 146 corresponding to the address 134 of the cache line 140 in question. Likewise, a physical address 148 in a tag line 144 corresponds to an address 116 or block boundary 116 at which the code 114 is stored in the main memory 18, 20. A control field 150 may contain symbols or parameters identifying access rights and the like for each cache line 140.
Thus, in general, a loaded executable 130 (application 130) has a logical address 134 associated with each virtual machine instruction 132. The tag table 142 binds the logical address 134, 146 associated with the beginning of an instruction 132 to the physical address 116, 148 of the executable code 100 of the corresponding code segment 106, whose compiled, linked, and loaded image is stored at the respective cache line 140 associated with the tag line 144 in question.
Referring to FIG. 4, a method 160 is described and illustrated schematically. The method 160 locks or pins a cache after loading the native code implementation of individual virtual machine instructions into the cache.
A disable 162 may be executed by the processor 12 to prevent interrupts from being serviced. The disable 162 provides temporary isolation for the cache 60, enabling completion of the process 160 or method 160. The cache 60 is next flushed 164, typically with write-back, which causes “dirty” cache data to be written back to main memory 18, 20. Thus, the control field 150 may contain a byte indicating that each cache line 140 is available. The processor 12 need not thereafter execute the multiple steps to remove the contents of any cache line 140 in preparation for loading new contents.
The execute steps 166 correspond to execution by the processor 12 of individual instructions 132 in a loaded application 130. Upon fetching each instruction 132 for execution 166, the processor 12 places a request for the instruction 132 next in order in the loaded application 130.
The cache controller for the cache 60 first reviews the contents of the tag table 142 to determine whether or not the desired instruction is present in the cache 60. Having been flushed, the cache 60 has no instructions initially. Accordingly, with each execute 166, a new instruction 132 is loaded from the main memory 18, 20 into the cache 60 at some appropriate cache line 140. Immediately after loading into the cache 60, each instruction 132 in order is executed by the processor 12. However, at this point, any output is ignored. The execution 166 is simply a by-product of “fooling” the cache into loading all the instructions 132 as rapidly as possible, as pre-programmed into the hardware.
In one embodiment of an apparatus and method in accordance with the invention, a loaded application 130 contains every instruction 132 required to form a complete set of instructions for a virtual machine. The instructions 132 are actually code segments 106 implementing a virtual machine instruction 91 in the native code of the processor 12. No output is needed from the initial application 130 run during the method 160.
In one currently preferred embodiment of an apparatus and method in accordance with the invention, the virtual machine instruction set 100 is written so that each block 114 contains a single instruction 91. Moreover, the instruction set 90 is written to occupy exactly the number of cache lines 140 available in the cache 60.
In certain embodiments, an individual instruction 91 may occupy more than a single cache line 140. For example, some caches may have a 16 byte line length. Thus, a 32 byte length for an instruction 91 may require two cache lines 140. In one presently preferred embodiment, a number of cache lines 140 may correspond exactly to the number of blocks 114 required to hold all of the instructions 91, such that each instruction 91 may be addressed by referring to a unique cache line 140.
Thus, upon completion of execution of an initial application 130 configured for loading the cache 60, no output may be provided. However, the cache 60, with its controller operating normally, loads every instruction 91 referenced by the application 130. Therefore, in a cache 60 having a line length of 32 bytes, each cache line 140 contains a code segment 106 or native code segment 106 implementing a virtual machine instruction 91.
After the executions 166 of the virtual machine instructions 132 of the application 130 designed for the loading of virtual machine instruction code 106 into the cache 60, a disable 168 may disable the cache 60. The effect of the disable 168 is to pin the contents of each cache line 140. Pinning (locking) indicates that the cache controller is disabled from replacing the contents of any cache line 140.
Nevertheless, the cache 60 otherwise continues to operate normally. Thus, the controller of the cache 60 will continue to refer to the tag table 142 to determine whether or not an address 146, 148 requested is present. In the case of a virtual machine 90, every instruction 91 will be present in the cache 60, if the instructions are designed in accordance with the invention. Thus, the cache 60 will always contain the code 106 associated with any address 146, 148 representing any virtual machine instruction 91.
Less than a full set of instructions 91 may be loaded into a cache 60. Alternatively, for a cache 60 having more cache lines 140 than needed for storing a virtual machine 90 in its entirety, unused cache lines 140 may be devoted to other code, loaded in a similar way, prior to pinning. Code may be selected according to recency of use, cost/benefit analysis of use, or cost/benefit analysis of retrieval from main memory 18, 20.
The cache 60 is used by way of example. The virtual machine 90 will operate fastest using the cache 60 closest to the fetch/decode unit 70. Alternatively, another cache 64 may be used. Thus, everything describing the cache 60 may be applied to the cache 58 or the cache 64 so far as loading and pinning are concerned. Finally, an enable 170 may re-enable the interrupts so that the processor 12 may resume normal operations.
Referring to FIG. 5, an efficient fetch/decode/JUMP algorithm may begin with an XOR of the contents of a register EAX 180 against itself. The effect of the XOR is to zero out the contents of the EAX register 180. The contents of register EAX 180 may represent a pointer. Following this clearing operation, a MOVE instruction (MOV) may move the contents of a memory location corresponding to a pointer (next logical instruction number) and identified by the label or logical instruction number stored in a register EBX 190 into the register AL 186.
The register AL 186 is the lower eight bits of the AX register 182. The AX register 182 is the lower 16 bits of a 32 bit EAX register 180. The upper eight bits of the AX register 182 constitute the AH register 184. The AL 186 or lower register 186 thus receives the contents of a memory location corresponding to a current instruction 91 being pointed at by the contents of the EBX 190 register.
Following the MOVE instruction, a SHIFT instruction may shift left by five bits (effectively a multiplication by a value of 32) the contents of the EAX register 180. Since the EAX register 180 was zeroed out, and only the AL register 186 was filled, a shift left of the EAX register 180 multiplies its value by 32. This shift left is effectively a decoding of the instruction that was fetched by the MOVE instruction.
Continuing with the procedure, a JUMP to the address contained in the EAX register 180 may be implemented, positioning execution within the set of virtual machine instructions. Note that each virtual machine instruction 91 in the complete set 90, when loaded, is written within the same number of bytes (32 bytes for the native code segment implementing the virtual machine instruction). The code segment 106 for each instruction 91 begins at a block boundary 116 and at the beginning of a cache line 140. Thus, a virtual machine instruction number multiplied by 32 will step through each of the native code segments 106. Thus, a JUMP to EAX constitutes a direct addressing of the native code segment 106 required to implement a particular virtual machine instruction 91.
Other mechanisms exist to address memory 20. For example, vector tables are commonly used. However, such mechanisms require certain calculations to occur in order to execute a JUMP. Moreover, memory access is required in order to complete the determination of a value in a vector table. Thus, the processor 12 must request access to the main memory 18, 20 in order to fulfill the request for a vector table entry. Accessing main memory and other operations requiring requests to be managed by the bus 32 may increase access times by an order of magnitude or more. The simple arithmetic logic unit operation of a JUMP in the preferred embodiment is much more efficient than the vector table approach, which imposes a memory reference on top of a simple JUMP operation.
Different types of caching implementations may exist in hardware. Three common types of cache architectures are direct-mapped, fully-associative, and set-associative. Cache technology is described in detail in Computer Architecture: A Quantitative Approach by John L. Hennessy and David A. Patterson, published in 1990 by Morgan Kaufmann Publishers, Inc. of San Mateo, Calif. (see Chapter 8).
In an apparatus and method in accordance with the invention, any type of cache 60 may be used. In one currently preferred embodiment, a two-way set associative cache 60 may be used.
In a direct-mapped cache 60, several blocks or lines 140 exist. A cache line 140 may contain some selected number of bytes, as determined by the hardware. Typical cache lines 140 have a length of 16 or 32 bytes. Likewise, each cache structure will have some number of addressable lines. An eight-bit addressing scheme provides 256 cache lines in a cache.
Each byte of memory within a memory device 14, including read/write types as well as read-only types, especially a main random access memory device 20, is directly addressable. One common caching scheme for a direct mapped cache architecture may map a memory device 20 to cache lines 140 by block. The memory's addressable space may be subdivided into blocks, each of the same size as a cache line. For example, an entire random access memory 20 may be subdivided into 32-byte blocks for potential caching.
A significant feature of a direct-mapped cache is that every block of memory within the source memory device 20 has a specific cache line 140 to which it will be cached any time it is cached. In one scheme, the least significant bits in an address corresponding to a block within a memory device may be truncated to the same size as the address of a cache line 140. Thus, every block of memory 20 is assigned to a cache line 140 having the same least significant bit address.
In a fully-associative caching architecture, no binding need exist a priori between any particular block of memory in the memory device and any cache line. Allocation of cache line 140 space to a particular block of memory 20 is made as needed according to some addressing scheme. Typical schemes may include random replacement. That is, a particular cache line 140 may simply be selected at random to receive an incoming block to be cached.
Alternative schemes may include a least-recently-used (LRU) algorithm. In a least-recently-used (LRU) scheme, a count of accesses may be maintained in association with each cache line 140. The cache line 140 that has been least recently accessed by the processor 12 may be selected to have its contents replaced by an incoming block from the memory device 20.
A set-associative architecture subdivides an associative cache into some number of associative caches. For example, all the lines 140 of a cache 60 may typically be divided into groups of two, four, eight, or sixteen, called “ways.” Referring to the number of these ways or subcaches within the overall cache 60, as n, this subdivision has created an n-way set-associative cache 60.
Mapping of block-frame addresses from a main memory device 20 to a cache line 140 uses the associative principle. That is, each way includes an nth fraction of all the available cache lines 140 from the overall cache 60. Each block from the main memory device 20 is mapped to one of the ways. However, that block may actually be sent to any of the cache lines 140 within an individual way according to some available scheme. Either the LRU or the random method may be used to place a block into an individual cache line 140 within a way.
For example, a main memory address may be mapped to a way by a MODULO operation on the main memory address by the number of ways. The MODULO result then provides the number of a “way” to which the memory block may be allocated. An allocation algorithm may then allocate the memory block to a particular cache line 140 within an individual way.
Another cache may be used, with less effective results. Loading and pinning may also be done using test instructions, although doing so is more time-consuming. Instead of test instructions, the proposed method flushes the cache and then runs a simple application 130 containing every virtual machine instruction 91 of a desired set 90 to be loaded. Before disabling the processor cache 60, the method thus uses the cache's internal programming, built into the fundamental hardware architecture, to provide a high-speed load. Disabling then permits access to the processor cache 60, but not replacement, completing an effective pinning operation.
In one currently preferred embodiment, the closest cache to the processor is used as the processor cache 60. For example, in the Pentium™ processor, the level-1 code cache 60 may be used. In other embodiments, an external cache 64, or a level-1 integrated (not segregated between code and data) cache 58 may be used. Thus, whenever a processor cache 60 is specified, any cache 58, 60, 64 may be used, and the closest is preferred.
Pinning is particularly advantageous once an environment, or rather the executable instructions constituting an environment, has been programmed in a form that fits the entire instruction set into an individual processor cache 60, with one instruction corresponding to one cache line 140. Benefits derived from this method of architecting and pinning the virtual machine are several.
For example, no cache line 140, during execution of a virtual machine 90, need ever be reloaded from main memory 18, 20. In addition to the time delay associated with having to access the bus 32, access times within memory devices 14 themselves vary. Typically, a cache access time is an order of magnitude less than the access time for a main memory location. Reloading a cache line 140 is likewise a time-consuming operation.
Here, every branch destination (the object of a JUMP) within the virtual machine 90 may be located at a fixed cache line position. Thus, no penalty is created for address generation within the cache 60 itself. Rather, each cache line 140 may be addressed directly as the address of the instruction 91 being requested.
That is, typically, a cache controller must manage an addressing algorithm that first searches for a requested reference within the cache. If the reference is not present, then the cache controller requests the reference from main memory over the bus 32. Here, the address generation, management, and accessing functions of the cache controller are dramatically simplified, since every desired address is known to be in the cache for all code references.
Many modern processors such as the Pentium™ series by INTEL™ contain hardware supporting branch prediction. That is, when a branch operation is to be executed, the processor predicts the destination (destination of a JUMP) to which the branch will transfer control. With a pinned cache containing the entire instruction set 90 of the virtual machine 90, all branch destinations are known. Every instruction has a cache line 140 associated therewith which will never vary. Not only does this correspondence not vary within a single execution of the virtual machine, but may actually be permanent for all loadings of the virtual machine.
Likewise, a branch prediction table is typically updated along with cache line replacement operations. Since the cache lines 140 need never be replaced while the virtual machine is loaded into the cache, and pinned, the branch prediction table becomes static. Inasmuch as the prediction table becomes static, its entries do not change. Moreover, every referenced code instruction is guaranteed to be in the cache. Therefore, any benefits available to a branch prediction algorithm are virtually guaranteed for an apparatus and method operating in accordance with the invention. Flushes of the pipelined instructions now approach a theoretical minimum.
In the Pentium™ processor by INTEL™, two arithmetic logic units (ALUs) correspond to a ‘U’ pipeline and a ‘V’ pipeline. Each arithmetic logic unit (ALU) may execute an instruction with each clock count. However, if two instructions must occur in sequence, then one pipeline may be idled. Thus, the ‘V’ pipeline may be idled during any clock count that requires two instructions to be executed in sequence rather than in parallel.
Typical optimal programming on Pentium™ processors may achieve 17 to 20 percent pairing between instructions. By pairing is meant that instructions are being executed in both the ‘U’ and ‘V’ pipelines; in typical optimized code, that occurs about 17 to 20 percent of the time on a Pentium™ processor.
Due to the careful architecture of the instruction set, as well as pinning the instruction set, a method and apparatus in accordance with the invention may routinely obtain 60 percent utilization of the ‘V’ (secondary) pipeline. The selection and ordering of the virtual machine instructions have been implemented to optimize pairing of instructions through the pipelines.
Referring to FIGS. 6-9, as well as FIGS. 1-3, when multi-tasking, competing processes may try to use the processor 12 and the processor cache 60. A virtual machine application 120 may run in an interpretive environment 90 (the virtual machine 90) that is one among several native-code applications 218, 220 (FIG. 6).
In general, a small fraction of available processing time may be required for execution of native code 128 implementing a virtual machine application 120. This time is fragmented across the entire time line of a processor 12, shared by all multi-tasked processes.
A method 160 and apparatus 10 to pin a processor cache 60 for a user of a virtual machine 90 hosted on an individual computer 11 are taught previously herein. Pinning into individual cache lines 140 the code segments 106 implementing the individual instructions 91 of the virtual machine 90 dramatically improves the processing speed for virtual machine applications 120 (applications operating in the virtual machine environment 90).
However, if a virtual machine 90 is pinned, consuming the entire processor cache 60 under a multi-tasking operating system 214, it eliminates the availability of the processor cache 60 to service other native-code applications 218, 220. In a multi-tasking environment, this may degrade performance significantly. A virtual machine application 120, by its very presence, may degrade the operation of the entire panoply of applications 218, 220 (including itself) being executed by the processor 12.
Meanwhile, pinning and unpinning by any conventional method would add processing overhead, disturbing the carefully constructed cache contents and rendering less favorable performance.
Here, the need is to load, pin, run, and then unpin rapidly and frequently for interpretive applications 120 in order to provide a faster execution of all applications 218, 220 running. Otherwise, the pinned processor cache 60 will degrade performance of all native-code applications 218, 220. For example, in one test, multi-tasked, native-code applications 218, 220 ran 3 to 5 times slower with a pinned processor code cache 60.
The invention contemplates very fast loading and pinning. A mock application 120 may serve to load all the virtual machine instruction code segments 106 into the respective cache lines 140.
Referring to FIG. 7, a hooked pin manager 240, in a scheduler 228, executing a scheduling process 230 in an operating system 214 may control persistence of the contents of a processor cache 60. Persistence may encompass the enabling of the processor cache and the interrupts.
By hooking is meant the process of altering the control flow of a base code in order to include an added function not originally included in the base code. Hooks are often architected into base codes with the intention of permitting users to add customized segments of code at the hooks. Customized code might be added directly at the hook, or by a call or jump positioned as the hook within a base code.
Here, a hook into the scheduler 228 need not be an architected hook. For example, the scheduler 228 may have a jump instruction added surgically into it, with a new “hooked” code segment placed at the destination of the jump, followed by the displaced code from where the jump was written in, and a return.
Alternatively, the scheduler 228 may be modified at some appropriate jump instruction, having an original destination, to jump instead to a destination at which is located a “hooked” code segment, such as a pin manager. Thereafter, the pin manager may, upon completion of its own execution, provide a jump instruction directing the processor 12 to the original destination of the “hooked” jump instruction.
Referring now to FIG. 6, certain processes 212, 214, 216 or modes 212, 214, 216 are illustrated for an apparatus 10 with an associated processor 12. In general, applications 218, 220, in some number may be executing in a multi-tasking environment hosted by a processor 12. The applications 218, 220 operate at a user level 212 or a user mode 212. Accordingly, the applications 218, 220 are “visible” to a user.
Below a user level 212 is an operating system level 214. The operating system level 214 may also be referred to as kernel mode 214.
The operating system (O/S) 214 is executed by the processor 12 to control resources associated with the computer 11. Resources may be thought of as hardware 10 as well as processes available to a computer 11. For example, access to memory 18, 20, storage 16, I/ O devices 22, 24, peripheral devices 28, and operating system services 222 are all controlled resources available in a computer system. Functional features such as serving files, locking files or memory locations, locking processes into or out of execution, transfer of data, process synchronization through primitives, executing applications and other executables, may all be controlled as process resources by the operating system 214.
Applications 218, 220 at a user level 212 may communicate with a systems services module 222 or systems services 222 in an operating system 214. The system services 222 may provide for communication of a request from applications 218, 220 and for eventual execution by the processor 12 of those tasks necessary to satisfy such requests.
A file system 224 may provide for addressing and accessing of files. System services 222 may communicate with the file system 224 as necessary. Meanwhile, the file system 224 may communicate with a memory and device management module 226. Each of the modules 222, 224, 226, 228 may be thought of as one or more executables within an operating system 214 for accomplishing the mission or responsibilities assigned according to some architecture of the operating system 214. Whether or not a module exists as a single continuous group of executable lines of code is not relevant to the invention. Any suitable mechanism may be used to provide the functionality of the system services 222, the file system 224, the memory and device management module 226, and the scheduler 228.
The memory and device management module 226 may control a memory management unit associated with a memory device 14 or the main memory 20. Likewise, the device management function of the memory and device management module 226 may control access and operation of the processor 12 with respect to input devices 22, output devices 24, and other devices that may be connected peripherally through the port 28.
The scheduler 228 provides for scheduling of the execution of the processor 12. Accordingly, the scheduler 228 determines what processes or threads will be executed by the processor 12. The hardware level 216 may include any or all of the components of the computer 11 controlled by the operating system 214.
Referring now to FIGS. 7-9, the scheduler 228 may provide for execution of certain processes 160 (see FIG. 4), 230 (see FIG. 7), 250 (see FIG. 8), 290 (see FIG. 9). For example, the processes 250 represented in rectangular boxes may be executed by the processor 12 in advancing a particular thread, process, program, or application between various states 251.
Referring now to FIG. 7, the scheduler 228 may give control of the processor 12 to the process 230. The process 230 may select 232 a process or thread having a highest priority among such processes or threads, and being in a ready state 258.
A change 234 may follow the select 232 in order to convert the selected process or thread to a running state 268. A context switch 236 may be performed to support the selected process or thread. A context switch may involve a setup of particular components in the hardware level 216 required to support a selected process or thread.
Following the context switch 236, the selected process or thread may execute 238. In a multi-tasking environment, the process or thread may not execute to completion with one continuous block of time in control of the processor 12. Nevertheless, a selected process or thread may execute 238 until some change in the associated state 251 occurs, or until some allocated time expires.
The process 230 may have an interposed process 240 hooked into it. In one embodiment, the interposed process 240 may include a test 242. The test 242 may determine whether or not a selected process or thread is a native process or not. A native process may operate in native code. A non-native process may operate in some other environment such as an interpretive environment. The test 242 may therefore determine whether a virtual machine 90 needs to be loaded into the processor cache 60.
A load process 244 may execute with a selected process or thread. The load process 244 may be implemented in any suitable manner. In one currently preferred embodiment of an apparatus and method in accordance with the invention, the load 244 may use a fast load process 160. However, in general, test instructions or any other mechanism may be used to perform a generic load process 290. A fast load process 160 requires substantially fewer instructions and less time in execution by the processor 12. As explained above, the fast load process 160 takes advantage of the architecture of the hardware level 216 to load a processor cache 60 in the minimum amount of time.
Referring to FIG. 8, an alternate view of the processes 250 and the associated states 251 associated therewith are illustrated. An initialize process 252 may create or initialize a selected process or thread. The selected process or thread will then be in an initialized state 254.
The processor 12, when time and resources become available, may queue 256 a process or thread into a ready state 258. From the ready state 258, a selection 250 may occur for a process or thread having a highest priority. The selection 250 may be thought of as corresponding to a select 232.
A selection 250 may advance a process or thread selected to a standby state 262. Nevertheless, priorities may shift. Thus, a preemption 264 may move a selected process or thread from a standby state 262 to a ready state 258.
In normal operation, a context switch 266 may occur to dispatch a process or thread from a standby state 262 to a running state 268. A running state 268 indicates that a selected thread or process has control of the processor 12 and is executing. One may think of the standby state 262 as existing between the selection 250 process and the context switch 266 process. From a different perspective, the select step 232 and the change step 234 of FIG. 7 may correspond to the selection 250 and context switch 266, respectively. In normal operation, an executing process or thread may move from a running state 268 to a terminated state 272 if completion 270 occurs. Execution completion 270 frequently occurs for any given process or thread, since the quantum of time allocated for a running state 268 is often sufficient for completion 270. Nevertheless, another frequent occurrence is a requirement 276 for resources. For example, the process or thread may need some input device 22 or output device 24 to perform an operation prior to continued processing. Accordingly, a requirement 276 may change a process or thread to a waiting state 278.
The availability 280 of resources may thereafter advance a process or thread from a waiting state 278 to a ready state 258. Alternatively, expiration of the quantum of time allocated to the running state 268 of a thread or process may cause a preemption 274. The preemption 274 step or procedure may return the thread or process to the ready state 258 to be cycled again by a selection 250.
In one currently preferred embodiment of an apparatus and method in accordance with the invention, a cache load and pin process 282 (cache load 282, load 282) may precede a context switch 284, corresponding to the context switch 266 for a native process. The load 282 occurs only for interpretive processes as detected by the test 242 executed between the select step 232 (e.g. selection 250) and the change step 234 (e.g. context switch 284). A context switch 266, 284 may be thought of as operating on affected registers, such as by saving or loading context data, changing the map registers of the memory management unit, and the like, followed by changing the state of the processor 12 between one of the states 251.
The load 282 may be completed by any suitable method. For example, notwithstanding their less desirable approach, test instructions may be used to fashion a load process 282. Nevertheless, the process 160 (see FIG. 4) may properly be referred to as a fast load process 160 or a fast load 160 of a processor cache 60.
The effect of adding a load step 282 (driver 282, pin manager 282) before a context switch 284 is to set up an environment (e.g. virtual machine 90) in which to execute an interpretive application 218 (see FIG. 6) such as a virtual machine application 120 (see FIG. 3). One may note that a selection 250 of a native process or thread results in the immediate context switch 266 as the subject process or thread transitions from a standby state 262 to a running state 268. Accordingly, the processor cache 60 operates normally for any native process following the context switch 266. By contrast, a dynamic load and pin process 282, such as the fast load 160, may be executed very rapidly prior to a context switch 284 placing an interpretive process or thread into a running state 268.
Referring to FIG. 9, an alternative embodiment of a load and pin process 282 (e.g. interposed process 240) is illustrated. A test 292 may determine whether or not a process resulting from a selection 250 is an interpretive process. The test 292 may be hooked in any suitable location among the processes 250. A flag may be set to determine whether or not to activate or hook a load and pin process 282 in any procedure occurring between a standby state 262 and a running state 268. However, in one currently preferred embodiment, the interposer routine 240 (see FIG. 7) may be hooked into the select 232 (e.g. selection process 250) or the context switch process 266. In one currently preferred embodiment, the entire interposer routine 240 may be hooked as the cache load and pin process 282 in the context switch 284, but before any substantive steps occur therein. The context switch 284 may be different from the context switch 266 for a native process or thread.
Thus, in one currently preferred embodiment, the load 282 (processor cache 60 load and pin process 282) may be as illustrated in FIG. 9. Meanwhile, a portion 290 of the load 282 may be replaced by the fast load 160. Note that the disable 294 may correspond to a disable 162 and the re-enable 302 may correspond to the enable 170 of interrupts. Similarly, the flush 296 may correspond to the flush 164 described above. The load instructions step 298 may or may not correspond to the execute 166 of the fast load 160. Any suitable method may be used for the load 298. The example mentioned before, using test instructions, is completely tractable. The fast load 160, using execution of a mock application 120 architected to use every instruction 91 of a virtual machine 90 in order to load each of the native code segments 106 corresponding thereto, is simply the fastest currently contemplated method for a load 298.
Likewise, the disable 300 corresponds to a disable 168. However, the disable 300 disables only the ability of a cache controller to change the contents of a cache line 140 in the processor cache 60. In all other respects, the processor cache 60 may operate normally following the re-enable 302 of interrupts. Thus, the enable 304 of the processor cache 60 may not be required as a separate step in certain embodiments. For example, the re-enable 302 with only a limited disable 300 may fully enable 304 a processor cache 60. However, in certain embodiments, such as when using test instructions, an extra enable step 304 may be required to return all the functionality to a processor cache 60. Again, note that by processor cache 60 is meant any of the caches 58, 60, 64 for use by the processor, although a segregated code cache 60, closest to the processor, is one preferred embodiment.
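The ordering of the steps 294 through 302 is the essential point. The sketch below captures that ordering in C; because the real operations (CLI, a cache flush, and cache-control writes) are privileged instructions that cannot run in user mode, each is replaced here by a hypothetical stub that merely records when it was called.

```c
#include <string.h>

/* Records the order in which the load-and-pin steps execute. */
char order[64];
static void step(const char *name) { strcat(order, name); strcat(order, " "); }

/* Stubs standing in for the privileged operations of FIG. 9. */
static void disable_interrupts(void)  { step("cli");    }  /* disable 294 */
static void flush_cache(void)         { step("flush");  }  /* flush 296 */
static void load_instructions(void)   { step("load");   }  /* load 298 */
static void disable_cache_fills(void) { step("nofill"); }  /* disable 300 */
static void enable_interrupts(void)   { step("sti");    }  /* re-enable 302 */

void cache_load_and_pin(void)
{
    disable_interrupts();   /* keep the load atomic with respect to interrupts */
    flush_cache();          /* begin from an empty cache */
    load_instructions();    /* execute the VM's native code segments 106 */
    disable_cache_fills();  /* pin: controller may no longer replace lines 140 */
    enable_interrupts();    /* cache otherwise operates normally thereafter */
}
```

Note that the pin (disable 300) precedes the re-enable 302 of interrupts, so no interrupt handler can displace the freshly loaded lines.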
Referring to FIGS. 7-9, the pin manager 240, 282 may be added at an operating system (O/S) level 214 as a driver 226 (see FIG. 6), or contained in a driver 282 recognized and allowed by the O/S to be loaded. This driver is at a systems level 214 of privilege. The pin manager is implemented as a driver because doing so is one way to obtain systems-level privileges. The O/S loads the driver 240, 282, and allows the driver 240, 282 to initialize 252, transferring control to an initialization routine 252.
As part of the initialization routine 252, the driver 240, 282 either hooks, or creates hooks to later hook into, the operating system 214. It is important to note that once loaded, the driver 240, 282 is in control of the processor 12; the O/S 214 has turned over control to the driver and its initialization routine 252 until that control is returned. Drivers 226 have a standard set of commands that may be executed. Drivers 226 also recognize certain commands receivable from the O/S 214.
The pin manager 282 could not communicate with the processor cache 60 absent this systems privilege level, nor could it attach (hook) itself into the O/S 214. Thus, the pin manager 240, 282, by being a driver 226, fitting the device driver formats and protocols, may be recognized by the O/S 214. This recognition is not available to an application 218, 220. With this recognition, the pin manager 240, 282 (driver 240, 282) is designated as privileged-level code and can therefore contain privileged-level instructions of the processor 12.
Certain instructions may exist at multiple privilege levels. However, each such instruction is treated differently, according to the associated privilege level. For example, a MOVE instruction may mean the same in any context, but may only be able to access certain memory locations having corresponding, associated, privilege levels.
The interrupt disable 162, 294 (CLI instruction), flush 164, 296 (FLUSH or WBFLUSH), disable cache 168, 300, and enable cache 304 are privileged level instructions. They are available in the operating system environment 214 (privileged or kernel mode 214) to systems programmers writing operating systems 214, device drivers 226, and the like. So long as a user is authorized at the appropriate privilege level 214, the instructions are directly executable. If a user is not at the required level 214 of privilege, then the processor 12 generates an “exception” to vector off to an operating system handler to determine what to do with an errant program using such instructions improperly.
Typically, to disable 168, 300 or to turn a cache on or off requires a user, such as a system programmer, to execute a setup routine directly controlling the Basic Input/Output System (BIOS). This operation is not usually undertaken. Disabling 168, 300 a processor cache 60 is not routinely done, and to do so selectively is counter-intuitive.
Moreover, to repeatedly disable 168, 300 and re-enable 304 the processor cache 60 is folly by conventional wisdom. Likewise, to dynamically enable 304, load 298, and disable 162, 300 the processor cache 60 is highly counter-intuitive. However, in accordance with the invention, conventional wisdom is superseded to good effect.
The expressions “dynamic pinning” 282 and “programmatic management” of a processor cache 60 reflect the exercise, at run time, of control of both cache contents and their duration in accordance with the individual needs determined for a specific program 218, 220.
A major benefit of dynamic pinning 298 of a processor cache 60 is an ability to manage the loading 298 and pinning 300 of a virtual machine 90 (VM, interpretive environment 90) in a processor cache 60 (e.g. level-1 code cache 60) in order to optimize the entire workload of a processor 12. This also maximizes the speed of the virtual machine 90 when run.
A processor cache 60 (or 58, 64) may be any cache adapted to store instructions executable by a processor. The cache may or may not be segregated to have a portion for instructions and a portion for data. Perhaps the most significant feature of a processor cache 58, 60, 64 is the lack of direct programmatic addressing as part of the main memory address space. The processor cache 58, 60, 64 is thus “hidden” from a programmer.
Typically, pre-programmed instructions associated with the architecture of a processor cache 58, 60, 64 determine what is loaded into each cache line 140, when, and for how long. This is typically based on an LRU or pseudo-LRU replacement algorithm. The instant invention relies on direct programmatic controls, and knowledge of the cache architecture to prompt the processor cache 60 to store a certain desired set of contents for use by a specified program. Thus, careful programmatic controls may obtain certain reflexive responses from the processor cache 60 and its internal cache controller, which responses are manipulated by a choice of programmatic actions.
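The replacement behavior that the invention manipulates can be illustrated with a toy model of LRU replacement within a single cache set. The model below is purely illustrative: real controllers implement LRU or pseudo-LRU in silicon, and nothing here touches an actual processor cache.

```c
/* Toy model of LRU replacement within one 4-way cache set. */
#define WAYS 4
typedef struct {
    unsigned tag[WAYS];    /* address tag held by each way */
    int      valid[WAYS];  /* whether the way holds a line */
    unsigned stamp[WAYS];  /* last-use time for LRU ordering */
    unsigned clock;        /* monotonically increasing access counter */
} set_t;

/* Returns 1 on a hit, 0 on a miss; a miss fills the empty or
   least-recently-used way, modeling the controller's replacement. */
int access_set(set_t *s, unsigned tag)
{
    int lru = 0;
    s->clock++;
    for (int w = 0; w < WAYS; w++) {
        if (s->valid[w] && s->tag[w] == tag) {  /* hit: refresh stamp */
            s->stamp[w] = s->clock;
            return 1;
        }
        if (!s->valid[w]) { lru = w; break; }   /* prefer an empty way */
        if (s->stamp[w] < s->stamp[lru]) lru = w; /* else oldest way */
    }
    s->tag[lru] = tag; s->valid[lru] = 1; s->stamp[lru] = s->clock;
    return 0;
}
```

In this model, as in hardware, an untouched line ages until the replacement policy evicts it; the invention's contribution is to arrange, programmatically, that the desired lines never become eviction candidates.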
Algorithmic management of a hardware cache on a processor 12 has never allowed “dynamic programmatic control” of a hidden cache. Here, knowledge of the architected response of the cache hardware system 60 is used to programmatically optimize the behavior of the processor cache 60, as the processor cache 60 responds to privileged programmatic commands at an operating system level 214.
In order to avoid certain drawbacks associated with the particular mechanism for cache pinning referred to above, the present invention also involves cache fencing. Cache fencing will be discussed in conjunction with certain memory management concepts implemented by Intel Corporation for their Pentium Processors. Nevertheless, one skilled in the art will readily recognize that the concepts discussed in terms of Intel's architecture also apply to other types of architectures, and the manner of implementing the present invention with other types of memory management architectures will be readily apparent.
Referring to FIG. 10, shown therein is a sequencing and paging unit 310 that is provided with a logical address 312. The logical address 312 or pointers 312 may be constructed of a segment selector 314 and an offset 316. A global descriptor table 318 is pointed to by the value of the segment selector 314. The segment selector 314 points to a base address 319 of a segment descriptor 320 in the global descriptor table 318.
The segment descriptor 320 in turn points to a linear address space 322. Specifically, the segment descriptor 320 points to a base address 324 or segment base address 324. The offset 316 in combination with the segment base address 324 point to a linear address 326 within the linear address space 322.
The linear address 326 exists within a page 325 and within a segment 327 in the linear address space 322. As a practical matter, a linear address space 322 may be thought of literally as a mathematical space addressable by virtue of the ability of a processor 12 to store a number corresponding to a maximum address. Addressing may be done in a flat mode with the linear address 326 directly accessible, or hierarchically, through segmentation 327, paging 325, or both.
A linear address 326 contains different component parts that may be separated or subdivided in order to navigate a memory device 14 such as the random access memory 20. A linear address 326 includes a pointer 328 or page directory pointer 328. An offset 330 and a table pointer 332 form the remainder of the linear address 326.
The pointer 328 identifies an entry 334 in a page directory 336. The entry 334 or page entry 334 points to a base address 335. The entry 334 or base address 335, in combination with the table pointer 332 or table entry pointer 332, points to a page table entry 338 in a page 340.
The page table entry 338, combined with the offset 330 from the linear address 326, points to the physical address 342 in the physical address space 344 of a memory device 14, 20. The combination of the base address 346, identified directly by the entry 338 in the page table 340, with the offset 330 effectively leverages or multiplies the ability to address the physical address space 344 in terms of individual pages 347 and offsets 330 therein.
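Under standard 4-kilobyte x86 paging, the decomposition of the linear address 326 described above is a fixed bit split: the top ten bits select the page directory entry 334, the next ten bits select the page table entry 338, and the low twelve bits form the offset 330. A brief sketch:

```c
#include <stdint.h>

/* Field extraction from a 32-bit linear address 326 under 4 KB x86
   paging: bits 31-22 index the page directory 336, bits 21-12 index
   the page table 340, and bits 11-0 are the offset 330 in the page. */
uint32_t dir_index(uint32_t lin)   { return (lin >> 22) & 0x3FF; }
uint32_t table_index(uint32_t lin) { return (lin >> 12) & 0x3FF; }
uint32_t page_offset(uint32_t lin) { return lin & 0xFFF; }

/* Recombine a page-aligned base address 346 taken from the page table
   entry 338 with the offset 330 to form the physical address 342. */
uint32_t phys_addr(uint32_t page_base, uint32_t lin)
{
    return (page_base & ~0xFFFu) | page_offset(lin);
}
```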
One may note that the page 347 corresponds to the page address 325. Nevertheless, the page address 325 or page address range 325 exists mathematically in a linear address space 322. The physical address space 344 is likewise a mathematical construct. However, for each page 347, base address 346, physical address 342, and the like, an actual location in the memory device 14 corresponds to a value from the physical address space 344.
Referring to FIG. 11, a page entry 338 from a page table 340 includes representations of attributes 348 corresponding to a page 347. A 32-bit physical address includes a page base address 349 along with other attributes 348. Of particular note is a cache disable flag 350. The cache disable flag 350, also called the PCD flag 350, when set, disables the ability of a page 347 to be cached in cache memory 60, 66.
Other attributes 348 include an availability entry 352, a global page entry 354, a reserved bit 356, a flag 358 identifying whether a page 347 has been written to and is thus dirty, and an access bit 360 identifying whether a page 347 has been accessed.
Other attributes 348 include a write-through bit 362 identifying whether writes to a page 347 are written through, while a user bit 364 or user/supervisor bit 364 may be set to provide privileges to system-level (supervisor) code. A read/write bit 366 identifies whether read permission, write permission, or both are granted, and a present bit 368 identifies the presence of a page 347.
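The attributes 348 above occupy fixed bit positions in the 32-bit x86 page table entry format. A sketch of the layout, and of setting or clearing the cache disable flag 350 without disturbing the other fields, follows.

```c
#include <stdint.h>

/* Bit positions of the page table entry attributes 348 (32-bit x86). */
#define PTE_PRESENT   (1u << 0)   /* present bit 368 */
#define PTE_RW        (1u << 1)   /* read/write bit 366 */
#define PTE_USER      (1u << 2)   /* user/supervisor bit 364 */
#define PTE_PWT       (1u << 3)   /* write-through bit 362 */
#define PTE_PCD       (1u << 4)   /* cache disable flag 350 */
#define PTE_ACCESSED  (1u << 5)   /* access bit 360 */
#define PTE_DIRTY     (1u << 6)   /* dirty flag 358 */
#define PTE_GLOBAL    (1u << 8)   /* global page entry 354 */
#define PTE_BASE_MASK 0xFFFFF000u /* page base address 349, bits 31-12 */

/* Set or clear the cache disable flag 350 in an entry, leaving the
   base address 349 and remaining attributes 348 untouched. */
uint32_t set_pcd(uint32_t pte, int disable_caching)
{
    return disable_caching ? (pte | PTE_PCD) : (pte & ~PTE_PCD);
}
```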
In one embodiment of an apparatus and method in accordance with the invention, a page 347 may be disabled from being cached in a cache 60 by setting the cache disable flag 350. Thus, in accordance with the present invention, one alternative to pinning the cache 60 or other caches 64, such as an instruction cache 66, comprises fencing. In fencing, the cache disable flags 350 corresponding to all pages 347 not included in the virtual machine 90, are set, thereby precluding all such pages 347 from being loaded into the cache 60, 66. Accordingly, rather than pinning the virtual machine 90 or interpreter 90 into the cache 60, 66, all pages 347 not storing portions of the virtual machine 90 may be fenced out of the caches 60, 66 by a proper setting of the cache disabled flag 350.
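The fencing pass described above reduces to a single sweep over the page table. The sketch below models the page table as a plain array of entries, and (assuming, as the text later suggests, that the virtual machine pages have been consolidated) identifies the VM by a contiguous range of page indices; both the array representation and the range parameters are simplifications for illustration.

```c
#include <stdint.h>

#define PCD_BIT (1u << 4)  /* cache disable flag 350 in a page entry 338 */

/* Fence the cache: clear the cache disable flag 350 for pages 347 that
   hold the virtual machine 90 (indices vm_first..vm_last), and set it
   for every other page, precluding those pages from the cache 60, 66. */
void fence_pages(uint32_t *page_table, unsigned n,
                 unsigned vm_first, unsigned vm_last)
{
    for (unsigned i = 0; i < n; i++) {
        if (i >= vm_first && i <= vm_last)
            page_table[i] &= ~PCD_BIT;  /* VM page: cacheable */
        else
            page_table[i] |= PCD_BIT;   /* all others: fenced out */
    }
}
```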
Significant performance advantages accrue to interpreters 90 or interpretive environments 90 when designed for and committed to a processor cache 60, 66. This is particularly true when a level-1 cache 60 is committed to the use of an interpretive environment 90. Studies made on an apparatus and method in accordance with the invention indicate that execution times of an interpretive environment 90 may be improved by an order of magnitude, and sometimes more, by virtue of pinning 250 the selected interpretive instructions 91 within the level-1 cache 60.
Commensurate improvements are also achieved by the use of other caches 66 further removed from the execution unit 72. It is expected that these performance gains will also be achieved, and possibly increased, with the use of cache fencing in place of cache pinning.
In one embodiment, cache pinning may be obtained under the present invention by manipulating the memory management unit (MMU) 225 or the paging unit 310. Accordingly, regions of the physical address space 344 may be designated as cacheable (capable of being cached) or uncacheable. Manipulation of the cache disable flags 350 allows the pages 347 containing the interpreter 90 to be marked as cacheable while all other pages 347 are marked as non-cacheable.
The native code instructions 106 are segregated, further augmenting the underlying Harvard architecture that supports a split “I” (instruction) and “D” (data) cache. Since operating systems 214 (see FIG. 6) are required to support management of the memory 20, suitable system calls are present in virtually all operating systems 214 widely used and supported today. Thus, commands are readily accessible to set the cache disable flag 350 for all page table entries 338 not part of a virtual machine 90.
In yet another embodiment of an apparatus and method in accordance with the invention, heuristic pinning of a level-1 code cache may significantly improve performance of various operating environments 214. However, accessing a processor cache 60, 66 can be cumbersome using test instructions. Accordingly, in one embodiment discussed above, a fast loading technique was described for improving the speed for loading a processor cache 60, 66 without the use of test instructions. In one embodiment of an apparatus and method in accordance with the invention, cache fencing is similarly managed.
For example, the memory type range registers (MTRRs) 370 of the INTEL x86 Pentium processors provide flexible paging. In one embodiment of a method and apparatus in accordance with the invention, the benefits of cache pinning 250 may be obtained for interpretive environments 90 without direct manipulation of the processor cache 60, 66. That is, without using test instructions.
Thus, the performance benefits of direct processor cache manipulation techniques may be obtained without the difficulties of direct manipulation, by relying on attribute registers such as the MTRRs of INTEL Pentium processors 12, as well as the attribute registers of other common processors 12. Using the MTRRs, flexible pages 371, such as the flexibly spaced or sized pages 371 a, 371 b, 371 n, and so forth, may be sized as desired within certain programming limits by a systems programmer.
In one embodiment of an apparatus and method in accordance with the invention, each page 371 containing instructions 91 of the virtual machine 90 or interpreter 90 may be identified, via the cache disable flag 350, as cacheable. All other code pages 371 may be set as non-cacheable.
Referring now to FIG. 12, a memory type range register (MTRR) 370 contains a type register 372, a start register 374, and a length register 376. Accordingly, types may be identified as uncacheable, write-protected, write-combining, write-through, or write-back. Designation of a page 371 as uncacheable may be effected through the MTRR 370 and prevents the contents of that page 371 from having access to the cache 60, 66.
Accordingly, indicating, marking, or otherwise setting pages 371 associated with a virtual machine 90 as cacheable provides access to the cache by the virtual machine 90 under the direct management of the MMU 225. The start register 374 provides a base address 374, while the length register 376 provides an offset as the outer boundary of a flexible page 371 identified by the memory type range register (MTRR) 370.
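The type 372, start 374, and length 376 fields described above can be modeled directly. The sketch below mirrors the patent's abstraction of an MTRR 370 as a (type, start, length) triple, not the exact MSR encoding used by real Pentium-class hardware; the classification function is a hypothetical helper for illustration.

```c
#include <stdint.h>

/* Model of the MTRR 370 as described: a type register 372,
   a start register 374, and a length register 376. */
typedef enum { UNCACHEABLE, WRITE_PROTECT, WRITE_COMBINE,
               WRITE_THROUGH, WRITE_BACK } mem_type_t;

typedef struct {
    mem_type_t type;    /* type register 372 */
    uint32_t   start;   /* start register 374: base of flexible page 371 */
    uint32_t   length;  /* length register 376: extent of the page 371 */
} mtrr_t;

/* Return the memory type governing an address, defaulting to
   UNCACHEABLE when no defined range 371 covers it. */
mem_type_t classify(const mtrr_t *r, unsigned n, uint32_t addr)
{
    for (unsigned i = 0; i < n; i++)
        if (addr >= r[i].start && addr - r[i].start < r[i].length)
            return r[i].type;
    return UNCACHEABLE;
}
```

With one write-back range sized to hold the virtual machine instructions 91 and the remainder marked uncacheable, only VM fetches are eligible to enter the cache 60, 66.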
Accordingly, under another aspect of the present invention, in lieu of having to set every cache disable flag 350 of every page 347, page size manipulation may be used. So doing allows the pages for which cache disable flags 350 must be set to be reduced to just a few. Thus, page definitions may be prepared for the pages 371 that segregate the contents of virtual memory.
In accordance with this inventive concept, the interpreter instructions 91, which together form the virtual machine 90, are segregated from the rest of the operational data in the physical memory 20. Since the MTRRs 370 are related to the memory management unit, access is more readily available to a systems programmer than is access to the processor's caches 60, 66.
Referring to FIG. 13, a method of cache fencing 380 may include a save step 382 in which the existing values of MTRRs 370 corresponding to existing pages 371 are saved. Thus, the values of all start points 374, lengths 376, and types 372 of pages 371 may be saved to a memory device 14.
In a subsequently conducted define step 384, new MTRRs 370 are defined. The new MTRRs 370 may change the page boundaries 377 to reduce the number of pages in the physical memory 20. For example, a portion of physical memory 20 may be defined as a single contiguous page 371 containing an amount of memory 20 sufficient to store all of the interpretive instructions 91 associated with a virtual machine 90.
The remainder of physical memory 344, 20 may be partitioned into one or two flexible pages 371 by selectively setting the start registers 374 and the length registers 376 of the pages 371. Even with fragmentation of files, some minimal number of pages 371 (in one embodiment, two contiguous 4-kilobyte pages) will include the virtual machine instructions 91. Consolidation or defragmentation of the virtual machine instructions 91 may produce a very compact, contiguous page 371.
After defining the locations and sizes of the pages 371 associated with the MTRRs 370, the type 372 corresponding to each MTRR 370 and associated page 371 may be set as cacheable or uncacheable, as appropriate. Thus, a set cacheable step 386 is preferably applied to the MTRRs 370 of all the pages 371 corresponding to the interpreter 90. Optionally, the pages 371 corresponding to the virtual machine 90 are contiguous.
At a set non-cacheable step 388, a non-cacheable status or type 372 may be applied to all MTRRs 370 associated with pages 347 of physical memory 344, 20 not associated with the virtual machine 90.
As a practical matter, all data that will be moved through data caches 62, 68 need not and usually should not be pinned 250 or set 388 to non-cacheable. After the define step 384, an operate step 390 may simply operate the interpreter 90 as previously discussed. Upon termination of operation of the interpreter 90, a reload step 392 may be conducted to reinstate all saved, “old” memory type range registers (MTRRs) 370. Thus, all of the mappings of pages 371 by MTRRs 370 may be restored to their original state, unaffected by the operation of the interpretive environment.
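The overall flow of the method 380 (save 382, define 384, set 386/388, operate 390, reload 392) can be sketched as follows. The register file here is an in-memory array standing in for the processor's MTRRs 370, and run_interpreter() is a hypothetical stand-in for the operate step 390; a real implementation would read and write the actual registers from privileged code.

```c
#include <string.h>
#include <stdint.h>

#define NUM_MTRRS 8

uint32_t mtrrs[NUM_MTRRS];          /* stand-in for the MTRRs 370 */
static uint32_t saved[NUM_MTRRS];   /* copy made by the save step 382 */

static void run_interpreter(void) { /* operate step 390 (placeholder) */ }

/* Run the interpreter 90 under a temporary MTRR configuration,
   restoring the original mappings afterward (reload step 392). */
void fenced_run(const uint32_t *new_mtrrs)
{
    memcpy(saved, mtrrs, sizeof mtrrs);      /* save "old" MTRRs 382 */
    memcpy(mtrrs, new_mtrrs, sizeof mtrrs);  /* define/set steps 384-388 */
    run_interpreter();                       /* operate step 390 */
    memcpy(mtrrs, saved, sizeof mtrrs);      /* reload step 392 */
}
```

Because the reload 392 always executes after the operate step 390, the original page mappings survive the interpretive session unaffected.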
As a practical matter, no particular benefit seems readily apparent in defining MTRRs 370 more numerously than required. Accordingly, contiguous locations for virtual machine instructions 91 within a single flexible page 371 may require less physical space than that required for two fixed pages 347. Likewise, the remainder of physical memory 20, with the virtual machine instructions 91 contiguous to one another, may be divided into as few as one or two additional flexible pages 371. Following the reload step 392, a continue step 394 may return control of the processor 12 to any application that was present when the virtual machine 90 was engaged.
Modules for conducting the method steps 382 through 392 may also be included under the present invention. The modules may be defined in accordance with the functional steps conducted by the modules. Accordingly, for instance, the save “old” MTRRs step could be performed by a save “old” MTRRs module, the define new MTRRs step could be conducted by a define new MTRRs module, and so forth.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative, and not restrictive. The scope of the invention is, therefore, indicated by the appended claims, rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (21)

What is claimed and desired to be secured by United States Letters Patent is:
1. An apparatus for programmatically managing a processor cache, the apparatus comprising:
a memory device operably connected to a processor and containing executables comprised of instructions;
a processor cache operably connected to the memory device to receive and persistently store the instructions under programmatic control; and
the processor for executing the instructions, the processor programmed to execute a cache fence effective to control the persistence of the instructions in the processor cache, the cache fence operable to selectively allow instructions denoted as cacheable to be cached and to prevent instructions denoted as non-cacheable from being cached.
2. The apparatus of claim 1, wherein the processor cache is a code cache.
3. The apparatus of claim 2, wherein the code cache is a level-1 code cache.
4. The apparatus of claim 1, wherein the executables include a virtual machine and a pin manager.
5. The apparatus of claim 4, wherein:
the virtual machine is further comprised of a set of instructions;
the pin manager is further effective to determine loading of the set of instructions into the processor cache; and
the persistence is a duration of the set of instructions in the processor cache.
6. The apparatus of claim 5, wherein the pin manager comprises an executable effective to test an application to determine whether the application requires the virtual machine, and to control loading of the set of instructions in accordance with the determination.
7. A method for managing a processor cache through selectively caching a plurality of instructions, the method comprising:
identifying at least a portion of the processor cache, the portion reserved for fencing at least one instruction identifiable as cacheable;
identifying a first instruction as cacheable;
identifying a second instruction as non-cacheable;
marking the second instruction as non-cacheable by setting a cache disable flag associated with the second instruction;
caching the first instruction in the reserved portion of the cache; and
not caching the second instruction in the reserved portion of the cache;
so that instructions are selectively cached in the reserved portion of the processor cache without disabling the processor cache.
8. The method of claim 7 further including maintaining the first instruction in the reserved portion.
9. The method of claim 7, wherein the cache disable flag is associated with a page including the second instruction, so that the page is not cached in the reserved portion when the cache disable flag is set.
10. The method of claim 7 wherein the first instruction comprises at least a portion of a virtual machine, so that the portion of the virtual machine is loaded into the reserved portion of the cache.
11. The method of claim 7 further including:
identifying a portion of a memory associated with the cache disable flag and the second instruction; and
setting the cache disable flag so that the portion of memory is not cached in the reserved portion.
12. The method of claim 7 further including:
identifying a memory location associated with the first instruction;
storing the memory location;
identifying a page associated with the memory location; and
identifying the page as cacheable in the reserved portion, so that the first instruction is identified as cacheable in the reserved portion.
13. The method of claim 12 further including:
identifying a third instruction as cacheable in the reserved portion;
determining if the third instruction is included in the page; and
if the third instruction is not included in the page, altering at least one boundary of the page, so that the page includes the third instruction.
14. The method of claim 7 further including setting a cache enable flag associated with the first instruction, so that the first instruction is cached in the reserved portion.
15. A computer system for managing a processor cache by selectively caching a plurality of instructions, the system comprising:
a processor;
a cache accessible to the processor;
a memory accessible to the cache and the processor, the memory including:
a first portion of the plurality of instructions;
a second portion of the plurality of instructions; and
a software program, the software program including an instruction set for:
reserving and fencing at least a portion of the cache;
identifying the first portion of instructions as cacheable;
identifying the second portion of instructions as non-cacheable;
caching the first portion of instructions in the reserved portion of the cache; and
not caching the second portion of instructions in the reserved portion of the cache;
so that selective caching occurs in the reserved portion of the cache without disabling the cache.
16. The system of claim 15 wherein the software instruction set is further operable for marking the second portion of instructions as non-cacheable in the reserved portion by setting at least one cache disable flag associated with the second portion of instructions.
17. The system of claim 15 wherein at least one instruction of the first portion of instructions comprises a virtual machine.
18. A computer readable medium for storing a computer executable software program for managing a processor cache by selectively caching at least one instruction without disabling the cache, the program including an instruction set operable for:
identifying at least a portion of the processor cache to be reserved;
reserving and fencing the portion of the cache;
receiving the instruction;
determining whether the instruction is cacheable in the reserved portion;
if the instruction is cacheable in the reserved portion, caching the instruction in the reserved portion; and
if the instruction is not cacheable in the reserved portion, not caching the instruction in the reserved portion.
19. The medium of claim 18 wherein the instruction set is further operable for determining whether the instruction is cacheable in the reserved portion by checking the status of a cache disable flag associated with the instruction.
20. The medium of claim 19 wherein the instruction set is further operable for:
determining a page associated with the instruction; and
if the cache disable flag is set, not caching the page in the reserved portion.
21. The medium of claim 18 wherein the instruction set is further operable for maintaining the instruction in the reserved portion of the cache.
US09/118,262 1998-03-24 1998-07-17 Cache fencing for interpretive environments Expired - Lifetime US6356996B1 (en)

Publications (1)

Publication Number Publication Date
US6356996B1 true US6356996B1 (en) 2002-03-12

US20050080792A1 (en) * 2003-10-09 2005-04-14 Ghatare Sanjay P. Support for RDBMS in LDAP system
US20050188156A1 (en) * 2004-02-20 2005-08-25 Anoop Mukker Method and apparatus for dedicating cache entries to certain streams for performance optimization
US7020879B1 (en) 1998-12-16 2006-03-28 Mips Technologies, Inc. Interrupt and exception handling for multi-streaming digital processors
US7035997B1 (en) 1998-12-16 2006-04-25 Mips Technologies, Inc. Methods and apparatus for improving fetching and dispatch of instructions in multithreaded processors
US20060195575A1 (en) * 2000-12-22 2006-08-31 Oracle International Corporation Determining a user's groups
US7237093B1 (en) 1998-12-16 2007-06-26 Mips Technologies, Inc. Instruction fetching system in a multithreaded processor utilizing cache miss predictions to fetch instructions from multiple hardware streams
US7257814B1 (en) 1998-12-16 2007-08-14 Mips Technologies, Inc. Method and apparatus for implementing atomicity of memory operations in dynamic multi-streaming processors
US20070299990A1 (en) * 2006-06-27 2007-12-27 Shmuel Ben-Yehuda A Method and System for Memory Address Translation and Pinning
US7529907B2 (en) 1998-12-16 2009-05-05 Mips Technologies, Inc. Method and apparatus for improved computer load and store operations
US7707391B2 (en) 1998-12-16 2010-04-27 Mips Technologies, Inc. Methods and apparatus for improving fetching and dispatch of instructions in multithreaded processors
WO2014062616A1 (en) * 2012-10-18 2014-04-24 Vmware, Inc. System and method for exclusive read caching in a virtualized computing environment
US10592164B2 (en) 2017-11-14 2020-03-17 International Business Machines Corporation Portions of configuration state registers in-memory
US10635602B2 (en) 2017-11-14 2020-04-28 International Business Machines Corporation Address translation prior to receiving a storage reference using the address to be translated
US10642757B2 (en) * 2017-11-14 2020-05-05 International Business Machines Corporation Single call to perform pin and unpin operations
US10664181B2 (en) 2017-11-14 2020-05-26 International Business Machines Corporation Protecting in-memory configuration state registers
US10698686B2 (en) 2017-11-14 2020-06-30 International Business Machines Corporation Configurable architectural placement control
US10761751B2 (en) 2017-11-14 2020-09-01 International Business Machines Corporation Configuration state registers grouped based on functional affinity
US10761983B2 (en) 2017-11-14 2020-09-01 International Business Machines Corporation Memory based configuration state registers
CN112148387A (en) * 2020-10-14 2020-12-29 中国平安人寿保险股份有限公司 Method and device for preloading feedback information, computer equipment and storage medium
US10976931B2 (en) 2017-11-14 2021-04-13 International Business Machines Corporation Automatic pinning of units of memory
US11106490B2 (en) 2017-11-14 2021-08-31 International Business Machines Corporation Context switch by changing memory pointers

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6694418B2 (en) * 2001-03-30 2004-02-17 Intel Corporation Memory hole modification and mixed technique arrangements for maximizing cacheable memory space
DE60223990T2 (en) * 2001-10-31 2008-12-04 Aplix Corp. System for executing intermediate code, method for executing intermediate code, and computer program product for executing intermediate code
EP1308838A3 (en) * 2001-10-31 2007-12-19 Aplix Corporation Intermediate code preprocessing apparatus, intermediate code execution apparatus, intermediate code execution system, and computer program product for preprocessing or executing intermediate code
US8135962B2 (en) * 2002-03-27 2012-03-13 Globalfoundries Inc. System and method providing region-granular, hardware-controlled memory encryption
US7787892B2 (en) 2005-10-05 2010-08-31 Via Technologies, Inc. Method and apparatus for adaptive multi-stage multi-threshold detection of paging indicators in wireless communication systems
US7949826B2 (en) * 2007-07-05 2011-05-24 International Business Machines Corporation Runtime machine supported method level caching
US8132162B2 (en) 2007-07-05 2012-03-06 International Business Machines Corporation Runtime machine analysis of applications to select methods suitable for method level caching
US8904115B2 (en) * 2010-09-28 2014-12-02 Texas Instruments Incorporated Cache with multiple access pipelines
US9922689B2 (en) * 2016-04-01 2018-03-20 Intel Corporation Memory mapping

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4277826A (en) 1978-10-23 1981-07-07 Collins Robert W Synchronizing mechanism for page replacement control
US4583166A (en) 1982-10-08 1986-04-15 International Business Machines Corporation Roll mode for cached data storage
US4811215A (en) 1986-12-12 1989-03-07 Intergraph Corporation Instruction execution accelerator for a pipelined digital machine with virtual memory
US4926322A (en) 1987-08-03 1990-05-15 Compaq Computer Corporation Software emulation of bank-switched memory using a virtual DOS monitor and paged memory management
US5023776A (en) 1988-02-22 1991-06-11 International Business Machines Corp. Store queue for a tightly coupled multiple processor configuration with two-level cache buffer storage
US5202993A (en) 1991-02-27 1993-04-13 Sun Microsystems, Inc. Method and apparatus for cost-based heuristic instruction scheduling
US5237669A (en) 1991-07-15 1993-08-17 Quarterdeck Office Systems, Inc. Memory management method
US5274834A (en) 1991-08-30 1993-12-28 Intel Corporation Transparent system interrupts with integrated extended memory addressing
US5325499A (en) 1990-09-28 1994-06-28 Tandon Corporation Computer system including a write protection circuit for preventing illegal write operations and a write poster with improved memory
US5371872A (en) 1991-10-28 1994-12-06 International Business Machines Corporation Method and apparatus for controlling operation of a cache memory during an interrupt
US5394547A (en) 1991-12-24 1995-02-28 International Business Machines Corporation Data processing system and method having selectable scheduler
US5414848A (en) 1993-04-01 1995-05-09 Intel Corporation Method and apparatus for sharing a common routine stored in a single virtual machine with other virtual machines operating in a preemptive multi-tasking computer system
US5471591A (en) 1990-06-29 1995-11-28 Digital Equipment Corporation Combined write-operand queue and read-after-write dependency scoreboard
US5517651A (en) 1993-12-29 1996-05-14 Intel Corporation Method and apparatus for loading a segment register in a microprocessor capable of operating in multiple modes
US5553305A (en) 1992-04-14 1996-09-03 International Business Machines Corporation System for synchronizing execution by a processing element of threads within a process using a state indicator
US5555398A (en) 1994-04-15 1996-09-10 Intel Corporation Write back cache coherency module for systems with a write through cache supporting bus
US5651136A (en) 1995-06-06 1997-07-22 International Business Machines Corporation System and method for increasing cache efficiency through optimized data allocation
US5652889A (en) 1991-03-07 1997-07-29 Digital Equipment Corporation Alternate execution and interpretation of computer program having code at unknown locations due to transfer instructions having computed destination addresses
US5781792A (en) 1996-03-18 1998-07-14 Advanced Micro Devices, Inc. CPU with DSP having decoder that detects and converts instruction sequences intended to perform DSP function into DSP function identifier
US5889996A (en) * 1996-12-16 1999-03-30 Novell Inc. Accelerator for interpretive environments
US5983310A (en) * 1997-02-13 1999-11-09 Novell, Inc. Pin management of accelerator for interpretive environments
US6141732A (en) * 1998-03-24 2000-10-31 Novell, Inc. Burst-loading of instructions into processor cache by execution of linked jump instructions embedded in cache line size blocks

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5226133A (en) * 1989-12-01 1993-07-06 Silicon Graphics, Inc. Two-level translation look-aside buffer using partial addresses for enhanced speed
US5295258A (en) * 1989-12-22 1994-03-15 Tandem Computers Incorporated Fault-tolerant computer system with online recovery and reintegration of redundant components
US5249286A (en) 1990-05-29 1993-09-28 National Semiconductor Corporation Selectively locking memory locations within a microprocessor's on-chip cache
DE69219433T2 (en) * 1991-11-04 1997-12-11 Sun Microsystems Inc Write-through virtual cache Synonym addressing and cache invalidation
US5678025A (en) * 1992-12-30 1997-10-14 Intel Corporation Cache coherency maintenance of non-cache supporting buses
US5481693A (en) * 1994-07-20 1996-01-02 Exponential Technology, Inc. Shared register architecture for a dual-instruction-set CPU
US6085307A (en) * 1996-11-27 2000-07-04 Vlsi Technology, Inc. Multiple native instruction set master/slave processor arrangement and method thereof
US5909698A (en) * 1997-03-17 1999-06-01 International Business Machines Corporation Cache block store instruction operations where cache coherency is achieved without writing all the way back to main memory

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4277826A (en) 1978-10-23 1981-07-07 Collins Robert W Synchronizing mechanism for page replacement control
US4583166A (en) 1982-10-08 1986-04-15 International Business Machines Corporation Roll mode for cached data storage
US4811215A (en) 1986-12-12 1989-03-07 Intergraph Corporation Instruction execution accelerator for a pipelined digital machine with virtual memory
US4926322A (en) 1987-08-03 1990-05-15 Compaq Computer Corporation Software emulation of bank-switched memory using a virtual DOS monitor and paged memory management
US5023776A (en) 1988-02-22 1991-06-11 International Business Machines Corp. Store queue for a tightly coupled multiple processor configuration with two-level cache buffer storage
US5471591A (en) 1990-06-29 1995-11-28 Digital Equipment Corporation Combined write-operand queue and read-after-write dependency scoreboard
US5325499A (en) 1990-09-28 1994-06-28 Tandon Corporation Computer system including a write protection circuit for preventing illegal write operations and a write poster with improved memory
US5202993A (en) 1991-02-27 1993-04-13 Sun Microsystems, Inc. Method and apparatus for cost-based heuristic instruction scheduling
US5652889A (en) 1991-03-07 1997-07-29 Digital Equipment Corporation Alternate execution and interpretation of computer program having code at unknown locations due to transfer instructions having computed destination addresses
US5237669A (en) 1991-07-15 1993-08-17 Quarterdeck Office Systems, Inc. Memory management method
US5274834A (en) 1991-08-30 1993-12-28 Intel Corporation Transparent system interrupts with integrated extended memory addressing
US5371872A (en) 1991-10-28 1994-12-06 International Business Machines Corporation Method and apparatus for controlling operation of a cache memory during an interrupt
US5394547A (en) 1991-12-24 1995-02-28 International Business Machines Corporation Data processing system and method having selectable scheduler
US5553305A (en) 1992-04-14 1996-09-03 International Business Machines Corporation System for synchronizing execution by a processing element of threads within a process using a state indicator
US5414848A (en) 1993-04-01 1995-05-09 Intel Corporation Method and apparatus for sharing a common routine stored in a single virtual machine with other virtual machines operating in a preemptive multi-tasking computer system
US5517651A (en) 1993-12-29 1996-05-14 Intel Corporation Method and apparatus for loading a segment register in a microprocessor capable of operating in multiple modes
US5555398A (en) 1994-04-15 1996-09-10 Intel Corporation Write back cache coherency module for systems with a write through cache supporting bus
US5651136A (en) 1995-06-06 1997-07-22 International Business Machines Corporation System and method for increasing cache efficiency through optimized data allocation
US5781792A (en) 1996-03-18 1998-07-14 Advanced Micro Devices, Inc. CPU with DSP having decoder that detects and converts instruction sequences intended to perform DSP function into DSP function identifier
US5889996A (en) * 1996-12-16 1999-03-30 Novell Inc. Accelerator for interpretive environments
US5983310A (en) * 1997-02-13 1999-11-09 Novell, Inc. Pin management of accelerator for interpretive environments
US6141732A (en) * 1998-03-24 2000-10-31 Novell, Inc. Burst-loading of instructions into processor cache by execution of linked jump instructions embedded in cache line size blocks

Non-Patent Citations (14)

* Cited by examiner, † Cited by third party
Title
"An Experience Teaching A Graduate Course in Cryptography and Abstract and Introduction," 1996 Aviel D. Rubin excerpt, SPI Database of Software Technologies, (C)1996 Software Patent Institute, 2 pages.
"An Experience Teaching a Graduate Course in Cryptography," 1996 Aviel D. Rubin excerpt, SPI Database of Software Technologies, (C)1996 Software Patent Institute, 2 pages.
"Architecture of the Series 700 Bus," Hewlett Packared Manual excerpt, SPI Database of Software Technologies, (C)1995 Software Patent Institute, 3 pages.
"Dual On-Chip Instruction Cache Organization in High Speed Processors," IBM Technical Disclosure Bulletin, Vol. 37, No. 12 pp. 213-214 (Dec. 1994).
"Generic BIOS Interrupt 13 Driver for Direct Access Storage Device," IBM Technical Disclosure Bulletin, Vol. 37, No.09, pp. 551-553 (Sep. 1994).
"Instruction Cache Block Touch Retro-Fitted onto Microprocessor," IBM Technical Disclosure Bulletin, Vol. 38, No. 07 pp. 53-56 (Jul. 1995).
"Preemptible Cache Line Prefetch Algorithm and Implementation," IBM Technical Disclosure Bulletin, Vol. 33, No. 3B, pp. 371-373 (Aug. 1990).
"An Experience Teaching A Graduate Course in Cryptography and Abstract and Introduction," 1996 Aviel D. Rubin excerpt, SPI Database of Software Technologies, ©1996 Software Patent Institute, 2 pages.
"An Experience Teaching a Graduate Course in Cryptography," 1996 Aviel D. Rubin excerpt, SPI Database of Software Technologies, ©1996 Software Patent Institute, 2 pages.
"Architecture of the Series 700 Bus," Hewlett Packared Manual excerpt, SPI Database of Software Technologies, ©1995 Software Patent Institute, 3 pages.
Dudley, Jr., "Porting C Programs to 80386 Protected Mode," Dr. Dobb's Journal, pp. 16-18, 20 (Aug. 1990).
Margulis, "80386 Protected Mode Initialization," Dr. Dobb's Journal, pp. 36-39 (Oct. 1988).
Margulis, "Advanced 80386 Memory Management," Dr. Dobb's Journal, pp. 24, 28-30 (Apr. 1989).
Schulman, "Subatomic Programming," Dr. Dobb's Journal, pp. 137-139 (Mar. 1991).

Cited By (84)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7529907B2 (en) 1998-12-16 2009-05-05 Mips Technologies, Inc. Method and apparatus for improved computer load and store operations
US6789100B2 (en) * 1998-12-16 2004-09-07 Mips Technologies, Inc. Interstream control and communications for multi-streaming digital processors
US20110154347A1 (en) * 1998-12-16 2011-06-23 Nemirovsky Mario D Interrupt and Exception Handling for Multi-Streaming Digital Processors
US7926062B2 (en) 1998-12-16 2011-04-12 Mips Technologies, Inc. Interrupt and exception handling for multi-streaming digital processors
US7900207B2 (en) 1998-12-16 2011-03-01 Mips Technologies, Inc. Interrupt and exception handling for multi-streaming digital processors
US7765546B2 (en) 1998-12-16 2010-07-27 Mips Technologies, Inc. Interstream control and communications for multi-streaming digital processors
US7707391B2 (en) 1998-12-16 2010-04-27 Mips Technologies, Inc. Methods and apparatus for improving fetching and dispatch of instructions in multithreaded processors
US7020879B1 (en) 1998-12-16 2006-03-28 Mips Technologies, Inc. Interrupt and exception handling for multi-streaming digital processors
US20050081214A1 (en) * 1998-12-16 2005-04-14 Nemirovsky Mario D. Interstream control and communications for multi-streaming digital processors
US8468540B2 (en) 1998-12-16 2013-06-18 Bridge Crossing, Llc Interrupt and exception handling for multi-streaming digital processors
US7650605B2 (en) 1998-12-16 2010-01-19 Mips Technologies, Inc. Method and apparatus for implementing atomicity of memory operations in dynamic multi-streaming processors
US20090241119A1 (en) * 1998-12-16 2009-09-24 Nemirovsky Mario D Interrupt and Exception Handling for Multi-Streaming Digital Processors
US7467385B2 (en) 1998-12-16 2008-12-16 Mips Technologies, Inc. Interrupt and exception handling for multi-streaming digital processors
US20070294702A1 (en) * 1998-12-16 2007-12-20 Mips Technologies, Inc. Method and apparatus for implementing atomicity of memory operations in dynamic multi-streaming processors
US7257814B1 (en) 1998-12-16 2007-08-14 Mips Technologies, Inc. Method and apparatus for implementing atomicity of memory operations in dynamic multi-streaming processors
US7237093B1 (en) 1998-12-16 2007-06-26 Mips Technologies, Inc. Instruction fetching system in a multithreaded processor utilizing cache miss predictions to fetch instructions from multiple hardware streams
US20070061619A1 (en) * 1998-12-16 2007-03-15 Nemirovsky Mario D Interrupt and exception handling for multi-streaming digital processors
US20090125660A1 (en) * 1998-12-16 2009-05-14 Mips Technologies, Inc. Interrupt and Exception Handling for Multi-Streaming Digital Processors
US7035997B1 (en) 1998-12-16 2006-04-25 Mips Technologies, Inc. Methods and apparatus for improving fetching and dispatch of instructions in multithreaded processors
US6564301B1 (en) * 1999-07-06 2003-05-13 Arm Limited Management of caches in a data processing apparatus
US7475151B2 (en) 2000-12-22 2009-01-06 Oracle International Corporation Policies for modifying group membership
US7711818B2 (en) 2000-12-22 2010-05-04 Oracle International Corporation Support for multiple data stores
US9235649B2 (en) 2000-12-22 2016-01-12 Oracle International Corporation Domain based workflows
US20020138577A1 (en) * 2000-12-22 2002-09-26 Teng Joan C. Domain based workflows
US8015600B2 (en) 2000-12-22 2011-09-06 Oracle International Corporation Employing electronic certificate workflows
US20020138543A1 (en) * 2000-12-22 2002-09-26 Teng Joan C. Workflows with associated processes
US20060195575A1 (en) * 2000-12-22 2006-08-31 Oracle International Corporation Determining a user's groups
US7937655B2 (en) 2000-12-22 2011-05-03 Oracle International Corporation Workflows with associated processes
US7213249B2 (en) * 2000-12-22 2007-05-01 Oracle International Corporation Blocking cache flush requests until completing current pending requests in a local server and remote server
US20020138763A1 (en) * 2000-12-22 2002-09-26 Delany Shawn P. Runtime modification of entries in an identity system
US20110055673A1 (en) * 2000-12-22 2011-03-03 Oracle International Corporation Domain based workflows
US20020143943A1 (en) * 2000-12-22 2002-10-03 Chi-Cheng Lee Support for multiple data stores
US7802174B2 (en) 2000-12-22 2010-09-21 Oracle International Corporation Domain based workflows
US20020143865A1 (en) * 2000-12-22 2002-10-03 Tung Loo Elise Y. Servicing functions that require communication between multiple servers
US20020147813A1 (en) * 2000-12-22 2002-10-10 Teng Joan C. Proxy system
US7673047B2 (en) 2000-12-22 2010-03-02 Oracle International Corporation Determining a user's groups
US7349912B2 (en) 2000-12-22 2008-03-25 Oracle International Corporation Runtime modification of entries in an identity system
US7363339B2 (en) 2000-12-22 2008-04-22 Oracle International Corporation Determining group membership
US7380008B2 (en) 2000-12-22 2008-05-27 Oracle International Corporation Proxy system
US7415607B2 (en) 2000-12-22 2008-08-19 Oracle International Corporation Obtaining and maintaining real time certificate status
US20020152254A1 (en) * 2000-12-22 2002-10-17 Teng Joan C. Template based workflow definition
US20020156879A1 (en) * 2000-12-22 2002-10-24 Delany Shawn P. Policies for modifying group membership
US20020129135A1 (en) * 2000-12-22 2002-09-12 Delany Shawn P. Determining group membership
US20020174238A1 (en) * 2000-12-22 2002-11-21 Sinn Richard P. Employing electronic certificate workflows
US20020166049A1 (en) * 2000-12-22 2002-11-07 Sinn Richard P. Obtaining and maintaining real time certificate status
US7581011B2 (en) 2000-12-22 2009-08-25 Oracle International Corporation Template based workflow definition
US20030217101A1 (en) * 2002-05-15 2003-11-20 Sinn Richard P. Provisioning bridge server
US7475136B2 (en) 2002-05-15 2009-01-06 Oracle International Corporation Method and apparatus for provisioning tasks using a provisioning bridge server
US20030217127A1 (en) * 2002-05-15 2003-11-20 Richard P. Sinn Employing job code attributes in provisioning
US7216163B2 (en) 2002-05-15 2007-05-08 Oracle International Corporation Method and apparatus for provisioning tasks using a provisioning bridge server
US20070245349A1 (en) * 2002-05-15 2007-10-18 Oracle International Corporation Method and apparatus for provisioning tasks using a provisioning bridge server
US7840658B2 (en) 2002-05-15 2010-11-23 Oracle International Corporation Employing job code attributes in provisioning
US7039911B2 (en) 2002-05-17 2006-05-02 Naturalbridge, Inc. Hybrid threads for multiplexing virtual machine
WO2003100552A3 (en) * 2002-05-17 2004-01-22 Naturalbridge Inc Hybrid threads for multiplexing virtual machine
US20030217087A1 (en) * 2002-05-17 2003-11-20 Chase David R. Hybrid threads for multiplexing virtual machine
WO2003100552A2 (en) * 2002-05-17 2003-12-04 Naturalbridge, Inc. Hybrid threads for multiplexing virtual machine
US20050080766A1 (en) * 2003-10-09 2005-04-14 Ghatare Sanjay P. Partitioning data access requests
US7904487B2 (en) 2003-10-09 2011-03-08 Oracle International Corporation Translating data access requests
US20050080792A1 (en) * 2003-10-09 2005-04-14 Ghatare Sanjay P. Support for RDBMS in LDAP system
US7340447B2 (en) 2003-10-09 2008-03-04 Oracle International Corporation Partitioning data access requests
US20050080791A1 (en) * 2003-10-09 2005-04-14 Ghatare Sanjay P. Translating data access requests
US7882132B2 (en) 2003-10-09 2011-02-01 Oracle International Corporation Support for RDBMS in LDAP system
US20050188156A1 (en) * 2004-02-20 2005-08-25 Anoop Mukker Method and apparatus for dedicating cache entries to certain streams for performance optimization
US7797492B2 (en) * 2004-02-20 2010-09-14 Anoop Mukker Method and apparatus for dedicating cache entries to certain streams for performance optimization
US20070299990A1 (en) * 2006-06-27 2007-12-27 Shmuel Ben-Yehuda A Method and System for Memory Address Translation and Pinning
US7636800B2 (en) * 2006-06-27 2009-12-22 International Business Machines Corporation Method and system for memory address translation and pinning
AU2016222466B2 (en) * 2012-10-18 2018-02-22 Vmware, Inc. System and method for exclusive read caching in a virtualized computing environment
US9361237B2 (en) 2012-10-18 2016-06-07 Vmware, Inc. System and method for exclusive read caching in a virtualized computing environment
EP3182291A1 (en) * 2012-10-18 2017-06-21 VMware, Inc. System and method for exclusive read caching in a virtualized computing environment
WO2014062616A1 (en) * 2012-10-18 2014-04-24 Vmware, Inc. System and method for exclusive read caching in a virtualized computing environment
US10761751B2 (en) 2017-11-14 2020-09-01 International Business Machines Corporation Configuration state registers grouped based on functional affinity
US11099782B2 (en) 2017-11-14 2021-08-24 International Business Machines Corporation Portions of configuration state registers in-memory
US10642757B2 (en) * 2017-11-14 2020-05-05 International Business Machines Corporation Single call to perform pin and unpin operations
US10664181B2 (en) 2017-11-14 2020-05-26 International Business Machines Corporation Protecting in-memory configuration state registers
US10698686B2 (en) 2017-11-14 2020-06-30 International Business Machines Corporation Configurable architectural placement control
US10592164B2 (en) 2017-11-14 2020-03-17 International Business Machines Corporation Portions of configuration state registers in-memory
US10761983B2 (en) 2017-11-14 2020-09-01 International Business Machines Corporation Memory based configuration state registers
US11579806B2 (en) 2017-11-14 2023-02-14 International Business Machines Corporation Portions of configuration state registers in-memory
US10976931B2 (en) 2017-11-14 2021-04-13 International Business Machines Corporation Automatic pinning of units of memory
US11093145B2 (en) 2017-11-14 2021-08-17 International Business Machines Corporation Protecting in-memory configuration state registers
US10635602B2 (en) 2017-11-14 2020-04-28 International Business Machines Corporation Address translation prior to receiving a storage reference using the address to be translated
US11106490B2 (en) 2017-11-14 2021-08-31 International Business Machines Corporation Context switch by changing memory pointers
US11287981B2 (en) 2017-11-14 2022-03-29 International Business Machines Corporation Automatic pinning of units of memory
CN112148387A (en) * 2020-10-14 2020-12-29 中国平安人寿保险股份有限公司 Method and device for preloading feedback information, computer equipment and storage medium

Also Published As

Publication number Publication date
US6408384B1 (en) 2002-06-18

Similar Documents

Publication Publication Date Title
US6356996B1 (en) Cache fencing for interpretive environments
US6470424B1 (en) Pin management of accelerator for interpretive environments
US5889996A (en) Accelerator for interpretive environments
US5953741A (en) Stack cache for stack-based processor and method thereof
US6141732A (en) Burst-loading of instructions into processor cache by execution of linked jump instructions embedded in cache line size blocks
US6085307A (en) Multiple native instruction set master/slave processor arrangement and method thereof
US5923892A (en) Host processor and coprocessor arrangement for processing platform-independent code
JP4171496B2 (en) Instruction folding processing for arithmetic machines using stacks
EP0976050B1 (en) Processor with array access bounds checking
EP1353267B1 (en) Microprocessor with repeat prefetch instruction
US5925123A (en) Processor for executing instruction sets received from a network or from a local memory
US6950923B2 (en) Method frame storage using multiple memory circuits
US5819063A (en) Method and data processing system for emulating a program
US6253306B1 (en) Prefetch instruction mechanism for processor
JP3820261B2 (en) Data processing system external and internal instruction sets
US6456891B1 (en) System and method for transparent handling of extended register states
US6230259B1 (en) Transparent extended state save
EP0752645B1 (en) Tunable software control of Harvard architecture cache memories using prefetch instructions
US5970242A (en) Replicating code to eliminate a level of indirection during execution of an object oriented computer program
US5727227A (en) Interrupt coprocessor configured to process interrupts in a computer system
US20060026322A1 (en) Interrupt management in dual core processors
JP2007172609A (en) Efficient and flexible memory copy operation
JP2007172610A (en) Validity of address range used in semi-synchronous memory copy operation
GB2297638A (en) Transferring data between memory and processor via vector buffers
US6065108A (en) Non-quick instruction accelerator including instruction identifier and data set storage and method of implementing same

Legal Events

Date Code Title Description
AS Assignment

Owner name: NOVELL, INC., UTAH

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ADAMS, PHILLIP M.;REEL/FRAME:009332/0130

Effective date: 19980716

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: CPTN HOLDINGS LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOVELL, INC.;REEL/FRAME:027147/0151

Effective date: 20110427

Owner name: ORACLE INTERNATIONAL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CPTN HOLDINGS LLC;REEL/FRAME:027147/0396

Effective date: 20110909

AS Assignment

Owner name: ORACLE INTERNATIONAL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CPTN HOLDINGS LLC;REEL/FRAME:027769/0111

Effective date: 20110909

Owner name: CPTN HOLDINGS LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOVELL, INC.;REEL/FRAME:027769/0057

Effective date: 20110427

FPAY Fee payment

Year of fee payment: 12