US20050015430A1 - OS agnostic resource sharing across multiple computing platforms - Google Patents

OS agnostic resource sharing across multiple computing platforms

Info

Publication number
US20050015430A1
US20050015430A1 (application US10/606,636)
Authority
US
United States
Prior art keywords
resource
oob
blade
server
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/606,636
Inventor
Michael Rothman
Vincent Zimmer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US10/606,636 priority Critical patent/US20050015430A1/en
Assigned to INTEL CORPORATION reassignment INTEL CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ROTHMAN, MICHAEL A., ZIMMER, VINCENT J.
Priority to US10/808,656 priority patent/US7730205B2/en
Priority to PCT/US2004/018253 priority patent/WO2005006186A2/en
Priority to CN2004800180348A priority patent/CN101142553B/en
Priority to EP04754766.6A priority patent/EP1636696B1/en
Priority to JP2006509095A priority patent/JP4242420B2/en
Publication of US20050015430A1 publication Critical patent/US20050015430A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/4401Bootstrapping
    • G06F9/4405Initialisation of multiprocessor systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals

Definitions

  • the field of invention relates generally to clustered computing environments, such as blade server computing environments, and, more specifically but not exclusively relates to techniques for sharing resources hosted by individual platforms (nodes) to create global resources that may be shared across all nodes.
  • a company's IT (information technology) infrastructure is centered around computer servers that are linked together via various types of networks, such as private local area networks (LANs) and private and public wide area networks (WANs).
  • the servers are used to deploy various applications and to manage data storage and transactional processes.
  • these servers will include stand-alone servers and/or higher density rack-mounted servers, such as 4U, 2U and 1U servers.
  • a blade server employs a plurality of closely-spaced “server blades” (blades) disposed in a common chassis to deliver high-density computing functionality.
  • Each blade provides a complete computing platform, including one or more processors, memory, network connection, and disk storage integrated on a single system board.
  • other components such as power supplies and fans, are shared among the blades in a given chassis and/or rack. This provides a significant reduction in capital equipment costs when compared to conventional rack-mounted servers.
  • a scalable compute cluster is a group of two or more computer systems, also known as compute nodes, configured to work together to perform computational-intensive tasks.
  • the task can be completed much more quickly than if a single system performed the tasks.
  • the more nodes that are applied to a task, the quicker the task can be completed.
  • the number of nodes that can effectively be used to complete the task is dependent on the application used.
  • a typical SCC is built using Intel®-based servers running the Linux operating system and cluster infrastructure software. These servers are often referred to as commodity off the shelf (COTS) servers. They are connected through a network to form the cluster.
  • An SCC normally needs anywhere from tens to hundreds of servers to be effective at performing computational-intensive tasks. Fulfilling this need to group a large number of servers in one location to form a cluster is a perfect fit for a blade server.
  • the blade server chassis design and architecture provides the ability to place a massive amount of computer horsepower in a single location.
  • the built-in networking and switching capabilities of the blade server architecture enable individual blades to be added or removed, enabling optimal scaling for a given task. With such flexibility, blade server-based SCCs provide a cost-effective alternative to other infrastructure for performing computational tasks, such as supercomputers.
  • each blade in a blade server is enabled to provide full platform functionality, thus being able to operate independent of other blades in the server.
  • the resources available to each blade are likewise limited to its own resources. Thus, in many instances resources are inefficiently utilized. Under current architectures, there is no scheme that enables efficient server-wide resource sharing.
  • FIG. 1 a is a frontal isometric view of an exemplary blade server chassis in which a plurality of server blades are installed;
  • FIG. 1 b is a rear isometric view of the blade server chassis of FIG. 1 a;
  • FIG. 1 c is an isometric frontal view of an exemplary blade server rack in which a plurality of rack-mounted blade server chassis corresponding to FIGS. 1 a and 1 b are installed;
  • FIG. 2 shows details of the components of a typical server blade
  • FIG. 3 is a schematic block diagram illustrating various firmware and operating system components used to deploy power management in accordance with the ACPI standard
  • FIG. 4 is a flowchart illustrating operations and logic employed during blade initialization to configure a blade for implementing a power management scheme in accordance with one embodiment of the invention
  • FIG. 5 is a flowchart illustrating operations and logic employed during an initialization process to set up resource sharing in accordance with one embodiment of the invention
  • FIG. 6 is a schematic diagram illustrating various data flows that occur during the initialization process of FIG. 5 ;
  • FIG. 7 is a flowchart illustrating operations and logic employed in response to a resource access request received at a requesting computing platform to service the request in accordance with one embodiment of the invention, wherein the servicing resource is hosted by another computing platform;
  • FIGS. 8 a and 8 b are schematic diagrams illustrating data flows between a pair of computing platforms during a shared resource access, wherein the scheme illustrated in FIG. 8 a employs local global resource maps, and the scheme illustrated in FIG. 8 b employs a single global resource map hosted by a global resource manager;
  • FIG. 9 a is a schematic diagram illustrating a shared storage resource configured as a virtual storage volume that aggregates the storage capacity of a plurality of disk drives;
  • FIG. 9 b is a schematic diagram illustrating a variance of the shared storage resource scheme of FIG. 9 a, wherein a RAID- 1 implementation is employed during resource accesses;
  • FIG. 10 a is a schematic diagram illustrating further details of the virtual volume storage scheme of FIG. 9 a;
  • FIG. 10 b is a schematic diagram illustrating further details of the RAID- 1 implementation of FIG. 9 b;
  • FIG. 11 is a schematic diagram illustrating a shared keyboard, video, and mouse (KVM) access scheme in accordance with one embodiment of the invention
  • FIG. 12 is a schematic diagram illustrating data flows between a pair of computing platforms to support sharing a video resource.
  • FIG. 13 is a schematic diagram illustrating data flows between a pair of computing platforms to support sharing user input resources
  • Embodiments of methods and computer components and systems for performing resource sharing across clustered platform environments are described herein.
  • numerous specific details are set forth to provide a thorough understanding of embodiments of the invention.
  • One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc.
  • well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
  • techniques are disclosed herein for sharing resources across clustered platform environments in a manner under which resources hosted by individual platforms are made accessible to other platform nodes
  • the techniques employ firmware-based functionality that provides "behind the scenes" access mechanisms without requiring any OS complicity.
  • the resource sharing and access operations are completely transparent to operating systems running on the blades, and thus operating system independent.
  • the capabilities afforded by the novel techniques disclosed herein may be employed in existing and future distributed platform environments without requiring any changes to the operating systems targeted for the environments.
  • the resource-sharing mechanism is effectuated by several platforms that “expose” resources that are aggregated to form global resources.
  • Each platform employs a respective set of firmware that runs prior to the operating system load (pre-boot) and coincident with the operating system runtime.
  • runtime deployment is facilitated by a hidden execution mode known as the System Management Mode (SMM), which has the ability to receive and respond to periodic System Management Interrupts (SMI) to allow resource sharing and access information to be transparently passed to firmware SMM code configured to effectuate the mechanisms.
  • SMM resource management code conveys information and messaging to other nodes via an out-of-band (OOB) network or communication channel in an OS-transparent manner.
  • For illustrative purposes, several embodiments of the invention are disclosed below in the context of a blade server environment.
  • Typical blade server components and systems for which resource sharing schemes in accordance with embodiments of the invention may be generally implemented are shown in FIGS. 1 a - c and 2 .
  • a rack-mounted chassis 100 is employed to provide power and communication functions for a plurality of blades 102 , each of which occupies a corresponding slot. (It is noted that all slots in a chassis do not need to be occupied.)
  • one or more chassis 100 may be installed in a blade server rack 103 shown in FIG. 1 c.
  • Each blade is coupled to an interface plane 104 (i.e., a backplane or mid-plane) upon installation via one or more mating connectors.
  • the interface plane will include a plurality of respective mating connectors that provide power and communication signals to the blades.
  • many interface planes provide "hot-swapping" functionality—that is, blades can be added or removed ("hot-swapped") on the fly without taking the entire chassis down; this is facilitated through appropriate power and data signal buffering.
  • A typical mid-plane interface plane configuration is shown in FIGS. 1 a and 1 b.
  • the backside of interface plane 104 is coupled to one or more power supplies 106 .
  • the power supplies are redundant and hot-swappable, being coupled to appropriate power planes and conditioning circuitry to enable continued operation in the event of a power supply failure.
  • an array of power supplies may be used to supply power to an entire rack of blades, wherein there is not a one-to-one power supply-to-chassis correspondence.
  • a plurality of cooling fans 108 are employed to draw air through the chassis to cool the server blades.
  • a network connect card may include a physical interface comprising a plurality of network port connections (e.g., RJ-45 ports), or may comprise a high-density connector designed to directly connect to a network device, such as a network switch, hub, or router.
  • Blade servers usually provide some type of management interface for managing operations of the individual blades. This may generally be facilitated by an out-of-band network or communication channel or channels. For example, one or more buses for facilitating a "private" or "management" network and appropriate switching may be built into the interface plane, or a private network may be implemented through closely-coupled network cabling and network switching.
  • the switching and other management functionality may be provided by a management card 112 that is coupled to the backside or frontside of the interface plane.
  • a management server may be employed to manage blade activities, wherein communications are handled via standard computer networking infrastructure, such as Ethernet.
  • each blade comprises a separate computing platform that is configured to perform server-type functions, i.e., is a “server on a card.”
  • each blade includes components common to conventional servers, including a main circuit board 201 providing internal wiring (i.e., buses) for coupling appropriate integrated circuits (ICs) and other components mounted to the board.
  • These components include one or more processors 202 coupled to system memory 204 (e.g., DDR RAM), cache memory 206 (e.g., SDRAM), and a firmware storage device 208 (e.g., flash memory).
  • a “public” NIC (network interface) chip 210 is provided for supporting conventional network communication functions, such as to support communication between blades and external network infrastructure.
  • Other illustrated components include status LEDs 212 , an RJ-45 console port 214 , and an interface plane connector 216 .
  • Additional components include various passive components (e.g., resistors, capacitors), power conditioning components, and peripheral device connectors.
  • each blade 200 will also provide on-board storage. This is typically facilitated via one or more built-in disk controllers and corresponding connectors to which one or more disk drives 218 are coupled.
  • typical disk controllers include Ultra ATA controllers, SCSI controllers, and the like.
  • the disk drives may be housed separate from the blades in the same or a separate rack, such as might be the case when a network-attached storage (NAS) appliance is employed to store large volumes of data.
  • an out-of-band communication channel comprises a communication means that supports communication between devices in an OS-transparent manner—that is, a means to enable inter-blade communication without requiring operating system complicity.
  • examples include a dedicated bus, such as a system management bus that implements the SMBUS standard (www.smbus.org), a dedicated private or management network, such as an Ethernet-based network using VLAN (802.1Q), or a serial communication scheme, e.g., employing the RS-485 serial communication standard.
  • interface plane 104 will include corresponding buses or built-in network traces to support the selected OOB scheme.
  • appropriate network cabling and networking devices may be deployed inside or external to chassis 100 .
  • embodiments of the invention employ a firmware-based scheme for effectuating a resource sharing set-up and access mechanism to enable sharing of resources across blade server nodes.
  • resource management firmware code is loaded during initialization of each blade and made available for access during OS run-time.
  • resource information is collected, and global resource information is built. Based on the global resource information, appropriate global resource access is provided back to each blade. This information is handed off to the operating system upon its initialization, such that the global resource appears (from the OS standpoint) as a local resource.
  • during OS runtime operations, accesses to the shared resources are handled via interaction between the OS and/or OS drivers and corresponding firmware, in conjunction with resource access management that is facilitated via the OOB channel.
  • resource sharing is facilitated via an extensible firmware framework known as Extensible Firmware Interface (EFI) (specifications and examples of which may be found at http://developer.intel.com/technology/efi).
  • the EFI framework includes provisions for extending BIOS functionality beyond that provided by the BIOS code stored in a platform's BIOS device (e.g., flash memory).
  • EFI enables firmware, in the form of firmware modules and drivers, to be loaded from a variety of different resources, including primary and secondary flash devices, option ROMs, various persistent storage devices (e.g., hard disks, CD ROMs, etc.), and even over computer networks.
  • FIG. 3 shows an event sequence/architecture diagram used to illustrate operations performed by a platform under the framework in response to a cold boot (e.g., a power off/on reset).
  • the process is logically divided into several phases, including a pre-EFI Initialization Environment (PEI) phase, a Driver Execution Environment (DXE) phase, a Boot Device Selection (BDS) phase, a Transient System Load (TSL) phase, and an operating system runtime (RT) phase.
  • the PEI phase provides a standardized method of loading and invoking specific initial configuration routines for the processor (CPU), chipset, and motherboard.
  • the PEI phase is responsible for initializing enough of the system to provide a stable base for the follow on phases.
  • Initialization of the platform's core components, including the CPU, chipset and main board (i.e., motherboard) is performed during the PEI phase.
  • This phase is also referred to as the “early initialization” phase.
  • Typical operations performed during this phase include the POST (power-on self test) operations, and discovery of platform resources.
  • the PEI phase discovers memory and prepares a resource map that is handed off to the DXE phase.
  • the state of the system at the end of the PEI phase is passed to the DXE phase through a list of position independent data structures called Hand Off Blocks (HOBs).
  • the DXE phase is the phase during which most of the system initialization is performed.
  • the DXE phase is facilitated by several components, including the DXE core 300 , the DXE dispatcher 302 , and a set of DXE drivers 304 .
  • the DXE core 300 produces a set of Boot Services 306 , Runtime Services 308 , and DXE Services 310 .
  • the DXE dispatcher 302 is responsible for discovering and executing DXE drivers 304 in the correct order.
  • the DXE drivers 304 are responsible for initializing the processor, chipset, and platform components as well as providing software abstractions for console and boot devices. These components work together to initialize the platform and provide the services required to boot an operating system.
  • the DXE and the Boot Device Selection phases work together to establish consoles and attempt the booting of operating systems.
  • the DXE phase is terminated when an operating system successfully begins its boot process (i.e., the BDS phase starts). Only the runtime services and selected DXE services provided by the DXE core and selected services provided by runtime DXE drivers are allowed to persist into the OS runtime environment.
  • the result of DXE is the presentation of a fully formed EFI interface.
  • the DXE core is designed to be completely portable with no CPU, chipset, or platform dependencies. This is accomplished by designing in several features. First, the DXE core only depends upon the HOB list for its initial state. This means that the DXE core does not depend on any services from a previous phase, so all the prior phases can be unloaded once the HOB list is passed to the DXE core. Second, the DXE core does not contain any hard coded addresses. This further means the DXE core can be loaded anywhere in physical memory, and it can function correctly no matter where physical memory or where Firmware segments are located in the processor's physical address space. Third, the DXE core does not contain any CPU-specific, chipset specific, or platform specific information. Instead, the DXE core is abstracted from the system hardware through a set of architectural protocol interfaces. These architectural protocol interfaces are produced by DXE drivers 304 , which are invoked by DXE Dispatcher 302 .
  • the DXE core produces an EFI System Table 400 and its associated set of Boot Services 306 and Runtime Services 308 , as shown in FIG. 4 .
  • the DXE core also maintains a handle database 402 .
  • the handle database comprises a list of one or more handles, wherein a handle is a list of one or more unique protocol GUIDs (Globally Unique Identifiers) that map to respective protocols 404 .
  • a protocol is a software abstraction for a set of services. Some protocols abstract I/O devices, and other protocols abstract a common set of system services.
  • a protocol typically contains a set of APIs and some number of data fields. Every protocol is named by a GUID, and the DXE Core produces services that allow protocols to be registered in the handle database. As the DXE Dispatcher executes DXE drivers, additional protocols will be added to the handle database including the architectural protocols used to abstract the DXE Core from platform specific details.
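  • A minimal C sketch of the handle-database idea, in which a handle's protocols are keyed by GUID; the struct layout, function names, and the example GUID string are illustrative assumptions, not the actual EFI definitions:

```c
/* Illustrative model of a handle carrying GUID-named protocol interfaces. */
#include <stdio.h>
#include <string.h>

#define MAX_PROTOCOLS 4

typedef struct {
    const char *guid;       /* protocol name (GUID), abbreviated to a string here */
    void       *interface;  /* pointer to the protocol's API/data */
} Protocol;

typedef struct {
    Protocol protocols[MAX_PROTOCOLS];
    int      count;
} Handle;

/* Register a protocol on a handle, in the spirit of the Protocol Handler Services. */
static int install_protocol(Handle *h, const char *guid, void *interface)
{
    if (h->count >= MAX_PROTOCOLS)
        return -1;
    h->protocols[h->count++] = (Protocol){ guid, interface };
    return 0;
}

/* Look up a protocol interface on a handle by its GUID. */
static void *locate_protocol(const Handle *h, const char *guid)
{
    for (int i = 0; i < h->count; i++)
        if (strcmp(h->protocols[i].guid, guid) == 0)
            return h->protocols[i].interface;
    return NULL;
}

int main(void)
{
    Handle disk = { 0 };
    static char block_io_api[] = "block I/O protocol interface";

    install_protocol(&disk, "964e5b21-example-guid", block_io_api);
    printf("%s\n", (char *)locate_protocol(&disk, "964e5b21-example-guid"));
    return 0;
}
```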
  • the Boot Services comprise a set of services that are used during the DXE and BDS phases. Among others, these services include Memory Services, Protocol Handler Services, and Driver Support Services. Memory Services provide services to allocate and free memory pages and to allocate and free the memory pool on byte boundaries; they also provide a service to retrieve a map of all the current physical memory usage in the platform. Protocol Handler Services provide services to add and remove handles from the handle database, and to add and remove protocols from the handles in the handle database; additional services allow any component to look up handles in the handle database and to open and close protocols in the handle database. Driver Support Services provide services to connect and disconnect drivers to devices in the platform; these services are used by the BDS phase to either connect all drivers to all devices, or to connect only the minimum number of drivers to devices required to establish the consoles and boot an operating system (i.e., for supporting a fast boot mechanism).
  • Runtime Services are available both during pre-boot and OS runtime operations.
  • One of the Runtime Services that is leveraged by embodiments disclosed herein is the Variable Services.
  • the Variable Services provide services to lookup, add, and remove environmental variables from both volatile and non-volatile storage.
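  • A conceptual C sketch of the lookup/add behavior the Variable Services provide, modeled as a small in-memory store; the real services also take a vendor GUID and attribute flags (e.g., non-volatile), which are omitted here:

```c
/* Tiny variable store modeling "set" (add/update) and "get" (lookup). */
#include <stdio.h>
#include <string.h>

#define MAX_VARS 8
typedef struct { char name[32]; char data[64]; int used; } Variable;
static Variable store[MAX_VARS];   /* zero-initialized */

static int set_variable(const char *name, const char *data)
{
    for (int i = 0; i < MAX_VARS; i++)          /* update an existing variable */
        if (store[i].used && strcmp(store[i].name, name) == 0) {
            strncpy(store[i].data, data, sizeof store[i].data - 1);
            return 0;
        }
    for (int i = 0; i < MAX_VARS; i++)          /* otherwise add a new one */
        if (!store[i].used) {
            store[i].used = 1;
            strncpy(store[i].name, name, sizeof store[i].name - 1);
            strncpy(store[i].data, data, sizeof store[i].data - 1);
            return 0;
        }
    return -1;                                  /* store full */
}

static const char *get_variable(const char *name)
{
    for (int i = 0; i < MAX_VARS; i++)
        if (store[i].used && strcmp(store[i].name, name) == 0)
            return store[i].data;
    return NULL;
}

int main(void)
{
    set_variable("GlobalResourceMap", "blade1:blocks0-9;blade2:blocks10-19");
    printf("%s\n", get_variable("GlobalResourceMap"));
    return 0;
}
```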
  • the DXE Services Table includes data corresponding to a first set of DXE services 406 A that are available during pre-boot only, and a second set of DXE services 406 B that are available during both pre-boot and OS runtime.
  • the pre-boot only services include Global Coherency Domain Services, which provide services to manage I/O resources, memory mapped I/O resources, and system memory resources in the platform. Also included are DXE Dispatcher Services, which provide services to manage DXE drivers that are being dispatched by the DXE dispatcher.
  • the services offered by each of Boot Services 306 , Runtime Services 308 , and DXE services 310 are accessed via respective sets of API's 312 , 314 , and 316 .
  • the API's provide an abstracted interface that enables subsequently loaded components to leverage selected services provided by the DXE Core.
  • After DXE Core 300 is initialized, control is handed to DXE Dispatcher 302 .
  • the DXE Dispatcher is responsible for loading and invoking DXE drivers found in firmware volumes, which correspond to the logical storage units from which firmware is loaded under the EFI framework.
  • the DXE dispatcher searches for drivers in the firmware volumes described by the HOB List. As execution continues, other firmware volumes might be located. When they are, the dispatcher searches them for drivers as well.
  • There are two subclasses of DXE drivers.
  • the first subclass includes DXE drivers that execute very early in the DXE phase. The execution order of these DXE drivers depends on the presence and contents of an a priori file and the evaluation of dependency expressions.
  • These early DXE drivers will typically contain processor, chipset, and platform initialization code. These early drivers will also typically produce the architectural protocols that are required for the DXE core to produce its full complement of Boot Services and Runtime Services.
  • the second class of DXE drivers are those that comply with the EFI 1.10 Driver Model. These drivers do not perform any hardware initialization when they are executed by the DXE dispatcher. Instead, they register a Driver Binding Protocol interface in the handle database. The set of Driver Binding Protocols are used by the BDS phase to connect the drivers to the devices required to establish consoles and provide access to boot devices.
  • the DXE Drivers that comply with the EFI 1.10 Driver Model ultimately provide software abstractions for console devices and boot devices when they are explicitly asked to do so.
  • Any DXE driver may consume the Boot Services and Runtime Services to perform their functions.
  • the early DXE drivers need to be aware that not all of these services may be available when they execute because all of the architectural protocols might not have been registered yet.
  • DXE drivers must use dependency expressions to guarantee that the services and protocol interfaces they require are available before they are executed.
  • the DXE drivers that comply with the EFI 1.10 Driver Model do not need to be concerned with this possibility. These drivers simply register the Driver Binding Protocol in the handle database when they are executed. This operation can be performed without the use of any architectural protocols.
  • a DXE driver may "publish" an API by using the InstallConfigurationTable function. These published APIs are depicted as API's 318 . Under EFI, publication of an API exposes the API for access by other firmware components. The API's provide interfaces for the Device, Bus, or Service to which the DXE driver corresponds during their respective lifetimes.
  • the BDS architectural protocol executes during the BDS phase.
  • the BDS architectural protocol locates and loads various applications that execute in the pre-boot services environment.
  • Such applications might represent a traditional OS boot loader, or extended services that might run instead of, or prior to loading the final OS.
  • extended pre-boot services might include setup configuration, extended diagnostics, flash update support, OEM value-adds, or the OS boot code.
  • a Boot Dispatcher 320 is used during the BDS phase to enable selection of a Boot target, e.g., an OS to be booted by the system.
  • a final OS Boot loader 322 is run to load the selected OS. Once the OS has been loaded, there is no further need for the Boot Services 306 , and for many of the services provided in connection with DXE drivers 304 via API's 318 , as well as DXE Services 306 A. Accordingly, these reduced sets of API's that may be accessed during OS runtime are depicted as API's 316 A and 318 A in FIG. 3 .
  • an OS-transparent out-of-band communication scheme is employed to allow various types of resources to be shared across server nodes.
  • firmware-based components (e.g., firmware drivers and API's) on each platform are employed to effectuate the scheme.
  • the scheme may be effectuated across multiple computing platforms, including groups of blades, individual chassis, racks, or groups of racks.
  • firmware provided on each platform is loaded and executed to set up the OOB channel and appropriate resource access and data re-routing mechanisms.
  • Each blade then transmits information about its shared resources over the OOB channel to a global resource manager.
  • the global resource manager aggregates the data and configures a “virtual” global resource.
  • Global resource configuration data in the form of global resource descriptors is then sent back to the blades to apprise the blades of the configuration and access mechanism for the global resource.
  • Drivers are then configured to support access to the global resource.
  • the global resource descriptors are handed off to the operating system during OS load, wherein the virtual global resource appears as a local device to the operating system, and thus is employed as such during OS runtime operations without requiring any modification to the OS code.
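  • A hedged C sketch of how a global resource manager might aggregate the storage exposed by each blade into a single virtual-volume descriptor; the structure names and fields are illustrative, not taken from the patent's figures:

```c
/* Aggregate per-blade shared storage into one descriptor the OS sees as local. */
#include <stdio.h>

typedef struct {
    int blade_id;
    int num_blocks;      /* blocks exposed by this blade's disk */
} SharedResource;

typedef struct {
    int total_blocks;    /* capacity reported to each OS as one volume */
    int num_members;
} GlobalResourceDescriptor;

static GlobalResourceDescriptor build_descriptor(const SharedResource *r, int n)
{
    GlobalResourceDescriptor d = { 0, n };
    for (int i = 0; i < n; i++)
        d.total_blocks += r[i].num_blocks;
    return d;
}

int main(void)
{
    SharedResource exposed[] = { {1, 10}, {2, 10}, {3, 10} };
    GlobalResourceDescriptor d = build_descriptor(exposed, 3);
    printf("virtual volume: %d blocks across %d blades\n",
           d.total_blocks, d.num_members);
    return 0;
}
```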
  • Flowchart operations and logic according to one embodiment of the process are shown in FIGS. 5 and 7 , while corresponding operations and interactions between various components are schematically illustrated in FIGS. 6, 8 a, and 8 b.
  • the process begins by performing several initialization operations on each blade to set up the resource device drivers and the OOB communications framework.
  • the system performs pre-boot system initialization operations in the manner discussed above with reference to FIG. 3 .
  • early initialization operations are performed in a block 502 by loading and executing firmware stored in each blade's boot firmware device (BFD).
  • BFD boot firmware device
  • the BFD comprises the firmware device that stores firmware for booting the system
  • the BFD for server blade 200 comprises firmware device 208 .
  • processor 202 executes reset stub code that jumps execution to the base address of a boot block of the BFD via a reset vector.
  • the boot block contains firmware instructions for performing early initialization, and is executed by processor 202 to initialize the CPU, chipset, and motherboard. (It is noted that during a warm boot (reset) early initialization is not performed, or is at least performed in a limited manner.) Execution of firmware instructions corresponding to an EFI core are executed next, leading to the DXE phase.
  • the Variable Services are setup in the manner discussed above with reference to FIGS. 3 and 4 .
  • DXE dispatcher 302 begins loading DXE drivers 304 .
  • Each DXE driver corresponds to a system component, and provides an interface for directly accessing that component. Included in the DXE drivers is an OOB monitor driver that will be subsequently employed for facilitating OOB communications.
  • the OOB monitor driver is installed in a protected area in each blade.
  • an out-of-band communication channel or network that operates independent of network communications that are managed by the operating systems is employed to facilitate inter-blade communication in an OS-transparent manner.
  • in one embodiment, the protected area comprises SMRAM 600 (see FIG. 6 ), and is hidden from the subsequently-loaded operating system.
  • SMM OOB communication code 602 stored in firmware is loaded into SMRAM 600 , and a corresponding OOB communications SMM handler 604 for handling OOB communications are setup.
  • An SMM handler is a type of interrupt handler, and is invoked in response to a system management interrupt (SMI).
  • an SMI interrupt may be asserted via an SMI pin on the system's processor.
  • the processor stores its current context (i.e., information pertaining to current operations, including its current execution mode, stack and register information, etc.), and switches its execution mode to its system management mode.
  • SMM handlers are then sequentially dispatched to determine if they are the appropriate handler for servicing the SMI event.
  • this handler When this handler is identified, it is allowed to execute to completion to service the SMI event. After the SMI event is serviced, an RSM (resume) instruction is issued to return the processor to its previous execution mode using the previously saved context data. The net result is that SMM operation is completely transparent to the operating system.
  • a shared resource is any blade component or device that is to be made accessible for shared access.
  • Such components and devices include but are not limited to fixed storage devices, removable media devices, input devices (e.g., keyboard, mouse), video devices, audio devices, volatile memory (i.e., system RAM), and non-volatile memory.
  • the logic proceeds to perform the loop operations defined within respective start and end loop blocks 508 and 509 for each sharable resource that is discovered. This includes operations in a block 510 , wherein a device path to describe the shared resource is constructed and configuration parameters are collected.
  • the device path provides external users with a means for accessing the resource.
  • the configuration parameters are used to build global resources, as described below in further detail.
  • the device path and resource configuration information is transmitted or broadcast to a global resource manager 608 via an OOB communication channel 610 in a block 512 .
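  • A minimal C sketch of the kind of message a blade might broadcast over the OOB channel in block 512 (a device path plus configuration parameters); the field layout and the device path string are assumptions for illustration only:

```c
/* Example payload announcing a shared storage resource to the manager. */
#include <stdio.h>

typedef struct {
    int  blade_id;
    char device_path[64];   /* how external users reach the resource */
    int  num_blocks;        /* configuration parameter for a storage device */
} ResourceAnnouncement;

int main(void)
{
    ResourceAnnouncement msg = { 1, "Pci(1F|1)/Ata(Primary,Master)", 10 };
    /* In the described scheme this would be sent while in SMM via the OOB
     * channel; here the payload is simply printed. */
    printf("blade %d exposes %s (%d blocks)\n",
           msg.blade_id, msg.device_path, msg.num_blocks);
    return 0;
}
```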
  • the global resource manager may generally be hosted by an existing component, such as one of the blades or management card 112 .
  • a plurality of local global resource managers are employed, wherein global resource management is handled through a collective process rather than employing a single manager.
  • if the host of the global resource manager is known in advance, a selective transmission to that component may be employed.
  • otherwise, a message is first broadcast over the OOB channel to identify the location of the host component.
  • OOB communications under the aforementioned SMM hidden execution mode are effectuated in the following manner.
  • an SMI is generated to cause the processor to switch into SMM, as shown occurring with BLADE 1 in FIG. 6 .
  • This may be effectuated through one of two means—either an assertion of the processor's SMI pin (i.e., a hardware-based generation), or via issuance of an "SMI" instruction (i.e., a software-based generation).
  • an assertion of the SMI pin may be produced by placing an appropriate signal on a management bus or the like.
  • For example, when an SMBUS is deployed using I2C, one of the bus lines may be hardwired to the SMI pins of each blade's processor via that blade's connector.
  • the interface plane may provide a separate means for producing a similar result.
  • all SMI pins may be commonly tied to a single bus line, or the bus may be structured to enable independent SMI pin assertions for respective blades.
  • certain network interface chips such as those made by Intel®, provide a second MAC address for use as a “back channel” in addition to a primary MAC address used for conventional network communications.
  • these NICs provide a built-in system management feature, wherein an incoming communication referencing the second MAC address causes the NIC to assert an SMI signal. This scheme enables an OOB channel to be deployed over the same cabling as the “public” network (not shown).
  • a firmware driver is employed to access the OOB channel.
  • an appropriate firmware driver will be provided to access the network or serial port. Since the configuration of the firmware driver will be known in advance (and thus independent of the operating system), the SMM handler may directly access the OOB channel via the firmware driver.
  • direct access may be available to the SMM handler without a corresponding firmware driver, although this latter option could also be employed.
  • the asserted processor switches to SMM execution mode and begins dispatch of its SMM handler(s) until the appropriate handler (e.g., communication handler 604 ) is dispatched to facilitate the OOB communication.
  • the OOB communications are performed when the blade processors are operating in SMM, whereby the communications are transparent to the operating systems running on those blades.
  • the shared device path and resource configuration information is received by global resource manager 608 .
  • shared device path and resource configuration information for other blades is received by the global resource manager.
  • individual resources may be combined to form a global resource.
  • for example, storage provided by individual storage devices (e.g., hard disks and system RAM) may be aggregated to form a global storage resource.
  • the resource configuration information might typically include storage capacity, such as number of storage blocks, partitioning information, and other information used for accessing the device.
  • a global resource access mechanism (e.g., an API) and a global resource descriptor 612 are built.
  • the global resource descriptor contains information identifying how to access the resource, and describes the configuration of the resource (from a global and/or local perspective).
  • the global resource descriptor 612 is transmitted to active nodes in the rack via the OOB channel in a block 518 .
  • This transmission operation may be performed using node-to-node OOB communications, or via an OOB broadcast.
  • it is stored by the receiving node in a block 520 , leading to processing the next resource.
  • the operations of blocks 510 , 512 , 514 , 516 , 518 , and 520 are repeated in a similar manner for each resource that is discovered until all sharable resources are processed.
  • access to shared resources is provided by corresponding firmware device drivers that are configured to access discovered shared resources via their global resource API's in a block 522 . Further details of this access scheme when applied to specific resources are discussed below. As depicted by a continuation block 524 , pre-boot platform initialization operations are then continued as described above to prepare for the OS load.
  • global resource descriptors corresponding to any shared resources that are discovered are handed off to the operating system. It is noted that the global resource descriptors that are handed off to the OS may or may not be identical to those built in block 516 . Essentially, the global resource descriptors contain information to enable the operating system to configure access to the resource via its own device drivers. For example, in the case of a single shared storage volume, the OS receives information indicating that it has access to a "local" storage device (or optionally a networked storage device) having a storage capacity that spans the individual storage capacities of the individual storage devices that are shared. In the case of multiple shared storage volumes, respective storage capacity information will be handed off to the OS for each volume. The completion of the OS load leads to continued OS runtime operations, as depicted by a continuation block 528 .
  • this abstracted access scheme is configured as a multi-layer architecture, as shown in FIGS. 8 a and 8 b.
  • Each of blades BLADE 1 and BLADE 2 has respective copies of the architecture components, including OS device drivers 800 - 1 and 800 - 2 , management/access drivers 802 - 1 and 802 - 2 , resource device drivers 804 - 1 and 804 - 2 , and OOB communication handlers 604 - 1 and 604 - 2 .
  • A flowchart illustrating an exemplary process for accessing a shared resource in accordance with one embodiment is shown in FIG. 7 .
  • the process begins with an access request from a requester, as depicted in a start block 700 .
  • a typical requestor might be an application running on the operating system for the platform.
  • Executable code corresponding to such applications is generally stored in system memory 204 , as depicted by runtime (RT) applications (APP) 806 and 808 in FIGS. 8 a and 8 b.
  • For instance, suppose runtime application 806 of FIGS. 8 a and 8 b wishes to access a shared data storage resource. In this example, the access request corresponds to opening a previously stored file.
  • the runtime application will first make a request to the operating system ( 810 ) to access the file, providing a location for the file (e.g., drive designation, path, and filename).
  • the drive designation is a drive letter previously allocated by the operating system for a virtual global storage resource comprising a plurality of disk drives 218 , which include resource 1 of BLADE 1 and resource 2 on BLADE 2 .
  • operating system 810 employs its OS device driver 800 - 1 to access the storage resource in a block 702 .
  • under a conventional architecture, OS device driver 800 - 1 would interface directly with resource driver 804 - 1 to access resource 1 .
  • under the resource sharing scheme disclosed herein, however, management/access driver 802 - 1 is accessed instead.
  • interface information such as an API or the like is handed off to the OS during OS-load, whereby the OS is instructed to access management/access driver 802 - 1 whenever there is a request to access the corresponding resource (e.g., resource 1 ).
  • a mechanism is provided to identify a particular host via which the appropriate resource may be accessed. In one embodiment, this mechanism is facilitated via a global resource map.
  • local copies 812 - 1 and 812 - 2 of a common global resource map are stored on respective blades BLADE 1 and BLADE 2 .
  • a shared global resource map 812 a is hosted by global resource manager 608 .
  • the global resource map matches specific resources with the portions of the global resource hosted by those specific resources.
  • the management/access driver queries local global resource map 812 to determine the host of the resource underlying the particular access request.
  • This resource and/or its host is known as the “resource target;” in the illustrated example the resource target comprises a resource 2 hosted by BLADE 2 .
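  • A small C sketch of the lookup performed by the management/access driver: given the resource referenced by an access request, find the blade that hosts it; the map layout is assumed for illustration:

```c
/* Resolve a shared resource to the blade that physically hosts it. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *resource;   /* portion of the global resource */
    int         host_blade; /* blade that physically hosts it */
} MapEntry;

static const MapEntry global_resource_map[] = {
    { "resource 1", 1 },
    { "resource 2", 2 },
};

static int find_resource_target(const char *resource)
{
    for (unsigned i = 0; i < sizeof global_resource_map / sizeof global_resource_map[0]; i++)
        if (strcmp(global_resource_map[i].resource, resource) == 0)
            return global_resource_map[i].host_blade;
    return -1;   /* unknown resource */
}

int main(void)
{
    printf("resource 2 is hosted by BLADE %d\n", find_resource_target("resource 2"));
    return 0;
}
```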
  • OOB communication operations are performed to pass the resource access request to the resource target.
  • an SMI is asserted by or on behalf of the management/access driver on the requesting platform (e.g., 802 - 1 ) to initiate the OOB communication.
  • the processor on BLADE 1 switches its mode to SMM in a block 708 and dispatches its SMM handlers until OOB communication handler 604 - 1 is launched.
  • the OOB communication handler asserts an SMI signal on the resource target host (BLADE 2 ) to initiate OOB communication between the two blades.
  • the processor mode on BLADE 2 is switched to SMM in a block 710 , launching its OOB communication handler.
  • Blades 1 and 2 are enabled to communicate via OOB channel 610 , and the access request is received by OOB communications handler 604 - 2 .
  • an “RSM” instruction is issued to the processor on BLADE 1 to switch the processor's operating mode back to what it was before being switched to SMM.
  • in a block 712 , the access request is then passed to management/access driver 802 - 2 via its API.
  • a query is then performed in a block 714 to verify that the platform receiving the access request is the actual host of the target resource. If it isn't the correct host, in one embodiment a message is passed back to the requester indicating so (not shown).
  • an appropriate global resource manager is apprised of the situation. In essence, this situation would occur if the local global resource maps contained different information (i.e., are no longer synchronized). In response, the global resource manager would issue a command to resynchronize the local global resource maps (all not shown).
  • the platform host's resource device driver ( 804 - 2 ) is then employed to access the resource (e.g., resource 2 ) to service the access request.
  • the access returns the requested data file.
  • Data corresponding to the request is then returned to the requester via OOB channel 610 in a block 718 .
  • an RSM instruction is issued to the processor on BLADE 2 to switch the processor's operating mode back to what it was before being switched to SMM.
  • the requester's processor may or may not be operating in SMM at this time.
  • the requester's (BLADE 1 ) processor was switched back out of SMM in a block 708 .
  • a new SMI is asserted to activate the OOB communications handler in a block 722 .
  • the OOB communication handler is already waiting to receive the returned data.
  • the returned data are received via OOB channel 610 , and the data are passed to the requester's management/access driver ( 802 - 1 ) in a block 724 .
  • this firmware driver passes the data back to OS device driver 800 - 1 in a block 726 , leading to receipt of the data by the requester via the operating system in a block 728 .
  • a similar resource access process is performed using a single global resource map in place of the local copies of the global resource map in the embodiment of FIG. 8 b.
  • many of the operations are the same as those discussed above with reference to FIG. 8 a, except that global resource manager 608 is employed as a proxy for accessing the resource, rather than using local global resource maps.
  • the resource access request is sent to global resource manager 608 via OOB channel 610 rather than directly to an identified resource target.
  • a lookup of global resource map 812 a is performed to determine the resource target.
  • the data request is sent to the identified resource target, along with information identifying the requester.
  • the operations of blocks 712 - 728 are performed, with the exception of the optional operations of block 714 .
  • a blade that hosts the global resource manager functions is identified through a nomination process, wherein each blade may include firmware for performing the management tasks.
  • the nomination scheme may be based on a physical assignment, such as a chassis slot, or may be based on an activation scheme, such as a first-in ordered scheme. For example, under a slot-based scheme, the blade having the lowest slot assignment for the group would be assigned the global resource management tasks. If that blade was removed, the blade having the lowest slot assignment from among the remaining blades would be nominated to host the global resource manager. Under a first-in ordered scheme, each blade would be assigned an installation order identifier (e.g., number) based on the order the blades were inserted or activated.
  • the global management task would be assigned to the blade with the lowest number, that is, the first-installed blade. Upon removal of that blade, the blade with the next lowest installation number would be nominated to host the global resource manager.
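  • A brief C sketch of the two nomination schemes described above (lowest occupied slot, or lowest installation-order number); the blade data are made up for the example:

```c
/* Nominate the blade that hosts the global resource manager. */
#include <stdio.h>

typedef struct { int slot; int install_order; int present; } Blade;

static int nominate_by_slot(const Blade *b, int n)        /* physical assignment */
{
    int winner = -1;
    for (int i = 0; i < n; i++)
        if (b[i].present && (winner < 0 || b[i].slot < b[winner].slot))
            winner = i;
    return winner;
}

static int nominate_first_in(const Blade *b, int n)       /* activation order */
{
    int winner = -1;
    for (int i = 0; i < n; i++)
        if (b[i].present && (winner < 0 || b[i].install_order < b[winner].install_order))
            winner = i;
    return winner;
}

int main(void)
{
    Blade blades[] = { {3, 2, 1}, {5, 1, 1}, {7, 3, 1} };
    printf("slot-based manager: blade in slot %d\n", blades[nominate_by_slot(blades, 3)].slot);
    printf("first-in manager:   blade in slot %d\n", blades[nominate_first_in(blades, 3)].slot);
    return 0;
}
```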
  • a redundancy scheme may be implemented wherein a second blade is nominated as a live back-up.
  • global resource mapping data may be stored in either system memory or as firmware variable data. If stored as firmware variable data, the mapping data will be able to persist across platform shutdowns.
  • the mapping data are stored in a portion of system memory that is hidden from the operating system. This hidden portion of system memory may include a portion of SMRAM or a portion of memory reserved by firmware during pre-boot operations.
  • Another way to persist global resource mapping data across shutdowns is to store the data on a persistent storage device, such as a disk drive. However, when employing a disk drive it is recommended that the mapping data are stored in a manner that is inaccessible to the platform operating system, such as in the host protected area (HPA) of the disk drive.
  • A more specific implementation of resource sharing is illustrated in FIGS. 9 a - b and 10 a - b.
  • the resources being shared comprise disk drives 218 .
  • the storage resources provided by a plurality of disk drives 218 are aggregated to form a virtual storage volume “V:”
  • the storage resources for each of the disk drives are depicted as respective groups of I/O storage comprising 10 blocks.
  • each of Blades 1 - 16 is depicted as hosting a single disk drive 218 ; it will be understood that in actual implementations each blade may host 0-N disk drives (depending on its configuration), that the number of blocks for each disk drive may vary, and that the actual number of blocks will be several orders of magnitude higher than those depicted herein.
  • virtual storage volume V appears as a single storage device.
  • the shared storage resources may be configured as 1-N virtual storage volumes, with each volume spanning a respective set of storage devices.
  • virtual storage volume V spans 16 disk drives 218 .
  • a global resource map comprising a lookup table 1000 is employed.
  • the lookup table maps respective ranges of I/O blocks to the blade on which the disk drive hosting the I/O blocks resides.
  • the map would contain further information identifying the specific storage device on each blade.
  • an addressing scheme would be employed rather than simply identifying a blade number; however, the illustrated blade number assignments are depicted for clarity and simplicity.
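  • A simple C sketch of the block I/O mapping performed via lookup table 1000: a block number in virtual volume V: is translated to the hosting blade and a local block offset, assuming ten blocks per blade as in the simplified figures:

```c
/* Map a virtual-volume block number to (blade, local block). */
#include <stdio.h>

#define BLOCKS_PER_BLADE 10
#define NUM_BLADES 16

typedef struct { int blade; int local_block; } Target;

static int map_block(int virtual_block, Target *out)
{
    if (virtual_block < 0 || virtual_block >= BLOCKS_PER_BLADE * NUM_BLADES)
        return -1;                                  /* out of range for volume V: */
    out->blade       = virtual_block / BLOCKS_PER_BLADE + 1;   /* Blades 1-16 */
    out->local_block = virtual_block % BLOCKS_PER_BLADE;
    return 0;
}

int main(void)
{
    Target t;
    if (map_block(27, &t) == 0)
        printf("virtual block 27 -> Blade %d, local block %d\n", t.blade, t.local_block);
    return 0;
}
```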
  • FIGS. 9 b and 10 b illustrate a RAID embodiment 902 using mirroring and duplexing in accordance with the RAID (Redundant Array of Independent Disks)- 1 standard.
  • Under RAID- 1 , respective sets of storage devices are paired, and data are mirrored by writing identical sets of data to each storage device in the pair.
  • the aggregated storage appears to the operating system as a virtual volume V:.
  • the number and type of storage devices are identical to those of embodiment 900 , and thus the block I/O storage capacity of the virtual volume is cut in half to 80 blocks.
  • Global resource mappings are contained in a lookup table 1002 for determining what disk drives are to be accessed when the operating system makes a corresponding block I/O access request.
  • the disk drive pairs are divided into logical storage entities labeled A-H.
  • Under RAID- 1 , when a write access to a logical storage entity is performed, the data are written to each of the underlying storage devices. In contrast, during a read access, the data are (generally) retrieved from a single storage device. Depending on the complexity of the RAID- 1 implementation, one of the pair may be assigned as the default read device, or both of the storage devices may facilitate this function, allowing for parallel reads (duplexing).
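  • A compact C sketch of the RAID-1 behavior described above: a write is applied to both members of a pair, while a read is satisfied from a single (default) device; sizes and names are illustrative:

```c
/* Mirrored write, single-device read, as in a basic RAID-1 pair. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16
#define NUM_BLOCKS 10

typedef struct { char blocks[NUM_BLOCKS][BLOCK_SIZE]; } Disk;
typedef struct { Disk *primary; Disk *mirror; } Raid1Pair;

static void raid1_write(Raid1Pair *p, int block, const char *data)
{
    strncpy(p->primary->blocks[block], data, BLOCK_SIZE - 1);  /* write both members */
    strncpy(p->mirror->blocks[block],  data, BLOCK_SIZE - 1);
}

static const char *raid1_read(const Raid1Pair *p, int block)
{
    return p->primary->blocks[block];   /* duplexing could also read the mirror */
}

int main(void)
{
    Disk d1 = {0}, d2 = {0};
    Raid1Pair pair = { &d1, &d2 };
    raid1_write(&pair, 3, "payload");
    printf("block 3: %s (mirror copy: %s)\n", raid1_read(&pair, 3), d2.blocks[3]);
    return 0;
}
```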
  • a configuration may employ one or more disk drives 218 as “hot spares.”
  • the hot spare storage devices are not used during normal access operations, but rather sit in reserve to replace any device or blade that has failed; in the event of such a failure, the data stored on the non-failed device in the pair are used to rebuild the replacement.
  • the RAID- 1 scheme may be deployed using either a single global resource manager, or via local management.
  • appropriate mapping information can be stored on each blade.
  • this information may be stored as firmware variable data, whereby it will persist through a platform reset or shutdown.
  • In addition to RAID- 1 , other standard RAID redundant storage schemes may be employed, including RAID- 0 , RAID- 2 , RAID- 3 , RAID- 5 , and RAID- 10 . Since each of these schemes involves some form of striping, the complexity of the global resource maps increases substantially. For this and other reasons, it will generally be easier to implement RAID- 0 , RAID- 2 , RAID- 3 , RAID- 5 , and RAID- 10 via a central global resource manager rather than individual local managers.
  • Each blade may be considered to be a separate platform, such as a rack-mounted server or a stand-alone server, wherein resource sharing across a plurality of platforms may be effectuated via an OOB channel in the manner similar to that discussed above.
  • cabling and/or routing may be provided to support an OOB channel.
  • a particular implementation of the invention that is well-suited to rack-mounted servers and the like concerns sharing keyboard, video, and mouse I/O, commonly known as KVM.
  • a KVM switch is employed to enable a single keyboard, video display and mouse to be shared by all servers in the rack.
  • the KVM switch routes KVM signals from individual servers (via respective cables) to single keyboard, video and mouse I/O ports, whereby the KVM signals for a selected server may be accessed by turning a selection knob or otherwise selecting the input signal source.
  • the KVM switch may cost $1500 or more, in addition to costs for cabling and installation. KVM cabling also reduces ventilation and accessibility.
  • each of a plurality of rack-mounted servers 1100 is connected to the other servers via a switch 1102 and corresponding Ethernet cabling (depicted as a network cloud 1104 ).
  • Each server 1100 includes a mainboard 1106 having a plurality of components mounted thereon or coupled thereto, including a processor 1108 , memory 1110 , a firmware storage device 1112 , and a NIC 1114 .
  • a plurality of I/O ports are also coupled to the mainboard, including a mouse and keyboard ports 1116 and 1118 and a video port 1120 .
  • each server will also include a plurality of disk drives 1122 .
  • a second MAC address assigned to the NIC 1114 for each server 1100 is employed to support an OOB channel 1124 .
  • a keyboard 1126 , video display 1128 , and a mouse 1130 are coupled via respective cables to respective I/O ports 1118 , 1120 , and 1116 disposed on the back of a server 1100 A.
  • Firmware on each of servers 1100 provides support for hosting a local global resource map 1132 that routes KVM signals to keyboard 1126 , video display 1128 , and mouse 1130 via server 1100 A.
  • A protocol stack exemplifying how video signals (the most complicated of the KVM signals) are handled in accordance with one embodiment is shown in FIG. 12 .
  • video data used to produce corresponding video signals are rerouted from a server 1100 N to server 1100 A.
  • the software side of the protocol stack on server 1100 N includes an operating system video driver 1200 N, while the firmware components include a video router driver 1202 N, a video device driver 1204 N and an OOB communications handler 604 N.
  • the data flow is similar to that described above with reference to FIGS. 7 and 8 a, and proceeds as follows.
  • the operating system running on a server 1100 N receives a request to update the video display, typically in response to a user input to a runtime application.
  • the operating system employs its OS video driver 1200 N to effectuate the change.
  • the OS video driver will generate video data based on a virtual video display maintained by the operating system, wherein a virtual-to-physical display mapping is performed. For example, the same text/graphic content displayed on monitors having different resolutions requires different video data particular to the resolutions.
  • the OS video driver then interfaces with video router driver 1202 N to pass on the video data to what it thinks is the destination device, server 1100 N's video chip 1206 N.
  • from the OS video driver's perspective, video router driver 1202 N appears to be the firmware video device driver for the server, i.e., video device driver 1204 N. However, upon receiving the video data, video router driver 1202 N determines the video data destination server via a lookup of global resource map 1134 N and asserts an SMI to initiate an OOB communication with server 1100 A via respective OOB communication handlers 604 N and 604 A.
  • Upon receiving the video data, it is written to video chip 1206 A via video device driver 1204 A. In a manner similar to that described above, this passing of video data may be directly from OOB communications handler 604 A to video device driver 1204 A, or it may be routed through video router driver 1202 A. In response to receiving the video data, video chip 1206 A updates its video output signal, which is received by video monitor 1128 via video port 1120 . As an option, a verification lookup of global resource map 1134 A may be performed to verify that server 1100 A is the correct video data destination server.
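The routing decision made by the video router driver can be summarized in a short sketch. The following plain-C program models only that decision (deliver locally, or hand off to the OOB path toward the console host); the function names, node identifiers, and the single-call stand-in for the SMI/OOB handler hand-off are all assumptions for illustration.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical sketch of the decision made by a video router driver:
 * write video data to the local video chip, or reroute it to the server
 * that owns the physical display. */
struct video_frame { const void *data; size_t len; };

static int this_node = 13;    /* e.g., server 1100N                 */
static int console_host = 0;  /* from a global resource map lookup  */

static void video_device_write(const struct video_frame *f)
{
    printf("write %zu bytes to the local video chip\n", f->len);
}

static void oob_send_video(int dest, const struct video_frame *f)
{
    /* Stand-in for asserting an SMI and letting the OOB communications
     * handler forward the data over the OOB channel to node `dest`. */
    printf("reroute %zu bytes of video data to node %d via the OOB channel\n",
           f->len, dest);
}

static void video_router_write(const struct video_frame *frame)
{
    if (console_host == this_node)
        video_device_write(frame);            /* this server owns the display */
    else
        oob_send_video(console_host, frame);  /* reroute to the console host  */
}

int main(void)
{
    unsigned char pixels[64] = { 0 };
    struct video_frame f = { pixels, sizeof pixels };
    video_router_write(&f);
    return 0;
}
```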
  • keyboard and mouse signals are handled in a similar manner.
  • operating systems typically maintain a virtual pointer map from which a virtual location of a pointing device can be cross-referenced to the virtual video display, thereby enabling the location of the cursor relative to the video display to be determined.
  • mouse information will traverse the reverse route of the video signals—that is, mouse input received via server 1100 A will be passed via the OOB channel to a selected platform (e.g., server 1100 N). This will require updating the global resource map 1134 A on server 1100 A to reflect the proper destination platform.
  • Routing keyboard signals also will require a similar map update.
  • a difference with keyboard signals is that they are bi-directional, so both input and output data rerouting is required.
  • An exemplary keyboard input signal processing protocol stack and flow diagram is shown in FIG. 13 .
  • the software side of the protocol stack on server 1100 N includes an operating system keyboard driver 1300 N, while the firmware components include a keyboard router driver 1302 N, a keyboard device driver 1304 N and an OOB communications handler 604 N. Similar components comprise the protocol stack of server 1100 A.
  • a keyboard input signal is generated that is received by a keyboard chip 1306 A via keyboard port 1118 A.
  • Keyboard chip 1306 A then produces corresponding keyboard (KB) data that is received by keyboard device driver 1304 A.
  • ordinarily, keyboard device driver 1304 A would interface with OS keyboard driver 1300 A to pass the keyboard data to the operating system.
  • in this case, however, the OS keyboard driver that is targeted to receive the keyboard data is running on server 1100 N. Accordingly, keyboard data handled by keyboard device driver 1304 A is passed to keyboard router driver 1302 A to facilitate rerouting the keyboard data.
  • In response to receiving the keyboard data, keyboard router driver 1302 A queries global resource map 1134 A to determine the target server to which the keyboard data is to be rerouted (server 1100 N in this example). The keyboard router driver then asserts an SMI to kick the processor running on server 1100 A into SMM and passes the keyboard data along with server target identification data to OOB communications handler 604 A. OOB communications handler 604 A then interacts with OOB communication handler 604 N to facilitate OOB communications between the two servers via OOB channel 1124 , leading to the keyboard data being received by OOB communications handler 604 N. In response to receiving the keyboard data, OOB communications handler 604 N forwards the keyboard data to keyboard router driver 1302 N.
  • the keyboard router driver may either directly pass the keyboard data to OS keyboard driver 1300 N, or perform a routing verification lookup of global resource map 1134 N to ensure that server 1100 N is the proper server to receive the keyboard data prior to passing the data to OS keyboard driver 1300 N.
  • the OS keyboard driver then processes the keyboard data and provides the processed data to a runtime application having the current focus.
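The receiving side of this keyboard flow can be sketched briefly in plain C. The snippet below models only the step where the target server's keyboard router driver optionally verifies the routing against its local map before handing the data to the OS keyboard driver; all identifiers and the scancode value are hypothetical.

```c
#include <stdio.h>

/* Hypothetical sketch of the receiving side of keyboard rerouting: the OOB
 * communications handler on the target server hands keyboard data to the
 * keyboard router driver, which may verify the routing before passing the
 * data up to the OS keyboard driver. */
static int this_node = 13;   /* e.g., server 1100N */

static int map_lookup_input_target(void)
{
    return 13;               /* stand-in for a global resource map query */
}

static void os_keyboard_driver_deliver(unsigned char scancode)
{
    printf("deliver scancode 0x%02x to the operating system\n", scancode);
}

/* Called by the OOB communications handler when keyboard data arrives. */
static void keyboard_router_receive(unsigned char scancode)
{
    if (map_lookup_input_target() != this_node) {
        printf("routing mismatch: local maps appear out of sync\n");
        return;
    }
    os_keyboard_driver_deliver(scancode);
}

int main(void)
{
    keyboard_router_receive(0x1c);   /* e.g., an Enter keystroke */
    return 0;
}
```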
  • firmware, which may typically comprise instructions and data for implementing the various operations described herein, will generally be stored on a non-volatile memory device, such as but not limited to a flash device, a ROM, or an EEPROM.
  • the instructions are machine readable, either directly by a real machine (i.e., machine code) or via interpretation by a virtual machine (e.g., interpreted byte-code).
  • embodiments of the invention may be used as or to support firmware executed upon some form of processing core (such as the CPU of a computer) or otherwise implemented or realized upon or within a machine-readable medium.
  • a machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a processor).
  • a machine-readable medium can include media such as a read only memory (ROM); a random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory device; etc.
  • a machine-readable medium can include propagated signals such as electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).

Abstract

Methods, apparatus, and systems for sharing resources across a plurality of computing platforms. Firmware provided on each platform is loaded for operating system runtime availability. Shared resources are presented to operating systems running on the platforms as local resources, while in reality they are generally hosted by other platforms. An operating system resource access request is received by a requesting platform and rerouted to another platform that actually hosts a target resource used to service the resource access request. Global resource maps are employed to determine the appropriate host platforms. Communications between the platforms are enabled via an out-of-band (OOB) communication channel or network. A hidden execution mode is implemented to effectuate data rerouting via the OOB channel such that the method is performed in a manner that is transparent to operating systems running on the platforms. The shared resources include storage, input, and video devices. The method can be used to support shared KVM resources and shared disk storage.

Description

    FIELD OF THE INVENTION
  • The field of invention relates generally to clustered computing environments, such as blade server computing environments, and, more specifically but not exclusively relates to techniques for sharing resources hosted by individual platforms (nodes) to create global resources that may be shared across all nodes.
  • BACKGROUND INFORMATION
  • Information Technology (IT) managers and Chief Information Officers (CIOs) are under tremendous pressure to reduce capital and operating expenses without decreasing capacity. The pressure is driving IT management to provide computing resources that more efficiently utilize all infrastructure resources. To meet this objective, aspects of the following questions are often addressed: How to better manage server utilization; how to cope with smaller IT staff levels; how to better utilize floor space; and how to handle power issues.
  • Typically, a company's IT (information technology) infrastructure is centered around computer servers that are linked together via various types of networks, such as private local area networks (LANs) and private and public wide area networks (WANs). The servers are used to deploy various applications and to manage data storage and transactional processes. Generally, these servers will include stand-alone servers and/or higher density rack-mounted servers, such as 4U, 2U and 1U servers.
  • Recently, a new server configuration has been introduced that provides unprecedented server density and economic scalability. This server configuration is known as a “blade server.” A blade server employs a plurality of closely-spaced “server blades” (blades) disposed in a common chassis to deliver high-density computing functionality. Each blade provides a complete computing platform, including one or more processors, memory, network connection, and disk storage integrated on a single system board. Meanwhile, other components, such as power supplies and fans, are shared among the blades in a given chassis and/or rack. This provides a significant reduction in capital equipment costs when compared to conventional rack-mounted servers.
  • Generally, blade servers are targeted towards two markets: high density server environments under which individual blades handle independent tasks, such as web hosting; and scaled computer cluster environments. A scalable compute cluster (SCC) is a group of two or more computer systems, also known as compute nodes, configured to work together to perform computationally intensive tasks. By configuring multiple nodes to work together to perform a computational task, the task can be completed much more quickly than if a single system performed the task. In theory, the more nodes that are applied to a task, the quicker the task can be completed. In reality, the number of nodes that can effectively be used to complete the task is dependent on the application used.
  • A typical SCC is built using Intel®-based servers running the Linux operating system and cluster infrastructure software. These servers are often referred to as commodity off the shelf (COTS) servers. They are connected through a network to form the cluster. An SCC normally needs anywhere from tens to hundreds of servers to be effective at performing computationally intensive tasks. Fulfilling this need to group a large number of servers in one location to form a cluster is a perfect fit for a blade server. The blade server chassis design and architecture provides the ability to place a massive amount of computer horsepower in a single location. Furthermore, the built-in networking and switching capabilities of the blade server architecture enable individual blades to be added or removed, enabling optimal scaling for a given task. With such flexibility, blade server-based SCCs provide a cost-effective alternative to other infrastructure for performing computational tasks, such as supercomputers.
  • As discussed above, each blade in a blade server is enabled to provide full platform functionality, thus being able to operate independent of other blades in the server. However, the resources available to each blade are likewise limited to its own resources. Thus, in many instances resources are inefficiently utilized. Under current architectures, there is no scheme that enables efficient server-wide resource sharing.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified:
  • FIG. 1 a is a frontal isometric view of an exemplary blade server chassis in which a plurality of server blades are installed;
  • FIG. 1 b is a rear isometric view of the blade server chassis of FIG. 1 a;
  • FIG. 1 c is an isometric frontal view of an exemplary blade server rack in which a plurality of rack-mounted blade server chassis corresponding to FIGS. 1 a and 1 b are installed;
  • FIG. 2 shows details of the components of a typical server blade;
  • FIG. 3 is an event sequence/architecture diagram illustrating operations performed by a platform under the Extensible Firmware Interface (EFI) framework in response to a cold boot;
  • FIG. 4 is a schematic block diagram illustrating the EFI System Table, handle database, and associated Boot Services, Runtime Services, and DXE Services produced by the DXE core;
  • FIG. 5 is a flowchart illustrating operations and logic employed during an initialization process to set up resource sharing in accordance with one embodiment of the invention;
  • FIG. 6 is a schematic diagram illustrating various data flows that occur during the initialization process of FIG. 5;
  • FIG. 7 is a flowchart illustrating operations and logic employed in response to a resource access request received at a requesting computing platform to service the request in accordance with one embodiment of the invention, wherein the servicing resource is hosted by another computing platform;
  • FIGS. 8 a and 8 b are schematic diagrams illustrating data flows between a pair of computing platforms during a shared resource access, wherein the scheme illustrated in FIG. 8 a employs local global resource maps, and the scheme illustrated in FIG. 8 b employs a single global resource map hosted by a global resource manager;
  • FIG. 9 a is a schematic diagram illustrating a shared storage resource configured as a virtual storage volume that aggregates the storage capacity of a plurality of disk drives;
  • FIG. 9 b is a schematic diagram illustrating a variance of the shared storage resource scheme of FIG. 9 a, wherein a RAID-1 implementation is employed during resource accesses;
  • FIG. 10 a is a schematic diagram illustrating further details of the virtual volume storage scheme of FIG. 9 a;
  • FIG. 10 b is a schematic diagram illustrating further details of the RAID-1 implementation of FIG. 9 b;
  • FIG. 11 is a schematic diagram illustrating a shared keyboard, video, and mouse (KVM) access scheme in accordance with one embodiment of the invention;
  • FIG. 12 is a schematic diagram illustrating data flows between a pair of computing platforms to support sharing a video resource; and
  • FIG. 13 is a schematic diagram illustrating data flows between a pair of computing platforms to support sharing user input resources;
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • Embodiments of methods and computer components and systems for performing resource sharing across clustered platform environments, such as a blade server environment, are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
  • Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • In accordance with aspects of the invention, techniques are disclosed herein for sharing resources across clustered platform environments in a manner under which resources hosted by individual platforms are made accessible to other platform nodes. The techniques employ firmware-based functionality that provides “behind the scenes” access mechanisms without requiring any OS complicity. In fact, the resource sharing and access operations are completely transparent to operating systems running on the blades, and thus operating system independent. Thus, the capabilities afforded by the novel techniques disclosed herein may be employed in existing and future distributed platform environments without requiring any changes to the operating systems targeted for the environments.
  • In accordance with one aspect, the resource-sharing mechanism is effectuated by several platforms that “expose” resources that are aggregated to form global resources. Each platform employs a respective set of firmware that runs prior to the operating system load (pre-boot) and coincident with the operating system runtime. In one embodiment, runtime deployment is facilitated by a hidden execution mode known as the System Management Mode (SMM), which has the ability to receive and respond to periodic System Management Interrupts (SMI) to allow resource sharing and access information to be transparently passed to firmware SMM code configured to effectuate the mechanisms. The SMM resource management code conveys information and messaging to other nodes via an out-of-band (OOB) network or communication channel in an OS-transparent manner.
  • For illustrative purposes, several embodiments of the invention are disclosed below in the context of a blade server environment. As an overview, typical blade server components and systems for which resource sharing schemes in accordance with embodiments of the invention may be generally implemented are shown in FIGS. 1 a-c and 2. Under a typical configuration, a rack-mounted chassis 100 is employed to provide power and communication functions for a plurality of blades 102, each of which occupies a corresponding slot. (It is noted that all slots in a chassis do not need to be occupied.) In turn, one or more chassis 100 may be installed in a blade server rack 103 shown in FIG. 1 c. Each blade is coupled to an interface plane 104 (i.e., a backplane or mid-plane) upon installation via one or more mating connectors. Typically, the interface plane will include a plurality of respective mating connectors that provide power and communication signals to the blades. Under current practices, many interface planes provide “hot-swapping” functionality—that is, blades can be added or removed (“hot-swapped”) on the fly, without taking the entire chassis down through appropriate power and data signal buffering.
  • A typical mid-plane interface plane configuration is shown in FIGS. 1 a and 1 b. The backside of interface plane 104 is coupled to one or more power supplies 106. Oftentimes, the power supplies are redundant and hot-swappable, being coupled to appropriate power planes and conditioning circuitry to enable continued operation in the event of a power supply failure. In an optional configuration, an array of power supplies may be used to supply power to an entire rack of blades, wherein there is not a one-to-one power supply-to-chassis correspondence. A plurality of cooling fans 108 are employed to draw air through the chassis to cool the server blades.
  • An important feature required of all blade servers is the ability to communicate externally with other IT infrastructure. This is typically facilitated via one or more network connect cards 110, each of which is coupled to interface plane 104. Generally, a network connect card may include a physical interface comprising a plurality of network port connections (e.g., RJ-45 ports), or may comprise a high-density connector designed to directly connect to a network device, such as a network switch, hub, or router.
  • Blade servers usually provide some type of management interface for managing operations of the individual blades. This may generally be facilitated by an out-of-band network or communication channel or channels. For example, one or more buses for facilitating a “private” or “management” network and appropriate switching may be built into the interface plane, or a private network may be implemented through closely-coupled network cabling and a network. Optionally, the switching and other management functionality may be provided by a management card 112 that is coupled to the backside or frontside of the interface plane. As yet another option, a management server may be employed to manage blade activities, wherein communications are handled via standard computer networking infrastructure, such as Ethernet.
  • With reference to FIG. 2, further details of an exemplary blade 200 are shown. As discussed above, each blade comprises a separate computing platform that is configured to perform server-type functions, i.e., is a “server on a card.” Accordingly, each blade includes components common to conventional servers, including a main circuit board 201 providing internal wiring (i.e., buses) for coupling appropriate integrated circuits (ICs) and other components mounted to the board. These components include one or more processors 202 coupled to system memory 204 (e.g., DDR RAM), cache memory 206 (e.g., SDRAM), and a firmware storage device 208 (e.g., flash memory). A “public” NIC (network interface) chip 210 is provided for supporting conventional network communication functions, such as to support communication between blades and external network infrastructure. Other illustrated components include status LEDs 212, an RJ-45 console port 214, and an interface plane connector 216. Additional components include various passive components (e.g., resistors, capacitors), power conditioning components, and peripheral device connectors.
  • Generally, each blade 200 will also provide on-board storage. This is typically facilitated via one or more built-in disk controllers and corresponding connectors to which one or more disk drives 218 are coupled. For example, typical disk controllers include Ultra ATA controllers, SCSI controllers, and the like. As an option, the disk drives may be housed separate from the blades in the same or a separate rack, such as might be the case when a network-attached storage (NAS) appliance is employed for storing large volumes of data.
  • In accordance with aspects of the invention, facilities are provided for out-of-band communication between blades, and optionally, dedicated management components. As used herein, an out-of-band communication channel comprises a communication means that supports communication between devices in an OS-transparent manner—that is, a means to enable inter-blade communication without requiring operating system complicity. Generally, various approaches may be employed to provide the OOB channel. These include but are not limited to using a dedicated bus, such as a system management bus that implements the SMBUS standard (www.smbus.org), a dedicated private or management network, such as an Ethernet-based network using VLAN (802.1Q), or a serial communication scheme, e.g., employing the RS-485 serial communication standard. One or more appropriate IC's for supporting such communication functions are also mounted to main board 201, as depicted by an OOB channel chip 220. At the same time, interface plane 104 will include corresponding buses or built-in network traces to support the selected OOB scheme. Optionally, in the case of a wired network scheme (e.g., Ethernet), appropriate network cabling and networking devices may be deployed inside or external to chassis 100.
  • As discussed above, embodiments of the invention employ a firmware-based scheme for effectuating a resource sharing set-up and access mechanism to enable sharing of resources across blade server nodes. In particular, resource management firmware code is loaded during initialization of each blade and made available for access during OS run-time. Also during initialization, resource information is collected, and global resource information is built. Based on the global resource information, appropriate global resource access is provided back to each blade. This information is handed off to the operating system upon its initialization, such that the global resource appears (from the OS standpoint) as a local resource. During OS runtime operations, accesses to the shared resources are handled via interaction between OS and/or OS drivers and corresponding firmware in conjunction with resource access management that is facilitated via the OOB channel.
  • In one embodiment, resource sharing is facilitated via an extensible firmware framework known as Extensible Firmware Interface (EFI) (specifications and examples of which may be found at http://developer.intel.com/technology/efi). EFI is a public industry specification (current version 1.10 released Jan. 7, 2003) that describes an abstract programmatic interface between platform firmware and shrink-wrap operating systems or other custom application environments. The EFI framework includes provisions for extending BIOS functionality beyond that provided by the BIOS code stored in a platform's BIOS device (e.g., flash memory). More particularly, EFI enables firmware, in the form of firmware modules and drivers, to be loaded from a variety of different resources, including primary and secondary flash devices, option ROMs, various persistent storage devices (e.g., hard disks, CD ROMs, etc.), and even over computer networks.
  • FIG. 3 shows an event sequence/architecture diagram used to illustrate operations performed by a platform under the framework in response to a cold boot (e.g., a power off/on reset). The process is logically divided into several phases, including a pre-EFI Initialization Environment (PEI) phase, a Driver Execution Environment (DXE) phase, a Boot Device Selection (BDS) phase, a Transient System Load (TSL) phase, and an operating system runtime (RT) phase. The phases build upon one another to provide an appropriate run-time environment for the OS and platform.
  • The PEI phase provides a standardized method of loading and invoking specific initial configuration routines for the processor (CPU), chipset, and motherboard. The PEI phase is responsible for initializing enough of the system to provide a stable base for the follow-on phases. Initialization of the platform's core components, including the CPU, chipset and main board (i.e., motherboard) is performed during the PEI phase. This phase is also referred to as the “early initialization” phase. Typical operations performed during this phase include the POST (power-on self test) operations, and discovery of platform resources. In particular, the PEI phase discovers memory and prepares a resource map that is handed off to the DXE phase. The state of the system at the end of the PEI phase is passed to the DXE phase through a list of position independent data structures called Hand Off Blocks (HOBs).
  • The DXE phase is the phase during which most of the system initialization is performed. The DXE phase is facilitated by several components, including the DXE core 300, the DXE dispatcher 302, and a set of DXE drivers 304. The DXE core 300 produces a set of Boot Services 306, Runtime Services 308, and DXE Services 310. The DXE dispatcher 302 is responsible for discovering and executing DXE drivers 304 in the correct order. The DXE drivers 304 are responsible for initializing the processor, chipset, and platform components as well as providing software abstractions for console and boot devices. These components work together to initialize the platform and provide the services required to boot an operating system. The DXE and the Boot Device Selection phases work together to establish consoles and attempt the booting of operating systems. The DXE phase is terminated when an operating system successfully begins its boot process (i.e., the BDS phase starts). Only the runtime services and selected DXE services provided by the DXE core and selected services provided by runtime DXE drivers are allowed to persist into the OS runtime environment. The result of DXE is the presentation of a fully formed EFI interface.
  • The DXE core is designed to be completely portable with no CPU, chipset, or platform dependencies. This is accomplished by designing in several features. First, the DXE core only depends upon the HOB list for its initial state. This means that the DXE core does not depend on any services from a previous phase, so all the prior phases can be unloaded once the HOB list is passed to the DXE core. Second, the DXE core does not contain any hard coded addresses. This further means the DXE core can be loaded anywhere in physical memory, and it can function correctly no matter where physical memory or where Firmware segments are located in the processor's physical address space. Third, the DXE core does not contain any CPU-specific, chipset specific, or platform specific information. Instead, the DXE core is abstracted from the system hardware through a set of architectural protocol interfaces. These architectural protocol interfaces are produced by DXE drivers 304, which are invoked by DXE Dispatcher 302.
  • The DXE core produces an EFI System Table 400 and its associated set of Boot Services 306 and Runtime Services 308, as shown in FIG. 4. The DXE core also maintains a handle database 402. The handle database comprises a list of one or more handles, wherein a handle is a list of one or more unique protocol GUIDs (Globally Unique Identifiers) that map to respective protocols 404. A protocol is a software abstraction for a set of services. Some protocols abstract I/O devices, and other protocols abstract a common set of system services. A protocol typically contains a set of APIs and some number of data fields. Every protocol is named by a GUID, and the DXE Core produces services that allow protocols to be registered in the handle database. As the DXE Dispatcher executes DXE drivers, additional protocols will be added to the handle database including the architectural protocols used to abstract the DXE Core from platform specific details.
  • The Boot Services comprise a set of services that are used during the DXE and BDS phases. Among others, these services include Memory Services, Protocol Handler Services, and Driver Support Services. Memory Services provide services to allocate and free memory pages and allocate and free the memory pool on byte boundaries. They also provide a service to retrieve a map of all the current physical memory usage in the platform. Protocol Handler Services provide services to add and remove handles from the handle database. They also provide services to add and remove protocols from the handles in the handle database. Additional services are available that allow any component to look up handles in the handle database, and open and close protocols in the handle database. Driver Support Services provide services to connect and disconnect drivers to devices in the platform. These services are used by the BDS phase to either connect all drivers to all devices, or to connect only the minimum number of drivers to devices required to establish the consoles and boot an operating system (i.e., for supporting a fast boot mechanism).
  • In contrast to Boot Services, Runtime Services are available both during pre-boot and OS runtime operations. One of the Runtime Services that is leveraged by embodiments disclosed herein is the Variable Services. As described in further detail below, the Variable Services provide services to lookup, add, and remove environmental variables from both volatile and non-volatile storage.
  • The DXE Services Table includes data corresponding to a first set of DXE services 406A that are available during pre-boot only, and a second set of DXE services 406B that are available during both pre-boot and OS runtime. The pre-boot only services include Global Coherency Domain Services, which provide services to manage I/O resources, memory mapped I/O resources, and system memory resources in the platform. Also included are DXE Dispatcher Services, which provide services to manage DXE drivers that are being dispatched by the DXE dispatcher.
  • The services offered by each of Boot Services 306, Runtime Services 308, and DXE services 310 are accessed via respective sets of API's 312, 314, and 316. The API's provide an abstracted interface that enables subsequently loaded components to leverage selected services provided by the DXE Core.
  • After DXE Core 300 is initialized, control is handed to DXE Dispatcher 302. The DXE Dispatcher is responsible for loading and invoking DXE drivers found in firmware volumes, which correspond to the logical storage units from which firmware is loaded under the EFI framework. The DXE dispatcher searches for drivers in the firmware volumes described by the HOB List. As execution continues, other firmware volumes might be located. When they are, the dispatcher searches them for drivers as well.
  • There are two subclasses of DXE drivers. The first subclass includes DXE drivers that execute very early in the DXE phase. The execution order of these DXE drivers depends on the presence and contents of an a priori file and the evaluation of dependency expressions. These early DXE drivers will typically contain processor, chipset, and platform initialization code. These early drivers will also typically produce the architectural protocols that are required for the DXE core to produce its full complement of Boot Services and Runtime Services.
  • The second class of DXE drivers are those that comply with the EFI 1.10 Driver Model. These drivers do not perform any hardware initialization when they are executed by the DXE dispatcher. Instead, they register a Driver Binding Protocol interface in the handle database. The set of Driver Binding Protocols are used by the BDS phase to connect the drivers to the devices required to establish consoles and provide access to boot devices. The DXE Drivers that comply with the EFI 1.10 Driver Model ultimately provide software abstractions for console devices and boot devices when they are explicitly asked to do so.
  • Any DXE driver may consume the Boot Services and Runtime Services to perform their functions. However, the early DXE drivers need to be aware that not all of these services may be available when they execute because all of the architectural protocols might not have been registered yet. DXE drivers must use dependency expressions to guarantee that the services and protocol interfaces they require are available before they are executed.
  • The DXE drivers that comply with the EFI 1.10 Driver Model do not need to be concerned with this possibility. These drivers simply register the Driver Binding Protocol in the handle database when they are executed. This operation can be performed without the use of any architectural protocols. In connection with registration of the Driver Binding Protocols, a DXE driver may “publish” an API by using the InstallConfigurationTable function. These published APIs are depicted as API's 318. Under EFI, publication of an API exposes the API for access by other firmware components. The API's provide interfaces for the Device, Bus, or Service to which the DXE driver corresponds during their respective lifetimes.
  • The BDS architectural protocol executes during the BDS phase. The BDS architectural protocol locates and loads various applications that execute in the pre-boot services environment. Such applications might represent a traditional OS boot loader, or extended services that might run instead of, or prior to loading the final OS. Such extended pre-boot services might include setup configuration, extended diagnostics, flash update support, OEM value-adds, or the OS boot code. A Boot Dispatcher 320 is used during the BDS phase to enable selection of a Boot target, e.g., an OS to be booted by the system.
  • During the TSL phase, a final OS boot loader 322 is run to load the selected OS. Once the OS has been loaded, there is no further need for the Boot Services 306, and for many of the services provided in connection with DXE drivers 304 via API's 318, as well as DXE Services 406A. Accordingly, these reduced sets of API's that may be accessed during OS runtime are depicted as API's 316A and 318A in FIG. 3.
  • Under principles of the invention, an OS-transparent out-of-band communication scheme is employed to allow various types of resources to be shared across server nodes. At the same time, firmware-based components (e.g., firmware drivers and API's) are employed to facilitate low-level access to the resources and rerouting of data over the OOB channel. The scheme may be effectuated across multiple computing platforms, including groups of blades, individual chassis, racks, or groups of racks. During system initialization, firmware provided on each platform is loaded and executed to set up the OOB channel and appropriate resource access and data re-routing mechanisms. Each blade then transmits information about its shared resources over the OOB to a global resource manager. The global resource manager aggregates the data and configures a “virtual” global resource. Global resource configuration data in the form of global resource descriptors are then sent back to the blades to apprise the blades of the configuration and access mechanism for the global resource. Drivers are then configured to support access to the global resource. Subsequently, the global resource descriptors are handed off to the operating system during OS load, wherein the virtual global resource appears as a local device to the operating system, and thus is employed as such during OS runtime operations without requiring any modification to the OS code. Flowchart operations and logic according to one embodiment of the process are shown in FIGS. 5 and 7, while corresponding operations and interactions between various components are schematically illustrated in FIGS. 6, 8 a, and 8 b.
  • With reference to FIG. 5, the process begins by performing several initialization operations on each blade to set up the resource device drivers and the OOB communications framework. In response to a power on or reset event depicted in a start block 500, the system performs pre-boot system initialization operations in the manner discussed above with reference to FIG. 3. First, early initialization operations are performed in a block 502 by loading and executing firmware stored in each blade's boot firmware device (BFD). Under EFI, the BFD comprises the firmware device that stores firmware for booting the system; the BFD for server blade 200 comprises firmware device 208.
  • Continuing with block 502, processor 202 executes reset stub code that jumps execution to the base address of a boot block of the BFD via a reset vector. The boot block contains firmware instructions for performing early initialization, and is executed by processor 202 to initialize the CPU, chipset, and motherboard. (It is noted that during a warm boot (reset) early initialization is not performed, or is at least performed in a limited manner.) Execution of firmware instructions corresponding to an EFI core are executed next, leading to the DXE phase. During DXE core initialization, the Variable Services are setup in the manner discussed above with reference to FIGS. 3 and 4. After the DXE core is initialized, DXE dispatcher 302 begins loading DXE drivers 304. Each DXE driver corresponds to a system component, and provides an interface for directly accessing that component. Included in the DXE drivers is an OOB monitor driver that will be subsequently employed for facilitating OOB communications.
  • Next, in a block 504, the OOB monitor driver is installed in a protected area in each blade. As discussed above, an out-of-band communication channel or network that operates independent of network communications that are managed by the operating systems is employed to facilitate inter-blade communication in an OS-transparent manner.
  • During the foregoing system initialization operations of block 502, a portion of system memory 204 is set up to be employed for system management purposes. This portion of memory is referred to as SMRAM 600 (see FIG. 6), and is hidden from the subsequently-loaded operating system.
  • In conjunction with the firmware load, SMM OOB communication code 602 stored in firmware is loaded into SMRAM 600, and a corresponding OOB communications SMM handler 604 for handling OOB communications is set up. An SMM handler is a type of interrupt handler, and is invoked in response to a system management interrupt (SMI). In turn, an SMI interrupt may be asserted via an SMI pin on the system's processor. In response to an SMI interrupt, the processor stores its current context (i.e., information pertaining to current operations, including its current execution mode, stack and register information, etc.), and switches its execution mode to its system management mode. SMM handlers are then sequentially dispatched to determine if they are the appropriate handler for servicing the SMI event. This determination is made very early in the SMM handler code, such that there is little latency in determining which handler is appropriate. When this handler is identified, it is allowed to execute to completion to service the SMI event. After the SMI event is serviced, an RSM (resume) instruction is issued to return the processor to its previous execution mode using the previously saved context data. The net result is that SMM operation is completely transparent to the operating system.
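The dispatch pattern just described can be modeled with a short plain-C sketch: each registered handler decides very early whether the pending SMI event is its own, and the OOB communications handler claims events raised for resource rerouting. The event encoding, handler names, and dispatch loop are all hypothetical stand-ins for the firmware mechanism described above.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical model of SMM handler dispatch: handlers are polled in turn,
 * and each decides early whether the pending SMI event belongs to it. */
enum smi_source { SMI_POWER_BUTTON, SMI_OOB_MESSAGE };

struct smi_event { enum smi_source source; const char *payload; };

static bool power_button_handler(const struct smi_event *e)
{
    return e->source == SMI_POWER_BUTTON;       /* serviced elsewhere        */
}

static bool oob_comm_handler(const struct smi_event *e)
{
    if (e->source != SMI_OOB_MESSAGE)
        return false;                           /* not ours, try next handler */
    printf("OOB handler: forwarding \"%s\" over the OOB channel\n", e->payload);
    return true;                                /* event serviced             */
}

typedef bool (*smm_handler)(const struct smi_event *);

int main(void)
{
    smm_handler handlers[] = { power_button_handler, oob_comm_handler };
    struct smi_event ev = { SMI_OOB_MESSAGE, "resource access request" };

    for (unsigned i = 0; i < sizeof handlers / sizeof handlers[0]; i++)
        if (handlers[i](&ev))
            break;                              /* then RSM: resume prior context */
    return 0;
}
```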
  • Returning to the flowchart of FIG. 5, a determination is made in a decision block 506 as to whether one or more sharable resources hosted by the blade is/are discovered. Generally, a shared resource is any blade component or device that is to be made accessible for shared access. Such components and devices include but are not limited to fixed storage devices, removable media devices, input devices (e.g., keyboard, mouse), video devices, audio devices, volatile memory (i.e., system RAM), and non-volatile memory.
  • If the answer to decision block 506 is YES, the logic proceeds to perform the loop operations defined within respective start and end loop blocks 508 and 509 for each sharable resource that is discovered. This includes operations in a block 510, wherein a device path to describe the shared resource is constructed and configuration parameters are collected. The device path provides external users with a means for accessing the resource. The configuration parameters are used to build global resources, as described below in further detail.
  • After the operations of block 510 are performed, in the illustrated embodiment the device path and resource configuration information is transmitted or broadcast to a global resource manager 608 via an OOB communication channel 610 in a block 512. The global resource manager may generally be hosted by an existing component, such as one of the blades or management card 112. As described below, in one embodiment a plurality of local global resource managers are employed, wherein global resource management is handled through a collective process rather than employing a single manager. In cases in which the address of the component hosting the global resource manager is known a priori, a selective transmission to that component may be employed. In cases in which the address is not known, a message is first broadcast over the OOB channel to identify the location of the host component.
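One hypothetical layout for the announcement a blade might send in block 512 is sketched below; the message fields, device path string, and capacity values are invented for illustration and are not part of the specification.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical layout of the message a blade could broadcast over the OOB
 * channel: a device path naming the shared resource plus the configuration
 * parameters the global resource manager needs in order to aggregate it. */
enum resource_type { RES_DISK, RES_KEYBOARD, RES_VIDEO, RES_MOUSE };

struct resource_announcement {
    uint16_t           blade_slot;      /* chassis slot of the announcing blade */
    enum resource_type type;            /* kind of resource being exposed       */
    char               device_path[64]; /* textual device path for access       */
    uint64_t           block_count;     /* capacity (for storage resources)     */
    uint32_t           block_size;      /* bytes per block                      */
};

int main(void)
{
    struct resource_announcement msg;

    memset(&msg, 0, sizeof msg);
    msg.blade_slot  = 3;
    msg.type        = RES_DISK;
    msg.block_count = 78140160;          /* roughly 40 GB at 512-byte blocks */
    msg.block_size  = 512;
    snprintf(msg.device_path, sizeof msg.device_path, "Ata(Primary,Master)");

    printf("announce slot %u: %llu blocks of %u bytes at %s\n",
           (unsigned)msg.blade_slot, (unsigned long long)msg.block_count,
           msg.block_size, msg.device_path);
    return 0;
}
```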
  • OOB communications under the aforementioned SMM hidden execution mode are effectuated in the following manner. First, it is necessary to switch the operating mode of the processors on the blades for which inter-blade communication is to be performed to SMM. Therefore, an SMI is generated to cause the processor to switch into SMM, as shown occurring with BLADE 1 in FIG. 6. This may be effectuated through one of two means—either an assertion of the processor's SMI pin (i.e., a hardware-based generation), or via issuance of an “SMI” instruction (i.e., a software-based generation).
  • In one embodiment an assertion of the SMI pin may be produced by placing an appropriate signal on a management bus or the like. For example, when an SMBUS is deployed using I2C, one of the bus lines may be hardwired to the SMI pins of each blade's processor via that blade's connector. Optionally, the interface plane may provide a separate means for producing a similar result. Depending on the configuration, all SMI pins may be commonly tied to a single bus line, or the bus may be structured to enable independent SMI pin assertions for respective blades. As yet another option, certain network interface chips (NIC), such as those made by Intel®, provide a second MAC address for use as a “back channel” in addition to a primary MAC address used for conventional network communications. Furthermore, these NICs provide a built-in system management feature, wherein an incoming communication referencing the second MAC address causes the NIC to assert an SMI signal. This scheme enables an OOB channel to be deployed over the same cabling as the “public” network (not shown).
  • In one embodiment, a firmware driver is employed to access the OOB channel. For instance, when the OOB channel is implemented via a network or serial means, an appropriate firmware driver will be provided to access the network or serial port. Since the configuration of the firmware driver will be known in advance (and thus independent of the operating system), the SMM handler may directly access the OOB channel via the firmware driver. Optionally, in the case of a dedicated management bus, such as I2C, direct access may be available to the SMM handler without a corresponding firmware driver, although this latter option could also be employed.
  • In response to assertion of the SMI pin, the asserted processor switches to SMM execution mode and begins dispatch of its SMM handler(s) until the appropriate handler (e.g., communication handler 604) is dispatched to facilitate the OOB communication. Thus, in each of the OOB communication network/channel options, the OOB communications are performed when the blade processors are operating in SMM, whereby the communications are transparent to the operating systems running on those blades.
  • In accordance with a block 514, the shared device path and resource configuration information is received by global resource manager 608. In a similar manner, shared device path and resource configuration information for other blades is received by the global resource manager.
  • In accordance with one aspect of the invention, individual resources may be combined to form a global resource. For example, storage provided by individual storage devices (e.g., hard disks and system RAM) may be aggregated to form one or more “virtual” storage volumes. This is accomplished, in part, by aggregating the resource configuration information in a block 516. In the case of hard disk resources, the resource configuration information might typically include storage capacity, such as number of storage blocks, partitioning information, and other information used for accessing the device. After the resource configuration information is aggregated, a global resource access mechanism (e.g., API) and global resource descriptor 612 are built. The global resource descriptor contains information identifying how to access the resource, and describes the configuration of the resource (from a global and/or local perspective).
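The aggregation step can be illustrated with a small sketch: the capacities reported by individual blades are concatenated into one virtual volume, and a descriptor records where each blade's extent begins. The structures, blade numbers, and block counts below are assumptions chosen only to show the arithmetic.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical aggregation of per-blade storage contributions into a single
 * virtual volume, producing an extent map and a total capacity. */
struct contribution { int blade; uint64_t blocks; };
struct extent       { int blade; uint64_t first_lba; uint64_t blocks; };

int main(void)
{
    struct contribution reported[] = { { 1, 1000 }, { 2, 2000 }, { 5, 500 } };
    struct extent map[3];
    uint64_t next_lba = 0;

    for (int i = 0; i < 3; i++) {
        map[i].blade     = reported[i].blade;
        map[i].first_lba = next_lba;            /* extent starts where the last ended */
        map[i].blocks    = reported[i].blocks;
        next_lba        += reported[i].blocks;  /* running total becomes volume size  */
    }

    printf("virtual volume: %llu total blocks\n", (unsigned long long)next_lba);
    for (int i = 0; i < 3; i++)
        printf("  LBA %llu..%llu -> blade %d\n",
               (unsigned long long)map[i].first_lba,
               (unsigned long long)(map[i].first_lba + map[i].blocks - 1),
               map[i].blade);
    return 0;
}
```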
  • After the operations of block 516 are completed, the global resource descriptor 612 is transmitted to active nodes in the rack via the OOB channel in a block 518. This transmission operation may be performed using node-to-node OOB communications, or via an OOB broadcast. In response to receiving the global resource descriptor, it is stored by the receiving node in a block 520, leading to processing the next resource. The operations of blocks 510, 512, 514, 516, 518, and 520 are repeated in a similar manner for each resource that is discovered until all sharable resources are processed.
  • In accordance with one embodiment, access to shared resources is provided by corresponding firmware device drivers that are configured to access discovered shared resources via their global resource API's in a block 522. Further details of this access scheme when applied to specific resources are discussed below. As depicted by a continuation block 524, pre-boot platform initialization operations are then continued as described above to prepare for the OS load.
  • During the OS load in a block 526, global resource descriptors corresponding to any shared resources that are discovered are handed off to the operating system. It is noted that the global resource descriptors that are handed off to the OS may or may not be identical to those built in block 516. Essentially, the global resource descriptors contain information to enable the operating system to configure access to the resource via its own device drivers. For example, in the case of a single shared storage volume, the OS receives information indicating that it has access to a “local” storage device (or optionally a networked storage device) having a storage capacity that spans the individual storage capacities of the individual storage devices that are shared. In the case of multiple shared storage volumes, respective storage capacity information will be handed off to the OS for each volume. The completion of the OS load leads to continued OS runtime operations, as depicted by a continuation block 528.
  • During OS runtime, global resources are accessed via a combination of the operating system and firmware components configured to provide “low-level” access to the shared resource. Under modern OS/Firmware architectures, the device access scheme is intentionally abstracted such that the operating system vendor is not required to write a device driver that is specific to each individual device. Rather, these more explicit access details are provided by corresponding firmware device drivers. One result of this architecture is that the operating system may not directly access a hardware device. This proves advantageous in many ways. Most notably, this means the operating system does not need to know the particular low-level access configuration of the device. Thus, “virtual” resources that aggregate the resources of individual devices may be “built,” and corresponding access to such devices may be abstracted through appropriately-configured firmware drivers, whereby the OS thinks the virtual resource is a real local device.
  • In one embodiment, this abstracted access scheme is configured as a multi-layer architecture, as shown in FIGS. 8 a and 8 b. Each of blades BLADE 1 and BLADE 2 has respective copies of the architecture components, including OS device drivers 800-1 and 800-2, management/access drivers 802-1 and 802-2, resource device drivers 804-1 and 804-2, and OOB communication handlers 604-1 and 604-2.
  • A flowchart illustrating an exemplary process for accessing a shared resource in accordance with one embodiment is shown in FIG. 7. The process begins with an access request from a requester, as depicted in a start block 700. A typical requestor might be an application running on the operating system for the platform. Executable code corresponding to such applications is generally stored in system memory 204, as depicted by runtime (RT) applications (APP) 806 and 808 in FIGS. 8 a and 8 b. For instance, suppose runtime application 806 wishes to access a shared data storage resource. In this example, the access request corresponds to opening a previously stored file. The runtime application will first make a request to the operating system (810) to access the file, providing a location for the file (e.g., drive designation, path, and filename). Furthermore, the drive designation is a drive letter previously allocated by the operating system for a virtual global storage resource comprising a plurality of disk drives 218, which include resource 1 of BLADE 1 and resource 2 on BLADE 2.
  • In response to the request, operating system 810 employs its OS device driver 800-1 to access the storage resource in a block 702. Normally, OS device driver 800-1 would interface directly with resource driver 804-1 to access resource 1. However, management/access driver 802-1 is accessed instead. In order to effectuate this change, interface information such as an API or the like is handed off to the OS during OS-load, whereby the OS is instructed to access management/access driver 802-1 whenever there is a request to access the corresponding resource (e.g., resource 1).
  • In order to determine which shared resource is to service the request, a mechanism is provided to identify a particular host via which the appropriate resource may be accessed. In one embodiment, this mechanism is facilitated via a global resource map. In the embodiment of FIG. 8 a, local copies 812-1 and 812-2 of a common global resource map are stored on respective blades BLADE 1 and BLADE 2. In the embodiment of FIG. 8 b, a shared global resource map 812 a is hosted by global resource manager 608. The global resource map matches specific resources with the portions of the global resource hosted by those specific resources.
  • Continuing with the flowchart of FIG. 7, in a block 704 the management/access driver queries local global resource map 812 to determine the host of the resource underlying the particular access request. This resource and/or its host is known as the “resource target;” in the illustrated example the resource target comprises a resource 2 hosted by BLADE 2.
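As a concrete illustration of the lookup in block 704, the following plain-C sketch maps a requested logical block to the blade hosting the corresponding extent, using a made-up two-extent map matching the example of resources 1 and 2; the structure and function names are assumptions for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical lookup: given the logical block addressed by the OS request,
 * find the blade (resource target) that hosts the corresponding extent. */
struct extent { int blade; uint64_t first_lba; uint64_t blocks; };

static const struct extent global_map[] = {
    { 1, 0,    1000 },    /* resource 1 on BLADE 1 */
    { 2, 1000, 2000 },    /* resource 2 on BLADE 2 */
};

static int resource_target(uint64_t lba)
{
    for (unsigned i = 0; i < sizeof global_map / sizeof global_map[0]; i++)
        if (lba >= global_map[i].first_lba &&
            lba <  global_map[i].first_lba + global_map[i].blocks)
            return global_map[i].blade;
    return -1;   /* block is not part of the virtual volume */
}

int main(void)
{
    uint64_t requested_lba = 1500;
    printf("LBA %llu is hosted by BLADE %d\n",
           (unsigned long long)requested_lba, resource_target(requested_lba));
    return 0;
}
```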
  • Once the resource target is identified, OOB communication operations are performed to pass the resource access request to the resource target. First, the management/access driver on the requesting platform (e.g., 802-1) asserts an SMI to activate that platform's local OOB communications handler 604-1. In response, the processor on BLADE 1 switches its mode to SMM in a block 708 and dispatches its SMM handlers until OOB communication handler 604-1 is launched. In response, the OOB communication handler asserts an SMI signal on the resource target host (BLADE 2) to initiate OOB communication between the two blades. In response to the SMI, the processor mode on BLADE 2 is switched to SMM in a block 710, launching its OOB communication handler. At this point, Blades 1 and 2 are enabled to communicate via OOB channel 610, and the access request is received by OOB communications handler 604-2. After the resource access request has been sent, in one embodiment an “RSM” instruction is issued to the processor on BLADE 1 to switch the processor's operating mode back to what it was before being switched to SMM.
  • In a block 712 the access request is then passed to management/access driver 802-2 via its API. In an optional embodiment, a query is then performed in a block 714 to verify that the platform receiving the access request is the actual host of the target resource. If it isn't the correct host, in one embodiment a message is passed back to the requester indicating so (not shown). In another embodiment, an appropriate global resource manager is apprised of the situation. In essence, this situation would occur if the local global resource maps contained different information (i.e., are no longer synchronized). In response, the global resource manager would issue a command to resynchronize the local global resource maps (all not shown).
  • Continuing with a block 716, the platform host's resource device driver (804-2) is then employed to access the resource (e.g., resource 2) to service the access request. Under the present example, the access returns the requested data file. Data corresponding to the request is then returned to the requester via OOB channel 610 in a block 718. At the completion of the communication, an RSM instruction is issued to the processor on BLADE 2 to switch the processor's operating mode back to what it was before being switched to SMM.
  • Depending on the particular implementation, the requester's processor may or may not be operating in SMM at this time. For example, in the embodiment discussed above, the requester's (BLADE 1) processor was switched back out of SMM in a block 708. In this case, a new SMI is asserted to activate the OOB communications handler in a block 722. If the SMM mode was not terminated after sending the access request (in accordance with an optional scheme), the OOB communication handler is already waiting to receive the returned data. In either case, the returned data are received via OOB channel 610, and the data are passed to the requester's management/access driver (802-1) in a block 724. In turn, this firmware driver passes the data back to OS device driver 800-1 in a block 726, leading to receipt of the data by the requester via the operating system in a block 728.
  • A similar resource access process is performed under the embodiment of FIG. 8 b, in which a single global resource map is employed in place of the local copies of the global resource map. In short, many of the operations are the same as those discussed above with reference to FIG. 8 a, except that global resource manager 608 is employed as a proxy for accessing the resource, rather than using local global resource maps. Thus, the resource access request is sent to global resource manager 608 via OOB channel 610 rather than directly to an identified resource target. Upon receipt of the request, a lookup of global resource map 812 a is performed to determine the resource target. Subsequently, the data request is sent to the identified resource target, along with information identifying the requester. Upon receiving the request, the operations of blocks 712-728 are performed, with the exception of the optional operation of block 714.
  • Each of the foregoing schemes offers its own advantages. When local global resource maps are employed, there is no need for a proxy, and thus no need to change any software components operating on any of the blade server components. However, there must be a mechanism for facilitating global resource map synchronization, and the management overhead for each blade is increased. The primary advantage of employing a single global resource manager is that the synchronicity of the global resource map is ensured (since there is only one copy), and changes to the map can be made without any involvement by the individual blades. Under most implementations, the main drawback is the need to provide a host for the global resource manager functions. Typically, the host may be a management component or one of the blades (e.g., a nominated or default-selected blade).
  • In one embodiment, a blade that hosts the global resource manager functions is identified through a nomination process, wherein each blade may include firmware for performing the management tasks. In general, the nomination scheme may be based on a physical assignment, such as a chassis slot, or may be based on an activation scheme, such as a first-in ordered scheme. For example, under a slot-based scheme, the blade having the lowest slot assignment for the group would be assigned the global resource manager tasks. If that blade was removed, the blade having the lowest slot assignment from among the remaining blades would be nominated to host the global resource manager. Under a first-in ordered scheme, each blade would be assigned an installation-order identifier (e.g., a number) based on the order in which the blades were inserted or activated. The global management task would be assigned to the blade with the lowest number, that is, the first-installed blade to begin with. Upon removal of that blade, the blade with the next-lowest installation number would be nominated as the new global resource manager host. In order to ensure continued operations across a change in the global resource manager, a redundancy scheme may be implemented wherein a second blade is nominated as a live back-up.
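  • By way of illustration only, the slot-based nomination scheme can be reduced to a scan for the lowest-numbered active slot, as in the following hedged C sketch; MAX_SLOTS, slot_active, and nominate_global_resource_manager are assumed names, not part of this disclosure.

```c
/* Sketch of slot-based nomination: the active blade in the lowest-numbered
 * chassis slot hosts the global resource manager; when it is removed, the
 * same scan nominates the next-lowest active slot. */
#include <stdint.h>

#define MAX_SLOTS 16

static uint8_t slot_active[MAX_SLOTS];   /* 1 = blade present, 0 = empty */

int nominate_global_resource_manager(void)
{
    for (int slot = 0; slot < MAX_SLOTS; slot++) {
        if (slot_active[slot])
            return slot;                 /* lowest active slot wins */
    }
    return -1;                           /* no active blades found  */
}
```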
  • In general, global resource mapping data may be stored in either system memory or as firmware variable data. If stored as firmware variable data, the mapping data will be able to persist across platform shutdowns. In one embodiment, the mapping data are stored in a portion of system memory that is hidden from the operating system. This hidden portion of system memory may include a portion of SMRAM or a portion of memory reserved by firmware during pre-boot operations. Another way to persist global resource mapping data across shutdowns is to store the data on a persistent storage device, such as a disk drive. However, when employing a disk drive it is recommended that the mapping data be stored in a manner that is inaccessible to the platform operating system, such as in the host protected area (HPA) of the disk drive. When global resource mapping data are stored in a central repository (i.e., as illustrated by the embodiment of FIG. 8 b), various storage options similar to those presented above may be employed. In cases in which the global resource manager is hosted by a component other than the plurality of server blades (such as management card 112 or an external management server), disk storage may be safely implemented, since these hosts are not accessible by the operating systems running on the blades.
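  • As a sketch of the persistence option described above, the following C fragment serializes the mapping data to a non-volatile variable store; the nv_store_write/nv_store_read interface is an assumed placeholder for whatever firmware variable service a given platform provides, not a specific API.

```c
/* Hedged sketch of persisting global resource mapping data as firmware
 * variable data so it survives platform shutdowns; the variable-store calls
 * are placeholders to be mapped onto the platform's own services. */
#include <stdint.h>
#include <stddef.h>

typedef struct { uint32_t resource_id; uint32_t host_blade; } map_entry_t;

/* Placeholder non-volatile store interface (assumed). */
int nv_store_write(const char *name, const void *data, size_t len);
int nv_store_read(const char *name, void *data, size_t max, size_t *len);

int save_global_resource_map(const map_entry_t *map, size_t entries)
{
    return nv_store_write("GlobalResourceMap", map,
                          entries * sizeof(map_entry_t));
}

int load_global_resource_map(map_entry_t *map, size_t max_entries,
                             size_t *entries)
{
    size_t len = 0;
    int rc = nv_store_read("GlobalResourceMap", map,
                           max_entries * sizeof(map_entry_t), &len);
    *entries = len / sizeof(map_entry_t);
    return rc;
}
```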
  • A more specific implementation of resource sharing is illustrated in FIGS. 9 a-b and 10 a-b. In these cases, the resources being shared comprise disk drives 218. In the embodiment 900 illustrated in FIGS. 9 a and 10 a, the storage resources provided by a plurality of disk drives 218 are aggregated to form a virtual storage volume "V:". For clarity, the storage resources for each of the disk drives are depicted as respective groups of I/O storage comprising 10 blocks. Furthermore, each of Blades 1-16 is depicted as hosting a single disk drive 218; it will be understood that in actual implementations each blade may host 0-N disk drives (depending on its configuration), that the number of blocks for each disk drive may vary, and that the actual number of blocks will be several orders of magnitude higher than those depicted herein.
  • From an operating system perspective, virtual storage volume V: appears as a single storage device. In general, the shared storage resources may be configured as 1-N virtual storage volumes, with each volume spanning a respective set of storage devices. In reality, virtual storage volume V: spans 16 disk drives 218. To effectuate this, a global resource map comprising a lookup table 1000 is employed. The lookup table maps respective ranges of I/O blocks to the blade on which the disk drive hosting the I/O blocks resides. In the case of single blades being able to host multiple disk drives, the map would contain further information identifying the specific storage device on each blade. In general, an addressing scheme would be employed rather than simply identifying a blade number; however, the illustrated blade number assignments are depicted for clarity and simplicity.
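  • The block-range lookup performed against lookup table 1000 can be pictured with the following C sketch, which assumes the simplified 10-blocks-per-drive, one-drive-per-blade layout of FIG. 10 a; the table contents and function names are illustrative only.

```c
/* Sketch of a lookup table mapping ranges of virtual-volume I/O blocks to
 * the blade whose disk drive actually hosts those blocks (16 blades, 10
 * blocks per drive, 160 blocks total, as in the simplified figure). */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint64_t first_block;   /* first virtual block of the range   */
    uint64_t last_block;    /* last virtual block of the range    */
    uint32_t blade;         /* blade hosting the underlying drive */
} block_map_entry_t;

static const block_map_entry_t lookup_table[] = {
    {   0,   9,  1 }, {  10,  19,  2 }, {  20,  29,  3 }, {  30,  39,  4 },
    {  40,  49,  5 }, {  50,  59,  6 }, {  60,  69,  7 }, {  70,  79,  8 },
    {  80,  89,  9 }, {  90,  99, 10 }, { 100, 109, 11 }, { 110, 119, 12 },
    { 120, 129, 13 }, { 130, 139, 14 }, { 140, 149, 15 }, { 150, 159, 16 },
};

int blade_for_block(uint64_t block, uint32_t *blade)
{
    for (size_t i = 0; i < sizeof lookup_table / sizeof lookup_table[0]; i++) {
        if (block >= lookup_table[i].first_block &&
            block <= lookup_table[i].last_block) {
            *blade = lookup_table[i].blade;
            return 0;
        }
    }
    return -1;              /* block lies outside the virtual volume */
}
```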
  • FIGS. 9 b and 10 b illustrate a RAID embodiment 902 using mirroring and duplexing in accordance with the RAID (Redundant Array of Independent Disks)-1 standard. Under RAID-1, respective sets of storage devices are paired, and data are mirrored by writing identical sets of data to each storage device in the pair. In a manner similar to that discussed above, the aggregated storage appears to the operating system as a virtual volume V:. In the illustrated embodiment, the number and type of storage devices are identical to those of embodiment 900, and thus the block I/O storage capacity of the virtual volume is cut in half to 80 blocks. Global resource mappings are contained in a lookup table 1002 for determining which disk drives are to be accessed when the operating system makes a corresponding block I/O access request. The disk drive pairs are divided into logical storage entities labeled A-H.
  • In accordance with RAID-1 principles, when a write access to a logical storage entity is performed, the data are written to each of the underlying storage devices. In contrast, during a read access, the data are (generally) retrieved from a single storage device. Depending on the complexity of the RAID-1 implementation, one of the pair may be assigned as the default read device, or both of the storage devices may facilitate this function, allowing for parallel reads (duplexing).
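  • The write-to-both, read-from-one behavior can be expressed as in the following hedged C sketch; the per-blade block I/O helpers are assumed stand-ins for accesses carried over the OOB channel, and the read policy shown (primary drive with fallback to its mirror) is only one of the options noted above.

```c
/* Minimal sketch of RAID-1 mirroring across blade-hosted drives: a write is
 * applied to both drives of the pair, while a read is normally satisfied
 * from a single drive of the pair. */
#include <stdint.h>

typedef struct { uint32_t blade_a; uint32_t blade_b; } mirror_pair_t;

/* Placeholder per-blade block I/O, assumed to ride the OOB channel. */
int blade_write_block(uint32_t blade, uint64_t block, const void *buf);
int blade_read_block(uint32_t blade, uint64_t block, void *buf);

int raid1_write(const mirror_pair_t *pair, uint64_t block, const void *buf)
{
    int rc_a = blade_write_block(pair->blade_a, block, buf);
    int rc_b = blade_write_block(pair->blade_b, block, buf);
    return (rc_a == 0 && rc_b == 0) ? 0 : -1;   /* both copies must succeed */
}

int raid1_read(const mirror_pair_t *pair, uint64_t block, void *buf)
{
    /* Default-read-device policy: try the first drive, fall back to mirror. */
    if (blade_read_block(pair->blade_a, block, buf) == 0)
        return 0;
    return blade_read_block(pair->blade_b, block, buf);
}
```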
  • In addition to the illustrated configuration, a configuration may employ one or more disk drives 218 as “hot spares.” In this instance, the hot spare storage devices are not used during normal access operations, but rather sit in reserve to replace any device or blade that has failed. Under standard practices, when a hot spare replacement occurs, data stored on the non-failed device (in the pair) are written to the replacement device to return the storage system to full redundancy. This may be performed in an interactive fashion (e.g., allowing new data writes concurrently), or may be performed prior to permitting new writes.
  • Generally, the RAID-1 scheme may be deployed using either a single global resource manager, or via local management. For example, in cases in which "static" maps are employed (corresponding to static resource configurations), appropriate mapping information can be stored on each blade. In one embodiment, this information may be stored as firmware variable data, whereby it will persist through a platform reset or shutdown. For dynamic configuration environments, it is advisable to employ a central global resource manager, at least for determining updated resource mappings corresponding to configuration changes.
  • In addition to RAID-1, other RAID standard redundant storage schemes may be employed, including RAID-0, RAID-2, RAID-3, RAID-5, and RAID-10. Since each of these schemes involves some form of striping, the complexity of the global resource maps increases substantially. For this and other reasons, it will generally be easier to implement RAID-0, RAID-2, RAID-3, RAID-5, and RAID-10 via a central global resource manager rather than via individual local managers.
  • It is noted that although the foregoing principles are discussed in the context of a blade server environment, this is not meant to be limiting. Each blade may be considered to be a separate platform, such as a rack-mounted server or a stand-alone server, wherein resource sharing across a plurality of platforms may be effectuated via an OOB channel in a manner similar to that discussed above. For example, in a rack-mounted server configuration, cabling and/or routing may be provided to support an OOB channel.
  • A particular implementation of the invention that is well-suited to rack-mounted servers and the like concerns sharing keyboard, video, and mouse I/O, commonly known as KVM. In a typical rack server, a KVM switch is employed to enable a single keyboard, video display, and mouse to be shared by all servers in the rack. The KVM switch routes KVM signals from individual servers (via respective cables) to single keyboard, video, and mouse I/O ports, whereby the KVM signals for a selected server may be accessed by turning a selection knob or otherwise selecting the input signal source. For high-density servers, the KVM switch may cost $1500 or more, in addition to costs for cabling and installation. KVM cabling also reduces ventilation and accessibility.
  • The foregoing problems are overcome by a shared KVM embodiment illustrated in FIGS. 11-13. In FIG. 11, each of a plurality of rack-mounted servers 1100 is connected to the other servers via a switch 1102 and corresponding Ethernet cabling (depicted as a network cloud 1104). Each server 1100 includes a mainboard 1106 having a plurality of components mounted thereon or coupled thereto, including a processor 1108, memory 1110, a firmware storage device 1112, and a NIC 1114. A plurality of I/O ports are also coupled to the mainboard, including mouse and keyboard ports 1116 and 1118 and a video port 1120. Typically, each server will also include a plurality of disk drives 1122.
  • In accordance with the NIC-based back channel OOB scheme discussed above, a second MAC address assigned to the NIC 1114 for each server 1100 is employed to support an OOB channel 1124. A keyboard 1126, video display 1128, and a mouse 1130 are coupled via respective cables to respective I/O ports 1118, 1120, and 1116 disposed on the back of a server 1100A. Firmware on each of servers 1100 provides support for hosting a local global resource map 1132 that routes KVM signals to keyboard 1126, video display 1128, and mouse 1130 via server 1100A.
  • A protocol stack exemplifying how video signals (the most complicated of the KVM signals) are handled in accordance with one embodiment is shown in FIG. 12. In the example, video data used to produce corresponding video signals are rerouted from a server 1100N to server 1100A. The software side of the protocol stack on server 1100N includes an operating system video driver 1200N, while the firmware components include a video router driver 1202N, a video device driver 1204N and an OOB communications handler 604N. The data flow is similar to that described above with reference to FIGS. 7 and 8 a, and proceeds as follows.
  • The operating system running on a server 1100N receives a request to update the video display, typically in response to a user input to a runtime application. The operating system employs its OS video driver 1200N to effectuate the change. Generally, the OS video driver will generate video data based on a virtual video display maintained by the operating system, wherein a virtual-to-physical display mapping is performed. For example, the same text/graphic content displayed on monitors having different resolutions requires different video data particular to the resolutions. The OS video driver then interfaces with video router driver 1202N to pass the video data on to what it thinks is the destination device, server 1100N's video chip 1206N. As far as the operating system is concerned, video router driver 1202N is the firmware video device driver for the server, i.e., video device driver 1204N. However, upon receiving the video data, video router driver 1202N looks up the video data destination server via a lookup of global resource map 1134N and asserts an SMI to initiate an OOB communication with server 1100A via respective OOB communication handlers 604N and 604A.
  • Upon receipt at server 1100A, the video data are written to a video chip 1206A via video device driver 1204A. In a manner similar to that described above, this passing of video data may be directly from OOB communications handler 604A to video device driver 1204A, or it may be routed through video router driver 1202A. In response to receiving the video data, video chip 1206A updates its video output signal, which is received by video monitor 1128 via video port 1120. As an option, a verification lookup of a global resource map 1134A may be performed to verify that server 1100A is the correct video data destination server.
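  • The routing decision made by the video router driver can be summarized by the following C sketch; the lookup and transport hooks are assumed placeholders, and the sketch simply shows that video data destined for a remote console is diverted onto the OOB channel instead of being written to the local video chip.

```c
/* Sketch of the video rerouting step of FIG. 12 (hypothetical names): look
 * up the destination server for the video data and either write the local
 * video chip or forward the data over the OOB channel. */
#include <stdint.h>
#include <stddef.h>

/* Placeholder hooks; the real versions are firmware/SMM code. */
uint32_t video_destination_lookup(void);                   /* map lookup */
void     video_chip_write(const void *data, size_t len);   /* local chip */
void     oob_send_video(uint32_t server, const void *data, size_t len);

void video_router_write(uint32_t local_server, const void *data, size_t len)
{
    uint32_t dest = video_destination_lookup();
    if (dest == local_server)
        video_chip_write(data, len);      /* this server drives the display */
    else
        oob_send_video(dest, data, len);  /* reroute, e.g., 1100N to 1100A  */
}
```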
  • Keyboard and mouse signals are handled in a similar manner. As with video, operating systems typically maintain a virtual pointer map from which a virtual location of a pointing device can be cross-referenced to the virtual video display, thereby enabling the location of the cursor relative to the video display to be determined. Generally, mouse information will traverse the reverse route of the video signals; that is, mouse input received via server 1100A will be passed via the OOB channel to a selected platform (e.g., server 1100N). This will require updating the global resource map 1134A on server 1100A to reflect the proper destination platform. Routing keyboard signals will also require a similar map update. A difference with keyboard signals is that they are bi-directional, so both input and output data rerouting is required.
  • An exemplary keyboard input signal processing protocol stack and flow diagram is shown in FIG. 13. The software side of the protocol stack on server 1100N includes an operating system keyboard driver 1300N, while the firmware components include a keyboard router driver 1302N, a keyboard device driver 1304N, and an OOB communications handler 604N. Similar components comprise the protocol stack of server 1100A.
  • In response to a user input via keyboard 1126, a keyboard input signal is generated that is received by a keyboard chip 1306A via keyboard port 1118A. Keyboard chip 1306A then produces corresponding keyboard (KB) data that is received by keyboard device driver 1304A. At this point, the handling of the keyboard input is identical to that implemented on a single platform that does not employ resource sharing (e.g., a desktop computer). Normally, keyboard device driver 1304A would interface with OS keyboard driver 1300A to pass the keyboard data to the operating system. However, the OS keyboard driver that is targeted to receive the keyboard data is running on server 1100N. Accordingly, keyboard data handled by keyboard device driver 1304A are passed to keyboard router driver 1302A to facilitate rerouting the keyboard data.
  • In response to receiving the keyboard data, keyboard router driver 1302A queries global resource map 1134A to determine the target server to which the keyboard data is to be rerouted (server 1100N in this example). The keyboard router driver then asserts an SMI to kick the processor running on server 1100A into SMM and passes the keyboard data along with server target identification data to OOB communications handler 604A. OOB communications handler 604A then interacts with OOB communication handler 604N to facilitate OOB communications between the two servers via OOB channel 1124, leading to the keyboard data being received by OOB communications handler 604N. In response to receiving the keyboard data, OOB communications handler 604N forwards the keyboard data to keyboard router driver 1302N. At this point, the keyboard router driver may either directly pass the keyboard data to OS keyboard driver 1300N, or perform a routing verification lookup of global resource map 1134N to ensure that server 1100N is the proper server to receive the keyboard data prior to passing the data to OS keyboard driver 1300N. The OS keyboard driver then processes the keyboard data and provides the processed data to a runtime application having the current focus.
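  • A companion sketch for the keyboard path is given below; as with the video example, the lookup and transport functions are assumed placeholders, and the point is only that keyboard data received at the server with the physical keyboard is rerouted over the OOB channel to whichever server currently owns the console.

```c
/* Sketch of the keyboard rerouting step of FIG. 13 (hypothetical names):
 * keyboard data arriving at the console server is looked up in the global
 * resource map and rerouted to the target server's OS keyboard driver. */
#include <stdint.h>
#include <stddef.h>

/* Placeholder hooks; the real versions are firmware/SMM code. */
uint32_t keyboard_target_lookup(void);                      /* map lookup */
void     oob_send_keyboard(uint32_t server, const void *kb, size_t len);
void     os_keyboard_driver_deliver(const void *kb, size_t len);

void keyboard_router_input(uint32_t local_server, const void *kb, size_t len)
{
    uint32_t target = keyboard_target_lookup();
    if (target == local_server)
        os_keyboard_driver_deliver(kb, len);  /* console owned locally       */
    else
        oob_send_keyboard(target, kb, len);   /* e.g., server 1100A to 1100N */
}
```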
  • As discussed above, resource sharing is effectuated, at least in part, through firmware stored on each blade or platform. The firmware, which may typically comprise instructions and data for implementing the various operations described herein, will generally be stored on a non-volatile memory device, such as but not limited to a flash device, a ROM, or an EEPROM. The instructions are machine readable, either directly by a real machine (i.e., machine code) or via interpretation by a virtual machine (e.g., interpreted byte-code). Thus, embodiments of the invention may be used as or to support firmware executed upon some form of processing core (such as the CPU of a computer) or otherwise implemented or realized upon or within a machine-readable medium. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a processor). For example, a machine-readable medium can include media such as a read-only memory (ROM), a random access memory (RAM), magnetic disk storage media, optical storage media, a flash memory device, etc. In addition, a machine-readable medium can include propagated signals such as electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
  • The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
  • These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Claims (20)

1. A method for sharing resources across a plurality of computing platforms, comprising:
receiving a resource access request to access a shared resource at a first computing platform;
determining a second computing platform via which the shared resource may be accessed;
sending the resource access request to the second computing platform; and
accessing the shared resource via the second computing platform.
2. The method of claim 1, wherein the plurality of computing platforms comprise a plurality of server blades operating in a blade server environment.
3. The method of claim 1, wherein the method is performed in a manner that is transparent to operating systems running on the plurality of computing platforms.
4. The method of claim 1, wherein the method is facilitated by firmware running on each of the plurality of computing platforms.
5. The method of claim 1, wherein the resource access request is sent to the second computing platform via an out-of-band (OOB) communication channel.
6. The method of claim 5, wherein the OOB communication channel comprises one of a system management bus, an Ethernet-based network, or a serial communication link.
7. The method of claim 5, wherein the target resource comprises a storage device.
8. The method of claim 7, wherein the resource access request comprises a storage device write request, and the method further comprises sending data corresponding to the storage device write request via the OOB communication channel.
9. The method of claim 7, wherein the resource access request comprises a storage device read request, and the method further comprises:
retrieving data corresponding to the read request from the shared resource; and
sending the data that are retrieved back to the first computing platform via the OOB communication channel.
10. The method of claim 1, further comprising:
maintaining global resource mapping data identifying which resources are accessible via which computing platforms; and
employing the global resource mapping data to determine which computing platform to use to access the shared resource.
11. The method of claim 10, wherein a local copy of the global resource mapping data is maintained on each of the plurality of computing platforms.
12. The method of claim 10, wherein the global resource mapping data is maintained by a central global resource manager.
13. A method for sharing a plurality of storage devices across a plurality of computing platforms, comprising:
configuring the plurality of storage devices as a virtual storage volume;
maintaining a global resource map that maps input/output (I/O) blocks defined for the virtual storage volume to corresponding storage devices that actually host the I/O blocks;
receiving a data access request identifying an I/O block from which data are to be accessed via the virtual storage volume;
identifying a computing platform via which a target storage device that actually hosts the I/O block may be accessed through use of the global resource map;
routing the data access request to the computing platform that is identified; and
accessing the I/O block on the target storage device via the computing platform that is identified.
14. The method of claim 13, further comprising:
configuring the plurality of storage devices as at least one RAID (redundant array of independent disks) storage volume;
maintaining RAID configuration mapping information that maps input/output (I/O) blocks defined for said at least one RAID virtual storage volume to corresponding storage devices that actually host the I/O blocks; and
employing the RAID configuration mapping information to access appropriate storage devices in response to read and write access requests.
15. The method of claim 14, wherein the RAID virtual storage volume is configured in accordance with the RAID-1 standard.
16-26. (Canceled).
27. A blade server system, comprising:
a chassis, including a plurality of slots in which respective server blades may be inserted;
an interface plane having a plurality of connectors for mating with respective connectors on inserted server blades and providing communication paths between the plurality of connectors to facilitate an out of band (OOB) communication channel; and
a plurality of server blades, each including a processor and firmware executable thereon to perform operations including:
receiving a resource access request from an operating system running on a requesting server blade to access a shared resource hosted by at least one of the plurality of server blades;
determining a target resource host from among the plurality of server blades that hosts a target resource that may service the resource access request;
sending the resource access request to the target resource host; and
accessing the target resource via the target resource host to service the resource access request.
28. The blade server system of claim 27, wherein the operations are performed in a manner that is transparent to operating systems that may be run on the plurality of server blades.
29. The blade server system of claim 27, wherein communications between the plurality of server blades are facilitated by an out-of-band (OOB) communication channel.
30. The blade server system of claim 29, wherein each processor supports a hidden execution mode that is employed for facilitating communication via the OOB channel.
US10/606,636 2003-06-25 2003-06-25 OS agnostic resource sharing across multiple computing platforms Abandoned US20050015430A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US10/606,636 US20050015430A1 (en) 2003-06-25 2003-06-25 OS agnostic resource sharing across multiple computing platforms
US10/808,656 US7730205B2 (en) 2003-06-25 2004-03-24 OS agnostic resource sharing across multiple computing platforms
PCT/US2004/018253 WO2005006186A2 (en) 2003-06-25 2004-06-09 Os agnostic resource sharing across multiple computing platforms
CN2004800180348A CN101142553B (en) 2003-06-25 2004-06-09 OS agnostic resource sharing across multiple computing platforms
EP04754766.6A EP1636696B1 (en) 2003-06-25 2004-06-09 Os agnostic resource sharing across multiple computing platforms
JP2006509095A JP4242420B2 (en) 2003-06-25 2004-06-09 Resource sharing independent of OS on many computing platforms

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/606,636 US20050015430A1 (en) 2003-06-25 2003-06-25 OS agnostic resource sharing across multiple computing platforms

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/808,656 Continuation-In-Part US7730205B2 (en) 2003-06-25 2004-03-24 OS agnostic resource sharing across multiple computing platforms

Publications (1)

Publication Number Publication Date
US20050015430A1 true US20050015430A1 (en) 2005-01-20

Family

ID=34062276

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/606,636 Abandoned US20050015430A1 (en) 2003-06-25 2003-06-25 OS agnostic resource sharing across multiple computing platforms
US10/808,656 Expired - Fee Related US7730205B2 (en) 2003-06-25 2004-03-24 OS agnostic resource sharing across multiple computing platforms

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/808,656 Expired - Fee Related US7730205B2 (en) 2003-06-25 2004-03-24 OS agnostic resource sharing across multiple computing platforms

Country Status (5)

Country Link
US (2) US20050015430A1 (en)
EP (1) EP1636696B1 (en)
JP (1) JP4242420B2 (en)
CN (1) CN101142553B (en)
WO (1) WO2005006186A2 (en)

Cited By (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040268368A1 (en) * 2003-06-27 2004-12-30 Mark Doran Methods and apparatus to protect a protocol interface
US20050256942A1 (en) * 2004-03-24 2005-11-17 Mccardle William M Cluster management system and method
US20060053215A1 (en) * 2004-09-07 2006-03-09 Metamachinix, Inc. Systems and methods for providing users with access to computer resources
US20060112474A1 (en) * 2003-05-02 2006-06-01 Landis Timothy J Lightweight ventilated face shield frame
US20060149860A1 (en) * 2004-12-30 2006-07-06 Nimrod Diamant Virtual IDE interface and protocol for use in IDE redirection communication
US20060168099A1 (en) * 2004-12-30 2006-07-27 Nimrod Diamant Virtual serial port and protocol for use in serial-over-LAN communication
US20070014303A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Content router
US20070016636A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Methods and systems for data transfer and notification mechanisms
US20070014277A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Content router repository
US20070014307A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Content router forwarding
US20070014300A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Content router notification
US20070028293A1 (en) * 2005-07-14 2007-02-01 Yahoo! Inc. Content router asynchronous exchange
US20070038703A1 (en) * 2005-07-14 2007-02-15 Yahoo! Inc. Content router gateway
US20070050765A1 (en) * 2005-08-30 2007-03-01 Geisinger Nile J Programming language abstractions for creating and controlling virtual computers, operating systems and networks
US20070050770A1 (en) * 2005-08-30 2007-03-01 Geisinger Nile J Method and apparatus for uniformly integrating operating system resources
US20070058657A1 (en) * 2005-08-22 2007-03-15 Graham Holt System for consolidating and securing access to all out-of-band interfaces in computer, telecommunication, and networking equipment, regardless of the interface type
US20070067769A1 (en) * 2005-08-30 2007-03-22 Geisinger Nile J Method and apparatus for providing cross-platform hardware support for computer platforms
US20070074192A1 (en) * 2005-08-30 2007-03-29 Geisinger Nile J Computing platform having transparent access to resources of a host platform
US20070074191A1 (en) * 2005-08-30 2007-03-29 Geisinger Nile J Software executables having virtual hardware, operating systems, and networks
US20070101022A1 (en) * 2005-10-28 2007-05-03 Yahoo! Inc. Sharing data in scalable software blade architecture
US20070100975A1 (en) * 2005-10-28 2007-05-03 Yahoo! Inc. Scalable software blade architecture
US20070101021A1 (en) * 2005-10-28 2007-05-03 Yahoo! Inc. Recovering a blade in scalable software blade architecture
US20070109592A1 (en) * 2005-11-15 2007-05-17 Parvathaneni Bhaskar A Data gateway
US20070116110A1 (en) * 2005-11-22 2007-05-24 Nimrod Diamant Optimized video compression using hashing function
US20070156434A1 (en) * 2006-01-04 2007-07-05 Martin Joseph J Synchronizing image data among applications and devices
US20070153009A1 (en) * 2005-12-29 2007-07-05 Inventec Corporation Display chip sharing method
US20070283138A1 (en) * 2006-05-31 2007-12-06 Andy Miga Method and apparatus for EFI BIOS time-slicing at OS runtime
US20080034008A1 (en) * 2006-08-03 2008-02-07 Yahoo! Inc. User side database
US20080270629A1 (en) * 2007-04-27 2008-10-30 Yahoo! Inc. Data snychronization and device handling using sequence numbers
US20080294800A1 (en) * 2007-05-21 2008-11-27 Nimrod Diamant Communicating graphics data via an out of band channel
US20090055599A1 (en) * 2007-08-13 2009-02-26 Linda Van Patten Benhase Consistent data storage subsystem configuration replication
US20090125901A1 (en) * 2007-11-13 2009-05-14 Swanson Robert C Providing virtualization of a server management controller
US20090271498A1 (en) * 2008-02-08 2009-10-29 Bea Systems, Inc. System and method for layered application server processing
WO2010008707A1 (en) * 2008-07-17 2010-01-21 Lsi Corporation Systems and methods for installing a bootable virtual storage appliance on a virtualized server platform
US7873846B2 (en) 2007-07-31 2011-01-18 Intel Corporation Enabling a heterogeneous blade environment
US20110124342A1 (en) * 2008-05-21 2011-05-26 Oliver Speks Blade Cluster Switching Center Server and Method for Signaling
US20110134749A1 (en) * 2008-05-21 2011-06-09 Oliver Speks Resource Pooling in a Blade Cluster Switching Center Server
US20110154065A1 (en) * 2009-12-22 2011-06-23 Rothman Michael A Operating system independent network event handling
US20110153798A1 (en) * 2009-12-22 2011-06-23 Groenendaal Johan Van De Method and apparatus for providing a remotely managed expandable computer system
US8024290B2 (en) 2005-11-14 2011-09-20 Yahoo! Inc. Data synchronization and device handling
US8078637B1 (en) * 2006-07-28 2011-12-13 Amencan Megatrends, Inc. Memory efficient peim-to-peim interface database
WO2011159892A1 (en) * 2010-06-16 2011-12-22 American Megatrends, Inc. Multiple platform support in computer system firmware
US20120110155A1 (en) * 2010-11-02 2012-05-03 International Business Machines Corporation Management of a data network of a computing environment
US20120158923A1 (en) * 2009-05-29 2012-06-21 Ansari Mohamed System and method for allocating resources of a server to a virtual machine
US20120180076A1 (en) * 2011-01-10 2012-07-12 Dell Products, Lp System and Method to Abstract Hardware Routing via a Correlatable Identifier
US8386618B2 (en) 2010-09-24 2013-02-26 Intel Corporation System and method for facilitating wireless communication during a pre-boot phase of a computing device
US8959220B2 (en) 2010-11-02 2015-02-17 International Business Machines Corporation Managing a workload of a plurality of virtual servers of a computing environment
US20150052214A1 (en) * 2011-12-28 2015-02-19 Beijing Qihoo Technology Company Limited Distributed system and data operation method thereof
US8966020B2 (en) 2010-11-02 2015-02-24 International Business Machines Corporation Integration of heterogeneous computing systems into a hybrid computing system
US8984109B2 (en) 2010-11-02 2015-03-17 International Business Machines Corporation Ensemble having one or more computing systems and a controller thereof
US9081613B2 (en) 2010-11-02 2015-07-14 International Business Machines Corporation Unified resource manager providing a single point of control
US20150301764A1 (en) * 2013-07-02 2015-10-22 Huawei Technologies Co., Ltd. Hard Disk and Methods for Forwarding and Acquiring Data by Hard Disk
CN105117309A (en) * 2008-05-21 2015-12-02 艾利森电话股份有限公司 Resource pooling in blade cluster switching center server
US9465660B2 (en) 2011-04-11 2016-10-11 Hewlett Packard Enterprise Development Lp Performing a task in a system having different types of hardware resources
US9558092B2 (en) 2011-12-12 2017-01-31 Microsoft Technology Licensing, Llc Runtime-agnostic management of applications
US9747116B2 (en) 2013-03-28 2017-08-29 Hewlett Packard Enterprise Development Lp Identifying memory of a blade device for use by an operating system of a partition including the blade device
US9781015B2 (en) 2013-03-28 2017-10-03 Hewlett Packard Enterprise Development Lp Making memory of compute and expansion devices available for use by an operating system
US9798568B2 (en) 2014-12-29 2017-10-24 Samsung Electronics Co., Ltd. Method for sharing resource using a virtual device driver and electronic device thereof
US20170331759A1 (en) * 2016-05-16 2017-11-16 International Business Machines Corporation Sla-based agile resource provisioning in disaggregated computing systems
US10289467B2 (en) 2013-03-28 2019-05-14 Hewlett Packard Enterprise Development Lp Error coordination message for a blade device having a logical processor in another system firmware domain
US10353744B2 (en) 2013-12-02 2019-07-16 Hewlett Packard Enterprise Development Lp System wide manageability
US10579421B2 (en) 2016-08-29 2020-03-03 TidalScale, Inc. Dynamic scheduling of virtual processors in a distributed system
US10609130B2 (en) 2017-04-28 2020-03-31 Microsoft Technology Licensing, Llc Cluster resource management in distributed computing systems
US10623479B2 (en) 2012-08-23 2020-04-14 TidalScale, Inc. Selective migration of resources or remapping of virtual processors to provide access to resources
US10966339B1 (en) * 2011-06-28 2021-03-30 Amazon Technologies, Inc. Storage system with removable solid state storage devices mounted on carrier circuit boards
US10992593B2 (en) 2017-10-06 2021-04-27 Bank Of America Corporation Persistent integration platform for multi-channel resource transfers
RU209333U1 (en) * 2021-09-27 2022-03-15 Российская Федерация, от имени которой выступает Государственная корпорация по атомной энергии "Росатом" (Госкорпорация "Росатом") HIGH DENSITY COMPUTING NODE
US11552803B1 (en) * 2018-09-19 2023-01-10 Amazon Technologies, Inc. Systems for provisioning devices
US11803306B2 (en) 2017-06-27 2023-10-31 Hewlett Packard Enterprise Development Lp Handling frequently accessed pages
US11907768B2 (en) 2017-08-31 2024-02-20 Hewlett Packard Enterprise Development Lp Entanglement of pages and guest threads

Families Citing this family (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7356638B2 (en) 2005-10-12 2008-04-08 International Business Machines Corporation Using out-of-band signaling to provide communication between storage controllers in a computer storage system
US8527542B2 (en) * 2005-12-30 2013-09-03 Sap Ag Generating contextual support requests
US7930681B2 (en) * 2005-12-30 2011-04-19 Sap Ag Service and application management in information technology systems
US7979733B2 (en) 2005-12-30 2011-07-12 Sap Ag Health check monitoring process
JP5082252B2 (en) * 2006-02-09 2012-11-28 株式会社日立製作所 Server information collection method
US7610481B2 (en) 2006-04-19 2009-10-27 Intel Corporation Method and apparatus to support independent systems in partitions of a processing system
JP2007293518A (en) * 2006-04-24 2007-11-08 Hitachi Ltd Computer system configuration method, computer, and system configuration program
US7685476B2 (en) * 2006-09-12 2010-03-23 International Business Machines Corporation Early notification of error via software interrupt and shared memory write
JP2008129869A (en) * 2006-11-21 2008-06-05 Nec Computertechno Ltd Server monitoring operation system
US7853669B2 (en) * 2007-05-04 2010-12-14 Microsoft Corporation Mesh-managing data across a distributed set of devices
US8386614B2 (en) * 2007-05-25 2013-02-26 Microsoft Corporation Network connection manager
US7932479B2 (en) * 2007-05-31 2011-04-26 Abbott Cardiovascular Systems Inc. Method for laser cutting tubing using inert gas and a disposable mask
WO2009067063A1 (en) * 2007-11-22 2009-05-28 Telefonaktiebolaget L M Ericsson (Publ) Method and device for agile computing
TW200931259A (en) * 2008-01-10 2009-07-16 June On Co Ltd Computer adaptor device capable of automatically updating the device-mapping
US8572033B2 (en) 2008-03-20 2013-10-29 Microsoft Corporation Computing environment configuration
US9298747B2 (en) * 2008-03-20 2016-03-29 Microsoft Technology Licensing, Llc Deployable, consistent, and extensible computing environment platform
US8484174B2 (en) * 2008-03-20 2013-07-09 Microsoft Corporation Computing environment representation
US9753712B2 (en) 2008-03-20 2017-09-05 Microsoft Technology Licensing, Llc Application management within deployable object hierarchy
US20090248737A1 (en) * 2008-03-27 2009-10-01 Microsoft Corporation Computing environment representation
US7886021B2 (en) * 2008-04-28 2011-02-08 Oracle America, Inc. System and method for programmatic management of distributed computing resources
US8352371B2 (en) * 2008-04-30 2013-01-08 General Instrument Corporation Limiting access to shared media content
US8555048B2 (en) * 2008-05-17 2013-10-08 Hewlett-Packard Development Company, L.P. Computer system for booting a system image by associating incomplete identifiers to complete identifiers via querying storage locations according to priority level where the querying is self adjusting
CN102067101B (en) * 2008-06-20 2013-07-24 惠普开发有限公司 Low level initializer
US8041794B2 (en) 2008-09-29 2011-10-18 Intel Corporation Platform discovery, asset inventory, configuration, and provisioning in a pre-boot environment using web services
US7904630B2 (en) * 2008-10-15 2011-03-08 Seagate Technology Llc Bus-connected device with platform-neutral layers
CN101783736B (en) * 2009-01-15 2016-09-07 华为终端有限公司 A kind of terminal accepts the method for multiserver administration, device and communication system
CN101594235B (en) * 2009-06-02 2011-07-20 浪潮电子信息产业股份有限公司 Method for managing blade server based on SMBUS
US8271704B2 (en) 2009-06-16 2012-09-18 International Business Machines Corporation Status information saving among multiple computers
US8402186B2 (en) * 2009-06-30 2013-03-19 Intel Corporation Bi-directional handshake for advanced reliabilty availability and serviceability
US7970954B2 (en) * 2009-08-04 2011-06-28 Dell Products, Lp System and method of providing a user-friendly device path
US10185594B2 (en) * 2009-10-29 2019-01-22 International Business Machines Corporation System and method for resource identification
US8667191B2 (en) * 2010-01-15 2014-03-04 Kingston Technology Corporation Managing and indentifying multiple memory storage devices
JP5636703B2 (en) * 2010-03-11 2014-12-10 沖電気工業株式会社 Blade server
US20110288932A1 (en) * 2010-05-21 2011-11-24 Inedible Software, LLC, a Wyoming Limited Liability Company Apparatuses, systems and methods for determining installed software applications on a computing device
US8281043B2 (en) * 2010-07-14 2012-10-02 Intel Corporation Out-of-band access to storage devices through port-sharing hardware
US20120020349A1 (en) * 2010-07-21 2012-01-26 GraphStream Incorporated Architecture for a robust computing system
US8410364B2 (en) 2010-07-21 2013-04-02 Birchbridge Incorporated Universal rack cable management system
US8259450B2 (en) 2010-07-21 2012-09-04 Birchbridge Incorporated Mobile universal hardware platform
US8411440B2 (en) 2010-07-21 2013-04-02 Birchbridge Incorporated Cooled universal hardware platform
US8441793B2 (en) 2010-07-21 2013-05-14 Birchbridge Incorporated Universal rack backplane system
US8441792B2 (en) 2010-07-21 2013-05-14 Birchbridge Incorporated Universal conduction cooling platform
US9195509B2 (en) 2011-01-05 2015-11-24 International Business Machines Corporation Identifying optimal platforms for workload placement in a networked computing environment
US8868749B2 (en) 2011-01-18 2014-10-21 International Business Machines Corporation Workload placement on an optimal platform in a networked computing environment
US9858241B2 (en) 2013-11-05 2018-01-02 Oracle International Corporation System and method for supporting optimized buffer utilization for packet processing in a networking device
US8634415B2 (en) 2011-02-16 2014-01-21 Oracle International Corporation Method and system for routing network traffic for a blade server
DE102011078630A1 (en) * 2011-07-05 2013-01-10 Robert Bosch Gmbh Method for setting up a system of technical units
CN102955509B (en) * 2011-08-31 2017-07-21 赛恩倍吉科技顾问(深圳)有限公司 Hard disk backboard and hard disk storage system
JP5966466B2 (en) 2012-03-14 2016-08-10 富士通株式会社 Backup control method and information processing apparatus
CN103379104B (en) * 2012-04-23 2017-03-01 联想(北京)有限公司 A kind of teledata sharing method and device
US9292108B2 (en) 2012-06-28 2016-03-22 Dell Products Lp Systems and methods for remote mouse pointer management
US9712373B1 (en) 2012-07-30 2017-07-18 Rambus Inc. System and method for memory access in server communications
CN104937567B (en) * 2013-01-31 2019-05-03 慧与发展有限责任合伙企业 For sharing the mapping mechanism of address space greatly
US9203772B2 (en) 2013-04-03 2015-12-01 Hewlett-Packard Development Company, L.P. Managing multiple cartridges that are electrically coupled together
US9489327B2 (en) 2013-11-05 2016-11-08 Oracle International Corporation System and method for supporting an efficient packet processing model in a network environment
US9195429B2 (en) * 2014-03-10 2015-11-24 Gazoo, Inc. Multi-user display system and method
CN105808550B (en) * 2014-12-30 2019-02-15 迈普通信技术股份有限公司 A kind of method and device accessing file
US11360673B2 (en) 2016-02-29 2022-06-14 Red Hat, Inc. Removable data volume management
JP6705266B2 (en) * 2016-04-07 2020-06-03 オムロン株式会社 Control device, control method and program
US10034407B2 (en) * 2016-07-22 2018-07-24 Intel Corporation Storage sled for a data center
US11229135B2 (en) 2019-04-01 2022-01-18 Dell Products L.P. Multiple function chassis mid-channel
CN111402083B (en) * 2020-02-21 2021-09-21 浙江口碑网络技术有限公司 Resource information processing method and device, storage medium and terminal

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5696895A (en) * 1995-05-19 1997-12-09 Compaq Computer Corporation Fault tolerant multiple network servers
US5884096A (en) * 1995-08-25 1999-03-16 Apex Pc Solutions, Inc. Interconnection system for viewing and controlling remotely connected computers with on-screen video overlay for controlling of the interconnection switch
US20020059170A1 (en) * 2000-04-17 2002-05-16 Mark Vange Load balancing between multiple web servers
US20020124134A1 (en) * 2000-12-28 2002-09-05 Emc Corporation Data storage system cluster architecture
US20020124114A1 (en) * 2001-03-05 2002-09-05 Bottom David A. Modular server architecture with ethernet routed across a backplane utilizing an integrated ethernet switch module
US20030033397A1 (en) * 2001-08-09 2003-02-13 Nagasubramanian Gurumoorthy Remote diagnostics system
US20030074431A1 (en) * 2001-10-17 2003-04-17 International Business Machines Corporation Automatically switching shared remote devices in a dense server environment thereby allowing the remote devices to function as a local device
US20030088655A1 (en) * 2001-11-02 2003-05-08 Leigh Kevin B. Remote management system for multiple servers
US20030191908A1 (en) * 2002-04-04 2003-10-09 International Business Machines Corporation Dense server environment that shares an IDE drive
US20030200345A1 (en) * 2002-04-17 2003-10-23 Dell Products L.P. System and method for using a shared bus for video communications
US20030226031A1 (en) * 2001-11-22 2003-12-04 Proudler Graeme John Apparatus and method for creating a trusted environment
US20040128562A1 (en) * 2002-12-31 2004-07-01 International Business Machines Corporation Non-disruptive power management indication method, system and apparatus for server
US20040181601A1 (en) * 2003-03-14 2004-09-16 Palsamy Sakthikumar Peripheral device sharing
US20040260936A1 (en) * 2003-06-18 2004-12-23 Hiray Sandip M. Provisioning for a modular server
US6901534B2 (en) * 2002-01-15 2005-05-31 Intel Corporation Configuration proxy service for the extended firmware interface environment
US6968414B2 (en) * 2001-12-04 2005-11-22 International Business Machines Corporation Monitoring insertion/removal of server blades in a data processing system
US7073059B2 (en) * 2001-06-08 2006-07-04 Hewlett-Packard Development Company, L.P. Secure machine platform that interfaces to operating systems and customized control programs
US7114180B1 (en) * 2002-07-16 2006-09-26 F5 Networks, Inc. Method and system for authenticating and authorizing requestors interacting with content servers
US7343441B1 (en) * 1999-12-08 2008-03-11 Microsoft Corporation Method and apparatus of remote computer management
US7374974B1 (en) * 2001-03-22 2008-05-20 T-Ram Semiconductor, Inc. Thyristor-based device with trench dielectric material

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6671756B1 (en) 1999-05-06 2003-12-30 Avocent Corporation KVM switch having a uniprocessor that accomodate multiple users and multiple computers
AU2001265090A1 (en) 2000-06-13 2001-12-24 Intel Corporation Providing client accessible network-based storage
US6889340B1 (en) * 2000-10-13 2005-05-03 Phoenix Technologies Ltd. Use of extra firmware flash ROM space as a diagnostic drive
US7424551B2 (en) * 2001-03-29 2008-09-09 Avocent Corporation Passive video multiplexing method and apparatus priority to prior provisional application

Patent Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5696895A (en) * 1995-05-19 1997-12-09 Compaq Computer Corporation Fault tolerant multiple network servers
US5884096A (en) * 1995-08-25 1999-03-16 Apex Pc Solutions, Inc. Interconnection system for viewing and controlling remotely connected computers with on-screen video overlay for controlling of the interconnection switch
US7343441B1 (en) * 1999-12-08 2008-03-11 Microsoft Corporation Method and apparatus of remote computer management
US20020059170A1 (en) * 2000-04-17 2002-05-16 Mark Vange Load balancing between multiple web servers
US20020124134A1 (en) * 2000-12-28 2002-09-05 Emc Corporation Data storage system cluster architecture
US6477618B2 (en) * 2000-12-28 2002-11-05 Emc Corporation Data storage system cluster architecture
US20020124114A1 (en) * 2001-03-05 2002-09-05 Bottom David A. Modular server architecture with ethernet routed across a backplane utilizing an integrated ethernet switch module
US7339786B2 (en) * 2001-03-05 2008-03-04 Intel Corporation Modular server architecture with Ethernet routed across a backplane utilizing an integrated Ethernet switch module
US7374974B1 (en) * 2001-03-22 2008-05-20 T-Ram Semiconductor, Inc. Thyristor-based device with trench dielectric material
US7073059B2 (en) * 2001-06-08 2006-07-04 Hewlett-Packard Development Company, L.P. Secure machine platform that interfaces to operating systems and customized control programs
US20030033397A1 (en) * 2001-08-09 2003-02-13 Nagasubramanian Gurumoorthy Remote diagnostics system
US7225245B2 (en) * 2001-08-09 2007-05-29 Intel Corporation Remote diagnostics system
US7269630B2 (en) * 2001-10-17 2007-09-11 International Business Machines Corporation Automatically switching shared remote devices in a dense server environment thereby allowing the remote devices to function as a local device
US20030074431A1 (en) * 2001-10-17 2003-04-17 International Business Machines Corporation Automatically switching shared remote devices in a dense server environment thereby allowing the remote devices to function as a local device
US7003563B2 (en) * 2001-11-02 2006-02-21 Hewlett-Packard Development Company, L.P. Remote management system for multiple servers
US20030088655A1 (en) * 2001-11-02 2003-05-08 Leigh Kevin B. Remote management system for multiple servers
US20030226031A1 (en) * 2001-11-22 2003-12-04 Proudler Graeme John Apparatus and method for creating a trusted environment
US6968414B2 (en) * 2001-12-04 2005-11-22 International Business Machines Corporation Monitoring insertion/removal of server blades in a data processing system
US6901534B2 (en) * 2002-01-15 2005-05-31 Intel Corporation Configuration proxy service for the extended firmware interface environment
US6848034B2 (en) * 2002-04-04 2005-01-25 International Business Machines Corporation Dense server environment that shares an IDE drive
US20030191908A1 (en) * 2002-04-04 2003-10-09 International Business Machines Corporation Dense server environment that shares an IDE drive
US7398293B2 (en) * 2002-04-17 2008-07-08 Dell Products L.P. System and method for using a shared bus for video communications
US20030200345A1 (en) * 2002-04-17 2003-10-23 Dell Products L.P. System and method for using a shared bus for video communications
US7114180B1 (en) * 2002-07-16 2006-09-26 F5 Networks, Inc. Method and system for authenticating and authorizing requestors interacting with content servers
US7191347B2 (en) * 2002-12-31 2007-03-13 International Business Machines Corporation Non-disruptive power management indication method, system and apparatus for server
US20040128562A1 (en) * 2002-12-31 2004-07-01 International Business Machines Corporation Non-disruptive power management indication method, system and apparatus for server
US20040181601A1 (en) * 2003-03-14 2004-09-16 Palsamy Sakthikumar Peripheral device sharing
US20040260936A1 (en) * 2003-06-18 2004-12-23 Hiray Sandip M. Provisioning for a modular server
US7440998B2 (en) * 2003-06-18 2008-10-21 Intel Corporation Provisioning for a modular server

Cited By (122)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060112474A1 (en) * 2003-05-02 2006-06-01 Landis Timothy J Lightweight ventilated face shield frame
US7434231B2 (en) * 2003-06-27 2008-10-07 Intel Corporation Methods and apparatus to protect a protocol interface
US20040268368A1 (en) * 2003-06-27 2004-12-30 Mark Doran Methods and apparatus to protect a protocol interface
US20080288873A1 (en) * 2004-03-24 2008-11-20 Mccardle William Michael Cluster Management System and Method
US20050256942A1 (en) * 2004-03-24 2005-11-17 Mccardle William M Cluster management system and method
US8650271B2 (en) * 2004-03-24 2014-02-11 Hewlett-Packard Development Company, L.P. Cluster management system and method
US20060053215A1 (en) * 2004-09-07 2006-03-09 Metamachinix, Inc. Systems and methods for providing users with access to computer resources
US20060168099A1 (en) * 2004-12-30 2006-07-27 Nimrod Diamant Virtual serial port and protocol for use in serial-over-LAN communication
US20110196970A1 (en) * 2004-12-30 2011-08-11 Nimrod Diamant Redirection communication
US9569372B2 (en) 2004-12-30 2017-02-14 Intel Corporation Redirection communication
US7949798B2 (en) 2004-12-30 2011-05-24 Intel Corporation Virtual IDE interface and protocol for use in IDE redirection communication
US8150973B2 (en) 2004-12-30 2012-04-03 Intel Corporation Virtual serial port and protocol for use in serial-over-LAN communication
US8626969B2 (en) 2004-12-30 2014-01-07 Intel Corporation Redirection communication
US20060149860A1 (en) * 2004-12-30 2006-07-06 Nimrod Diamant Virtual IDE interface and protocol for use in IDE redirection communication
US8706839B2 (en) 2004-12-30 2014-04-22 Intel Corporation Virtual serial port and protocol for use in serial-over-LAN communication
US20070014307A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Content router forwarding
US20070016636A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Methods and systems for data transfer and notification mechanisms
US7849199B2 (en) 2005-07-14 2010-12-07 Yahoo ! Inc. Content router
US20090307370A1 (en) * 2005-07-14 2009-12-10 Yahoo! Inc Methods and systems for data transfer and notification mechanisms
US20070014303A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Content router
US20070014278A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Counter router core variants
US20070014277A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Content router repository
US20070014300A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Content router notification
US20070028000A1 (en) * 2005-07-14 2007-02-01 Yahoo! Inc. Content router processing
US20070028293A1 (en) * 2005-07-14 2007-02-01 Yahoo! Inc. Content router asynchronous exchange
US20070038703A1 (en) * 2005-07-14 2007-02-15 Yahoo! Inc. Content router gateway
US20070058657A1 (en) * 2005-08-22 2007-03-15 Graham Holt System for consolidating and securing access to all out-of-band interfaces in computer, telecommunication, and networking equipment, regardless of the interface type
US20070074192A1 (en) * 2005-08-30 2007-03-29 Geisinger Nile J Computing platform having transparent access to resources of a host platform
US20080028401A1 (en) * 2005-08-30 2008-01-31 Geisinger Nile J Software executables having virtual hardware, operating systems, and networks
US20070074191A1 (en) * 2005-08-30 2007-03-29 Geisinger Nile J Software executables having virtual hardware, operating systems, and networks
US20070050770A1 (en) * 2005-08-30 2007-03-01 Geisinger Nile J Method and apparatus for uniformly integrating operating system resources
US20070050765A1 (en) * 2005-08-30 2007-03-01 Geisinger Nile J Programming language abstractions for creating and controlling virtual computers, operating systems and networks
US20070067769A1 (en) * 2005-08-30 2007-03-22 Geisinger Nile J Method and apparatus for providing cross-platform hardware support for computer platforms
US20070100975A1 (en) * 2005-10-28 2007-05-03 Yahoo! Inc. Scalable software blade architecture
US7779157B2 (en) 2005-10-28 2010-08-17 Yahoo! Inc. Recovering a blade in scalable software blade architecture
US20070101022A1 (en) * 2005-10-28 2007-05-03 Yahoo! Inc. Sharing data in scalable software blade architecture
US7873696B2 (en) 2005-10-28 2011-01-18 Yahoo! Inc. Scalable software blade architecture
US7870288B2 (en) 2005-10-28 2011-01-11 Yahoo! Inc. Sharing data in scalable software blade architecture
US20070101021A1 (en) * 2005-10-28 2007-05-03 Yahoo! Inc. Recovering a blade in scalable software blade architecture
US8024290B2 (en) 2005-11-14 2011-09-20 Yahoo! Inc. Data synchronization and device handling
US8065680B2 (en) 2005-11-15 2011-11-22 Yahoo! Inc. Data gateway for jobs management based on a persistent job table and a server table
US20070109592A1 (en) * 2005-11-15 2007-05-17 Parvathaneni Bhaskar A Data gateway
US7986844B2 (en) 2005-11-22 2011-07-26 Intel Corporation Optimized video compression using hashing function
US20070116110A1 (en) * 2005-11-22 2007-05-24 Nimrod Diamant Optimized video compression using hashing function
US20070153009A1 (en) * 2005-12-29 2007-07-05 Inventec Corporation Display chip sharing method
US9367832B2 (en) 2006-01-04 2016-06-14 Yahoo! Inc. Synchronizing image data among applications and devices
US20070156434A1 (en) * 2006-01-04 2007-07-05 Martin Joseph J Synchronizing image data among applications and devices
US7818558B2 (en) * 2006-05-31 2010-10-19 Andy Miga Method and apparatus for EFI BIOS time-slicing at OS runtime
US20070283138A1 (en) * 2006-05-31 2007-12-06 Andy Miga Method and apparatus for EFI BIOS time-slicing at OS runtime
US8078637B1 (en) * 2006-07-28 2011-12-13 American Megatrends, Inc. Memory efficient peim-to-peim interface database
US20080034008A1 (en) * 2006-08-03 2008-02-07 Yahoo! Inc. User side database
US20080270629A1 (en) * 2007-04-27 2008-10-30 Yahoo! Inc. Data synchronization and device handling using sequence numbers
US7721013B2 (en) * 2007-05-21 2010-05-18 Intel Corporation Communicating graphics data via an out of band channel
US20080294800A1 (en) * 2007-05-21 2008-11-27 Nimrod Diamant Communicating graphics data via an out of band channel
US7873846B2 (en) 2007-07-31 2011-01-18 Intel Corporation Enabling a heterogeneous blade environment
US8402262B2 (en) 2007-07-31 2013-03-19 Intel Corporation Enabling a heterogeneous blade environment
US20110083005A1 (en) * 2007-07-31 2011-04-07 Palsamy Sakthikumar Enabling a heterogeneous blade environment
US20090055599A1 (en) * 2007-08-13 2009-02-26 Linda Van Patten Benhase Consistent data storage subsystem configuration replication
US7716309B2 (en) * 2007-08-13 2010-05-11 International Business Machines Corporation Consistent data storage subsystem configuration replication
US20090125901A1 (en) * 2007-11-13 2009-05-14 Swanson Robert C Providing virtualization of a server management controller
US20090271498A1 (en) * 2008-02-08 2009-10-29 Bea Systems, Inc. System and method for layered application server processing
US8838669B2 (en) 2008-02-08 2014-09-16 Oracle International Corporation System and method for layered application server processing
CN105117309A (en) * 2008-05-21 2015-12-02 Telefonaktiebolaget LM Ericsson Resource pooling in blade cluster switching center server
US9198111B2 (en) 2008-05-21 2015-11-24 Telefonaktiebolaget L M Ericsson (Publ) Resource pooling in a blade cluster switching center server
US9025592B2 (en) * 2008-05-21 2015-05-05 Telefonaktiebolaget L M Ericsson (Publ) Blade cluster switching center server and method for signaling
US20110134749A1 (en) * 2008-05-21 2011-06-09 Oliver Speks Resource Pooling in a Blade Cluster Switching Center Server
US20110124342A1 (en) * 2008-05-21 2011-05-26 Oliver Speks Blade Cluster Switching Center Server and Method for Signaling
US8654762B2 (en) * 2008-05-21 2014-02-18 Telefonaktiebolaget Lm Ericsson (Publ) Resource pooling in a blade cluster switching center server
WO2010008706A1 (en) * 2008-07-17 2010-01-21 Lsi Corporation Systems and methods for booting a bootable virtual storage appliance on a virtualized server platform
US20110161649A1 (en) * 2008-07-17 2011-06-30 Lsi Corporation Systems and methods for booting a bootable virtual storage appliance on a virtualized server platform
CN102099787A (en) * 2008-07-17 2011-06-15 Lsi公司 Systems and methods for installing a bootable virtual storage appliance on a virtualized server platform
US8578146B2 (en) * 2008-07-17 2013-11-05 Lsi Corporation Systems and methods for booting a bootable virtual storage appliance on a virtualized server platform using a hidden boot partition
WO2010008707A1 (en) * 2008-07-17 2010-01-21 Lsi Corporation Systems and methods for installing a bootable virtual storage appliance on a virtualized server platform
US20120158923A1 (en) * 2009-05-29 2012-06-21 Ansari Mohamed System and method for allocating resources of a server to a virtual machine
US9489029B2 (en) 2009-12-22 2016-11-08 Intel Corporation Operating system independent network event handling
US8667110B2 (en) * 2009-12-22 2014-03-04 Intel Corporation Method and apparatus for providing a remotely managed expandable computer system
US8806231B2 (en) 2009-12-22 2014-08-12 Intel Corporation Operating system independent network event handling
US20110153798A1 (en) * 2009-12-22 2011-06-23 Groenendaal Johan Van De Method and apparatus for providing a remotely managed expandable computer system
US20110154065A1 (en) * 2009-12-22 2011-06-23 Rothman Michael A Operating system independent network event handling
US8370618B1 (en) 2010-06-16 2013-02-05 American Megatrends, Inc. Multiple platform support in computer system firmware
WO2011159892A1 (en) * 2010-06-16 2011-12-22 American Megatrends, Inc. Multiple platform support in computer system firmware
US8386618B2 (en) 2010-09-24 2013-02-26 Intel Corporation System and method for facilitating wireless communication during a pre-boot phase of a computing device
US8972538B2 (en) 2010-11-02 2015-03-03 International Business Machines Corporation Integration of heterogeneous computing systems into a hybrid computing system
US9253017B2 (en) 2010-11-02 2016-02-02 International Business Machines Corporation Management of a data network of a computing environment
US8984115B2 (en) 2010-11-02 2015-03-17 International Business Machines Corporation Ensemble having one or more computing systems and a controller thereof
US8966020B2 (en) 2010-11-02 2015-02-24 International Business Machines Corporation Integration of heterogeneous computing systems into a hybrid computing system
US9081613B2 (en) 2010-11-02 2015-07-14 International Business Machines Corporation Unified resource manager providing a single point of control
US9086918B2 (en) 2010-11-02 2015-07-21 International Business Machines Corporation Unified resource manager providing a single point of control
US8984109B2 (en) 2010-11-02 2015-03-17 International Business Machines Corporation Ensemble having one or more computing systems and a controller thereof
US20120110155A1 (en) * 2010-11-02 2012-05-03 International Business Machines Corporation Management of a data network of a computing environment
US8959220B2 (en) 2010-11-02 2015-02-17 International Business Machines Corporation Managing a workload of a plurality of virtual servers of a computing environment
US9253016B2 (en) * 2010-11-02 2016-02-02 International Business Machines Corporation Management of a data network of a computing environment
US8819708B2 (en) * 2011-01-10 2014-08-26 Dell Products, Lp System and method to abstract hardware routing via a correlatable identifier
US20120180076A1 (en) * 2011-01-10 2012-07-12 Dell Products, Lp System and Method to Abstract Hardware Routing via a Correlatable Identifier
US9465660B2 (en) 2011-04-11 2016-10-11 Hewlett Packard Enterprise Development Lp Performing a task in a system having different types of hardware resources
US10966339B1 (en) * 2011-06-28 2021-03-30 Amazon Technologies, Inc. Storage system with removable solid state storage devices mounted on carrier circuit boards
US9558092B2 (en) 2011-12-12 2017-01-31 Microsoft Technology Licensing, Llc Runtime-agnostic management of applications
US10154089B2 (en) * 2011-12-28 2018-12-11 Beijing Qihoo Technology Company Limited Distributed system and data operation method thereof
US20150052214A1 (en) * 2011-12-28 2015-02-19 Beijing Qihoo Technology Company Limited Distributed system and data operation method thereof
US10645150B2 (en) 2012-08-23 2020-05-05 TidalScale, Inc. Hierarchical dynamic scheduling
US11159605B2 (en) 2012-08-23 2021-10-26 TidalScale, Inc. Hierarchical dynamic scheduling
US10623479B2 (en) 2012-08-23 2020-04-14 TidalScale, Inc. Selective migration of resources or remapping of virtual processors to provide access to resources
US9747116B2 (en) 2013-03-28 2017-08-29 Hewlett Packard Enterprise Development Lp Identifying memory of a blade device for use by an operating system of a partition including the blade device
US9781015B2 (en) 2013-03-28 2017-10-03 Hewlett Packard Enterprise Development Lp Making memory of compute and expansion devices available for use by an operating system
US10289467B2 (en) 2013-03-28 2019-05-14 Hewlett Packard Enterprise Development Lp Error coordination message for a blade device having a logical processor in another system firmware domain
US20150301764A1 (en) * 2013-07-02 2015-10-22 Huawei Technologies Co., Ltd. Hard Disk and Methods for Forwarding and Acquiring Data by Hard Disk
US10353744B2 (en) 2013-12-02 2019-07-16 Hewlett Packard Enterprise Development Lp System wide manageability
US9798568B2 (en) 2014-12-29 2017-10-24 Samsung Electronics Co., Ltd. Method for sharing resource using a virtual device driver and electronic device thereof
US20170331759A1 (en) * 2016-05-16 2017-11-16 International Business Machines Corporation Sla-based agile resource provisioning in disaggregated computing systems
US10601725B2 (en) * 2016-05-16 2020-03-24 International Business Machines Corporation SLA-based agile resource provisioning in disaggregated computing systems
US10579421B2 (en) 2016-08-29 2020-03-03 TidalScale, Inc. Dynamic scheduling of virtual processors in a distributed system
US10783000B2 (en) 2016-08-29 2020-09-22 TidalScale, Inc. Associating working sets and threads
US10620992B2 (en) 2016-08-29 2020-04-14 TidalScale, Inc. Resource migration negotiation
US11403135B2 (en) 2016-08-29 2022-08-02 TidalScale, Inc. Resource migration negotiation
US11513836B2 (en) 2016-08-29 2022-11-29 TidalScale, Inc. Scheduling resuming of ready to run virtual processors in a distributed system
US10609130B2 (en) 2017-04-28 2020-03-31 Microsoft Technology Licensing, Llc Cluster resource management in distributed computing systems
US11803306B2 (en) 2017-06-27 2023-10-31 Hewlett Packard Enterprise Development Lp Handling frequently accessed pages
US11907768B2 (en) 2017-08-31 2024-02-20 Hewlett Packard Enterprise Development Lp Entanglement of pages and guest threads
US10992593B2 (en) 2017-10-06 2021-04-27 Bank Of America Corporation Persistent integration platform for multi-channel resource transfers
US11552803B1 (en) * 2018-09-19 2023-01-10 Amazon Technologies, Inc. Systems for provisioning devices
US11888996B1 (en) 2018-09-19 2024-01-30 Amazon Technologies, Inc. Systems for provisioning devices
RU209333U1 (en) * 2021-09-27 2022-03-15 Russian Federation, represented by the State Atomic Energy Corporation "Rosatom" (Rosatom State Corporation) HIGH DENSITY COMPUTING NODE

Also Published As

Publication number Publication date
JP2007526527A (en) 2007-09-13
US20050021847A1 (en) 2005-01-27
CN101142553A (en) 2008-03-12
WO2005006186A2 (en) 2005-01-20
CN101142553B (en) 2012-05-30
EP1636696A2 (en) 2006-03-22
EP1636696B1 (en) 2013-07-24
WO2005006186A3 (en) 2007-05-10
JP4242420B2 (en) 2009-03-25
US7730205B2 (en) 2010-06-01

Similar Documents

Publication Publication Date Title
US7730205B2 (en) OS agnostic resource sharing across multiple computing platforms
US7222339B2 (en) Method for distributed update of firmware across a clustered platform infrastructure
US7483974B2 (en) Virtual management controller to coordinate processing blade management in a blade server environment
US7051215B2 (en) Power management for clustered computing platforms
US7624262B2 (en) Apparatus, system, and method for booting using an external disk through a virtual SCSI connection
US7930371B2 (en) Deployment method and system
US8379541B2 (en) Information platform and configuration method of multiple information processing systems thereof
US20100017630A1 (en) Power control system of a high density server and method thereof
JP2007149116A (en) Method and apparatus for initiating execution of application processor in clustered multiprocessor system
EP1756712A1 (en) System and method for managing virtual servers
US20050240669A1 (en) BIOS framework for accommodating multiple service processors on a single server to facilitate distributed/scalable server management
CN1834912B (en) ISCSI bootstrap driving system and method for expandable internet engine
US11055104B2 (en) Network-adapter configuration using option-ROM in multi-CPU devices
US7366867B2 (en) Computer system and storage area allocation method
US20140149658A1 (en) Systems and methods for multipath input/output configuration
JP2010218449A (en) Resource allocation system and resource allocation method
Meier et al. IBM systems virtualization: Servers, storage, and software
US20140280663A1 (en) Apparatus and Methods for Providing Performance Data of Nodes in a High Performance Computing System
US11314455B2 (en) Mapping of RAID-CLI requests to vSAN commands by an out-of-band management platform using NLP
US20230104081A1 (en) Dynamic identity assignment system for components of an information handling system (ihs) and method of using the same
US11544013B2 (en) Array-based copy mechanism utilizing logical addresses pointing to same data block
US20240036881A1 (en) Heterogeneous compute domains with an embedded operating system in an information handling system
WO2023172319A1 (en) Direct swap caching with noisy neighbor mitigation and dynamic address range assignment
Bielski Novel memory and I/O virtualization techniques for next generation data-centers based on disaggregated hardware
Opsahl A Comparison of Management of Virtual Machines with z/VM and ESX Server

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROTHMAN, MICHAEL A.;ZIMMER, VINCENT J.;REEL/FRAME:014243/0379

Effective date: 20030625

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE