US20050228950A1 - External encapsulation of a volume into a LUN to allow booting and installation on a complex volume - Google Patents


Info

Publication number
US20050228950A1
Authority
US
United States
Prior art keywords
host
logical volume
recited
volume
boot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/156,636
Inventor
Ronald Karr
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Symantec Operating Corp
Original Assignee
Veritas Operating Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Veritas Operating Corp
Priority to US 11/156,636
Assigned to VERITAS OPERATING CORPORATION. Assignors: KARR, RONALD S.
Publication of US20050228950A1
Assigned to SYMANTEC CORPORATION (change of name). Assignors: VERITAS OPERATING CORPORATION
Assigned to SYMANTEC OPERATING CORPORATION (corrective assignment to correct the assignee previously recorded on Reel 019872, Frame 979; assignor confirms the assignee is Symantec Operating Corporation). Assignors: VERITAS OPERATING CORPORATION
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604: Improving or facilitating administration, e.g. storage management
    • G06F 3/0607: Improving or facilitating administration by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0662: Virtualisation aspects
    • G06F 3/0664: Virtualisation aspects at device level, e.g. emulation of a storage device or system
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • This invention relates to computer systems and, more particularly, to off-host virtualization of bootable devices within storage environments.
  • Specialized storage management software and hardware may be used to provide a more uniform storage model to storage consumers.
  • Such software and hardware may also be configured to present physical storage devices as virtual storage devices (e.g., virtual SCSI disks) to computer hosts, and to add storage features not present in individual storage devices to the storage model.
  • Features to increase fault tolerance, such as data mirroring, snapshot/fixed-image creation, or data parity, as well as features to increase data access performance, such as disk striping, may be implemented in the storage model via hardware or software.
  • The added storage features may be referred to as storage virtualization features, and the software and/or hardware providing the virtual storage devices and the added storage features may be termed “virtualizers” or “virtualization controllers”.
  • Virtualization may be performed within computer hosts, such as within a volume manager layer of a storage software stack at the host, and/or in devices external to the host, such as virtualization switches or virtualization appliances.
  • Such external devices providing virtualization may be termed “off-host” virtualizers, and may be utilized in order to offload processing required for virtualization from the host.
  • Off-host virtualizers may be connected to the external physical storage devices for which they provide virtualization functions via a variety of interconnects, such as Fibre Channel links, Internet Protocol (IP) networks, and the like.
  • A virtualization mechanism that allows hosts to boot and/or install operating system software off a virtual bootable target device may be desirable to support consistent booting and installation for multiple hosts in such environments.
  • It may also be desirable to be able to boot and/or install off a snapshot volume or a replicated volume, for example in order to re-initialize a host to a state as of a previous point in time (e.g., the time at which the snapshot or replica was created).
  • A system may include a host, one or more physical storage devices, and an off-host virtualizer.
  • The off-host virtualizer (i.e., a device external to the host, capable of providing block virtualization functionality) may be configured to aggregate storage within the one or more physical storage devices into a logical volume and to generate metadata to emulate the logical volume as a bootable target device.
  • The off-host virtualizer may make the metadata accessible to the host, allowing the host to boot off the logical volume, e.g., off a file system resident in the logical volume.
  • The metadata generated by the off-host virtualizer may include such information as the layouts or offsets of various boot-related partitions that the host may need to access during the boot process, for example to load a file system reader, an operating system kernel, or additional boot software such as one or more scripts.
  • The metadata may be operating system-specific, i.e., the location, format and contents of the metadata may differ from one operating system to another.
  • A number of different logical volumes, each associated with a particular boot-related partition or file system, may be emulated as part of the bootable target device.
  • The off-host virtualizer may be configured to present an emulated logical volume as an installable partition (i.e., a partition in which at least a portion of an operating system may be installed).
  • The host may also be configured to boot installation software (e.g., off external media), install at least a portion of the operating system on the installable partition, and then boot from a LUN containing the encapsulated volume.
  • The logical volume aggregated by the off-host virtualizer may support a number of different virtualization features in different embodiments.
  • The logical volume may be a snapshot volume (i.e., a point-in-time copy of another logical volume) or a replicated volume.
  • The logical volume may span multiple physical storage devices, and may be striped, mirrored, or configured as a virtual RAID volume.
  • The logical volume may include a multi-layer hierarchy of logical devices, for example implementing mirroring at a first layer and striping at a second layer below the first.
  • The host may be configured to access the logical volumes directly (i.e., without using the metadata) subsequent to an initial phase of the boot process.
  • A volume manager or other virtualization driver may be activated at the host.
  • The volume manager or virtualization driver may be configured to obtain configuration information for the logical volumes (such as volume layouts), e.g., from the off-host virtualizer or some other volume configuration server, to allow direct access.
  • FIG. 1 is a block diagram illustrating one embodiment of a computer system.
  • FIG. 2 is a block diagram illustrating one embodiment of a system where an off-host virtualizer is configured to present one or more logical volumes as a bootable target device for use by host during a boot operation.
  • FIG. 3 a is a block diagram illustrating the mapping of blocks within a logical volume to a virtual LUN according to one embodiment.
  • FIG. 3 b is a block diagram illustrating an example of a virtual LUN including a plurality of partitions, where each partition is mapped to a volume, according to one embodiment.
  • FIG. 4 is a flow diagram illustrating aspects of the operation of a system configured to support off-host virtualization and emulation of a bootable target device, according to one embodiment.
  • FIG. 5 is a block diagram illustrating a logical volume comprising a multi-layer hierarchy of virtual block devices according to one embodiment.
  • FIG. 6 is a block diagram illustrating an embodiment where physical storage devices include fibre channel LUNs accessible through a fibre channel fabric, and an off-host virtualizer includes a virtualizing switch.
  • FIG. 7 is a block diagram illustrating one embodiment where the Internet SCSI (iSCSI) protocol is used to access the physical storage devices.
  • FIG. 8 is a block diagram illustrating an embodiment where physical storage devices may be accessible via storage servers configured to communicate with an off-host virtualizer and a host using an advanced storage protocol.
  • FIG. 9 is a block diagram illustrating an embodiment where some physical storage devices may be accessible via a target-mode host bus adapter.
  • FIG. 10 is a block diagram illustrating a computer accessible medium according to one embodiment.
  • FIG. 1 illustrates a computer system 100 according to one embodiment.
  • System 100 includes a host 101 and a bootable target device 120 .
  • Host 101 includes a processor 110 and a memory 112 containing boot code 114 .
  • Boot code 114 may be configured to read operating system-specific boot metadata 122 at a known location or offset within bootable target device 120 , and to use boot metadata 122 to access one or more partitions 130 (e.g., a partition from among partitions 130 A, 130 B, . . . , 130 N) of bootable target device 120 in order to bring up or boot host 101 .
  • Partitions 130 may be referred to herein as boot partitions, and may contain additional boot code that may be loaded into memory 112 during the boot process.
  • The process of booting a host 101 may include several distinct phases.
  • The host 101 may be powered on or reset, and may then perform a series of “power on self test” (POST) operations to test the status of various constituent hardware elements, such as processor 110 , memory 112 , peripheral devices such as a mouse and/or a keyboard, and storage devices including bootable target device 120 .
  • Memory 112 may comprise a number of different memory modules, such as a programmable read only memory (PROM) module containing boot code 114 for early stages of boot, as well as a larger random access memory for use during later stages of boot and during post-boot or normal operation of host 101 .
  • One or more memory caches associated with processor 110 may also be tested during POST operations.
  • Bootable target device 120 may typically be a locally attached physical storage device such as a disk, or in some cases a removable physical storage device such as a CD-ROM.
  • The bootable target device may be associated with a SCSI “logical unit” identified by a logical unit number or LUN.
  • The term LUN may be used herein to refer both to the identifier for a SCSI target device and to the SCSI target device itself.
  • After POST operations complete, boot code 114 may proceed to access the designated bootable target device 120 . That is, boot code 114 may read the operating system-specific boot metadata 122 from a known location in bootable target device 120 .
  • The specific location and format of boot-related metadata may vary from system to system; for example, in many operating systems, boot metadata 122 is stored in the first few blocks of bootable target device 120 .
  • Operating system-specific boot metadata 122 may include the location or offsets of one or more partitions (e.g., in the form of a partition table), such as partitions 130 A- 130 N (which may be generically referred to herein as partitions 130 ), to which access may be required during subsequent phases of the boot process.
  • The boot metadata 122 may also include one or more software modules, such as a file system reader, that may be required to access one or more partitions 130 . The file system reader may then be read into memory at the host 101 (such as memory 112 ), and used to load one or more additional or secondary boot programs (i.e., additional boot code) from a partition 130 .
  • The additional or secondary boot programs may then be loaded and executed, resulting for example in an initialization of an operating system kernel, followed by an execution of one or more scripts in a prescribed sequence, ultimately leading to the host reaching a desired “run level” or mode of operation.
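The read of partition-table metadata from a known location on the boot device, as described above, can be illustrated with a short sketch. This is not the patent's implementation; it assumes a classic PC-style master boot record (MBR), in which a table of four 16-byte partition entries begins at byte 446 of the first 512-byte sector, and the function name is hypothetical:

```python
import struct

SECTOR = 512
MBR_PART_TABLE_OFFSET = 446  # start of the four-entry partition table
MBR_PART_ENTRY_SIZE = 16

def parse_mbr_partitions(first_sector: bytes):
    """Parse the four primary-partition entries of a PC-style MBR.

    Each 16-byte entry holds (among other fields) a partition-type byte
    at offset 4, and the starting LBA and sector count as little-endian
    32-bit integers at offsets 8 and 12.
    """
    if len(first_sector) < SECTOR:
        raise ValueError("need at least one full sector")
    if first_sector[510:512] != b"\x55\xaa":
        raise ValueError("missing MBR boot signature")
    parts = []
    for i in range(4):
        off = MBR_PART_TABLE_OFFSET + i * MBR_PART_ENTRY_SIZE
        entry = first_sector[off:off + MBR_PART_ENTRY_SIZE]
        ptype = entry[4]
        start_lba, num_sectors = struct.unpack_from("<II", entry, 8)
        if ptype != 0:  # type 0 marks an unused slot
            parts.append({"type": ptype, "start": start_lba, "size": num_sectors})
    return parts
```

Boot code that finds the entries would then load further boot programs from the listed start offsets, in the manner the bullets above describe.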
  • Various background processes (such as network daemon processes in operating systems derived from UNIX, volume managers, etc.) and designated application processes (e.g., a web server or a database management server configured to restart automatically upon reboot) may also be started during boot.
  • Host 101 may then allow a user to log in and begin desired user-initiated operations, or may begin providing a set of preconfigured services (such as web server or database server functionality). The exact nature and sequence of operations performed during boot may vary from one operating system to another.
  • The boot process may be followed by installation of desired portions of the operating system.
  • The boot process may end with a prompt being displayed to the user or administrator, allowing the user to specify a device from which operating system modules may be installed, and to select from among optional operating system modules.
  • The installation of operating system components on a newly provisioned host may also be automated: e.g., one or more scripts run during (or at the end of) the boot process may initiate installation of desired operating system components from a specified device.
  • Computer hosts 101 have usually been configured to boot off a local disk (i.e., a disk attached to the host) or local removable media.
  • For example, hosts configured to use a UNIX™-based operating system may be configured to boot off a “root” file system on a local disk, while hosts configured with a version of the Windows™ operating system from Microsoft Corporation may be configured to boot off a “system partition” on a local disk.
  • It may also be possible to configure a host 101 to boot off a virtual bootable target device, that is, a device that has been aggregated from one or more backing physical storage devices by a virtualizer or virtualization coordinator, where the backing physical storage may be accessible via a network instead of being locally accessible at the host 101 .
  • FIG. 2 is a block diagram illustrating one embodiment of a system 200 , where an off-host virtualizer 210 is configured to present one or more logical volumes 240 as a bootable target device 250 for use by host 101 during a boot operation.
  • Off-host virtualizer 210 may be coupled to host 101 and to one or more physical storage devices 220 (i.e., physical storage devices 220 A- 220 N) over a network 260 .
  • Network 260 may be implemented using a variety of physical interconnects and protocols, and in some embodiments may include a plurality of independently configured networks, such as fibre channel fabrics and/or IP-based networks.
  • Virtualization refers to a process of creating or aggregating logical or virtual devices out of one or more underlying physical or logical devices, and making the virtual devices accessible to device consumers for storage operations.
  • The entity or entities that perform the desired virtualization may be termed virtualizers.
  • Virtualizers may be incorporated within hosts (e.g., in one or more software layers within host 101 ) or at external devices such as one or more virtualization switches, virtualization appliances, etc., which may be termed off-host virtualizers.
  • Off-host virtualizer 210 may be configured to aggregate storage from physical storage devices 220 (i.e., physical storage devices 220 A- 220 N) into logical volumes 240 (i.e., logical volumes 240 A- 240 M).
  • Each physical storage device 220 and logical storage device 240 may be configured as a block device, i.e., a device that provides a collection of linearly addressed data blocks that can be read or written.
  • Off-host virtualizer 210 may thus be said to perform block virtualization.
  • A variety of advanced storage functions may be supported by a block virtualizer such as off-host virtualizer 210 in different embodiments, such as the ability to create snapshots or point-in-time copies, replicas, and the like.
  • In block virtualization, one or more layers of software may rearrange blocks from one or more physical block devices, such as disks.
  • The resulting rearranged collection of blocks may then be presented to a storage consumer, such as an application or a file system at host 101 , as one or more aggregated devices with the appearance of one or more basic disk drives. That is, the more complex structure resulting from rearranging blocks and adding functionality may be presented as if it were one or more simple arrays of blocks, or logical block devices.
  • Multiple layers of virtualization may be implemented.
  • One or more block devices may be mapped into a particular virtualized block device, which may in turn be mapped into still another virtualized block device, allowing complex storage functions to be implemented with simple block devices. Further details on block virtualization, and advanced storage features supported by block virtualization, are provided below.
  • Off-host virtualizer 210 may also be configured to emulate storage within one or more logical volumes 240 as a bootable target device 250 . That is, off-host virtualizer 210 may be configured to generate operating system-specific boot metadata 122 to make a range of storage within the one or more logical volumes 240 appear as a bootable partition (e.g., a partition 130 ) and/or file system to host 101 .
  • The generation and presentation of operating system-specific metadata, such as boot metadata 122 , for the purpose of making a logical volume appear as an addressable storage device (e.g., a LUN) to a host may be termed “volume tunneling”.
  • The virtual addressable storage device presented to the host using such a technique may be termed a “virtual LUN”.
  • Volume tunneling may be employed for other purposes in addition to the emulation of bootable target devices, e.g., to support dynamic mappings of logical volumes to virtual LUNs, to provide an isolating layer between front-end virtual LUNs and back-end or physical LUNs, etc.
  • FIG. 3 a is a block diagram illustrating the mapping of blocks within a logical volume to a virtual LUN according to one embodiment.
  • A source logical volume 305 comprising N blocks of data (numbered from 0 through (N−1)) may be encapsulated or tunneled through a virtual LUN 310 comprising (N+H) blocks.
  • Off-host virtualizer 210 may be configured to logically insert operating system specific boot metadata in a header 315 comprising the first H blocks of the virtual LUN 310 , and the remaining N blocks of virtual LUN 310 may map to the N blocks of source logical volume 305 .
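The header-plus-mapped-blocks layout of FIG. 3a amounts to a simple address translation. The following sketch (the function name and return convention are assumptions, not from the patent) shows how a block address on the virtual LUN 310 would resolve either into the H-block metadata header or into the underlying source logical volume 305:

```python
def resolve_virtual_lun_block(block: int, header_len: int, volume_len: int):
    """Map a virtual-LUN block address to (region, offset).

    Blocks [0, header_len) come from the emulated boot-metadata header;
    blocks [header_len, header_len + volume_len) map one-for-one onto
    the source logical volume.
    """
    if block < 0 or block >= header_len + volume_len:
        raise IndexError("block outside virtual LUN")
    if block < header_len:
        return ("header", block)
    return ("volume", block - header_len)
```

Under this scheme the virtualizer serves header reads from generated or stored metadata and forwards all remaining reads and writes, shifted by H blocks, to the encapsulated volume.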
  • A host 101 may be configured to boot off virtual LUN 310 , for example by setting the boot target device for the host to the identifier of the virtual LUN 310 .
  • Metadata contained in header 315 may be set up to match the format and content expected by boot code 114 at a LUN header of a bootable device for a desired operating system, and the contents of logical volume 305 may include, for example, the contents expected by boot code 114 in one or more partitions 130 .
  • The metadata and/or the contents of the logical volume may be customized for the particular host being booted: for example, some of the file system contents or scripts accessed by the host 101 during various boot phases may be modified to support requirements specific to the particular host 101 .
  • Examples of such customization may include configuration parameters for hardware devices at the host (e.g., if a particular host employs multiple Ethernet network cards, some of the networking-related scripts may be modified), customized file systems, or customized file system sizes.
  • The generated metadata required for volume tunneling may be located at a variety of different offsets within the logical volume address space, such as within a header 315 , a trailer, at some other designated offset within the virtual LUN 310 , or at a combination of locations within the virtual LUN 310 .
  • The number of data blocks dedicated to operating system-specific metadata (e.g., the length of header 315 ), as well as the format and content of the metadata, may vary with the operating system in use at host 101 .
  • The metadata inserted within virtual LUN 310 may be stored in persistent storage, e.g., within some blocks of a physical storage device 220 or at off-host virtualizer 210 , in some embodiments, and logically concatenated with the mapped blocks 320 .
  • The metadata may instead be generated on the fly, whenever a host 101 accesses the virtual LUN 310 .
  • The metadata may also be generated by an external agent other than off-host virtualizer 210 .
  • The external agent may be capable of emulating metadata in a variety of formats for different operating systems, including operating systems that may not have been known when the off-host virtualizer 210 was deployed.
  • Off-host virtualizer 210 may be configured to support more than one operating system; i.e., off-host virtualizer 210 may logically insert metadata blocks corresponding to any one of a number of different operating systems when presenting virtual LUN 310 to a host 101 , thereby allowing hosts intended to use different operating systems to share virtual LUN 310 .
  • A plurality of virtual LUNs emulating bootable target devices, each corresponding to a different operating system, may be set up in advance, and off-host virtualizer 210 may be configured to select a particular virtual LUN for presentation to a host for booting.
  • In large data centers, a set of relatively inexpensive servers (which may be termed “boot servers”) may be designated to serve as a pool of off-host virtualizers dedicated to providing emulated bootable target devices for use as needed throughout the data center. Whenever a newly provisioned host in the data center needs to be booted and/or installed, a bootable target device presented by one of the boot servers may be used, thus supporting consistent configurations at the hosts of the data center as the data center grows.
  • Off-host virtualizer 210 may emulate a number of different boot-related volumes using a plurality of partitions within the virtual LUN 310 .
  • FIG. 3 b is a block diagram illustrating an exemplary virtual LUN 310 according to one embodiment, where the virtual LUN includes three emulated partitions 341 A- 341 C.
  • An off-host virtualizer 210 (not shown in FIG. 3 b ) may be configured to present virtual LUN 310 to a host bus adapter 330 and/or disk driver 325 at host 101 .
  • Each partition 341 may be mapped to a respective volume 345 that may be accessed during boot and/or operating system installation.
  • Partitions corresponding to three volumes 345 A- 345 C are shown, used respectively for a “/” (root) file system, a “/usr” file system and a “/swap” file system, each of which may be accessed by a host 101 employing a UNIX-based operating system.
  • Additional operating system-specific metadata identifying the address ranges within the virtual LUN where the corresponding partitions are located may be provided by off-host virtualizer 210 to host 101 .
  • The address ranges for partitions 341 A- 341 C are provided in a virtual table of contents (VTOC) structure 340 .
  • The additional metadata may be included with boot metadata 122 in some embodiments. In other embodiments, the additional metadata may be provided at some other location within the address space of the virtual LUN, or provided to the host 101 using another mechanism, such as extended SCSI mode pages or messages sent over a network from off-host virtualizer 210 to host 101 . In some embodiments, the additional metadata may also be customized to suit the specific requirements of a particular host 101 ; e.g., not all hosts may require the same modules of an operating system to be installed and/or upgraded.
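A VTOC-style table that maps each emulated partition to a backing volume can be sketched as below. The names, the back-to-back layout rule, and the sizes in the example are all assumptions for illustration; the patent does not specify a concrete VTOC encoding:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VtocEntry:
    name: str         # e.g. "/", "/usr", "/swap"
    start_block: int  # offset within the virtual LUN
    num_blocks: int

def build_vtoc(volumes, header_len):
    """Lay out one partition per backing volume, back to back,
    immediately after the emulated metadata header."""
    entries, cursor = [], header_len
    for name, size in volumes:
        entries.append(VtocEntry(name, cursor, size))
        cursor += size
    return entries

def partition_for_block(vtoc, block):
    """Return the VTOC entry covering a virtual-LUN block, or None."""
    for e in vtoc:
        if e.start_block <= block < e.start_block + e.num_blocks:
            return e
    return None
```

Given such a table, the virtualizer can route an incoming block address to the volume 345 backing the partition 341 that contains it.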
  • Off-host virtualizer 210 may be configured to present an emulated logical volume 240 as an installable partition or volume to host 101 , i.e., a partition or volume to which at least a portion of an operating system may be installed.
  • The host 101 may be configured to boot installation software (e.g., off removable media such as a CD provided by the operating system vendor), and then install desired portions of the operating system onto the installable partition or volume. After the desired installation is completed, in some embodiments the host 101 may be configured to boot from the LUN containing the encapsulated volume.
  • FIG. 4 is a flow diagram illustrating aspects of the operation of a system (such as system 200 ) supporting off-host virtualization and emulation of a bootable target device, according to one embodiment.
  • Off-host virtualizer 210 may be configured to aggregate storage within physical storage devices 220 into one or more logical volumes 240 (block 405 of FIG. 4 ).
  • The logical volumes 240 may be configured to implement a number of different virtualization functions, such as snapshots or replication.
  • Off-host virtualizer 210 may then emulate the logical volumes as a bootable target device 250 (block 415 ), for example by logically inserting operating system-specific boot metadata (e.g., as header 315 ) into a virtual LUN 310 as described above.
  • A subset of the blocks of the logical volumes and/or the metadata may be modified to provide data specific to the host being booted (e.g., a customized boot process may be supported).
  • The emulated bootable target device may be made accessible to a host 101 (block 425 ), e.g., by setting the host's target bootable device address to the address of the virtual LUN 310 .
  • The host 101 may then boot off the emulated bootable target device (block 435 ), for example off a file system or partition resident in the logical volume (such as a “root” file system in the case of hosts employing UNIX-based operating systems, or a “system partition” in the case of Windows operating systems). That is, the virtualizer may emulate the particular file system or partition expected for booting by the host as being resident in the logical volume in such embodiments.
  • The boot process at host 101 may include several phases. During each successive phase, additional modules of the host's operating system and/or additional software modules may be activated, and various system processes and services may be started. During one such phase, in some embodiments a virtualization driver or volume manager capable of recognizing and interacting with logical volumes may be activated at host 101 . In such embodiments, after the virtualization driver or volume manager is activated, it may be possible for the host to switch to direct interaction with the logical volumes 240 (block 455 of FIG. 4 ), e.g., over network 260 , instead of performing I/O to the logical volumes through the off-host virtualizer 210 .
  • Direct interaction with the logical volumes 240 may support higher levels of performance than indirect interaction via off-host virtualizer 210 , especially in embodiments where off-host virtualizer 210 has limited processing capabilities.
  • Off-host virtualizer 210 or some other volume configuration server may be configured to provide configuration information (such as volume layouts) related to the logical volumes 240 to the virtualization driver or volume manager.
  • The emulated bootable target device 250 and the off-host virtualizer 210 may then no longer be used by host 101 until the next time host 101 is rebooted. During the next reboot, host 101 may switch back to accessing logical volumes 240 via the emulated bootable target device 250 .
  • A number of different virtualization functions may be implemented at a logical volume 240 by off-host virtualizer 210 in different embodiments.
  • A logical volume 240 may be aggregated from storage on multiple physical storage devices 220 , e.g., by striping successive blocks of data across multiple physical storage devices, by spanning multiple physical storage devices (i.e., concatenating physical storage from multiple physical storage devices into the logical volume), or by mirroring data blocks at two or more physical storage devices.
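The striping and mirroring layouts just mentioned reduce to simple block-address arithmetic. The sketch below is illustrative only (round-robin striping with a fixed stripe unit is assumed; the function names are not from the patent):

```python
def stripe_map(block, stripe_unit, num_devices):
    """Map a volume block to (device_index, block_on_device) under
    round-robin striping with a fixed stripe unit (in blocks)."""
    stripe_no, offset = divmod(block, stripe_unit)
    device = stripe_no % num_devices
    block_on_device = (stripe_no // num_devices) * stripe_unit + offset
    return device, block_on_device

def mirror_targets(block, num_mirrors):
    """Under mirroring, every write goes to the same offset on all copies."""
    return [(m, block) for m in range(num_mirrors)]
```

Spanning (concatenation) would instead subtract each device's size in turn until the block falls within one device's range.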
  • a logical volume 240 that is used by off-host virtualizer 210 to emulate a bootable target device 250 may be a replicated volume.
  • the logical volume 240 may be a replica or copy of a source logical volume that may be maintained at a remote data center. Such a technique of replicating bootable volumes may be useful for a variety of purposes, such as to support off-site backup or to support consistency of booting and/or installation in distributed enterprises where hosts at a number of different geographical locations may be required to be set up with similar configurations.
  • a logical volume 240 may be a snapshot volume, such as an instant snapshot or a space-efficient snapshot, i.e., a point-in-time copy of some source logical volume.
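A space-efficient snapshot of the kind mentioned above is commonly realized with copy-on-write: only blocks overwritten in the source after the snapshot is taken consume space in the snapshot. The class and method names below are invented for this sketch and are not drawn from the patent:

```python
class SpaceEfficientSnapshot:
    """Illustrative copy-on-write snapshot of a source block device,
    modeled here as a dict mapping block number -> data."""

    def __init__(self, source):
        self.source = source   # live source volume (shared, keeps changing)
        self.saved = {}        # original contents of blocks overwritten since snapshot

    def write_source(self, block, data):
        # Preserve the pre-snapshot contents before the first overwrite.
        if block not in self.saved:
            self.saved[block] = self.source.get(block)
        self.source[block] = data

    def read_snapshot(self, block):
        # Snapshot reads prefer preserved blocks, else fall through to source.
        if block in self.saved:
            return self.saved[block]
        return self.source.get(block)
```

A write to the source thus costs one extra copy the first time a block changes, while unchanged blocks are never duplicated.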
  • a logical volume 240 used to emulate a bootable target device may be configured as a virtual RAID (“Redundant Array of Independent Disks”) device or RAID volume, where parity based redundancy computations are implemented to provide high availability. Physical storage from a plurality of storage servers may be aggregated to form the RAID volume, and the redundancy computations may be implemented via a software protocol.
  • RAID Redundant Array of Independent Disks
  • a bootable target device emulated from a RAID volume may be recoverable in the event of a failure at one of its backing storage servers, thus enhancing the availability of boot functionality supported by the off-host virtualizer 210 .
  • a number of different RAID levels (e.g., RAID-3, RAID-4, or RAID-5) may be implemented in the RAID volume.
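The parity-based redundancy computation underlying these RAID levels is a bytewise XOR across the data blocks of a stripe. The following is a hedged sketch of that computation with invented function names, not the software protocol the patent describes:

```python
def xor_parity(blocks):
    """Compute a parity block as the bytewise XOR of equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            parity[i] ^= byte
    return bytes(parity)

def recover_block(surviving_blocks, parity):
    """Rebuild one missing data block from the surviving blocks plus parity:
    XOR-ing the survivors with the parity cancels them out, leaving the
    missing block. This is what makes a single storage-server failure
    recoverable."""
    return xor_parity(list(surviving_blocks) + [parity])
```

If one backing server is lost, each of its blocks can be reconstructed from the corresponding blocks on the remaining servers.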
  • a logical volume 240 may include multiple layers of virtual storage devices.
  • FIG. 5 is a block diagram illustrating a logical volume 240 comprising a multi-layer hierarchy of virtual block devices according to one embodiment.
  • logical volume 240 includes logical block devices 504 and 506 .
  • logical block device 504 includes logical block devices 508 and 510
  • logical block device 506 includes logical block device 512 .
  • Logical block devices 508, 510, and 512 map to physical block devices 220A-C of FIG. 2, respectively.
  • logical volume 240 may be configured to be mounted within a file system or presented to an application or other volume consumer.
  • Each block device within logical volume 240 that maps to or includes another block device may include an interface whereby the mapping or including block device may interact with the mapped or included device.
  • this interface may be a software interface whereby data and commands for block read and write operations are propagated from lower levels of the virtualization hierarchy to higher levels and vice versa.
  • a given block device may be configured to map the logical block spaces of subordinate block devices into its logical block space in various ways in order to realize a particular virtualization function.
  • logical volume 240 may be configured as a mirrored volume, in which a given data block written to logical volume 240 is duplicated, and each of the multiple copies of the duplicated given data block are stored in respective block devices.
  • logical volume 240 may be configured to receive an operation to write a data block from a consumer, such as an application running on host 101 .
  • Logical volume 240 may duplicate the write operation and issue the write operation to both logical block devices 504 and 506 , such that the block is written to both devices.
  • logical block devices 504 and 506 may be referred to as mirror devices.
  • logical volume 240 may read a given data block stored in duplicate in logical block devices 504 and 506 by issuing a read operation to one mirror device or the other, for example by alternating devices or defaulting to a particular device.
  • logical volume 240 may issue a read operation to multiple mirror devices and accept results from the fastest responder.
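The mirroring behavior described above, duplicated writes and reads that alternate between mirror devices, can be sketched as follows. The class is an invented illustration under those assumptions, not the patent's implementation:

```python
import itertools

class MirroredVolume:
    """Illustrative mirrored volume: every write is duplicated to all
    mirror devices; reads alternate round-robin between mirrors."""

    def __init__(self, n_mirrors):
        self.mirrors = [{} for _ in range(n_mirrors)]   # block -> data per mirror
        self._next = itertools.cycle(range(n_mirrors))  # round-robin read selector

    def write(self, block, data):
        # Duplicate the write so every mirror holds a copy of the block.
        for mirror in self.mirrors:
            mirror[block] = data

    def read(self, block):
        # Alternate between mirror devices on successive reads.
        return self.mirrors[next(self._next)][block]
```

A real virtualizer might instead default to one device, or issue the read to several mirrors and take the fastest response, as the text notes.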
  • logical block device 504 may be implemented as a striped device in which data is distributed between logical block devices 508 and 510 .
  • even- and odd-numbered blocks of logical block device 504 may be mapped to logical block devices 508 and 510 respectively, each of which may be configured to map in turn to all or some portion of physical block devices 220 A-B respectively.
  • block read/write throughput may be increased over a non-striped configuration, as logical block device 504 may be able to read or write two blocks concurrently instead of one.
  • Numerous striping arrangements involving various distributions of blocks to logical block devices are possible and contemplated; such arrangements may be chosen to optimize for various data usage patterns such as predominantly sequential or random usage patterns.
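The even/odd striping arrangement described above is one instance of a general modulo mapping. As a minimal sketch (function and parameter names are hypothetical):

```python
def stripe_map(block, n_columns=2):
    """Map a logical block number to (column, row) in a striped layout.
    With the default two columns, even-numbered blocks land in column 0
    and odd-numbered blocks in column 1, matching the two-way example."""
    return block % n_columns, block // n_columns
```

Because consecutive blocks land on different subordinate devices, adjacent reads or writes can proceed concurrently.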
  • physical block device 220 C may employ a different block size than logical block device 506 .
  • logical block device 512 may be configured to translate between the two physical block sizes and to map the logical block space defined by logical block device 506 to the physical block space defined by physical block device 220 C.
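A block-size translation of the kind logical block device 512 performs can be expressed as byte-address arithmetic. This is an illustrative sketch only (e.g., 512-byte logical blocks backed by a 4096-byte-block physical device):

```python
def translate_block(logical_block, logical_size, physical_size):
    """Translate a logical block address into a (physical_block, byte_offset)
    pair on a backing device that uses a different block size."""
    byte_addr = logical_block * logical_size
    return byte_addr // physical_size, byte_addr % physical_size
```

For instance, logical block 9 at 512 bytes per block begins 512 bytes into physical block 1 of a 4096-byte-block device.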
  • FIG. 6 is a block diagram illustrating an embodiment where the physical storage devices include fibre channel LUNs 610 accessible through a fibre channel fabric 620 , and off-host virtualizer 210 includes a virtualizing switch.
  • a “fibre channel LUN”, as used herein, may be defined as a unit of storage addressable using a fibre channel address.
  • a fibre channel address for storage accessible via a fibre channel fabric may consist of a fabric identifier, a port identifier, and a logical unit identifier.
  • the virtual LUN presented by the off-host virtualizer to host 101 as a bootable target device 250 in such an embodiment may be a virtual fibre channel LUN.
  • Fibre channel fabric 620 may include additional switches in some embodiments, and host 101 may be coupled to more than one switch. Some of the additional switches may also be configured to provide virtualization functions. That is, in some embodiments off-host virtualizer 210 may include a plurality of cooperating virtualizing switches. In one embodiment, multiple independently-configurable fibre channel fabrics may be employed: e.g., a first set of fibre channel LUNs 610 may be accessible through a first fabric, and a second set of fibre channel LUNs 610 may be accessible through a second fabric.
  • FIG. 7 is a block diagram illustrating one embodiment where the Internet SCSI (iSCSI) protocol is used to access the physical storage devices.
  • iSCSI is a protocol used by storage initiators (such as hosts 101 and/or off-host virtualizers 210 ) to send SCSI storage commands to storage targets (such as disks or tape devices) over an IP (Internet Protocol) network.
  • the physical storage devices accessible in an iSCSI-based storage network may be addressable as iSCSI LUNs, just as SCSI devices locally attached to a host may be addressable as SCSI LUNs, and physical storage devices attached via fibre channel fabrics may be addressable as fibre channel LUNs.
  • an iSCSI address may include an IP address or iSCSI qualified name (iqn), a target device identifier, and a logical unit number.
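The three-part iSCSI address described above might be modeled as a simple structure. The class and field names here are hypothetical illustrations, not part of any iSCSI library API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ISCSIAddress:
    """Illustrative decomposition of an iSCSI address as described in the
    text: a target name, a target device identifier, and a LUN."""
    target_name: str   # IP address or iSCSI qualified name (iqn)
    target_id: str     # target device identifier
    lun: int           # logical unit number

    def __str__(self):
        # Hypothetical rendering for display/logging purposes only.
        return f"{self.target_name}/{self.target_id}/lun{self.lun}"
```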
  • iSCSI LUNs 710 may be attached directly to the off-host virtualizer 210 .
  • the off-host virtualizer 210 may itself be a computer system, comprising its own processor, memory and physical storage devices (e.g., iSCSI LUN 710 A).
  • the remaining iSCSI LUNs 710 B- 710 N may be accessible through other hosts or through iSCSI servers.
  • all the physical storage devices may be attached directly to the off-host virtualizer 210 and may be accessible via iSCSI.
  • a host 101 may require an iSCSI-enabled network adapter to participate in the iSCSI protocol.
  • a network boot protocol similar to BOOTP (a protocol that is typically used to allow diskless hosts to boot using boot code provided by a boot server) may be used to support a first phase boot of a host 101 that does not have an iSCSI-enabled adapter.
  • Additional boot code loaded during the first phase may allow the host to mount a file system over iSCSI, and/or to perform further boot phases, despite the absence of an iSCSI-enabled network card. That is, software provided to the host 101 during an early boot phase (e.g., by off-host virtualizer 210 ) may be used later in the boot process to emulate iSCSI transactions without utilizing an iSCSI-enabled network adapter at the host.
  • host 101 may be configured to boot from an emulated volume using a first network type such as iSCSI, and to then switch to directly accessing the volume using a second network type such as fibre channel.
  • iSCSI-based booting may be less expensive and/or easier to configure than fibre-channel based booting in some embodiments.
  • An off-host virtualizer 210 that uses iSCSI (such as an iSCSI boot appliance) and at the same time accesses fibre-channel based storage devices may allow such a transition between the network type that is used for booting and the network type that is used for subsequent I/O (e.g., for I/Os requested by production applications).
  • physical storage devices 220 may be accessible via storage servers (e.g., 850 A and 850 B) configured to communicate with off-host virtualizer 210 and host 101 using an advanced storage protocol.
  • the advanced storage protocol may support features, such as access security and tagged directives for distributed I/O operations that may not be adequately supported by the traditional storage protocols (such as SCSI or iSCSI) alone.
  • a storage server 850 may translate data access requests from the advanced storage protocol to a lower level protocol or interface (such as SCSI) that may be presented by the physical storage devices 220 managed at the storage server. While the advanced storage protocol may provide enhanced functionality, it may still allow block-level access to physical storage devices 220 .
  • Storage servers 850 may be any device capable of supporting the advanced storage protocol, such as a computer host with one or more processors and one or more memories.
  • FIG. 9 is a block diagram illustrating an embodiment where some physical storage devices 220 may be accessible via a target-mode host bus adapter 902 .
  • a host bus adapter (HBA) is a hardware device that acts as an interface between a host 101 and an I/O interconnect, such as a SCSI bus or fibre channel link.
  • an HBA is configured as an “initiator”, i.e., a device that initiates storage operations on the I/O interconnect, and receives responses from other devices (termed “targets”) such as disks, disk array devices, or tape devices, coupled to the I/O interconnect.
  • host bus adapters may be configurable (e.g., by modifying the firmware on the HBA) to operate as targets rather than initiators, i.e., to receive commands such as iSCSI commands sent by initiators requesting storage operations.
  • Such host bus adapters may be termed “target-mode” host bus adapters, and may be incorporated within off-host virtualizers 210 as shown in FIG. 9 in some embodiments.
  • the I/O operations corresponding to the received commands may be performed at the physical storage devices, and the response returned to the requesting initiator.
  • all the physical storage devices 220 used to back logical volumes 240 may be accessible via target-mode host bus adapters.
  • an off-host virtualizer 210 may comprise a number of different types of hardware and software entities in different embodiments.
  • an off-host virtualizer 210 may itself be a host with its own processor, memory, peripheral devices and I/O devices, running an operating system and a software stack capable of providing the block virtualization features described above.
  • the off-host virtualizer 210 may include one or more virtualization switches and/or virtualization appliances.
  • a virtualization switch may be an intelligent fibre channel switch, configured with sufficient processing capacity to perform desired virtualization operations in addition to supporting fibre channel connectivity.
  • a virtualization appliance may be an intelligent device programmed to perform virtualization functions, such as providing mirroring, striping, snapshot capabilities, etc.
  • Appliances may differ from general purpose computers in that their software is normally customized for the function they perform, pre-loaded by the vendor, and not alterable by the user.
  • multiple devices or systems may cooperate to provide off-host virtualization; e.g., multiple cooperating virtualization switches may form a single off-host virtualizer.
  • the aggregation of storage within physical storage devices 220 into logical volumes 240 may be performed by one off-host virtualizing device or host, while another off-host virtualizing device may be configured to emulate the logical volumes as bootable target devices and present the bootable target devices to host 101 .
  • FIG. 10 is a block diagram illustrating a computer accessible medium 1000 including virtualization software 1010 configured to provide the functionality of off-host virtualizer 210 and host 101 described above.
  • Virtualization software 1010 may be provided to a computer system using a variety of computer-accessible media, including electronic media (e.g., flash memory), volatile or non-volatile memory media such as RAM (e.g., SDRAM, RDRAM, SRAM, etc.), optical storage media such as CD-ROM, etc., as well as transmission media or signals such as electrical, electromagnetic or digital signals, conveyed via a communication medium such as a network and/or a wireless link.

Abstract

A system for external encapsulation of a volume into a logical unit (LUN) to allow booting and installation on a complex volume may include a host, one or more physical storage devices, and an off-host virtualizer. The off-host virtualizer (i.e., a device external to the host, capable of providing block virtualization functionality) may be configured to aggregate storage within the one or more physical storage devices into a logical volume and to generate metadata to emulate the logical volume as a bootable target device. The off-host virtualizer may make the metadata accessible to the host, allowing the host to boot off a file system resident in the logical volume.

Description

  • This application is a continuation-in-part of U.S. patent application Ser. No. 10/722,614, entitled “SYSTEM AND METHOD FOR EMULATING OPERATING SYSTEM METADATA TO PROVIDE CROSS-PLATFORM ACCESS TO STORAGE VOLUMES”, filed Nov. 26, 2003.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • This invention relates to computer systems and, more particularly, to off-host virtualization of bootable devices within storage environments.
  • 2. Description of the Related Art
  • Many business organizations and governmental entities rely upon applications that access large amounts of data, often exceeding a terabyte of data, for mission-critical applications. Often such data is stored on many different storage devices, which may be heterogeneous in nature, including many different types of devices from many different manufacturers.
  • Configuring individual applications that consume data, or application server systems that host such applications, to recognize and directly interact with each different storage device that may possibly be encountered in a heterogeneous storage environment would be increasingly difficult as the environment scaled in size and complexity. Therefore, in some storage environments, specialized storage management software and hardware may be used to provide a more uniform storage model to storage consumers. Such software and hardware may also be configured to present physical storage devices as virtual storage devices (e.g., virtual SCSI disks) to computer hosts, and to add storage features not present in individual storage devices to the storage model. For example, features to increase fault tolerance, such as data mirroring, snapshot/fixed image creation, or data parity, as well as features to increase data access performance, such as disk striping, may be implemented in the storage model via hardware or software. The added storage features may be referred to as storage virtualization features, and the software and/or hardware providing the virtual storage devices and the added storage features may be termed “virtualizers” or “virtualization controllers”. Virtualization may be performed within computer hosts, such as within a volume manager layer of a storage software stack at the host, and/or in devices external to the host, such as virtualization switches or virtualization appliances. Such external devices providing virtualization may be termed “off-host” virtualizers, and may be utilized in order to offload processing required for virtualization from the host. Off-host virtualizers may be connected to the external physical storage devices for which they provide virtualization functions via a variety of interconnects, such as Fiber Channel links, Internet Protocol (IP) networks, and the like.
  • In many corporate data centers, as the application workload increases, additional hosts may need to be provisioned to provide the required processing capabilities. The internal configuration (e.g., file system layout and file system sizes) of each of these additional hosts may be fairly similar, with just a few features unique to each host. Booting and installing each newly provisioned host manually may be a cumbersome and error-prone process, especially in environments where a large number of additional hosts may be required fairly quickly. A virtualization mechanism that allows hosts to boot and/or install operating system software off a virtual bootable target device may be desirable to support consistent booting and installation for multiple hosts in such environments. In addition, in some storage environments it may be desirable to be able to boot and/or install off a snapshot volume or a replicated volume, for example in order to be able to re-initialize a host to a state as of a previous point in time (e.g., the time at which the snapshot or replica was created).
  • SUMMARY
  • Various embodiments of a system and method for external encapsulation of a volume into a logical unit (LUN) to allow booting and installation on a complex volume are disclosed. According to a first embodiment, a system may include a host, one or more physical storage devices, and an off-host virtualizer. The off-host virtualizer (i.e., a device external to the host, capable of providing block virtualization functionality) may be configured to aggregate storage within the one or more physical storage devices into a logical volume and to generate metadata to emulate the logical volume as a bootable target device. The off-host virtualizer may make the metadata accessible to the host, allowing the host to boot off the logical volume, e.g., off a file system resident in the logical volume.
  • The metadata generated by the off-host virtualizer may include such information as the layouts or offsets of various boot-related partitions that the host may need to access during the boot process, for example to load a file system reader, an operating system kernel, or additional boot software such as one or more scripts. The metadata may be operating system-specific, i.e., the location, format and contents of the metadata may differ from one operating system to another. In one embodiment, a number of different logical volumes, each associated with a particular boot-related partition or file system, may be emulated as part of the bootable target device. In another embodiment, the off-host virtualizer may be configured to present an emulated logical volume as an installable partition (i.e., a partition in which at least a portion of an operating system may be installed). In such an embodiment, the host may also be configured to boot installation software (e.g., off external media), install at least a portion of the operating system on the installable partition, and then boot from a LUN containing the encapsulated volume.
  • The logical volume aggregated by the off-host virtualizer may support a number of different virtualization features in different embodiments. In one embodiment, the logical volume may be a snapshot volume (i.e., a point-in-time copy of another logical volume) or a replicated volume. The logical volume may span multiple physical storage devices, and may be striped, mirrored, or a virtual RAID volume. In some embodiments, the logical volume may include a multi-layer hierarchy of logical devices, for example implementing mirroring at a first layer and striping at a second layer below the first. In one embodiment, the host may be configured to access the logical volumes directly (i.e., without using the metadata) subsequent to an initial phase of the boot process. For example, during a later phase of the boot process, a volume manager or other virtualization driver may be activated at the host. The volume manager or virtualization driver may be configured to obtain configuration information for the logical volumes (such as volume layouts), e.g., from the off-host virtualizer or some other volume configuration server, to allow direct access.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram illustrating one embodiment of a computer system.
  • FIG. 2 is a block diagram illustrating one embodiment of a system where an off-host virtualizer is configured to present one or more logical volumes as a bootable target device for use by host during a boot operation.
  • FIG. 3 a is a block diagram illustrating the mapping of blocks within a logical volume to a virtual LUN according to one embodiment.
  • FIG. 3 b is a block diagram illustrating an example of a virtual LUN including a plurality of partitions, where each partition is mapped to a volume, according to one embodiment.
  • FIG. 4 is a flow diagram illustrating aspects of the operation of a system configured to support off-host virtualization and emulation of a bootable target device, according to one embodiment.
  • FIG. 5 is a block diagram illustrating a logical volume comprising a multi-layer hierarchy of virtual block devices according to one embodiment.
  • FIG. 6 is a block diagram illustrating an embodiment where physical storage devices include fibre channel LUNs accessible through a fibre channel fabric, and an off-host virtualizer includes a virtualizing switch.
  • FIG. 7 is a block diagram illustrating one embodiment where the Internet SCSI (iSCSI) protocol is used to access the physical storage devices.
  • FIG. 8 is a block diagram illustrating an embodiment where physical storage devices may be accessible via storage servers configured to communicate with an off-host virtualizer and a host using an advanced storage protocol.
  • FIG. 9 is a block diagram illustrating an embodiment where some physical storage devices may be accessible via a target-mode host bus adapter.
  • FIG. 10 is a block diagram illustrating a computer accessible medium according to one embodiment.
  • While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
  • DETAILED DESCRIPTION
  • FIG. 1 illustrates a computer system 100 according to one embodiment. In the illustrated embodiment, system 100 includes a host 101 and a bootable target device 120. The host 101 includes a processor 110 and a memory 112 containing boot code 114. Boot code 114 may be configured to read operating-system specific boot metadata 122 at a known location or offset within bootable target device 120, and to use boot metadata 122 to access one or more partitions 130 (e.g., a partition from among partitions 130A, 130B, . . . , 130N) of bootable target device 120 in order to bring up or boot host 101. Partitions 130 may be referred to herein as boot partitions, and may contain additional boot code that may be loaded into memory 112 during the boot process.
  • The process of booting a host 101 may include several distinct phases. In a first phase, for example, the host 101 may be powered on or reset, and may then perform a series of power-on self-test (POST) operations to test the status of various constituent hardware elements, such as processor 110, memory 112, peripheral devices such as a mouse and/or a keyboard, and storage devices including bootable target device 120. In general, memory 112 may comprise a number of different memory modules, such as a programmable read-only memory (PROM) module containing boot code 114 for early stages of boot, as well as a larger random access memory for use during later stages of boot and during post-boot or normal operation of host 101. One or more memory caches associated with processor 110 may also be tested during POST operations. In traditional systems, bootable target device 120 may typically be a locally attached physical storage device such as a disk, or in some cases a removable physical storage device such as a CD-ROM. In systems employing the Small Computer System Interface (SCSI) protocol to access storage devices, for example, the bootable target device may be associated with a SCSI "logical unit" identified by a logical unit number or LUN. (The term LUN may be used herein to refer both to the identifier for a SCSI target device and to the SCSI target device itself.) During POST, one or more SCSI buses attached to the host may be probed, and SCSI LUNs accessible via the SCSI buses may be identified.
  • In some operating systems, a user such as a system administrator may be allowed to select a bootable target device from among several choices as a preliminary step during boot, and/or to set a particular target as the device from which the next boot should be performed. If the POST operations complete successfully, boot code 114 may proceed to access the designated bootable target device 120. That is, boot code 114 may read the operating system-specific boot metadata 122 from a known location in bootable target device 120. The specific location and format of boot-related metadata may vary from system to system; for example, in many operating systems, boot metadata 122 is stored in the first few blocks of bootable target device 120.
  • Operating system specific boot metadata 122 may include the location or offsets of one or more partitions (e.g., in the form of a partition table), such as partitions 130A-130N (which may be generically referred to herein as partitions 130), to which access may be required during subsequent phases of the boot process. In some environments the boot metadata 122 may also include one or more software modules, such as a file system reader, that may be required to access one or more partitions 130. The file system reader may then be read into memory at the host 101 (such as memory 112), and used to load one or more additional or secondary boot programs (i.e., additional boot code) from a partition 130. The additional or secondary boot programs may then be loaded and executed, resulting for example in an initialization of an operating system kernel, followed by an execution of one or more scripts in a prescribed sequence, ultimately leading to the host reaching a desired “run level” or mode of operation. Various background processes (such as network daemon processes in operating systems derived from UNIX, volume managers, etc.) and designated application processes (e.g., a web server or a database management server configured to restart automatically upon reboot) may also be started up during later boot phases. When the desired mode of operation is reached, host 101 may allow a user to log in and begin desired user-initiated operations, or may begin providing a set of preconfigured services (such as web server or database server functionality). The exact nature and sequence of operations performed during boot may vary from one operating system to another.
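For a concrete, deliberately simplified example of the kind of partition-table metadata boot code 114 might read, the sketch below parses an MBR-style first sector (four 16-byte entries at byte 446, each carrying a partition type and a little-endian starting LBA and sector count, with the 0x55AA signature at byte 510). This illustrates the general idea only and glosses over the operating-system-specific variations noted above:

```python
import struct

def read_partition_offsets(sector0):
    """Extract (type, start_lba, num_sectors) tuples from an MBR-style
    512-byte boot sector. Raises ValueError if the boot signature is absent."""
    if sector0[510:512] != b'\x55\xaa':
        raise ValueError("not a valid boot sector")
    partitions = []
    for i in range(4):
        entry = sector0[446 + 16 * i: 446 + 16 * (i + 1)]
        part_type = entry[4]                              # partition type byte
        lba_start, num_sectors = struct.unpack('<II', entry[8:16])
        if part_type != 0:                                # type 0 marks an empty slot
            partitions.append((part_type, lba_start, num_sectors))
    return partitions
```

Boot code that has located the partition offsets this way can then load a file system reader or secondary boot program from the indicated region of the device.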
  • If host 101 is a newly provisioned host without an installed operating system, or if host 101 is being reinstalled or upgraded with a new version of its operating system, the boot process may be followed by installation of desired portions of the operating system. For example, the boot process may end with a prompt being displayed to the user or administrator, allowing the user to specify a device from which operating system modules may be installed, and to select from among optional operating system modules. In some environments, the installation of operating system components on a newly provisioned host may be automated—e.g., one or more scripts run during (or at the end of) the boot process may initiate installation of desired operating system components from a specified device.
  • As noted above, traditionally, computer hosts 101 have usually been configured to boot off a local disk (i.e., disks attached to the host) or local removable media. For example, hosts configured to use a UNIX™-based operating system may be configured to boot off a “root” file system on a local disk, while hosts configured with a version of the Windows™ operating system from Microsoft Corporation may be configured to boot off a “system partition” on a local disk. However, in some storage environments it may be possible to configure a host 101 to boot off a virtual bootable target device, that is, a device that has been aggregated from one or more backing physical storage devices by a virtualizer or virtualization coordinator, where the backing physical storage may be accessible via a network instead of being locally accessible at the host 101. The file systems and/or partitions expected by the operating system at the host may be emulated as being resident in the virtual bootable target device. FIG. 2 is a block diagram illustrating one embodiment of a system 200, where an off-host virtualizer 210 is configured to present one or more logical volumes 240 as a bootable target device 250 for use by host 101 during a boot operation. Off-host virtualizer 210 may be coupled to host 101 and to one or more physical storage devices 220 (i.e., physical storage devices 220A-220N) over a network 260. As described below in further detail, network 260 may be implemented using a variety of physical interconnects and protocols, and in some embodiments may include a plurality of independently configured networks, such as fibre channel fabrics and/or IP-based networks.
  • In general, virtualization refers to a process of creating or aggregating logical or virtual devices out of one or more underlying physical or logical devices, and making the virtual devices accessible to device consumers for storage operations. The entity or entities that perform the desired virtualization may be termed virtualizers. Virtualizers may be incorporated within hosts (e.g., in one or more software layers within host 101) or at external devices such as one or more virtualization switches, virtualization appliances, etc., which may be termed off-host virtualizers. In FIG. 2, for example, off-host virtualizer 210 may be configured to aggregate storage from physical storage devices 220 (i.e., physical storage devices 220A-220N) into logical volumes 240 (i.e., logical volumes 240A-240M). In the illustrated embodiment, each physical storage device 220 and logical storage device 240 may be configured as a block device, i.e., a device that provides a collection of linearly addressed data blocks that can be read or written. In such an embodiment, off-host virtualizer 210 may be said to perform block virtualization. A variety of advanced storage functions may be supported by a block virtualizer such as off-host virtualizer 210 in different embodiments, such as the ability to create snapshots or point-in-time copies, replicas, and the like. In one embodiment of block virtualization, one or more layers of software may rearrange blocks from one or more physical block devices, such as disks. The resulting rearranged collection of blocks may then be presented to a storage consumer, such as an application or a file system at host 101, as one or more aggregated devices with the appearance of one or more basic disk drives. That is, the more complex structure resulting from rearranging blocks and adding functionality may be presented as if it were one or more simple arrays of blocks, or logical block devices.
In some embodiments, multiple layers of virtualization may be implemented. That is, one or more block devices may be mapped into a particular virtualized block device, which may be in turn mapped into still another virtualized block device, allowing complex storage functions to be implemented with simple block devices. Further details on block virtualization, and advanced storage features supported by block virtualization, are provided below.
  • In addition to aggregating storage into logical volumes, off-host virtualizer 210 may also be configured to emulate storage within one or more logical volumes 240 as a bootable target device 250. That is, off-host virtualizer 210 may be configured to generate operating system-specific boot metadata 122 to make a range of storage within the one or more logical volumes 240 appear as a bootable partition (e.g., a partition 130) and/or file system to host 101. The generation and presentation of operating system specific metadata, such as boot metadata 122, for the purpose of making a logical volume appear as an addressable storage device (e.g., a LUN) to a host may be termed “volume tunneling”. The virtual addressable storage device presented to the host using such a technique may be termed a “virtual LUN”. Volume tunneling may be employed for other purposes in addition to the emulation of bootable target devices, e.g., to support dynamic mappings of logical volumes to virtual LUNs, to provide an isolating layer between front-end virtual LUNs and back-end or physical LUNs, etc.
  • FIG. 3 a is a block diagram illustrating the mapping of blocks within a logical volume to a virtual LUN according to one embodiment. In the illustrated embodiment, a source logical volume 305 comprising N blocks of data (numbered from 0 through (N−1)) may be encapsulated or tunneled through a virtual LUN 310 comprising (N+H) blocks. Off-host virtualizer 210 may be configured to logically insert operating system specific boot metadata in a header 315 comprising the first H blocks of the virtual LUN 310, and the remaining N blocks of virtual LUN 310 may map to the N blocks of source logical volume 305. A host 101 may be configured to boot off virtual LUN 310, for example by setting the boot target device for the host to the identifier of the virtual LUN 310. Metadata contained in header 315 may be set up to match the format and content expected by boot code 114 at a LUN header of a bootable device for a desired operating system, and the contents of logical volume 305 may include, for example, the contents expected by boot code 114 in one or more partitions 130. In some embodiments, the metadata and/or the contents of the logical volume may be customized for the particular host being booted: for example, some of the file system contents or scripts accessed by the host 101 during various boot phases may be modified to support requirements specific to the particular host 101. Examples of such customization may include configuration parameters for hardware devices at the host (e.g., if a particular host employs multiple Ethernet network cards, some of the networking-related scripts may be modified), customized file systems, or customized file system sizes. In general, the generated metadata required for volume tunneling may be located at a variety of different offsets within the logical volume address space, such as within a header 315, a trailer, at some other designated offset within the virtual LUN 310, or at a combination of locations within the virtual LUN 310. 
The number of data blocks dedicated to operating system specific metadata (e.g., the length of header 315), as well as the format and content of the metadata, may vary with the operating system in use at host 101.
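The header-plus-payload mapping of FIG. 3a can be sketched as a simple address translation: requests for the first H blocks are served from generated metadata, and the remaining addresses pass through to the source logical volume. This is a minimal illustrative sketch; the class and method names are hypothetical, not from the patent.

```python
# Sketch of the FIG. 3a mapping: blocks 0..H-1 of the virtual LUN hold
# emulated boot metadata, and blocks H..(N+H-1) map to blocks 0..(N-1)
# of the source logical volume.  All names here are illustrative.

class VirtualLUN:
    def __init__(self, header_blocks, volume_blocks, metadata):
        self.h = header_blocks          # H: blocks reserved for boot metadata
        self.n = volume_blocks          # N: blocks in the source logical volume
        self.metadata = metadata        # the H emulated header blocks

    def resolve(self, lun_block):
        """Map a virtual-LUN block address to ('header', i) or ('volume', i)."""
        if not 0 <= lun_block < self.h + self.n:
            raise ValueError("block address outside virtual LUN")
        if lun_block < self.h:
            return ("header", lun_block)        # served from generated metadata
        return ("volume", lun_block - self.h)   # passed through to the volume

lun = VirtualLUN(header_blocks=2, volume_blocks=8, metadata=[b"boot0", b"boot1"])
assert lun.resolve(0) == ("header", 0)   # metadata block read first by boot code
assert lun.resolve(2) == ("volume", 0)   # first block of the source volume
assert lun.resolve(9) == ("volume", 7)   # last virtual block maps to block N-1
```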
  • The metadata inserted within virtual LUN 310 may be stored in persistent storage, e.g., within some blocks of a physical storage device 220 or at off-host virtualizer 210, in some embodiments, and logically concatenated with the mapped blocks 320. In other embodiments, the metadata may be generated on the fly, whenever a host 101 accesses the virtual LUN 310. In some embodiments, the metadata may be generated by an external agent other than off-host virtualizer 210. The external agent may be capable of emulating metadata in a variety of formats for different operating systems, including operating systems that may not have been known when the off-host virtualizer 210 was deployed. In one embodiment, off-host virtualizer 210 may be configured to support more than one operating system; i.e., off-host virtualizer 210 may logically insert metadata blocks corresponding to any one of a number of different operating systems when presenting virtual LUN 310 to a host 101, thereby allowing hosts intended to use different operating systems to share virtual LUN 310. In some embodiments, a plurality of virtual LUNs emulating bootable target devices, each corresponding to a different operating system, may be set up in advance, and off-host virtualizer 210 may be configured to select a particular virtual LUN for presentation to a host for booting. In large data centers, a set of relatively inexpensive servers (which may be termed “boot servers”) may be designated to serve as a pool of off-host virtualizers dedicated to providing emulated bootable target devices for use as needed throughout the data center. Whenever a newly provisioned host in the data center needs to be booted and/or installed, a bootable target device presented by one of the boot servers may be used, thus supporting consistent configurations at the hosts of the data center as the data center grows.
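On-the-fly, per-operating-system metadata generation could be organized as a registry of header generators selected when a host accesses the virtual LUN. The sketch below is a hypothetical illustration under that assumption; the generator names and byte layouts are invented, not actual boot-header formats.

```python
# Hypothetical sketch of on-the-fly metadata generation: a registry maps
# each supported operating system to a header generator, and the emulated
# LUN header is built on demand when a host accesses the virtual LUN.
# The byte layouts below are placeholders, not real boot-header formats.

HEADER_GENERATORS = {
    "unix":    lambda vol_blocks: b"UNIXBOOT" + vol_blocks.to_bytes(4, "big"),
    "windows": lambda vol_blocks: b"WINBOOT " + vol_blocks.to_bytes(4, "big"),
}

def generate_boot_header(os_name, volume_blocks):
    """Return emulated boot metadata for the given OS, generated on demand."""
    try:
        return HEADER_GENERATORS[os_name](volume_blocks)
    except KeyError:
        raise ValueError(f"no metadata emulation registered for {os_name!r}")

hdr = generate_boot_header("unix", 1024)
assert hdr.startswith(b"UNIXBOOT")
```

An external agent could extend `HEADER_GENERATORS` with formats for operating systems unknown when the virtualizer was deployed, which is the extensibility the paragraph describes.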
  • For some operating systems, off-host virtualizer 210 may emulate a number of different boot-related volumes using a plurality of partitions within the virtual LUN 310. FIG. 3 b is a block diagram illustrating an exemplary virtual LUN 310 according to one embodiment, where the virtual LUN includes three emulated partitions 341A-341C. An off-host virtualizer 210 (not shown in FIG. 3 b) may be configured to present virtual LUN 310 to a host bus adapter 330 and/or disk driver 325 at host 101. Each partition 341 may be mapped to a respective volume 345 that may be accessed during boot and/or operating system installation. In the depicted example, partitions corresponding to three volumes 345A-345C used respectively for a “/” (root) file system, a “/usr” file system and a “/swap” file system, each of which may be accessed by a host 101 employing a UNIX-based operating system, are shown. In such embodiments, where multiple volumes and/or file systems are emulated within the same virtual LUN, additional operating system specific metadata identifying the address ranges within the virtual LUN where the corresponding partitions are located may be provided by off-host virtualizer 210 to host 101. In the example depicted in FIG. 3 b, the address ranges for partitions 341A-341C are provided in a virtual table of contents (VTOC) structure 340. The additional metadata may be included with boot metadata 122 in some embodiments. In other embodiments, the additional metadata may be provided at some other location within the address space of the virtual LUN, or provided to the host 101 using another mechanism, such as extended SCSI mode pages or messages sent over a network from off-host virtualizer 210 to host 101. In some embodiments, the additional metadata may also be customized to suit the specific requirements of a particular host 101; e.g., not all hosts may require the same modules of an operating system to be installed and/or upgraded.
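The VTOC structure 340 of FIG. 3b can be modeled as a table of entries, each recording the address range within the virtual LUN where one emulated partition is located. The sketch below is illustrative only; the entry fields and block numbers are assumptions, not values from the patent.

```python
# Illustrative model of a VTOC-like structure: each entry records where an
# emulated partition lies within the virtual LUN's block address space.
# The field names and block numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class VTOCEntry:
    name: str          # e.g. the file system the emulated partition backs
    start_block: int   # first block within the virtual LUN
    num_blocks: int

def locate(vtoc, lun_block):
    """Return the name of the emulated partition containing a LUN block."""
    for entry in vtoc:
        if entry.start_block <= lun_block < entry.start_block + entry.num_blocks:
            return entry.name
    return None

vtoc = [
    VTOCEntry("/",    start_block=16,    num_blocks=4096),   # root volume 345A
    VTOCEntry("/usr", start_block=4112,  num_blocks=8192),   # volume 345B
    VTOCEntry("swap", start_block=12304, num_blocks=2048),   # volume 345C
]
assert locate(vtoc, 16) == "/"
assert locate(vtoc, 5000) == "/usr"
assert locate(vtoc, 0) is None   # block 0 falls in the LUN header, not a partition
```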
  • As noted above and illustrated in FIG. 3 b, in some embodiments, off-host virtualizer 210 may be configured to present an emulated logical volume 240 as an installable partition or volume to host 101—i.e., a partition or volume to which at least a portion of an operating system may be installed. The host 101 may be configured to boot installation software (e.g., off removable media such as a CD provided by the operating system vendor), and then install desired portions of the operating system onto the installable partition or volume. After the desired installation is completed, in some embodiments the host 101 may be configured to boot from the LUN containing the encapsulated volume.
  • FIG. 4 is a flow diagram illustrating aspects of the operation of a system (such as system 200) supporting off-host virtualization and emulation of a bootable target device, according to one embodiment. Off-host virtualizer 210 may be configured to aggregate storage within physical storage devices 220 into one or more logical volumes 240 (block 405 of FIG. 4). The logical volumes 240 may be configured to implement a number of different virtualization functions, such as snapshots or replication. Off-host virtualizer 210 may then emulate the logical volumes as a bootable target device 250 (block 415), for example by logically inserting operating system-specific boot metadata into a header 315 of a virtual LUN 310 as described above. In some embodiments, as noted above, a subset of the blocks of the logical volumes and/or the metadata may be modified to provide data specific to the host being booted (e.g., a customized boot process may be supported). The emulated bootable target device may be made accessible to a host 101 (block 425), e.g., by setting the host's target bootable device address to the address of the virtual LUN 310. The host 101 may then boot off the emulated bootable target device (block 435), for example, off a file system or partition resident in the logical volume (such as a “root” file system in the case of hosts employing UNIX-based operating systems, or a “system partition” in the case of Windows operating systems). That is, the virtualizer may emulate the particular file system or partition expected for booting by the host as being resident in the logical volume in such embodiments.
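The steps of FIG. 4 (blocks 405, 415, 425, 435) can be sketched as a short orchestration. Every name below is hypothetical and each function body is a placeholder for the corresponding operation described in the text, not an implementation of it.

```python
# Placeholder sketch of the FIG. 4 flow; all names are illustrative.

def aggregate(physical_devices):                 # block 405: build logical volume
    return {"volume": list(physical_devices)}

def emulate_bootable_target(volume, os_name):    # block 415: insert boot metadata
    return {"metadata": f"{os_name}-boot-header", "volume": volume}

def present_to_host(host, target):               # block 425: set boot target address
    host["boot_target"] = target
    return host

def boot(host):                                  # block 435: host boots off target
    assert host["boot_target"]["metadata"]       # boot code reads emulated header
    return "booted"

host = {"name": "host101", "boot_target": None}
vol = aggregate(["disk220A", "disk220B"])
target = emulate_bootable_target(vol, "unix")
assert boot(present_to_host(host, target)) == "booted"
```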
  • As noted earlier, the boot process at host 101 may include several phases. During each successive phase, additional modules of the host's operating system and/or additional software modules may be activated, and various system processes and services may be started. During one such phase, in some embodiments a virtualization driver or volume manager capable of recognizing and interacting with logical volumes may be activated at host 101. In such embodiments, after the virtualization driver or volume manager is activated, it may be possible for the host to switch to direct interaction with the logical volumes 240 (block 455 of FIG. 4), e.g., over network 260, instead of performing I/O to the logical volumes through the off-host virtualizer 210. Direct interaction with the logical volumes 240 may support higher levels of performance than indirect interaction via off-host virtualizer 210, especially in embodiments where off-host virtualizer 210 has limited processing capabilities. In order to facilitate a transition to direct access, off-host virtualizer 210 or some other volume configuration server may be configured to provide configuration information (such as volume layouts) related to the logical volumes 240 to the virtualization driver or volume manager. Once the transition to direct access occurs, the emulated bootable target device 250 and the off-host virtualizer 210 may no longer be used by host 101 until the next time host 101 is rebooted. During the next reboot, host 101 may switch back to accessing logical volumes 240 via the emulated bootable target device 250. In later boot phases, when the virtualization driver or volume manager is activated, direct access to the logical volumes may be resumed.
Such an ability to transition to direct access to logical volumes 240 may allow off-host virtualizers 210 to be implemented using relatively low-end processors, since off-host virtualizers may be utilized heavily only during boot-related operations in system 200, and boot-related operations may be rare relative to production application processing operations.
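The I/O routing transition described above can be sketched as a small state machine: early boot I/O is tunneled through the virtualizer's emulated LUN, and once a volume manager is active and holds the volume layout, I/O goes directly to the logical volume. The names and the trivial layout rule below are illustrative assumptions.

```python
# Sketch of the boot-time I/O routing transition; names are hypothetical
# and the modulo layout stands in for a real volume layout description.

class BootIORouter:
    def __init__(self):
        self.volume_layout = None   # provided later by the off-host virtualizer

    def activate_volume_manager(self, layout):
        """Called in a later boot phase, once the host can interpret layouts."""
        self.volume_layout = layout

    def route(self, block):
        if self.volume_layout is None:
            return ("via-virtualizer", block)   # tunneled through the virtual LUN
        # direct access: pick the backing device from the layout
        return ("direct", self.volume_layout[block % len(self.volume_layout)])

router = BootIORouter()
assert router.route(7)[0] == "via-virtualizer"      # early boot phase
router.activate_volume_manager(["dev220A", "dev220B"])
assert router.route(7) == ("direct", "dev220B")     # after the transition
```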
  • As noted previously, a number of different virtualization functions may be implemented at a logical volume 240 by off-host virtualizer 210 in different embodiments. In one embodiment, a logical volume 240 may be aggregated from storage on multiple physical storage devices 220, e.g., by striping successive blocks of data across multiple physical storage devices, by spanning multiple physical storage devices (i.e., concatenating physical storage from multiple physical storage devices into the logical volume), or by mirroring data blocks at two or more physical storage devices. In another embodiment, a logical volume 240 that is used by off-host virtualizer 210 to emulate a bootable target device 250 may be a replicated volume. For example, the logical volume 240 may be a replica or copy of a source logical volume that may be maintained at a remote data center. Such a technique of replicating bootable volumes may be useful for a variety of purposes, such as to support off-site backup or to support consistency of booting and/or installation in distributed enterprises where hosts at a number of different geographical locations may be required to be set up with similar configurations. In some embodiments, a logical volume 240 may be a snapshot volume, such as an instant snapshot or a space-efficient snapshot, i.e., a point-in-time copy of some source logical volume. Using snapshot volumes to boot and/or install systems may support the ability to revert a host back to any desired previous configuration from among a set of configurations for which snapshots have been created. Support for automatic roll back (e.g., to a desired point in time) on boot may also be implemented in some embodiments. In one embodiment, a logical volume 240 used to emulate a bootable target device may be configured as a virtual RAID (“Redundant Array of Independent Disks”) device or RAID volume, where parity based redundancy computations are implemented to provide high availability.
Physical storage from a plurality of storage servers may be aggregated to form the RAID volume, and the redundancy computations may be implemented via a software protocol. A bootable target device emulated from a RAID volume may be recoverable in the event of a failure at one of its backing storage servers, thus enhancing the availability of boot functionality supported by the off-host virtualizer 210. A number of different RAID levels (e.g., RAID-3, RAID-4, or RAID-5) may be implemented in the RAID volume.
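The parity-based redundancy that makes a RAID volume recoverable can be illustrated with RAID-5-style XOR parity: the block on any one failed storage server can be rebuilt from the surviving blocks plus the parity block. This is a minimal sketch of the principle, not the patent's redundancy protocol.

```python
# Minimal XOR-parity illustration of RAID-style recoverability: losing any
# one data block leaves enough information (survivors + parity) to rebuild it.

from functools import reduce

def xor_blocks(blocks):
    """XOR same-sized byte blocks column-wise."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]   # blocks on three storage servers
parity = xor_blocks(data)                        # parity stored on a fourth server

# server holding data[1] fails; rebuild its block from the others plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```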
  • In some embodiments, a logical volume 240 may include multiple layers of virtual storage devices. FIG. 5 is a block diagram illustrating a logical volume 240 comprising a multi-layer hierarchy of virtual block devices according to one embodiment. In the illustrated embodiment, logical volume 240 includes logical block devices 504 and 506. In turn, logical block device 504 includes logical block devices 508 and 510, while logical block device 506 includes logical block device 512. Logical block devices 508, 510, and 512 map to physical block devices 220A-C of FIG. 2, respectively.
  • After host 101 has booted, logical volume 240 may be configured to be mounted within a file system or presented to an application or other volume consumer. Each block device within logical volume 240 that maps to or includes another block device may include an interface whereby the mapping or including block device may interact with the mapped or included device. For example, this interface may be a software interface whereby data and commands for block read and write operations are propagated from lower levels of the virtualization hierarchy to higher levels and vice versa.
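The FIG. 5 hierarchy can be sketched as nested block devices, each resolving a block address into the layer below until a physical device is reached. The sketch below collapses logical block devices 508, 510, and 512 into direct maps onto physical devices 220A-C, and uses a simple concatenation rule as a stand-in for whatever mapping each layer actually implements; sizes and names are assumptions.

```python
# Sketch of multi-layer block virtualization: each device resolves a block
# address into its subordinate layer, recursively.  The concatenation rule
# and the 100-block sizes are illustrative assumptions.

class PhysicalDevice:
    def __init__(self, name, size):
        self.name, self.size = name, size
    def resolve(self, block):
        return (self.name, block)            # bottom of the hierarchy

class ConcatDevice:
    """Logical block device spanning subordinate devices back to back."""
    def __init__(self, children):
        self.children = children
        self.size = sum(c.size for c in children)
    def resolve(self, block):
        for child in self.children:
            if block < child.size:
                return child.resolve(block)  # recurse into the next layer down
            block -= child.size
        raise ValueError("block out of range")

# logical volume 240 -> logical devices 504/506 -> physical devices 220A-C
dev504 = ConcatDevice([PhysicalDevice("220A", 100), PhysicalDevice("220B", 100)])
dev506 = ConcatDevice([PhysicalDevice("220C", 100)])
volume240 = ConcatDevice([dev504, dev506])
assert volume240.resolve(150) == ("220B", 50)
assert volume240.resolve(250) == ("220C", 50)
```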
  • Additionally, a given block device may be configured to map the logical block spaces of subordinate block devices into its logical block space in various ways in order to realize a particular virtualization function. For example, in one embodiment, logical volume 240 may be configured as a mirrored volume, in which a given data block written to logical volume 240 is duplicated, and each of the multiple copies of the duplicated given data block is stored in a respective block device. In one such embodiment, logical volume 240 may be configured to receive an operation to write a data block from a consumer, such as an application running on host 101. Logical volume 240 may duplicate the write operation and issue the write operation to both logical block devices 504 and 506, such that the block is written to both devices. In this context, logical block devices 504 and 506 may be referred to as mirror devices. In various embodiments, logical volume 240 may read a given data block stored in duplicate in logical block devices 504 and 506 by issuing a read operation to one mirror device or the other, for example by alternating devices or defaulting to a particular device. Alternatively, logical volume 240 may issue a read operation to multiple mirror devices and accept results from the fastest responder.
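The mirrored-volume behavior just described (duplicate every write, alternate reads between mirrors) can be sketched directly. All names here are illustrative; dictionaries stand in for the mirror devices 504 and 506.

```python
# Sketch of mirrored-volume semantics: writes are duplicated to every
# mirror device, and reads alternate between mirrors.  Names are illustrative.

import itertools

class MirroredVolume:
    def __init__(self, mirrors):
        self.mirrors = mirrors                            # e.g. devices 504 and 506
        self._picker = itertools.cycle(range(len(mirrors)))

    def write(self, block, data):
        for m in self.mirrors:                            # duplicate the write
            m[block] = data

    def read(self, block):
        return self.mirrors[next(self._picker)][block]    # alternate mirror devices

vol = MirroredVolume([{}, {}])
vol.write(3, b"payload")
assert vol.read(3) == b"payload"                 # served from the first mirror
assert vol.read(3) == b"payload"                 # then from the second
assert vol.mirrors[0][3] == vol.mirrors[1][3]    # both copies exist
```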
  • In some embodiments, it may be the case that underlying physical block devices 220A-C have dissimilar performance characteristics; specifically, devices 220A-B may be slower than device 220C. In order to balance the performance of the mirror devices, in one embodiment, logical block device 504 may be implemented as a striped device in which data is distributed between logical block devices 508 and 510. For example, even- and odd-numbered blocks of logical block device 504 may be mapped to logical block devices 508 and 510 respectively, each of which may be configured to map in turn to all or some portion of physical block devices 220A-B respectively. In such an embodiment, block read/write throughput may be increased over a non-striped configuration, as logical block device 504 may be able to read or write two blocks concurrently instead of one. Numerous striping arrangements involving various distributions of blocks to logical block devices are possible and contemplated; such arrangements may be chosen to optimize for various data usage patterns such as predominantly sequential or random usage patterns. In another aspect illustrating multiple layers of block virtualization, in one embodiment physical block device 220C may employ a different block size than logical block device 506. In such an embodiment, logical block device 512 may be configured to translate between the two physical block sizes and to map the logical block space defined by logical block device 506 to the physical block space defined by physical block device 220C.
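The even/odd striping example above maps to a one-line address rule: even-numbered blocks of logical block device 504 go to device 508, odd-numbered blocks to device 510. A minimal sketch, with device names taken from the figure but the function itself hypothetical:

```python
# Sketch of the even/odd striping rule described for logical block device 504.

def stripe_map(block):
    """Map a block of the striped device to (subordinate device, block)."""
    if block % 2 == 0:
        return ("508", block // 2)    # even-numbered blocks -> device 508
    return ("510", block // 2)        # odd-numbered blocks  -> device 510

assert stripe_map(0) == ("508", 0)
assert stripe_map(1) == ("510", 0)
assert stripe_map(6) == ("508", 3)
assert stripe_map(7) == ("510", 3)
```

Because consecutive blocks land on different subordinate devices, two adjacent blocks can be read or written concurrently, which is the throughput gain the paragraph describes.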
  • The technique of volume tunneling to emulate a bootable target device may be implemented using a variety of different storage and network configurations in different embodiments. FIG. 6 is a block diagram illustrating an embodiment where the physical storage devices include fibre channel LUNs 610 accessible through a fibre channel fabric 620, and off-host virtualizer 210 includes a virtualizing switch. A “fibre channel LUN”, as used herein, may be defined as a unit of storage addressable using a fibre channel address. For example, a fibre channel address for storage accessible via a fibre channel fabric may consist of a fabric identifier, a port identifier, and a logical unit identifier. The virtual LUN presented by off-host virtualizer 210 to host 101 as a bootable target device 250 in such an embodiment may be a virtual fibre channel LUN. Fibre channel fabric 620 may include additional switches in some embodiments, and host 101 may be coupled to more than one switch. Some of the additional switches may also be configured to provide virtualization functions. That is, in some embodiments off-host virtualizer 210 may include a plurality of cooperating virtualizing switches. In one embodiment, multiple independently-configurable fibre channel fabrics may be employed: e.g., a first set of fibre channel LUNs 610 may be accessible through a first fabric, and a second set of fibre channel LUNs 610 may be accessible through a second fabric.
  • FIG. 7 is a block diagram illustrating one embodiment where the Internet SCSI (iSCSI) protocol is used to access the physical storage devices. iSCSI is a protocol used by storage initiators (such as hosts 101 and/or off-host virtualizers 210) to send SCSI storage commands to storage targets (such as disks or tape devices) over an IP (Internet Protocol) network. The physical storage devices accessible in an iSCSI-based storage network may be addressable as iSCSI LUNs, just as SCSI devices locally attached to a host may be addressable as SCSI LUNs, and physical storage devices attached via fibre channel fabrics may be addressable as fibre channel LUNs. In one embodiment, for example, an iSCSI address may include an IP address or iSCSI qualified name (iqn), a target device identifier, and a logical unit number. As shown in FIG. 7, one or more iSCSI LUNs 710 may be attached directly to the off-host virtualizer 210. For example, in one embodiment, the off-host virtualizer 210 may itself be a computer system, comprising its own processor, memory and physical storage devices (e.g., iSCSI LUN 710A). The remaining iSCSI LUNs 710B-710N may be accessible through other hosts or through iSCSI servers. In some embodiments, all the physical storage devices may be attached directly to the off-host virtualizer 210 and may be accessible via iSCSI. In general, a host 101 may require an iSCSI-enabled network adapter to participate in the iSCSI protocol. In some embodiments where the physical storage devices include iSCSI LUNs, a network boot protocol similar to BOOTP (a protocol that is typically used to allow diskless hosts to boot using boot code provided by a boot server) may be used to support a first phase boot of a host 101 that does not have an iSCSI-enabled adapter. Additional boot code loaded during the first phase may allow the host to mount a file system over iSCSI, and/or to perform further boot phases, despite the absence of an iSCSI-enabled network card. 
That is, software provided to the host 101 during an early boot phase (e.g., by off-host virtualizer 210) may be used later in the boot process to emulate iSCSI transactions without utilizing an iSCSI-enabled network adapter at the host.
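The two address forms mentioned above can be modeled side by side: a fibre channel address composed of fabric, port, and logical unit identifiers, and an iSCSI address composed of an IP address or iSCSI qualified name (iqn), a target identifier, and a logical unit number. The field layouts below follow the text's descriptions, but the concrete string values are invented for illustration.

```python
# Illustrative models of the address forms described in the text; the
# example values are assumptions, not real device addresses.

from dataclasses import dataclass

@dataclass(frozen=True)
class FibreChannelAddress:
    fabric_id: str      # which fabric the storage is reachable through
    port_id: str        # port on that fabric
    lun: int            # logical unit identifier

@dataclass(frozen=True)
class ISCSIAddress:
    iqn: str            # iSCSI qualified name, or an IP address
    target_id: str      # target device identifier
    lun: int            # logical unit number

fc = FibreChannelAddress(fabric_id="fab-1", port_id="port-0a", lun=4)
iscsi = ISCSIAddress(iqn="iqn.2004-11.com.example:boot", target_id="target0", lun=0)
assert fc.lun == 4
assert iscsi.iqn.startswith("iqn.")
```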
  • In some embodiments, host 101 may be configured to boot from an emulated volume using a first network type such as iSCSI, and to then switch to directly accessing the volume using a second network type such as fibre channel. iSCSI-based booting may be less expensive and/or easier to configure than fibre-channel based booting in some embodiments. An off-host virtualizer 210 that uses iSCSI (such as an iSCSI boot appliance) and at the same time accesses fibre-channel based storage devices may allow such a transition between the network type that is used for booting and the network type that is used for subsequent I/O (e.g., for I/Os requested by production applications).
  • In one embodiment, illustrated in FIG. 8, physical storage devices 220 may be accessible via storage servers (e.g., 850A and 850B) configured to communicate with off-host virtualizer 210 and host 101 using an advanced storage protocol. The advanced storage protocol may support features, such as access security and tagged directives for distributed I/O operations, that may not be adequately supported by the traditional storage protocols (such as SCSI or iSCSI) alone. In such an embodiment, a storage server 850 may translate data access requests from the advanced storage protocol to a lower level protocol or interface (such as SCSI) that may be presented by the physical storage devices 220 managed at the storage server. While the advanced storage protocol may provide enhanced functionality, it may still allow block-level access to physical storage devices 220. Storage servers 850 may be any device capable of supporting the advanced storage protocol, such as a computer host with one or more processors and one or more memories.
  • FIG. 9 is a block diagram illustrating an embodiment where some physical storage devices 220 may be accessible via a target-mode host bus adapter 902. A host bus adapter (HBA) is a hardware device that acts as an interface between a host 101 and an I/O interconnect, such as a SCSI bus or fibre channel link. Typically, an HBA is configured as an “initiator”, i.e., a device that initiates storage operations on the I/O interconnect, and receives responses from other devices (termed “targets”) such as disks, disk array devices, or tape devices, coupled to the I/O interconnect. However, some host bus adapters may be configurable (e.g., by modifying the firmware on the HBA) to operate as targets rather than initiators, i.e., to receive commands such as iSCSI commands sent by initiators requesting storage operations. Such host bus adapters may be termed “target-mode” host bus adapters, and may be incorporated within off-host virtualizers 210 as shown in FIG. 9 in some embodiments. The I/O operations corresponding to the received commands may be performed at the physical storage devices, and the response returned to the requesting initiator. In some embodiments, all the physical storage devices 220 used to back logical volumes 240 may be accessible via target-mode host bus adapters.
  • As noted above, an off-host virtualizer 210 may comprise a number of different types of hardware and software entities in different embodiments. In some embodiments, an off-host virtualizer 210 may itself be a host with its own processor, memory, peripheral devices and I/O devices, running an operating system and a software stack capable of providing the block virtualization features described above. In other embodiments, the off-host virtualizer 210 may include one or more virtualization switches and/or virtualization appliances. A virtualization switch may be an intelligent fibre channel switch, configured with sufficient processing capacity to perform desired virtualization operations in addition to supporting fibre channel connectivity. A virtualization appliance may be an intelligent device programmed to perform virtualization functions, such as providing mirroring, striping, snapshot capabilities, etc. Appliances may differ from general purpose computers in that their software is normally customized for the function they perform, pre-loaded by the vendor, and not alterable by the user. In some embodiments, multiple devices or systems may cooperate to provide off-host virtualization; e.g., multiple cooperating virtualization switches may form a single off-host virtualizer. In one embodiment, the aggregation of storage within physical storage devices 220 into logical volumes 240 may be performed by one off-host virtualizing device or host, while another off-host virtualizing device may be configured to emulate the logical volumes as bootable target devices and present the bootable target devices to host 101.
  • FIG. 10 is a block diagram illustrating a computer accessible medium 1000 including virtualization software 1010 configured to provide the functionality of off-host virtualizer 210 and host 101 described above. Virtualization software 1010 may be provided to a computer system using a variety of computer-accessible media including electronic media (e.g., flash memory), volatile or non-volatile memory media such as RAM (e.g., SDRAM, RDRAM, SRAM, etc.), optical storage media such as CD-ROM, etc., as well as transmission media or signals such as electrical, electromagnetic or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
  • Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.

Claims (27)

1. A system comprising:
a host;
one or more physical storage devices;
an off-host virtualizer;
wherein the off-host virtualizer is configured to:
aggregate storage within the one or more physical storage devices into a logical volume;
generate metadata to emulate the logical volume as a bootable target device; and
make the metadata accessible to the host; and
wherein the host is configured to use the metadata to boot off a file system residing in the logical volume.
2. The system as recited in claim 1, wherein the logical volume is a snapshot volume.
3. The system as recited in claim 1, wherein the logical volume is a replicated volume.
4. The system as recited in claim 1, wherein the logical volume is a striped volume.
5. The system as recited in claim 1, wherein the one or more physical storage devices include a first and a second physical storage device, and wherein the logical volume spans the first and the second physical storage devices.
6. The system as recited in claim 1, wherein the logical volume is a RAID volume.
7. The system as recited in claim 1, wherein the logical volume maps to a boot partition of a designated operating system.
8. The system as recited in claim 7, wherein the designated operating system is configured to access a plurality of additional boot-related partitions during a boot operation, and wherein the off-host virtualizer is further configured to:
generate additional metadata to emulate the logical volume as the plurality of additional boot-related partitions; and
make the additional metadata accessible to the host.
9. The system as recited in claim 1, wherein, subsequent to an initial phase of a boot process, the host is configured to access the logical volume directly without performing I/O through the off-host virtualizer.
10. The system as recited in claim 9, wherein the host is configured to use a first network type for the initial phase of the boot process, and wherein the host is configured to access the logical volume directly using a second network type.
11. The system as recited in claim 1, wherein a physical storage device of the one or more physical storage devices includes a fiber channel logical unit (LUN).
12. The system as recited in claim 1, wherein a physical storage device of the one or more physical storage devices includes an iSCSI LUN.
13. The system as recited in claim 1, further comprising a storage server, wherein the storage server is configured to provide access to a physical storage device of the one or more physical storage devices.
14. The system as recited in claim 1, wherein a physical storage device of the one or more physical storage devices is accessed using a target-mode host bus adapter of the off-host virtualizer.
15. The system as recited in claim 1, wherein the off-host virtualizer is further configured to:
present the logical volume to the host as an installable partition;
and wherein the host is further configured to:
boot installation software for the operating system from removable media; and
install at least a portion of the operating system on the installable partition.
16. A method comprising:
aggregating storage within one or more physical storage devices into a logical volume;
generating metadata to emulate the logical volume as a bootable target device;
making the metadata accessible to a host; and
the host using the metadata to boot off a file system resident in the logical volume.
17. The method as recited in claim 16, wherein the logical volume is a snapshot volume.
18. The method as recited in claim 16, wherein the logical volume is a replicated volume.
19. The method as recited in claim 16, wherein a storage device of the one or more physical storage devices includes a fibre channel logical unit (LUN).
20. The method as recited in claim 16, wherein a storage device of the one or more physical storage devices includes an iSCSI (Internet SCSI) LUN.
21. The method as recited in claim 16, further comprising:
the host accessing the logical volume subsequent to the boot operation without performing I/O through the off-host virtualizer.
22. A computer accessible medium comprising program instructions, wherein the instructions are executable to:
aggregate storage within one or more physical storage devices into a logical volume;
generate metadata to emulate the logical volume as a bootable target device;
make the metadata accessible to a host; and
use the metadata to boot the host off a file system resident in the logical volume.
23. The computer accessible medium as recited in claim 22, wherein the logical volume is a snapshot volume.
24. The computer accessible medium as recited in claim 22, wherein the logical volume is a replicated volume.
25. The computer accessible medium as recited in claim 22, wherein a storage device of the one or more physical storage devices includes a fibre channel logical unit (LUN).
26. The computer accessible medium as recited in claim 22, wherein a storage device of the one or more physical storage devices includes an iSCSI (Internet SCSI) LUN.
27. The computer accessible medium as recited in claim 22, wherein the instructions are further executable to:
access the logical volume from the host subsequent to the boot operation without performing I/O through the off-host virtualizer.
US11/156,636 2003-11-26 2005-06-20 External encapsulation of a volume into a LUN to allow booting and installation on a complex volume Abandoned US20050228950A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/156,636 US20050228950A1 (en) 2003-11-26 2005-06-20 External encapsulation of a volume into a LUN to allow booting and installation on a complex volume

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US10/722,614 US20050114595A1 (en) 2003-11-26 2003-11-26 System and method for emulating operating system metadata to provide cross-platform access to storage volumes
WO PCT/US04/39306
PCT/US2004/039306 WO2005055043A1 (en) 2003-11-26 2004-11-22 System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US11/156,636 US20050228950A1 (en) 2003-11-26 2005-06-20 External encapsulation of a volume into a LUN to allow booting and installation on a complex volume

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/722,614 Continuation-In-Part US20050114595A1 (en) 2003-11-26 2003-11-26 System and method for emulating operating system metadata to provide cross-platform access to storage volumes

Publications (1)

Publication Number Publication Date
US20050228950A1 true US20050228950A1 (en) 2005-10-13

Family

ID=34592023

Family Applications (4)

Application Number Title Priority Date Filing Date
US10/722,614 Abandoned US20050114595A1 (en) 2003-11-26 2003-11-26 System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US11/156,821 Abandoned US20050235132A1 (en) 2003-11-26 2005-06-20 System and method for dynamic LUN mapping
US11/156,635 Expired - Fee Related US7689803B2 (en) 2003-11-26 2005-06-20 System and method for communication using emulated LUN blocks in storage virtualization environments
US11/156,636 Abandoned US20050228950A1 (en) 2003-11-26 2005-06-20 External encapsulation of a volume into a LUN to allow booting and installation on a complex volume

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US10/722,614 Abandoned US20050114595A1 (en) 2003-11-26 2003-11-26 System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US11/156,821 Abandoned US20050235132A1 (en) 2003-11-26 2005-06-20 System and method for dynamic LUN mapping
US11/156,635 Expired - Fee Related US7689803B2 (en) 2003-11-26 2005-06-20 System and method for communication using emulated LUN blocks in storage virtualization environments

Country Status (5)

Country Link
US (4) US20050114595A1 (en)
EP (1) EP1687706A1 (en)
JP (1) JP4750040B2 (en)
CN (1) CN100552611C (en)
WO (1) WO2005055043A1 (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050228937A1 (en) * 2003-11-26 2005-10-13 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US20060112251A1 (en) * 2003-11-26 2006-05-25 Veritas Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US20070083653A1 (en) * 2005-09-16 2007-04-12 Balasubramanian Chandrasekaran System and method for deploying information handling system images through fibre channel
US20070136389A1 (en) * 2005-11-29 2007-06-14 Milena Bergant Replication of a consistency group of data storage objects from servers in a data network
US20070165659A1 (en) * 2006-01-16 2007-07-19 Hitachi, Ltd. Information platform and configuration method of multiple information processing systems thereof
US20080028035A1 (en) * 2006-07-25 2008-01-31 Andrew Currid System and method for operating system installation on a diskless computing platform
US20080028052A1 (en) * 2006-07-25 2008-01-31 Andrew Currid System and method for operating system installation on a diskless computing platform
US20080043000A1 (en) * 2006-07-25 2008-02-21 Andrew Currid System and method to accelerate identification of hardware platform classes
US7409495B1 (en) * 2004-12-22 2008-08-05 Symantec Operating Corporation Method and apparatus for providing a temporal storage appliance with block virtualization in storage networks
US20080201570A1 (en) * 2006-11-23 2008-08-21 Dell Products L.P. Apparatus, Method and Product for Selecting an iSCSI Target for Automated Initiator Booting
US7441009B2 (en) * 2005-12-27 2008-10-21 Fujitsu Limited Computer system and storage virtualizer
US20090089498A1 (en) * 2007-10-02 2009-04-02 Michael Cameron Hay Transparently migrating ongoing I/O to virtualized storage
US7769722B1 (en) 2006-12-08 2010-08-03 Emc Corporation Replication and restoration of multiple data storage object types in a data network
US7904681B1 (en) * 2006-06-30 2011-03-08 Emc Corporation Methods and systems for migrating data with minimal disruption
US7979260B1 (en) * 2008-03-31 2011-07-12 Symantec Corporation Simulating PXE booting for virtualized machines
US20120005467A1 (en) * 2010-06-30 2012-01-05 International Business Machines Corporation Streaming Virtual Machine Boot Services Over a Network
US8166314B1 (en) 2008-12-30 2012-04-24 Emc Corporation Selective I/O to logical unit when encrypted, but key is not available or when encryption status is unknown
US20120191667A1 (en) * 2011-01-20 2012-07-26 Infinidat Ltd. System and method of storage optimization
US8261068B1 (en) 2008-09-30 2012-09-04 Emc Corporation Systems and methods for selective encryption of operating system metadata for host-based encryption of data at rest on a logical unit
US20120290759A1 (en) * 2006-05-30 2012-11-15 Schneider Electric Industries Sas Virtual Placeholder Configuration for Distributed Input/Output Modules
US8416954B1 (en) 2008-09-30 2013-04-09 Emc Corporation Systems and methods for accessing storage or network based replicas of encrypted volumes with no additional key management
US8635429B1 (en) 2007-06-29 2014-01-21 Symantec Corporation Method and apparatus for mapping virtual drives
US20140052945A1 (en) * 2012-08-14 2014-02-20 International Business Machines Corporation Optimizing storage system behavior in virtualized cloud computing environments by tagging input/output operation data to indicate storage policy
US8706833B1 (en) * 2006-12-08 2014-04-22 Emc Corporation Data storage server having common replication architecture for multiple storage object types
US8738871B1 (en) * 2007-06-29 2014-05-27 Symantec Corporation Method and apparatus for mapping virtual drives
US20140164752A1 (en) * 2012-12-11 2014-06-12 Manikantan Venkiteswaran System and method for selecting a least cost path for performing a network boot in a data center network environment
US8856484B2 (en) * 2012-08-14 2014-10-07 Infinidat Ltd. Mass storage system and methods of controlling resources thereof
US9098325B2 (en) 2012-02-28 2015-08-04 Hewlett-Packard Development Company, L.P. Persistent volume at an offset of a virtual block device of a storage server
US9158568B2 (en) 2012-01-30 2015-10-13 Hewlett-Packard Development Company, L.P. Input/output operations at a virtual block device of a storage server
US9454670B2 (en) 2012-12-03 2016-09-27 International Business Machines Corporation Hybrid file systems
US9946559B1 (en) * 2012-02-13 2018-04-17 Veritas Technologies Llc Techniques for managing virtual machine backups
US10001927B1 (en) * 2014-09-30 2018-06-19 EMC IP Holding Company LLC Techniques for optimizing I/O operations
US20230135096A1 (en) * 2021-11-04 2023-05-04 International Business Machines Corporation File Based Virtual Disk Management

Families Citing this family (272)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9603582D0 (en) 1996-02-20 1996-04-17 Hewlett Packard Co Method of accessing service resource items that are for use in a telecommunications system
US8032701B1 (en) * 2004-03-26 2011-10-04 Emc Corporation System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network
US7024427B2 (en) * 2001-12-19 2006-04-04 Emc Corporation Virtual file system
US7461141B2 (en) * 2004-01-30 2008-12-02 Applied Micro Circuits Corporation System and method for performing driver configuration operations without a system reboot
US20050216680A1 (en) * 2004-03-25 2005-09-29 Itzhak Levy Device to allow multiple data processing channels to share a single disk drive
US7945657B1 (en) * 2005-03-30 2011-05-17 Oracle America, Inc. System and method for emulating input/output performance of an application
EP1769395A2 (en) * 2004-05-21 2007-04-04 Computer Associates Think, Inc. Object-based storage
US9264384B1 (en) * 2004-07-22 2016-02-16 Oracle International Corporation Resource virtualization mechanism including virtual host bus adapters
US7493462B2 (en) * 2005-01-20 2009-02-17 International Business Machines Corporation Apparatus, system, and method for validating logical volume configuration
US8275749B2 (en) * 2005-02-07 2012-09-25 Mimosa Systems, Inc. Enterprise server version migration through identity preservation
US7870416B2 (en) * 2005-02-07 2011-01-11 Mimosa Systems, Inc. Enterprise service availability through identity preservation
US8918366B2 (en) * 2005-02-07 2014-12-23 Mimosa Systems, Inc. Synthetic full copies of data and dynamic bulk-to-brick transformation
US7778976B2 (en) * 2005-02-07 2010-08-17 Mimosa, Inc. Multi-dimensional surrogates for data management
US8543542B2 (en) * 2005-02-07 2013-09-24 Mimosa Systems, Inc. Synthetic full copies of data and dynamic bulk-to-brick transformation
US8271436B2 (en) * 2005-02-07 2012-09-18 Mimosa Systems, Inc. Retro-fitting synthetic full copies of data
US7657780B2 (en) * 2005-02-07 2010-02-02 Mimosa Systems, Inc. Enterprise service availability through identity preservation
US7917475B2 (en) * 2005-02-07 2011-03-29 Mimosa Systems, Inc. Enterprise server version migration through identity preservation
US8161318B2 (en) * 2005-02-07 2012-04-17 Mimosa Systems, Inc. Enterprise service availability through identity preservation
US8812433B2 (en) * 2005-02-07 2014-08-19 Mimosa Systems, Inc. Dynamic bulk-to-brick transformation of data
US8799206B2 (en) * 2005-02-07 2014-08-05 Mimosa Systems, Inc. Dynamic bulk-to-brick transformation of data
US7519851B2 (en) * 2005-02-08 2009-04-14 Hitachi, Ltd. Apparatus for replicating volumes between heterogenous storage systems
US7774514B2 (en) * 2005-05-16 2010-08-10 Infortrend Technology, Inc. Method of transmitting data between storage virtualization controllers and storage virtualization controller designed to implement the method
US7630998B2 (en) * 2005-06-10 2009-12-08 Microsoft Corporation Performing a deletion of a node in a tree data storage structure
US8433770B2 (en) * 2005-07-29 2013-04-30 Broadcom Corporation Combined local and network storage interface
US20070038749A1 (en) * 2005-07-29 2007-02-15 Broadcom Corporation Combined local and network storage interface
US7802000B1 (en) * 2005-08-01 2010-09-21 Vmware Virtual network in server farm
US9813283B2 (en) 2005-08-09 2017-11-07 Oracle International Corporation Efficient data transfer between servers and remote peripherals
CN101356506B (en) * 2005-08-25 2014-01-08 晶像股份有限公司 Smart scalable storage switch architecture
JP2007094578A (en) * 2005-09-27 2007-04-12 Fujitsu Ltd Storage system and its component replacement processing method
US8572330B2 (en) * 2005-12-19 2013-10-29 Commvault Systems, Inc. Systems and methods for granular resource management in a storage network
US8533409B2 (en) * 2006-01-26 2013-09-10 Infortrend Technology, Inc. Method of managing data snapshot images in a storage system
US20070180287A1 (en) * 2006-01-31 2007-08-02 Dell Products L. P. System and method for managing node resets in a cluster
US20070180167A1 (en) * 2006-02-02 2007-08-02 Seagate Technology Llc Dynamic partition mapping in a hot-pluggable data storage apparatus
US7904492B2 (en) * 2006-03-23 2011-03-08 Network Appliance, Inc. Method and apparatus for concurrent read-only access to filesystem
JP2007265001A (en) * 2006-03-28 2007-10-11 Hitachi Ltd Storage device
JP5037881B2 (en) 2006-04-18 2012-10-03 株式会社日立製作所 Storage system and control method thereof
US7617373B2 (en) * 2006-05-23 2009-11-10 International Business Machines Corporation Apparatus, system, and method for presenting a storage volume as a virtual volume
US7536503B1 (en) * 2006-06-30 2009-05-19 Emc Corporation Methods and systems for preserving disk geometry when migrating existing data volumes
US10013268B2 (en) * 2006-08-29 2018-07-03 Prometric Inc. Performance-based testing system and method employing emulation and virtualization
US8095715B1 (en) * 2006-09-05 2012-01-10 Nvidia Corporation SCSI HBA management using logical units
US7584378B2 (en) 2006-09-07 2009-09-01 International Business Machines Corporation Reconfigurable FC-AL storage loops in a data storage system
US7761738B2 (en) 2006-09-07 2010-07-20 International Business Machines Corporation Establishing communications across virtual enclosure boundaries
US8332613B1 (en) * 2006-09-29 2012-12-11 Emc Corporation Methods and systems for managing I/O requests to minimize disruption required for data encapsulation and de-encapsulation
JP2008090657A (en) * 2006-10-03 2008-04-17 Hitachi Ltd Storage system and control method
JP2008112399A (en) * 2006-10-31 2008-05-15 Fujitsu Ltd Storage virtualization switch and computer system
US8489817B2 (en) 2007-12-06 2013-07-16 Fusion-Io, Inc. Apparatus, system, and method for caching data
US8706968B2 (en) 2007-12-06 2014-04-22 Fusion-Io, Inc. Apparatus, system, and method for redundant write caching
US9104599B2 (en) 2007-12-06 2015-08-11 Intelligent Intellectual Property Holdings 2 Llc Apparatus, system, and method for destaging cached data
US8443134B2 (en) 2006-12-06 2013-05-14 Fusion-Io, Inc. Apparatus, system, and method for graceful cache device degradation
CN101622594B (en) 2006-12-06 2013-03-13 弗森-艾奥公司 Apparatus, system, and method for managing data in a request device with an empty data token directive
JP4813385B2 (en) * 2007-01-29 2011-11-09 株式会社日立製作所 Control device that controls multiple logical resources of a storage system
US7840790B1 (en) * 2007-02-16 2010-11-23 Vmware, Inc. Method and system for providing device drivers in a virtualization system
WO2008126202A1 (en) * 2007-03-23 2008-10-23 Fujitsu Limited Load distribution program for storage system, load distribution method for storage system, and storage management device
CN100547566C (en) * 2007-06-28 2009-10-07 忆正存储技术(深圳)有限公司 Control method based on multi-passage flash memory apparatus logic strip
US7568051B1 (en) * 2007-06-29 2009-07-28 Emc Corporation Flexible UCB
US8176405B2 (en) * 2007-09-24 2012-05-08 International Business Machines Corporation Data integrity validation in a computing environment
US20090119452A1 (en) * 2007-11-02 2009-05-07 Crossroads Systems, Inc. Method and system for a sharable storage device
US9519540B2 (en) 2007-12-06 2016-12-13 Sandisk Technologies Llc Apparatus, system, and method for destaging cached data
US7836226B2 (en) 2007-12-06 2010-11-16 Fusion-Io, Inc. Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment
WO2009070898A1 (en) * 2007-12-07 2009-06-11 Scl Elements Inc. Auto-configuring multi-layer network
US8032689B2 (en) * 2007-12-18 2011-10-04 Hitachi Global Storage Technologies Netherlands, B.V. Techniques for data storage device virtualization
US8028062B1 (en) * 2007-12-26 2011-09-27 Emc Corporation Non-disruptive data mobility using virtual storage area networks with split-path virtualization
US8055867B2 (en) * 2008-01-11 2011-11-08 International Business Machines Corporation Methods, apparatuses, and computer program products for protecting pre-staged provisioned data in a storage system
US8074020B2 (en) * 2008-02-13 2011-12-06 International Business Machines Corporation On-line volume coalesce operation to enable on-line storage subsystem volume consolidation
US20090216944A1 (en) * 2008-02-22 2009-08-27 International Business Machines Corporation Efficient validation of writes for protection against dropped writes
WO2009120198A1 (en) * 2008-03-27 2009-10-01 Hewlett-Packard Development Company, L.P. Raid array access by a raid array-unaware operating system
JP2009238114A (en) * 2008-03-28 2009-10-15 Hitachi Ltd Storage management method, storage management program, storage management apparatus, and storage management system
US8745336B2 (en) * 2008-05-29 2014-06-03 Vmware, Inc. Offloading storage operations to storage hardware
US8893160B2 (en) * 2008-06-09 2014-11-18 International Business Machines Corporation Block storage interface for virtual memory
GB2460841B (en) 2008-06-10 2012-01-11 Virtensys Ltd Methods of providing access to I/O devices
US8725688B2 (en) * 2008-09-05 2014-05-13 Commvault Systems, Inc. Image level copy or restore, such as image level restore without knowledge of data object metadata
US8073674B2 (en) * 2008-09-23 2011-12-06 Oracle America, Inc. SCSI device emulation in user space facilitating storage virtualization
US8516190B1 (en) * 2008-09-26 2013-08-20 Nvidia Corporation Reporting logical sector alignment for ATA mass storage devices
US8055842B1 (en) 2008-09-26 2011-11-08 Nvidia Corporation Using raid with large sector size ATA mass storage devices
US20100082715A1 (en) * 2008-09-30 2010-04-01 Karl Dohm Reduced-Resource Block Thin Provisioning
US8510352B2 (en) 2008-10-24 2013-08-13 Microsoft Corporation Virtualized boot block with discovery volume
US8417969B2 (en) * 2009-02-19 2013-04-09 Microsoft Corporation Storage volume protection supporting legacy systems
US8073886B2 (en) 2009-02-20 2011-12-06 Microsoft Corporation Non-privileged access to data independent of filesystem implementation
US8074038B2 (en) 2009-05-12 2011-12-06 Microsoft Corporation Converting luns into files or files into luns in real time
US9015198B2 (en) * 2009-05-26 2015-04-21 Pi-Coral, Inc. Method and apparatus for large scale data storage
US8238538B2 (en) 2009-05-28 2012-08-07 Comcast Cable Communications, Llc Stateful home phone service
US9973446B2 (en) 2009-08-20 2018-05-15 Oracle International Corporation Remote shared server peripherals over an Ethernet network for resource virtualization
US8495289B2 (en) * 2010-02-24 2013-07-23 Red Hat, Inc. Automatically detecting discrepancies between storage subsystem alignments
US8539124B1 (en) * 2010-03-31 2013-09-17 Emc Corporation Storage integration plugin for virtual servers
US8756338B1 (en) * 2010-04-29 2014-06-17 Netapp, Inc. Storage server with embedded communication agent
US8261003B2 (en) * 2010-08-11 2012-09-04 Lsi Corporation Apparatus and methods for managing expanded capacity of virtual volumes in a storage system
JP2012058912A (en) * 2010-09-07 2012-03-22 Nec Corp Logical unit number management device, logical unit number management method and program therefor
US11614893B2 (en) 2010-09-15 2023-03-28 Pure Storage, Inc. Optimizing storage device access based on latency
US9331963B2 (en) 2010-09-24 2016-05-03 Oracle International Corporation Wireless host I/O using virtualized I/O controllers
CN101986655A (en) * 2010-10-21 2011-03-16 浪潮(北京)电子信息产业有限公司 Storage network and data reading and writing method thereof
US8966184B2 (en) 2011-01-31 2015-02-24 Intelligent Intellectual Property Holdings 2, LLC. Apparatus, system, and method for managing eviction of data
WO2012116369A2 (en) 2011-02-25 2012-08-30 Fusion-Io, Inc. Apparatus, system, and method for managing contents of a cache
US9606747B2 (en) 2011-05-04 2017-03-28 International Business Machines Corporation Importing pre-existing data of a prior storage solution into a storage pool for use with a new storage solution
US8838931B1 (en) * 2012-03-30 2014-09-16 Emc Corporation Techniques for automated discovery and performing storage optimizations on a component external to a data storage system
US8996800B2 (en) 2011-07-07 2015-03-31 Atlantis Computing, Inc. Deduplication of virtual machine files in a virtualized desktop environment
US20130268559A1 (en) 2011-07-13 2013-10-10 Z124 Virtual file system remote search
US9152404B2 (en) 2011-07-13 2015-10-06 Z124 Remote device filter
US8909891B2 (en) 2011-07-21 2014-12-09 International Business Machines Corporation Virtual logical volume for overflow storage of special data sets
US8589640B2 (en) 2011-10-14 2013-11-19 Pure Storage, Inc. Method for maintaining multiple fingerprint tables in a deduplicating storage system
US20130268703A1 (en) * 2011-09-27 2013-10-10 Z124 Rules based hierarchical data virtualization
CN102567217B (en) * 2012-01-04 2014-12-24 北京航空航天大学 MIPS platform-oriented memory virtualization method
US9767032B2 (en) 2012-01-12 2017-09-19 Sandisk Technologies Llc Systems and methods for cache endurance
US9251086B2 (en) 2012-01-24 2016-02-02 SanDisk Technologies, Inc. Apparatus, system, and method for managing a cache
US9239776B2 (en) * 2012-02-09 2016-01-19 Vmware, Inc. Systems and methods to simulate storage
US10831727B2 (en) 2012-05-29 2020-11-10 International Business Machines Corporation Application-controlled sub-LUN level data migration
US10831728B2 (en) 2012-05-29 2020-11-10 International Business Machines Corporation Application-controlled sub-LUN level data migration
US10817202B2 (en) 2012-05-29 2020-10-27 International Business Machines Corporation Application-controlled sub-LUN level data migration
US9083550B2 (en) 2012-10-29 2015-07-14 Oracle International Corporation Network virtualization over infiniband
US20140164581A1 (en) * 2012-12-10 2014-06-12 Transparent Io, Inc. Dispersed Storage System with Firewall
US9912713B1 (en) 2012-12-17 2018-03-06 MiMedia LLC Systems and methods for providing dynamically updated image sets for applications
US9277010B2 (en) * 2012-12-21 2016-03-01 Atlantis Computing, Inc. Systems and apparatuses for aggregating nodes to form an aggregated virtual storage for a virtualized desktop environment
US9069472B2 (en) 2012-12-21 2015-06-30 Atlantis Computing, Inc. Method for dispersing and collating I/O's from virtual machines for parallelization of I/O access and redundancy of storing virtual machine data
US9633216B2 (en) 2012-12-27 2017-04-25 Commvault Systems, Inc. Application of information management policies based on operation with a geographic entity
US10445229B1 (en) * 2013-01-28 2019-10-15 Radian Memory Systems, Inc. Memory controller with at least one address segment defined for which data is striped across flash memory dies, with a common address offset being used to obtain physical addresses for the data in each of the dies
US9250946B2 (en) 2013-02-12 2016-02-02 Atlantis Computing, Inc. Efficient provisioning of cloned virtual machine images using deduplication metadata
US9372865B2 (en) 2013-02-12 2016-06-21 Atlantis Computing, Inc. Deduplication metadata access in deduplication file system
US9471590B2 (en) 2013-02-12 2016-10-18 Atlantis Computing, Inc. Method and apparatus for replicating virtual machine images using deduplication metadata
US9459968B2 (en) 2013-03-11 2016-10-04 Commvault Systems, Inc. Single index to query multiple backup formats
US9465521B1 (en) 2013-03-13 2016-10-11 MiMedia, Inc. Event based media interface
US9298758B1 (en) 2013-03-13 2016-03-29 MiMedia, Inc. Systems and methods providing media-to-media connection
US9183232B1 (en) 2013-03-15 2015-11-10 MiMedia, Inc. Systems and methods for organizing content using content organization rules and robust content information
US10257301B1 (en) 2013-03-15 2019-04-09 MiMedia, Inc. Systems and methods providing a drive interface for content delivery
US20140359612A1 (en) * 2013-06-03 2014-12-04 Microsoft Corporation Sharing a Virtual Hard Disk Across Multiple Virtual Machines
US9176890B2 (en) 2013-06-07 2015-11-03 Globalfoundries Inc. Non-disruptive modification of a device mapper stack
US9798596B2 (en) 2014-02-27 2017-10-24 Commvault Systems, Inc. Automatic alert escalation for an information management system
US9871889B1 (en) * 2014-03-18 2018-01-16 EMC IP Holding Company LLC Techniques for automated capture of configuration data for simulation
US10574754B1 (en) 2014-06-04 2020-02-25 Pure Storage, Inc. Multi-chassis array with multi-level load balancing
US9213485B1 (en) 2014-06-04 2015-12-15 Pure Storage, Inc. Storage system architecture
US9003144B1 (en) 2014-06-04 2015-04-07 Pure Storage, Inc. Mechanism for persisting messages in a storage system
US11068363B1 (en) 2014-06-04 2021-07-20 Pure Storage, Inc. Proactively rebuilding data in a storage cluster
US11399063B2 (en) 2014-06-04 2022-07-26 Pure Storage, Inc. Network authentication for a storage system
US9218244B1 (en) 2014-06-04 2015-12-22 Pure Storage, Inc. Rebuilding data across storage nodes
US11652884B2 (en) 2014-06-04 2023-05-16 Pure Storage, Inc. Customized hash algorithms
US9367243B1 (en) 2014-06-04 2016-06-14 Pure Storage, Inc. Scalable non-uniform storage sizes
US9836234B2 (en) 2014-06-04 2017-12-05 Pure Storage, Inc. Storage cluster
US8850108B1 (en) 2014-06-04 2014-09-30 Pure Storage, Inc. Storage cluster
US8868825B1 (en) 2014-07-02 2014-10-21 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US10114757B2 (en) 2014-07-02 2018-10-30 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US9021297B1 (en) 2014-07-02 2015-04-28 Pure Storage, Inc. Redundant, fault-tolerant, distributed remote procedure call cache in a storage system
US9836245B2 (en) 2014-07-02 2017-12-05 Pure Storage, Inc. Non-volatile RAM and flash memory in a non-volatile solid-state storage
US11886308B2 (en) 2014-07-02 2024-01-30 Pure Storage, Inc. Dual class of service for unified file and object messaging
US11604598B2 (en) 2014-07-02 2023-03-14 Pure Storage, Inc. Storage cluster with zoned drives
US10853311B1 (en) 2014-07-03 2020-12-01 Pure Storage, Inc. Administration through files in a storage system
US8874836B1 (en) 2014-07-03 2014-10-28 Pure Storage, Inc. Scheduling policy for queues in a non-volatile solid-state storage
US9811677B2 (en) 2014-07-03 2017-11-07 Pure Storage, Inc. Secure data replication in a storage grid
US9747229B1 (en) 2014-07-03 2017-08-29 Pure Storage, Inc. Self-describing data format for DMA in a non-volatile solid-state storage
US9082512B1 (en) 2014-08-07 2015-07-14 Pure Storage, Inc. Die-level monitoring in a storage cluster
US9495255B2 (en) 2014-08-07 2016-11-15 Pure Storage, Inc. Error recovery in a storage cluster
US10983859B2 (en) 2014-08-07 2021-04-20 Pure Storage, Inc. Adjustable error correction based on memory health in a storage unit
US9766972B2 (en) 2014-08-07 2017-09-19 Pure Storage, Inc. Masking defective bits in a storage array
US9558069B2 (en) 2014-08-07 2017-01-31 Pure Storage, Inc. Failure mapping in a storage array
US9483346B2 (en) 2014-08-07 2016-11-01 Pure Storage, Inc. Data rebuild on feedback from a queue in a non-volatile solid-state storage
US10079711B1 (en) 2014-08-20 2018-09-18 Pure Storage, Inc. Virtual file server with preserved MAC address
US9389789B2 (en) 2014-12-15 2016-07-12 International Business Machines Corporation Migration of executing applications and associated stored data
JP6435842B2 (en) 2014-12-17 2018-12-12 富士通株式会社 Storage control device and storage control program
US9948615B1 (en) 2015-03-16 2018-04-17 Pure Storage, Inc. Increased storage unit encryption based on loss of trust
US11294893B2 (en) 2015-03-20 2022-04-05 Pure Storage, Inc. Aggregation of queries
US9940234B2 (en) 2015-03-26 2018-04-10 Pure Storage, Inc. Aggressive data deduplication using lazy garbage collection
US10082985B2 (en) 2015-03-27 2018-09-25 Pure Storage, Inc. Data striping across storage nodes that are assigned to multiple logical arrays
US10178169B2 (en) 2015-04-09 2019-01-08 Pure Storage, Inc. Point to point based backend communication layer for storage processing
US9672125B2 (en) 2015-04-10 2017-06-06 Pure Storage, Inc. Ability to partition an array into two or more logical arrays with independently running software
US10140149B1 (en) 2015-05-19 2018-11-27 Pure Storage, Inc. Transactional commits with hardware assists in remote memory
US9817576B2 (en) 2015-05-27 2017-11-14 Pure Storage, Inc. Parallel update to NVRAM
US10846275B2 (en) 2015-06-26 2020-11-24 Pure Storage, Inc. Key management in a storage device
WO2017006458A1 (en) * 2015-07-08 2017-01-12 株式会社日立製作所 Computer and memory region management method
US10983732B2 (en) 2015-07-13 2021-04-20 Pure Storage, Inc. Method and system for accessing a file
US11232079B2 (en) 2015-07-16 2022-01-25 Pure Storage, Inc. Efficient distribution of large directories
JP6461347B2 (en) * 2015-07-27 2019-01-30 株式会社日立製作所 Storage system and storage control method
US10108355B2 (en) 2015-09-01 2018-10-23 Pure Storage, Inc. Erase block state detection
US11341136B2 (en) 2015-09-04 2022-05-24 Pure Storage, Inc. Dynamically resizable structures for approximate membership queries
US10853266B2 (en) 2015-09-30 2020-12-01 Pure Storage, Inc. Hardware assisted data lookup methods
US10762069B2 (en) 2015-09-30 2020-09-01 Pure Storage, Inc. Mechanism for a system where data and metadata are located closely together
US9768953B2 (en) 2015-09-30 2017-09-19 Pure Storage, Inc. Resharing of a split secret
US9965184B2 (en) 2015-10-19 2018-05-08 International Business Machines Corporation Multiple storage subpools of a virtual storage pool in a multiple processor environment
US9843453B2 (en) 2015-10-23 2017-12-12 Pure Storage, Inc. Authorizing I/O commands with I/O tokens
US10007457B2 (en) 2015-12-22 2018-06-26 Pure Storage, Inc. Distributed transactions with token-associated execution
US10261690B1 (en) 2016-05-03 2019-04-16 Pure Storage, Inc. Systems and methods for operating a storage system
US10296250B2 (en) * 2016-06-08 2019-05-21 Intel Corporation Method and apparatus for improving performance of sequential logging in a storage device
EP3308316B1 (en) * 2016-07-05 2020-09-02 Viirii, LLC Operating system independent, secure data storage subsystem
US11861188B2 (en) 2016-07-19 2024-01-02 Pure Storage, Inc. System having modular accelerators
US10768819B2 (en) 2016-07-22 2020-09-08 Pure Storage, Inc. Hardware support for non-disruptive upgrades
US11449232B1 (en) 2016-07-22 2022-09-20 Pure Storage, Inc. Optimal scheduling of flash operations
US9672905B1 (en) 2016-07-22 2017-06-06 Pure Storage, Inc. Optimize data protection layouts based on distributed flash wear leveling
US11080155B2 (en) 2016-07-24 2021-08-03 Pure Storage, Inc. Identifying error types among flash memory
US11604690B2 (en) 2016-07-24 2023-03-14 Pure Storage, Inc. Online failure span determination
US10216420B1 (en) 2016-07-24 2019-02-26 Pure Storage, Inc. Calibration of flash channels in SSD
US11734169B2 (en) 2016-07-26 2023-08-22 Pure Storage, Inc. Optimizing spool and memory space management
US11886334B2 (en) 2016-07-26 2024-01-30 Pure Storage, Inc. Optimizing spool and memory space management
US10203903B2 (en) 2016-07-26 2019-02-12 Pure Storage, Inc. Geometry based, space aware shelf/writegroup evacuation
US10366004B2 (en) 2016-07-26 2019-07-30 Pure Storage, Inc. Storage system with elective garbage collection to reduce flash contention
US11797212B2 (en) 2016-07-26 2023-10-24 Pure Storage, Inc. Data migration for zoned drives
US11422719B2 (en) 2016-09-15 2022-08-23 Pure Storage, Inc. Distributed file deletion and truncation
US10756816B1 (en) 2016-10-04 2020-08-25 Pure Storage, Inc. Optimized fibre channel and non-volatile memory express access
US9747039B1 (en) 2016-10-04 2017-08-29 Pure Storage, Inc. Reservations over multiple paths on NVMe over fabrics
US11550481B2 (en) 2016-12-19 2023-01-10 Pure Storage, Inc. Efficiently writing data in a zoned drive storage system
US11955187B2 (en) 2017-01-13 2024-04-09 Pure Storage, Inc. Refresh of differing capacity NAND
US9747158B1 (en) 2017-01-13 2017-08-29 Pure Storage, Inc. Intelligent refresh of 3D NAND
US10620835B2 (en) * 2017-01-27 2020-04-14 Wyse Technology L.L.C. Attaching a windows file system to a remote non-windows disk stack
US10979223B2 (en) 2017-01-31 2021-04-13 Pure Storage, Inc. Separate encryption for a solid-state drive
US10838821B2 (en) 2017-02-08 2020-11-17 Commvault Systems, Inc. Migrating content and metadata from a backup system
US10776329B2 (en) 2017-03-28 2020-09-15 Commvault Systems, Inc. Migration of a database management system to cloud storage
US10528488B1 (en) 2017-03-30 2020-01-07 Pure Storage, Inc. Efficient name coding
US10754829B2 (en) 2017-04-04 2020-08-25 Oracle International Corporation Virtual configuration systems and methods
US11016667B1 (en) 2017-04-05 2021-05-25 Pure Storage, Inc. Efficient mapping for LUNs in storage memory with holes in address space
US10944671B2 (en) 2017-04-27 2021-03-09 Pure Storage, Inc. Efficient data forwarding in a networked device
US10516645B1 (en) 2017-04-27 2019-12-24 Pure Storage, Inc. Address resolution broadcasting in a networked device
US10141050B1 (en) 2017-04-27 2018-11-27 Pure Storage, Inc. Page writes for triple level cell flash memory
US10524022B2 (en) * 2017-05-02 2019-12-31 Seagate Technology Llc Data storage system with adaptive data path routing
US11467913B1 (en) 2017-06-07 2022-10-11 Pure Storage, Inc. Snapshots with crash consistency in a storage system
US11138103B1 (en) 2017-06-11 2021-10-05 Pure Storage, Inc. Resiliency groups
US11782625B2 (en) 2017-06-11 2023-10-10 Pure Storage, Inc. Heterogeneity supportive resiliency groups
US11947814B2 (en) 2017-06-11 2024-04-02 Pure Storage, Inc. Optimizing resiliency group formation stability
US10425473B1 (en) 2017-07-03 2019-09-24 Pure Storage, Inc. Stateful connection reset in a storage cluster with a stateless load balancer
US10402266B1 (en) 2017-07-31 2019-09-03 Pure Storage, Inc. Redundant array of independent disks in a direct-mapped flash storage system
US10210926B1 (en) 2017-09-15 2019-02-19 Pure Storage, Inc. Tracking of optimum read voltage thresholds in nand flash devices
US10877827B2 (en) 2017-09-15 2020-12-29 Pure Storage, Inc. Read voltage optimization
US10884919B2 (en) 2017-10-31 2021-01-05 Pure Storage, Inc. Memory management in a storage system
US11024390B1 (en) 2017-10-31 2021-06-01 Pure Storage, Inc. Overlapping RAID groups
US10496330B1 (en) 2017-10-31 2019-12-03 Pure Storage, Inc. Using flash storage devices with different sized erase blocks
US10545687B1 (en) 2017-10-31 2020-01-28 Pure Storage, Inc. Data rebuild when changing erase block sizes during drive replacement
US10515701B1 (en) 2017-10-31 2019-12-24 Pure Storage, Inc. Overlapping raid groups
US10860475B1 (en) 2017-11-17 2020-12-08 Pure Storage, Inc. Hybrid flash translation layer
US10990566B1 (en) 2017-11-20 2021-04-27 Pure Storage, Inc. Persistent file locks in a storage system
US10719265B1 (en) 2017-12-08 2020-07-21 Pure Storage, Inc. Centralized, quorum-aware handling of device reservation requests in a storage system
US10929053B2 (en) 2017-12-08 2021-02-23 Pure Storage, Inc. Safe destructive actions on drives
US10929031B2 (en) 2017-12-21 2021-02-23 Pure Storage, Inc. Maximizing data reduction in a partially encrypted volume
US10467527B1 (en) 2018-01-31 2019-11-05 Pure Storage, Inc. Method and apparatus for artificial intelligence acceleration
US10733053B1 (en) 2018-01-31 2020-08-04 Pure Storage, Inc. Disaster recovery for high-bandwidth distributed archives
US10976948B1 (en) 2018-01-31 2021-04-13 Pure Storage, Inc. Cluster expansion mechanism
US11036596B1 (en) 2018-02-18 2021-06-15 Pure Storage, Inc. System for delaying acknowledgements on open NAND locations until durability has been confirmed
US11494109B1 (en) 2018-02-22 2022-11-08 Pure Storage, Inc. Erase block trimming for heterogenous flash memory storage devices
US10853146B1 (en) 2018-04-27 2020-12-01 Pure Storage, Inc. Efficient data forwarding in a networked device
US10931450B1 (en) 2018-04-27 2021-02-23 Pure Storage, Inc. Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers
US11385792B2 (en) 2018-04-27 2022-07-12 Pure Storage, Inc. High availability controller pair transitioning
US20190362075A1 (en) * 2018-05-22 2019-11-28 Fortinet, Inc. Preventing users from accessing infected files by using multiple file storage repositories and a secure data transfer agent logically interposed therebetween
US11436023B2 (en) 2018-05-31 2022-09-06 Pure Storage, Inc. Mechanism for updating host file system and flash translation layer based on underlying NAND technology
US11438279B2 (en) 2018-07-23 2022-09-06 Pure Storage, Inc. Non-disruptive conversion of a clustered service from single-chassis to multi-chassis
US11520514B2 (en) 2018-09-06 2022-12-06 Pure Storage, Inc. Optimized relocation of data based on data characteristics
US11868309B2 (en) 2018-09-06 2024-01-09 Pure Storage, Inc. Queue management for data relocation
US11500570B2 (en) 2018-09-06 2022-11-15 Pure Storage, Inc. Efficient relocation of data utilizing different programming modes
US11354058B2 (en) 2018-09-06 2022-06-07 Pure Storage, Inc. Local relocation of data stored at a storage device of a storage system
US11036856B2 (en) 2018-09-16 2021-06-15 Fortinet, Inc. Natively mounting storage for inspection and sandboxing in the cloud
US10454498B1 (en) 2018-10-18 2019-10-22 Pure Storage, Inc. Fully pipelined hardware engine design for fast and efficient inline lossless data compression
US10976947B2 (en) 2018-10-26 2021-04-13 Pure Storage, Inc. Dynamically selecting segment heights in a heterogeneous RAID group
US11334254B2 (en) 2019-03-29 2022-05-17 Pure Storage, Inc. Reliability based flash page sizing
US11775189B2 (en) 2019-04-03 2023-10-03 Pure Storage, Inc. Segment level heterogeneity
US11099986B2 (en) 2019-04-12 2021-08-24 Pure Storage, Inc. Efficient transfer of memory contents
US11714572B2 (en) 2019-06-19 2023-08-01 Pure Storage, Inc. Optimized data resiliency in a modular storage system
US11281394B2 (en) 2019-06-24 2022-03-22 Pure Storage, Inc. Replication across partitioning schemes in a distributed storage system
US11893126B2 (en) 2019-10-14 2024-02-06 Pure Storage, Inc. Data deletion for a multi-tenant environment
CN112748848A (en) * 2019-10-29 2021-05-04 EMC IP Holding Company LLC Method, apparatus and computer program product for storage management
US11416144B2 (en) 2019-12-12 2022-08-16 Pure Storage, Inc. Dynamic use of segment or zone power loss protection in a flash device
US11847331B2 (en) 2019-12-12 2023-12-19 Pure Storage, Inc. Budgeting open blocks of a storage unit based on power loss prevention
US11704192B2 (en) 2019-12-12 2023-07-18 Pure Storage, Inc. Budgeting open blocks based on power loss protection
US10990537B1 (en) 2020-01-07 2021-04-27 International Business Machines Corporation Logical to virtual and virtual to physical translation in storage class memory
US11188432B2 (en) 2020-02-28 2021-11-30 Pure Storage, Inc. Data resiliency by partially deallocating data blocks of a storage device
US11507297B2 (en) 2020-04-15 2022-11-22 Pure Storage, Inc. Efficient management of optimal read levels for flash storage systems
US11256587B2 (en) 2020-04-17 2022-02-22 Pure Storage, Inc. Intelligent access to a storage device
US11416338B2 (en) 2020-04-24 2022-08-16 Pure Storage, Inc. Resiliency scheme to enhance storage performance
US11474986B2 (en) 2020-04-24 2022-10-18 Pure Storage, Inc. Utilizing machine learning to streamline telemetry processing of storage media
US11768763B2 (en) 2020-07-08 2023-09-26 Pure Storage, Inc. Flash secure erase
US11513974B2 (en) 2020-09-08 2022-11-29 Pure Storage, Inc. Using nonce to control erasure of data blocks of a multi-controller storage system
US11681448B2 (en) 2020-09-08 2023-06-20 Pure Storage, Inc. Multiple device IDs in a multi-fabric module storage system
US11487455B2 (en) 2020-12-17 2022-11-01 Pure Storage, Inc. Dynamic block allocation to optimize storage system performance
US11409608B2 (en) * 2020-12-29 2022-08-09 Advanced Micro Devices, Inc. Providing host-based error detection capabilities in a remote execution device
US11847324B2 (en) 2020-12-31 2023-12-19 Pure Storage, Inc. Optimizing resiliency groups for data regions of a storage system
US11614880B2 (en) 2020-12-31 2023-03-28 Pure Storage, Inc. Storage system with selectable write paths
WO2022157791A1 (en) * 2021-01-25 2022-07-28 Volumez Technologies Ltd. Remote self encrypted drive control method and system
US11630593B2 (en) 2021-03-12 2023-04-18 Pure Storage, Inc. Inline flash memory qualification in a storage system
US11507597B2 (en) 2021-03-31 2022-11-22 Pure Storage, Inc. Data replication to meet a recovery point objective
US11832410B2 (en) 2021-09-14 2023-11-28 Pure Storage, Inc. Mechanical energy absorbing bracket apparatus
US11907551B2 (en) * 2022-07-01 2024-02-20 Dell Products, L.P. Performance efficient and resilient creation of network attached storage objects

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5193184A (en) * 1990-06-18 1993-03-09 Storage Technology Corporation Deleted data file space release system for a dynamically mapped virtual data storage subsystem
US20010056525A1 (en) * 2000-06-19 2001-12-27 Storage Technology Corporation Using current internal mapping mechanisms to implement dynamic mapping operations
US6347371B1 (en) * 1999-01-25 2002-02-12 Dell Usa, L.P. System and method for initiating operation of a computer system
US20020156984A1 (en) * 2001-02-20 2002-10-24 Storageapps Inc. System and method for accessing a storage area network as network attached storage
US6658563B1 (en) * 2000-05-18 2003-12-02 International Business Machines Corporation Virtual floppy diskette image within a primary partition in a hard disk drive and method for booting system with virtual diskette
US20040153639A1 (en) * 2003-02-05 2004-08-05 Dell Products L.P. System and method for sharing storage to boot multiple servers
US20050228937A1 (en) * 2003-11-26 2005-10-13 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US20050234846A1 (en) * 2004-04-15 2005-10-20 Raytheon Company System and method for computer cluster virtualization using dynamic boot images and virtual disk
US20060010316A1 (en) * 2002-04-18 2006-01-12 Gintautas Burokas System for and method of network booting of an operating system to a client computer using hibernation
US20060020848A1 (en) * 2002-05-03 2006-01-26 Marc Duncan Systems and methods for out-of-band booting of a computer
US20060112251A1 (en) * 2003-11-26 2006-05-25 Veritas Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US20070067435A1 (en) * 2003-10-08 2007-03-22 Landis John A Virtual data center that allocates and manages system resources across multiple nodes

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5829053A (en) * 1996-05-10 1998-10-27 Apple Computer, Inc. Block storage memory management system and method utilizing independent partition managers and device drivers
US6044367A (en) * 1996-08-02 2000-03-28 Hewlett-Packard Company Distributed I/O store
US6493811B1 (en) * 1998-01-26 2002-12-10 Computer Associates Think, Inc. Intelligent controller accessed through addressable virtual space
US6240416B1 (en) * 1998-09-11 2001-05-29 Ambeo, Inc. Distributed metadata system and method
US6311213B2 (en) * 1998-10-27 2001-10-30 International Business Machines Corporation System and method for server-to-server data storage in a network environment
US6434637B1 (en) * 1998-12-31 2002-08-13 Emc Corporation Method and apparatus for balancing workloads among paths in a multi-path computer system based on the state of previous I/O operations
US6370605B1 (en) * 1999-03-04 2002-04-09 Sun Microsystems, Inc. Switch based scalable performance storage architecture
US6467023B1 (en) * 1999-03-23 2002-10-15 Lsi Logic Corporation Method for logical unit creation with immediate availability in a raid storage environment
US6779016B1 (en) * 1999-08-23 2004-08-17 Terraspring, Inc. Extensible computing system
EP1229435A4 (en) * 1999-10-22 2008-08-06 Hitachi Ltd Storage area network system
JP4651230B2 (en) * 2001-07-13 2011-03-16 株式会社日立製作所 Storage system and access control method to logical unit
US20020103889A1 (en) * 2000-02-11 2002-08-01 Thomas Markson Virtual storage layer approach for dynamically associating computer storage with processing hosts
US6912537B2 (en) * 2000-06-20 2005-06-28 Storage Technology Corporation Dynamically changeable virtual mapping scheme
WO2002037300A1 (en) * 2000-11-02 2002-05-10 Pirus Networks Switching system
US6871245B2 (en) * 2000-11-29 2005-03-22 Radiant Data Corporation File system translators and methods for implementing the same
JP4187403B2 (en) * 2000-12-20 2008-11-26 インターナショナル・ビジネス・マシーンズ・コーポレーション Data recording system, data recording method, and network system
WO2002065249A2 (en) * 2001-02-13 2002-08-22 Candera, Inc. Storage virtualization and storage management to provide higher level storage services
JP4105398B2 (en) * 2001-02-28 2008-06-25 株式会社日立製作所 Information processing system
US6779063B2 (en) * 2001-04-09 2004-08-17 Hitachi, Ltd. Direct access storage system having plural interfaces which permit receipt of block and file I/O requests
US20040015864A1 (en) * 2001-06-05 2004-01-22 Boucher Michael L. Method and system for testing memory operations of computer program
US6782401B2 (en) * 2001-07-02 2004-08-24 Sepaton, Inc. Method and apparatus for implementing a reliable open file system
US7548975B2 (en) * 2002-01-09 2009-06-16 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network through a virtual enclosure
US7433948B2 (en) * 2002-01-23 2008-10-07 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network
US6934799B2 (en) * 2002-01-18 2005-08-23 International Business Machines Corporation Virtualization of iSCSI storage
US6954839B2 (en) * 2002-03-13 2005-10-11 Hitachi, Ltd. Computer system
US6889309B1 (en) * 2002-04-15 2005-05-03 Emc Corporation Method and apparatus for implementing an enterprise virtual storage system
US7188194B1 (en) * 2002-04-22 2007-03-06 Cisco Technology, Inc. Session-based target/LUN mapping for a storage area network and associated method
US7107385B2 (en) * 2002-08-09 2006-09-12 Network Appliance, Inc. Storage virtualization by layering virtual disk objects on a file system
US7100089B1 (en) * 2002-09-06 2006-08-29 3Pardata, Inc. Determining differences between snapshots
US7263593B2 (en) * 2002-11-25 2007-08-28 Hitachi, Ltd. Virtualization controller and data transfer control method
US7797392B2 (en) * 2002-11-26 2010-09-14 International Business Machines Corporation System and method for efficiently supporting multiple native network protocol implementations in a single system
US7020760B2 (en) * 2002-12-16 2006-03-28 International Business Machines Corporation Hybrid logical block virtualization system for a storage area network
US6816917B2 (en) * 2003-01-15 2004-11-09 Hewlett-Packard Development Company, L.P. Storage system with LUN virtualization
US7606239B2 (en) * 2003-01-31 2009-10-20 Brocade Communications Systems, Inc. Method and apparatus for providing virtual ports with attached virtual devices in a storage area network
US20050125538A1 (en) * 2003-12-03 2005-06-09 Dell Products L.P. Assigning logical storage units to host computers

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5193184A (en) * 1990-06-18 1993-03-09 Storage Technology Corporation Deleted data file space release system for a dynamically mapped virtual data storage subsystem
US6347371B1 (en) * 1999-01-25 2002-02-12 Dell Usa, L.P. System and method for initiating operation of a computer system
US6658563B1 (en) * 2000-05-18 2003-12-02 International Business Machines Corporation Virtual floppy diskette image within a primary partition in a hard disk drive and method for booting system with virtual diskette
US20010056525A1 (en) * 2000-06-19 2001-12-27 Storage Technology Corporation Using current internal mapping mechanisms to implement dynamic mapping operations
US20020156984A1 (en) * 2001-02-20 2002-10-24 Storageapps Inc. System and method for accessing a storage area network as network attached storage
US20060010316A1 (en) * 2002-04-18 2006-01-12 Gintautas Burokas System for and method of network booting of an operating system to a client computer using hibernation
US20060020848A1 (en) * 2002-05-03 2006-01-26 Marc Duncan Systems and methods for out-of-band booting of a computer
US20040153639A1 (en) * 2003-02-05 2004-08-05 Dell Products L.P. System and method for sharing storage to boot multiple servers
US20070067435A1 (en) * 2003-10-08 2007-03-22 Landis John A Virtual data center that allocates and manages system resources across multiple nodes
US20050228937A1 (en) * 2003-11-26 2005-10-13 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US20050235132A1 (en) * 2003-11-26 2005-10-20 Veritas Operating Corporation System and method for dynamic LUN mapping
US20060112251A1 (en) * 2003-11-26 2006-05-25 Veritas Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US20050234846A1 (en) * 2004-04-15 2005-10-20 Raytheon Company System and method for computer cluster virtualization using dynamic boot images and virtual disk

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050235132A1 (en) * 2003-11-26 2005-10-20 Veritas Operating Corporation System and method for dynamic LUN mapping
US20060112251A1 (en) * 2003-11-26 2006-05-25 Veritas Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US7689803B2 (en) 2003-11-26 2010-03-30 Symantec Operating Corporation System and method for communication using emulated LUN blocks in storage virtualization environments
US20050228937A1 (en) * 2003-11-26 2005-10-13 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US7669032B2 (en) 2003-11-26 2010-02-23 Symantec Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US7409495B1 (en) * 2004-12-22 2008-08-05 Symantec Operating Corporation Method and apparatus for providing a temporal storage appliance with block virtualization in storage networks
US20070083653A1 (en) * 2005-09-16 2007-04-12 Balasubramanian Chandrasekaran System and method for deploying information handling system images through fibre channel
US20070136389A1 (en) * 2005-11-29 2007-06-14 Milena Bergant Replication of a consistency group of data storage objects from servers in a data network
US7765187B2 (en) 2005-11-29 2010-07-27 Emc Corporation Replication of a consistency group of data storage objects from servers in a data network
US7441009B2 (en) * 2005-12-27 2008-10-21 Fujitsu Limited Computer system and storage virtualizer
US8379541B2 (en) 2006-01-16 2013-02-19 Hitachi, Ltd. Information platform and configuration method of multiple information processing systems thereof
US20110153795A1 (en) * 2006-01-16 2011-06-23 Hitachi, Ltd. Information platform and configuration method of multiple information processing systems thereof
US7903677B2 (en) * 2006-01-16 2011-03-08 Hitachi, Ltd. Information platform and configuration method of multiple information processing systems thereof
US20070165659A1 (en) * 2006-01-16 2007-07-19 Hitachi, Ltd. Information platform and configuration method of multiple information processing systems thereof
US20120290759A1 (en) * 2006-05-30 2012-11-15 Schneider Electric Industries Sas Virtual Placeholder Configuration for Distributed Input/Output Modules
US8966028B2 (en) * 2006-05-30 2015-02-24 Schneider Electric USA, Inc. Virtual placeholder configuration for distributed input/output modules
US7904681B1 (en) * 2006-06-30 2011-03-08 Emc Corporation Methods and systems for migrating data with minimal disruption
US8909746B2 (en) 2006-07-25 2014-12-09 Nvidia Corporation System and method for operating system installation on a diskless computing platform
US9003000B2 (en) * 2006-07-25 2015-04-07 Nvidia Corporation System and method for operating system installation on a diskless computing platform
US7610483B2 (en) 2006-07-25 2009-10-27 Nvidia Corporation System and method to accelerate identification of hardware platform classes
US20080043000A1 (en) * 2006-07-25 2008-02-21 Andrew Currid System and method to accelerate identification of hardware platform classes
US20080028052A1 (en) * 2006-07-25 2008-01-31 Andrew Currid System and method for operating system installation on a diskless computing platform
US20080028035A1 (en) * 2006-07-25 2008-01-31 Andrew Currid System and method for operating system installation on a diskless computing platform
US7975135B2 (en) 2006-11-23 2011-07-05 Dell Products L.P. Apparatus, method and product for selecting an iSCSI target for automated initiator booting
US20080201570A1 (en) * 2006-11-23 2008-08-21 Dell Products L.P. Apparatus, Method and Product for Selecting an iSCSI Target for Automated Initiator Booting
US7769722B1 (en) 2006-12-08 2010-08-03 Emc Corporation Replication and restoration of multiple data storage object types in a data network
US8706833B1 (en) * 2006-12-08 2014-04-22 Emc Corporation Data storage server having common replication architecture for multiple storage object types
US8635429B1 (en) 2007-06-29 2014-01-21 Symantec Corporation Method and apparatus for mapping virtual drives
US8738871B1 (en) * 2007-06-29 2014-05-27 Symantec Corporation Method and apparatus for mapping virtual drives
US20090089498A1 (en) * 2007-10-02 2009-04-02 Michael Cameron Hay Transparently migrating ongoing I/O to virtualized storage
US7979260B1 (en) * 2008-03-31 2011-07-12 Symantec Corporation Simulating PXE booting for virtualized machines
US8261068B1 (en) 2008-09-30 2012-09-04 Emc Corporation Systems and methods for selective encryption of operating system metadata for host-based encryption of data at rest on a logical unit
US8416954B1 (en) 2008-09-30 2013-04-09 Emc Corporation Systems and methods for accessing storage or network based replicas of encrypted volumes with no additional key management
US8166314B1 (en) 2008-12-30 2012-04-24 Emc Corporation Selective I/O to logical unit when encrypted, but key is not available or when encryption status is unknown
US8560825B2 (en) * 2010-06-30 2013-10-15 International Business Machines Corporation Streaming virtual machine boot services over a network
US20120005467A1 (en) * 2010-06-30 2012-01-05 International Business Machines Corporation Streaming Virtual Machine Boot Services Over a Network
US8458145B2 (en) * 2011-01-20 2013-06-04 Infinidat Ltd. System and method of storage optimization
US20120191667A1 (en) * 2011-01-20 2012-07-26 Infinidat Ltd. System and method of storage optimization
US9223609B2 (en) 2012-01-30 2015-12-29 Hewlett Packard Enterprise Development Lp Input/output operations at a virtual block device of a storage server
US9158568B2 (en) 2012-01-30 2015-10-13 Hewlett-Packard Development Company, L.P. Input/output operations at a virtual block device of a storage server
US9946559B1 (en) * 2012-02-13 2018-04-17 Veritas Technologies Llc Techniques for managing virtual machine backups
US9098325B2 (en) 2012-02-28 2015-08-04 Hewlett-Packard Development Company, L.P. Persistent volume at an offset of a virtual block device of a storage server
US9116623B2 (en) * 2012-08-14 2015-08-25 International Business Machines Corporation Optimizing storage system behavior in virtualized cloud computing environments by tagging input/output operation data to indicate storage policy
US8856484B2 (en) * 2012-08-14 2014-10-07 Infinidat Ltd. Mass storage system and methods of controlling resources thereof
US20140052945A1 (en) * 2012-08-14 2014-02-20 International Business Machines Corporation Optimizing storage system behavior in virtualized cloud computing environments by tagging input/output operation data to indicate storage policy
US9454670B2 (en) 2012-12-03 2016-09-27 International Business Machines Corporation Hybrid file systems
US9471802B2 (en) 2012-12-03 2016-10-18 International Business Machines Corporation Hybrid file systems
US20140164752A1 (en) * 2012-12-11 2014-06-12 Manikantan Venkiteswaran System and method for selecting a least cost path for performing a network boot in a data center network environment
US9280359B2 (en) * 2012-12-11 2016-03-08 Cisco Technology, Inc. System and method for selecting a least cost path for performing a network boot in a data center network environment
US10001927B1 (en) * 2014-09-30 2018-06-19 EMC IP Holding Company LLC Techniques for optimizing I/O operations
US20230135096A1 (en) * 2021-11-04 2023-05-04 International Business Machines Corporation File Based Virtual Disk Management
US11816363B2 (en) * 2021-11-04 2023-11-14 International Business Machines Corporation File based virtual disk management

Also Published As

Publication number Publication date
US20050228937A1 (en) 2005-10-13
JP2007516523A (en) 2007-06-21
CN1906569A (en) 2007-01-31
CN100552611C (en) 2009-10-21
US20050114595A1 (en) 2005-05-26
US7689803B2 (en) 2010-03-30
EP1687706A1 (en) 2006-08-09
US20050235132A1 (en) 2005-10-20
WO2005055043A1 (en) 2005-06-16
JP4750040B2 (en) 2011-08-17

Similar Documents

Publication Publication Date Title
US20050228950A1 (en) External encapsulation of a volume into a LUN to allow booting and installation on a complex volume
US10360056B2 (en) Redeploying a baseline virtual machine to update a child virtual machine by creating and swapping a virtual disk comprising a clone of the baseline virtual machine
US11093155B2 (en) Automated seamless migration with signature issue resolution
US7624262B2 (en) Apparatus, system, and method for booting using an external disk through a virtual SCSI connection
JP4802527B2 (en) Computer system
US20090049160A1 (en) System and Method for Deployment of a Software Image
US20120079474A1 (en) Reimaging a multi-node storage system
JP2005157713A (en) Disk array device
JP2005135408A (en) Hierarchical storage system
US8984224B2 (en) Multiple instances of mapping configurations in a storage system or storage appliance
US20100146039A1 (en) System and Method for Providing Access to a Shared System Image
US11853234B2 (en) Techniques for providing access of host-local storage to a programmable network interface component while preventing direct host CPU access
CN105068836A (en) SAS (serial attached SCSI) network based remotely-shareable start-up system
US11543973B2 (en) Techniques for software recovery and restoration
US20100169589A1 (en) Redundant storage system using dual-ported drives
US11797404B2 (en) Techniques for peer node recovery
US20220012208A1 (en) Configuring a file server
Dell
US11481138B2 (en) Creating identical snapshots
US11586354B2 (en) Techniques for role assignment of components of a distributed application
US11899534B2 (en) Techniques for providing direct host-based access to backup data using a proxy file system
US11397539B2 (en) Distributed backup using local access
US11922043B2 (en) Data migration between storage systems
US8732688B1 (en) Updating system status
US20220317892A1 (en) Correlating time on storage network components

Legal Events

Date Code Title Description
AS Assignment

Owner name: VERITAS OPERATING CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KARR, RONALD S.;REEL/FRAME:016715/0345

Effective date: 20050614

AS Assignment

Owner name: SYMANTEC CORPORATION, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:VERITAS OPERATING CORPORATION;REEL/FRAME:019872/0979

Effective date: 20061030

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: SYMANTEC OPERATING CORPORATION, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE PREVIOUSLY RECORDED ON REEL 019872 FRAME 979. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNEE IS SYMANTEC OPERATING CORPORATION;ASSIGNOR:VERITAS OPERATING CORPORATION;REEL/FRAME:027819/0462

Effective date: 20061030