US6732104B1 - Uniform routing of storage access requests through redundant array controllers


Info

Publication number
US6732104B1
Authority
US
United States
Prior art keywords
logical volume
data
storage
access
server
Prior art date
Legal status
Expired - Lifetime
Application number
US09/874,515
Inventor
Bret S. Weber
Current Assignee
NetApp Inc
Original Assignee
LSI Logic Corp
Priority date
Filing date
Publication date
Application filed by LSI Logic Corp
Priority to US09/874,515
Assigned to LSI LOGIC CORPORATION. Assignors: WEBER, BRET S.
Application granted
Publication of US6732104B1
Assigned to NETAPP, INC. Assignors: LSI LOGIC CORPORATION
Adjusted expiration
Legal status: Expired - Lifetime (current)


Classifications

    • G (Physics) > G06F (Electric digital data processing) > G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers > G06F3/0601 Interfaces specially adapted for storage systems, including:
    • G06F3/0605 Improving or facilitating administration, e.g. storage management, by facilitating the interaction with a user or administrator
    • G06F3/061 Improving I/O performance
    • G06F3/0635 Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0665 Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • the I/O modules 164 and 166 preferably include a variety of the network interfaces 148 - 154 (see also FIG. 2) which connect to the SAN fabrics 140 - 146 (FIG. 2 ). Therefore, the servers 124 - 128 (FIG. 2) use a conventional transfer protocol determined by the type of SAN fabric 140 - 146 to which they are attached.
  • the I/O modules 164 and 166 convert the transfer protocol used by the servers 124 - 128 into the transfer protocol (e.g. Fibre Channel) that is used within the virtual storage device 134 . In this manner, the data storage system 120 (FIG. 2) enables the additional flexibility of allowing the servers 124 - 128 to use a variety of transfer protocols and SAN fabrics 140 - 146 , including, but not limited to, file-level and block-level transfer protocols.
  • each I/O module 164 and 166 preferably includes the same set of network interfaces 148 - 154 , so multiple data transfer paths can be established from any server 124 - 128 through the connected SAN fabric 140 - 146 to any I/O module 164 or 166 .
  • each server 124 - 128 can make use of both of the host bus adapters 138 (FIG. 2) contained therein to send access requests to the virtual storage device 134 . In this manner, the servers 124 - 128 can maximize their data transfer capability.
  • the I/O modules 164 and 166 are configured with information, in accord with the local identifier, identifying the same array controller 188 to receive the access commands distributed from the I/O modules 164 and 166 , i.e. the array controller 188 that has control over the relevant data volumes 184 in each storage array 174 - 178 . Therefore, the I/O modules 164 and 166 distribute the access commands to the same array controller 188 for each data volume 184 . In this manner, unnecessary transfer of control, or ownership, of the data volumes 184 between the array controllers 188 (i.e. “thrashing”) is avoided.
  • the NAS appliance 168 , the snapshot/backup appliance 170 and the RD appliance 172 also each include one of the routers 186 for routing capabilities similar to those described above for the I/O modules 164 and 166 .
  • the NAS appliance 168 may include a set of the network interfaces 148 , 152 and 154 for connecting through the SAN fabrics 140 , 144 and 146 (FIG. 2) to a variety of devices (not shown) that require conventional “file level” data storage, as opposed to conventional “block level” data storage supplied by the I/O modules 164 and 166 .
  • the NAS appliance 168 may not have any of the network interfaces 148 , 152 or 154 , but may access the variety of devices through the switched fabrics 180 and the I/O modules 164 and 166 .
  • the snapshot/backup appliance 170 communicates with the storage arrays 174 - 178 through the internal switched fabrics 180 and with the backup storage system 156 (FIG. 2) through the internal switched fabrics 180 and the I/O modules 164 and 166 .
  • the snapshot/backup appliance 170 preferably forms a conventional “snapshot” of the data volumes 184 in the storage arrays 174 - 178 in order to copy, or backup, the data volumes 184 to the backup storage system 156 . With the backup data (not shown), the data volumes 184 can be restored in the event of loss of data.
  • the RD appliance 172 communicates with the remote data facility 158 (FIG. 2) through the internal switched fabrics 180 and the I/O modules 164 and 166 .
  • the RD appliance 172 maintains a mirrored copy of the data volumes 184 in the remote storage 160 (FIG. 2 ), so that the remote data facility 158 can take over data storage requirements for the client devices 130 in the event of a failure of the data storage system 120 (FIG. 2 ), such as a power failure.
  • the routing functions of the router 186 distribute the access commands to the array controllers 188 that have control over the relevant data volumes 184 , so the connected devices (the backup storage system 156 and the remote data facility 158 ) do not need to have information regarding the individual data volumes 184 . Therefore, the data volumes 184 are presented to the connected devices 156 and 158 as the single logical volumes 122 (FIG. 2 ), as described above, and the appliances 168 - 172 handle the distribution of the access commands to the array controllers 188 .
  • the remote manager 136 controls the creation of the data volumes 184 (FIG. 3) and the logical volumes 122 (FIG. 2) and the configuration of the I/O devices 164 - 172 (FIG. 3) according to a procedure 190 shown in FIG. 4 .
  • the procedure 190 starts at step 192 .
  • the logical volume 122 is created with its component data volumes 184 .
  • the storage space is set aside, or reserved, in the storage devices (not shown) in the storage arrays 174 - 178 (FIG. 3) for the data volumes 184 that make up the logical volume 122 .
  • the logical volume 122 and the data volumes 184 are created based on attributes, or performance parameters, required by the user of the logical volume 122 . Such performance parameters typically include size, transaction rate, bandwidth and RAID level, among others.
  • the logical volume 122 (FIG. 2) is mapped (step 196 ) to a server, global identifier, LUN combination.
  • the server 124 - 128 (FIG. 2) is one which will issue the access requests to the logical volume.
  • the global identifier identifies the logical volume 122 to the server 124 - 128 .
  • the logical volume 122 is also mapped (step 198 ) to the internal identifier, a logical unit number that identifies the logical volume 122 and its component data volumes 184 (FIG. 3) and actual storage arrays 174 - 178 (FIG. 3) and storage devices that make up the logical volume 122 .
  • the I/O devices 164 - 172 (FIG. 3) that are to use, or have access to, the logical volume 122 (FIG. 2) are configured (step 200 ) to map the server, global identifier and LUN combination to the internal identifier.
  • the I/O modules 164 and 166 can interpret, or convert, the access requests received from the server 124 - 128 (FIG. 2) into the specific access commands for the array controllers 188 (FIG. 3) that have control over the relevant data volumes 184 (FIG. 3 ).
  • the appliances 168 - 172 can do similarly for the backup storage system 156 (FIG. 2 ), the remote data facility 158 (FIG. 2) or other device (not shown).
  • the procedure 190 ends at step 201 .
  • the servers 124 - 128 configure themselves to use the logical volumes 122 (FIG. 2) according to a procedure 202 shown in FIG. 5 .
  • the procedure 202 starts at step 203 .
  • the server 124 - 128 discovers (step 204 ) the port identifier and LUN (i.e. the global identifier) by conventional discovery software that queries the I/O modules 164 and 166 (FIG. 3) for the logical volumes 122 .
  • the host file system (not shown) of the server 124 - 128 is formatted (step 205 ) on the target logical volumes 122 indicated by the global identifier.
  • the data storage system 120 (FIG. 2) is then ready to begin servicing access requests from the server 124 - 128 to the virtual storage device 134 (FIGS. 2 and 3 ).
  • the procedure 202 ends at step 206 .
  • the I/O module 164 or 166 responds according to a procedure 208 shown in FIG. 6 .
  • the procedure 208 starts at step 210 .
  • the access request is received from the server 124 - 128 .
  • the server, global identifier and LUN combination is identified at step 214 from the access request, since the server 124 - 128 includes this information in the access request.
  • the server, global identifier and LUN combination is used to look up (step 216 ) the internal identifier(s) for the data volume(s) 184 (FIG. 3 ).
  • the data volumes 184 that form the logical volume 122 are identified along with the storage array(s) 174 - 178 and the individual storage devices (not shown) in the banks 182 (FIG. 3) of storage devices and the spaces (not shown) therein that are used to form the data volumes 184 .
  • the array controller(s) 188 (FIG. 3) that currently has control over the data volumes 184 is also identified at step 216 .
  • the access request received from the server 124 - 128 (FIG. 2) is interpreted, or converted, (step 218 ) into the access command(s) for the array controller(s) 188 (FIG. 3) that has control over each data volume 184 .
  • the access command(s) is distributed, or sent, (step 220 ) through the switched fabrics 180 (FIG. 3) to the array controller(s) 188 .
  • the array controller(s) 188 and storage devices respond in a conventional manner to carry out the access request, and the response(s) is received (step 222 ) back from the array controller(s) 188 .
  • the return responses are collected and assembled (step 224 ), if necessary, into the single response.
  • the single response is sent (step 226 ) to the server 124 - 128 .
  • the procedure 208 ends at step 228 .
  • the NAS appliance 168 responds in a similar manner to that described above with reference to FIG. 6 upon receiving a file level access request.
  • the snapshot appliance 170 (FIG. 3) and the RD appliance 172 (FIG. 3) also function similarly, except that they typically do not receive access requests from the backup storage system 156 (FIG. 2) or the remote data facility 158 (FIG. 2 ), respectively. Instead, the snapshot appliance 170 and the RD appliance 172 internally initiate access requests to perform the snapshot, data backup and remote mirroring functions.
  • An advantage of the data storage system 120 is that certain errors can be detected and corrective measures taken within the virtual storage device 134 (FIGS. 2 and 3) transparent to, or without the involvement of, the servers 124 - 128 (FIG. 2 ), so the servers 124 - 128 can continue to service the data storage requirements of the client devices 130 (FIG. 2) without being interrupted to correct the errors.
  • An exemplary error recovery procedure 230 performed by the I/O devices 164-172 (FIG. 3) is shown in FIG. 7. The procedure 230 starts at step 232. At step 234, one of the I/O devices 164-172 encounters an error when attempting to access one of the data volumes 184 (FIG. 3).
  • the source of the error may be a component of the internal switched fabric 180 (FIG. 3) through which the I/O device 164 - 172 sent the access command to the storage array 174 - 178 , the array controller 188 (FIG. 3) of the storage array 174 - 178 to which the I/O device 164 - 172 sent the access command, one of the storage devices (not shown) in the storage array 174 - 178 , etc.
  • the I/O device 164-172 (FIG. 3) is reconfigured (step 236) with information for the local identifier and a different data transfer path, i.e. a different switched fabric 180 and/or array controller 188 (a rough sketch of this rerouting appears after this list).
  • the access commands are effectively rerouted for the data volume 184 through the new data transfer path, thereby avoiding the source of the error.
  • the other I/O devices 164 - 172 are informed (step 238 ) of the new data transfer path, so the other I/O devices 164 - 172 can reroute the access commands they receive that are directed to the same data volume 184 and avoid the same error.
  • the remote manager 136 (FIG. 2) is also informed (step 240) of the new data transfer path and of the error, so the system administrator can be notified to take corrective action to replace faulty equipment and/or to prevent the recurrence of the error.
  • the server 124 - 128 is not informed of the changes that took place in the virtual storage device 134 , since the internal functioning of the virtual storage device 134 is transparent to the servers 124 - 128 .
  • the procedure 230 ends at step 242 .
  • the above described invention enables flexibility in the creation and management of logical volumes on storage arrays according to prescribed parameters, such as transaction rate (I/O's per second) and bandwidth, for the performance of the logical volumes.
  • the above described invention also has the advantage of allowing improved flexibility in operation of the servers 124 - 128 (FIG. 2) without degrading the performance of the storage arrays 174 - 178 (FIG. 3) due to the rerouting capabilities of the access requests by the I/O devices 164 - 172 (FIG. 3 ). Since the functions, components and architecture within the virtual storage device 134 (FIGS. 2 and 3) are transparent to the servers 124 - 128 , the servers 124 - 128 are not restricted thereby and have little effect thereon.
  • the servers 124-128 may use any available resources, such as the host bus adapters 138 (FIG. 2) and data transfer paths, without respect to the internal configuration of the virtual storage device 134.
  • the I/O devices 164 - 172 uniformly use a preferred resource, such as one of the array controllers 188 (FIG. 3 ), without respect to the data transfer path selected by the servers 124 - 128 for a “uniform” routing of the access requests to the array controllers 188 .
  • the “transparency” of the virtual storage device 134 to the servers 124 - 128 also allows changes to the components of the virtual storage device 134 to be made without involvement of the servers 124 - 128 . Therefore, the upgrading and scaling of components in the virtual storage device 134 are performed more easily.
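
The rerouting described for procedure 230 can be sketched roughly as follows. The routing-table shape, path and controller names, and notification hooks are assumptions introduced only for illustration; they are not the patented implementation.

    # Rough sketch (assumed data, not the patent's code) of procedure 230: on a
    # failed access, an I/O device switches the affected data volume to an
    # alternate fabric/controller, informs its peer I/O devices, and notifies
    # the remote manager, all without involving the servers.

    # Per data volume: the current path and a standby path (illustrative values).
    routing_table = {
        "dv1@array-1": {"current": ("fabric-1", "controller-A"),
                        "standby": ("fabric-2", "controller-B")},
    }
    peer_io_devices = []    # other I/O modules/appliances sharing this routing state

    def notify_remote_manager(volume, new_path, error):
        print("manager notified:", volume, "rerouted to", new_path, "after", repr(error))

    def handle_access_error(volume, error):
        """Steps 234-240 of procedure 230, in miniature."""
        entry = routing_table[volume]
        entry["current"], entry["standby"] = entry["standby"], entry["current"]  # step 236
        for peer in peer_io_devices:                 # step 238: peers reroute as well
            peer.reroute(volume, entry["current"])   # hypothetical peer interface
        notify_remote_manager(volume, entry["current"], error)   # step 240
        # The server is never told; retried access commands simply use the new path.

    handle_access_error("dv1@array-1", TimeoutError("no response from controller-A"))
    print("new path:", routing_table["dv1@array-1"]["current"])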

Abstract

Data storage space in a data storage system is represented to the devices that access the data (e.g. “servers”) as a single virtual storage device containing logical volumes of data, even though the storage space is typically formed from several storage devices and possibly from arrays of storage devices containing multiple volumes of data. Therefore, the servers can issue the access requests through any available transfer path from the servers to the virtual storage device. However, I/O (input/output) devices control access between the servers and the devices that control access to the data (e.g. “array controllers”), so that the access requests from the servers are uniformly routed to the preferred array controllers independently of the transfer paths through which the servers issue the access requests.

Description

CROSS-REFERENCE TO RELATED INVENTION
This invention is related to an invention described in U.S. patent application Ser. No. 09/875,475 for “Uniform Routing of Storage Access Requests through Redundant Array Controllers,” filed on Jun. 6, 2001. This application is incorporated herein by this reference.
FIELD OF INVENTION
This invention relates to data storage in a computerized storage system, such as a storage area network (SAN). More particularly, the present invention relates to a new and improved technique of handling data access requests within the storage system in such a manner that the devices that issue the requests (e.g. “servers”) have improved flexibility in selecting the path through which to send the requests without adversely affecting the ability of the devices that receive the requests (e.g. “storage arrays”) to respond to the requests.
BACKGROUND OF THE INVENTION
Current prior art high-capacity computerized data storage systems, such as the one shown in FIG. 1, typically involve a storage area network (SAN) 100 within which one or more conventional storage arrays 102 store data on behalf of one or more servers 104. The servers 104 typically service data storage requirements of several client devices 106, as shown in FIG. 1. The servers 104 are typically connected through switches or SAN's, such as Fibre Channel (FC) SAN fabrics 108, to the storage arrays 102. The servers 104 access a plurality of logical volumes 110 present on the storage arrays 102 on behalf of the client devices 106.
Each storage array 102 typically includes a bank 112 of individual storage devices (not shown, e.g. hard drives, compact disk (CD) drives, tape drives, etc.), typically arranged in a RAID (Redundant Array of Independent Drives) configuration. The RAID storage devices supply data storage space for the logical volumes 110. The logical volumes 110 are commonly striped across multiple storage devices in the banks 112 of storage devices, and may be striped across multiple storage arrays 102. The servers 104 that access a given logical volume 110 must have a striping definition for the logical volume 110 if the logical volume 110 is striped across multiple storage arrays 102 and must have a connection, or path, to the storage array 102 that contains the logical volume 110, or a portion thereof. A manager device 113 typically sets up the logical volumes 110 and monitors for problems, such as a storage device that has failed or is about to fail. Through a discovery process, the servers 104 typically discover the logical volumes 110 on the storage arrays 102 and the array controllers 114 through which the servers 104 can access the logical volumes 110. The servers 104 are thus configured to use the discovered logical volumes 110.
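By way of illustration only, the bookkeeping implied by such a striping definition can be sketched as follows. The stripe depth, drive counts, and block addressing below are hypothetical values chosen for the example and are not taken from the patent.

    # Illustration only (assumed values): the arithmetic a server would need in a
    # striping definition, mapping a logical block address of a striped logical
    # volume to a storage array, a drive within that array, and a block on it.

    STRIPE_DEPTH_BLOCKS = 128   # blocks per stripe unit (assumed)
    DRIVES_PER_ARRAY = 4        # storage devices in each bank (assumed)
    ARRAY_COUNT = 2             # the volume is striped across two storage arrays (assumed)

    def locate_block(logical_block: int):
        """Return (array index, drive index within array, block on drive)."""
        stripe_unit = logical_block // STRIPE_DEPTH_BLOCKS    # which stripe unit overall
        offset_in_unit = logical_block % STRIPE_DEPTH_BLOCKS  # offset inside that unit
        total_drives = DRIVES_PER_ARRAY * ARRAY_COUNT
        drive_index = stripe_unit % total_drives              # stripe units round-robin over drives
        stripe_row = stripe_unit // total_drives              # completed rows of stripe units
        array_index = drive_index // DRIVES_PER_ARRAY
        drive_in_array = drive_index % DRIVES_PER_ARRAY
        return array_index, drive_in_array, stripe_row * STRIPE_DEPTH_BLOCKS + offset_in_unit

    for lba in (0, 127, 128, 1024, 5000):
        print(lba, "->", locate_block(lba))

Keeping such a definition correct in every server is the per-server bookkeeping that the following paragraphs identify as burdensome to maintain and reconfigure.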
Each storage array 102 also commonly includes more than one array controller 114, through which the storage devices and logical volumes 110 are accessed. Each array controller 114 typically connects to one of the switched fabrics 108. Thus, a data transfer path between one of the servers 104 and one of the array controllers 114 in one of the storage arrays 102 is established from a host bus adapter (HBA) 116 in the server 104, through the switched fabric 108 (to which the host bus adapter 116 is attached), and to the array controller 114 (to which the switched fabric 108 is attached).
Some of the servers 104 may access the same logical volume 110 through more than one data transfer path through the switched fabrics 108 on behalf of the same or different client devices 106. Therefore, more than one array controller 114 of a storage array 102 may receive a data access request to the same logical volume 110, or portion thereof, present on the storage array 102.
When one of the array controllers 114 of a given storage array 102 processes a data access request to a given logical volume 110, that array controller 114 is said to have access control or “ownership” of the logical volume 110. When one array controller 114 has ownership of the logical volume 110, no other array controller 114 in the storage array 102 can access the logical volume 110 without transferring ownership to itself, due to cache coherency issues.
When an array controller 114 receives a data access request to a logical volume 110 that the array controller 114 does not currently own, it transfers ownership of the logical volume 110 to itself in an automatic volume transfer (AVT) process and then processes the data access request. Upon transfer of ownership, the array controller 114 giving up ownership typically must “flush” cached data to the storage devices, so the array controller 114 that is receiving ownership will have the correct, up-to-date data in the storage devices. The time required to perform the cache flush, however, degrades the overall performance of the storage array 102. Additionally, the data access request issued by the server 104 may “timeout” causing the server 104 to erroneously determine that the array controller 114 is not operating if the cache flush takes too much time. Furthermore, when the same logical volume 110 is repeatedly accessed through different array controllers 114, then the array controllers 114 repetitively transfer ownership of the logical volume 110 back and forth between themselves. The repetitive ownership transferring is called “thrashing.” Thrashing can severely degrade the performance of data accesses to the affected logical volume 110, since significant time is taken up performing the AVT processes instead of accessing the affected logical volume 110.
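The cost of such thrashing can be made concrete with a toy model. The controller labels and request patterns below are assumptions introduced only to show why repeated AVT processes are undesirable; they do not represent any particular implementation.

    # Toy model (assumed request mix and controller names, not from the patent):
    # count the automatic volume transfers (each implying a cache flush) when
    # requests for one logical volume arrive at alternating controllers, versus
    # arriving consistently at the controller that already owns the volume.

    def count_ownership_transfers(receiving_controllers, owner="A"):
        transfers = 0
        for controller in receiving_controllers:
            if controller != owner:
                transfers += 1      # AVT: previous owner flushes its cache, ownership moves
                owner = controller
        return transfers

    alternating = ["A", "B"] * 10   # server alternates data transfer paths blindly
    uniform = ["A"] * 20            # every request is routed to the current owner

    print("transfers with alternating paths:", count_ownership_transfers(alternating))  # 19
    print("transfers with uniform routing:  ", count_ownership_transfers(uniform))      # 0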
When the servers 104 access the same logical volume 110 through the same common array controller 114, then thrashing and unnecessary AVT processes are avoided, because the common array controller 114 can process all of the data access requests. However, it is sometimes advantageous for one or more of the servers 104 to switch between its host bus adapters 116 for accessing the same logical volume 110, even when no array controller 114 has failed. In this manner, the servers 104 can optimize the use of their host bus adapters 116, but at the expense of thrashing between the array controllers 114, since the host bus adapters 116 are connected to the storage array 102 through different data transfer paths.
Additionally, the servers 104 must be able to discover the logical volumes 110 to be able to configure themselves and to route the data access requests to the appropriate array controllers 114. As the size of the SAN 100 and the number of the storage arrays 102 increases, however, the complexity of the tasks of the servers 104 increases significantly, such that maintaining and occasionally reconfiguring the complete description or striping definition of the logical volumes 110 in the servers 104 is extremely burdensome.
Furthermore, whenever a new storage array 102 or storage device is added to (or deleted from) the SAN 100 or whenever the distribution of a logical volume 110 across the banks 112 of storage devices or across the storage arrays 102 is changed, the servers 104 must be reconfigured to handle the changes. The reconfiguration takes time away from handling data access requests by the servers and becomes considerably more complex as the number of storage arrays 102 increases.
It is with respect to these and other background considerations that the present invention has evolved.
SUMMARY OF THE INVENTION
The present invention enables servers in a storage system to issue data access requests through different data transfer paths without causing ownership of the logical volumes to thrash back and forth between controllers of a storage array. Thus, the flexibility of the performance of the servers is increased without sacrificing the performance of the controllers. Additionally, particularly in situations where logical volumes are striped across multiple storage arrays, the servers are relieved of some of the burden of maintaining a description or striping definition of the logical volumes, so reconfiguring and scaling the storage system are simpler and easier.
In accordance with these features, routing capabilities are included in I/O (input/output) modules that control access to the storage devices. Alternatively, the routing capabilities are integrated into switched fabrics between the servers and the storage arrays, instead of into the I/O modules. The routing capabilities are also preferably included in other types of devices (e.g. intelligent storage hubs, network attached storage appliances, snapshot/backup appliances, remote data appliances, etc.) that control access to the storage arrays.
The I/O modules (or other routing-capability devices) receive the data access requests from the servers and, independently of the data transfer path through which the servers sent the requests, the I/O modules route the data access requests to the appropriate switched fabrics and to the appropriate controllers in the storage arrays. In this manner, when applicable, the servers are relieved of the burden of managing a complete description of the logical volumes (e.g. storage array ID, volume ID, logical unit number, striping order, striping depth, etc.). Instead, the servers are preferably configured only with a “global identifier” or “logical unit number” (LUN) that essentially identifies the logical volume as a single volume (e.g. a “virtual” volume), rather than as a combination of several volumes spread across multiple storage devices, and possibly across multiple storage arrays.
The servers are also configured with information regarding which host bus adapters and data transfer paths that the servers can use to send the data access requests to the logical volume. The servers preferably select a preferred path to a preferred I/O module when sending the data access request, but can typically switch to another path and I/O module when necessary to normalize the loading on the host bus adapters and increase the overall data transfer bandwidth. The selection of any path, however, has no effect on and is made without regard to the controller that will handle the response to the access request at each storage array.
The I/O modules are preferably configured with information by which the global identifiers are “mapped” to a “local identifier” for the complete description of the logical volumes, so that the I/O modules can distribute the data access requests to the correct array controllers that have control over those logical volumes. The I/O modules are also preferably configured with information indicating which controller in each storage array currently has “ownership” of the logical data volumes or portions thereof, so that the I/O modules can route the data access requests to the current owner controller and avoid unnecessary ownership transfers, regardless of the data transfer path selected by the servers. Thus, the I/O module receives an access request directed by a server to a logical volume as a single volume whose physical location or distribution is unknown to the server. The I/O module distributes the access request to the relevant controller(s), receives back one or more responses from the controller(s), assembles the responses into a single response if necessary and sends the response to the server.
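A minimal sketch of the mapping just described might look like the following. The table layout, field names, and identifiers are assumptions made for illustration and are not the patented data structures.

    # Hypothetical routing table inside an I/O module: a global identifier
    # (port identifier + LUN) maps to the component data volumes and the array
    # controller that currently owns each one.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class GlobalId:
        port: str   # port/channel of the virtual storage device
        lun: int    # logical unit number presented to the server

    @dataclass
    class Extent:
        array: str             # storage array holding this component data volume
        data_volume: str       # physical data volume identifier
        owner_controller: str  # array controller that currently "owns" it

    # Two global identifiers (two ports) expose the same logical volume.
    ROUTING_TABLE = {
        GlobalId("port0", 5): [Extent("array-1", "dv1", "controller-A"),
                               Extent("array-2", "dv7", "controller-C")],
        GlobalId("port1", 5): [Extent("array-1", "dv1", "controller-A"),
                               Extent("array-2", "dv7", "controller-C")],
    }

    def route(gid: GlobalId):
        """Targets for a request: the same owner controllers, whatever port was used."""
        return [(e.array, e.data_volume, e.owner_controller) for e in ROUTING_TABLE[gid]]

    assert route(GlobalId("port0", 5)) == route(GlobalId("port1", 5))
    print(route(GlobalId("port1", 5)))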
A more complete appreciation of the present invention and its scope, and the manner in which it achieves the above noted improvements, can be obtained by reference to the following detailed description of presently preferred embodiments of the invention taken in connection with the accompanying drawings, which are briefly summarized below, and the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a prior art data storage system.
FIG. 2 is a block diagram of a data storage system incorporating the present invention.
FIG. 3 is a block diagram of a virtual storage device incorporated within the storage system shown in FIG. 2.
FIG. 4 is a flowchart for a procedure for setting up a logical volume within the virtual storage device shown in FIG. 3 to be used by the storage system shown in FIG. 2.
FIG. 5 is a flowchart for a procedure for configuring the storage system shown in FIG. 2 to use the logical volume set up in FIG. 4.
FIG. 6 is a flowchart for a procedure for the virtual storage device shown in FIG. 3 to handle a data access request.
FIG. 7 is a flowchart for a procedure for the virtual storage device shown in FIG. 3 to handle an error situation transparent to the rest of the storage system shown in FIG. 2.
DETAILED DESCRIPTION
A data storage system 120 shown in FIG. 2, such as a storage area network (SAN), generally includes logical volumes 122 that are accessed by one or more conventional servers 124, 126 and 128. Typically, the servers 124-128 issue the access requests on behalf of one or more conventional client devices 130 or applications 132 running on the servers 124-128. The servers 124-128 utilize the logical volumes 122 to store data for the applications 132 or the client devices 130. The logical volumes 122 are contained in a virtual storage device 134, described in more detail below, that is represented to the servers 124-128 as a single device. However, the virtual storage device 134 actually includes a variety of separate routing devices, controllers and storage devices (e.g. conventional hard drives, tape drives, compact disk drives, etc.), described below. Likewise, the logical volumes 122 are represented to the servers 124-128 as single volumes, but are typically a combination of multiple volumes distributed across multiple storage devices and possibly multiple storage arrays, as described below.
The servers 124-128 issue data access requests, on behalf of the client devices 130 or applications 132, to the virtual storage device 134 for access to the logical volumes 122. The virtual storage device 134 “uniformly” reroutes the access requests to the preferred destinations therein, so the servers 124-128 can choose to send the access requests through any available transfer paths, while the virtual storage device 134 ensures that the access requests are consistently distributed to a preferred destination independently of the chosen transfer paths.
A remote manager 136 creates and manages the logical volumes 122 on the virtual storage device 134 and labels each logical volume with a unique “port identifier” and “logical unit number” (LUN), which combined define a “global identifier.” For each logical volume 122 that each server 124-128 uses, the server 124-128 discovers the logical volume 122 on the virtual storage device 134 and configures itself with the global identifier. The global identifier is a unique identifier (e.g. number) that identifies the logical volume 122 and the port, or channel, of the virtual storage device 134 through which the logical volume 122 can be accessed. More than one global identifier may be used to identify the logical volume 122, so that the logical volume 122 can be accessed through more than one port or channel. In this manner, the logical volume 122 is identified outside of the virtual storage device 134 as a single volume associated with the server(s) 124-128 that can access the logical volume 122 and with the data transfer paths, or channels, through which the server 124-128 can access the logical volume 122. The data transfer paths typically extend through a conventional host bus adapter (HBA) 138 in the server 124-128, through a conventional network, or SAN, fabric (e.g. InfiniBand fabric 140, Small Computer System Interface “SCSI” busses 142, Fibre Channel fabric 144 or Gigabit Ethernet fabric 146) and to an appropriate conventional network interface (e.g. InfiniBand interface 148, SCSI interface 150, Fibre Channel interface 152 or Gigabit Ethernet interface 154) in the virtual storage device 134.
Within the virtual storage device 134, the global identifier (i.e. port identifier and LUN) is “mapped” to a “local identifier” with which components (described below) of the virtual storage device 134 determine the physical distribution of the logical volume 122 within the virtual storage device 134. The local identifier also identifies the array controller, described below, within each storage array, that has control over the logical volume 122, or portion thereof, contained within the storage devices of the storage array. In this manner, the function of the servers 124-128 to issue an access request is simplified, since the servers 124-128 reference only the global identifier in the access request and send the access request through the data transfer path from the server 124-128 to the virtual storage device 134 according to the port identifier associated with the global identifier. If more than one global identifier is associated with the logical volume 122, then the servers 124-128 have a selection of available data transfer paths. Regardless of which global identifier is used by the servers 124-128, the virtual storage device 134 assumes the task of distributing and routing the access request to the appropriate devices (e.g. array controllers), assembling the various responses from the array controllers into a single response and returning the single response to the server 124-128. Additionally, the flexibility of the servers 124-128 in issuing an access request is enhanced, since the servers 124-128 can use any available data transfer path from the server 124-128 to the virtual storage device 134, independently of the physical components (e.g. the array controllers) within the virtual storage device 134 that will receive and handle the access request. Likewise, the virtual storage device 134 can respond to the access request independently of the data transfer path selected by the server 124-128, except to ensure that the response returns through the same path. In this manner, the servers 124-128 can maximize the data throughput or bandwidth through the host bus adapters 138 by alternating access requests between the host bus adapters 138 or using whichever host bus adapter 138 is available.
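The server-side view can be sketched as follows, assuming a simple round-robin policy over the available host bus adapters. The path names and the policy are illustrative only; the point is that the selection carries no information about array controllers inside the virtual storage device.

    import itertools

    # The server is configured only with (port identifier, LUN) pairs for the
    # logical volume and the host bus adapter that reaches each port (assumed names).
    PATHS = [
        {"hba": "hba0", "port": "port0", "lun": 5},
        {"hba": "hba1", "port": "port1", "lun": 5},
    ]
    next_path = itertools.cycle(PATHS)

    def issue_request(operation, logical_block, block_count):
        """Send an access request down the next available path (round-robin)."""
        path = next(next_path)
        request = {"port": path["port"], "lun": path["lun"],
                   "op": operation, "lba": logical_block, "blocks": block_count}
        print(path["hba"], "->", path["port"], request)
        return request

    for i in range(4):                      # requests alternate between the HBAs,
        issue_request("read", 1024 * i, 8)  # with no reference to array controllers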
In addition to the servers 124-128, the virtual storage device 134 may interact with other storage-related devices and systems, such as a backup storage system 156 and a remote data facility 158. The backup storage system 156 typically contains a copy of the data from some or all of the logical volumes 122 made at a particular point in time, so the data can be restored in the event of loss of the original data in the logical volumes 122. The remote data facility 158 maintains a copy of the data from some or all of the logical volumes 122 in a geographically remote storage 160, so that the remote data facility 158 can be used for data storage in place of the data storage system 120 in the event of a catastrophic failure of the data storage system 120, such as due to an earthquake, flood, etc. For either the backup storage system 156 or the remote data facility 158, the virtual storage device 134 and each logical volume 122 are represented as a single device and a single volume, respectively, as described above.
The remote manager 136 is shown connected to the servers 124-128 through a communication system, such as a local area network (LAN) 162, along with the client devices 130. In this configuration, the remote manager 136 passes instructions to the virtual storage device 134 through the data transfer capabilities of the servers 124-128. However, the remote manager 136 may alternatively be connected at any point, such as directly to one of the servers 124-128 or at one of the SAN fabrics 140-146 or at the virtual storage device 134, or the remote manager 136 may be incorporated into one of the servers 124-128.
The virtual storage device 134, as shown in FIG. 3, includes a variety of I/O devices (e.g. I/O modules 164 and 166, network attached storage "NAS" appliance 168, snapshot/backup appliance 170 and remote data "RD" appliance 172) and several storage arrays 174, 176 and 178 connected together through one or more internal private switched fabrics 180, such as a Fibre Channel switched network, that are "hidden" from the servers 124-128 (FIG. 2). The virtual storage device 134 contains the data for the logical volumes 122 (FIG. 2) in banks 182 of storage devices (not shown) that contain one or more physical data volumes 184. The data volumes 184 in the storage arrays 174-178 are the components of the logical volumes 122 in the virtual storage device 134 (FIG. 2). Each logical volume 122 may be distributed among more than one data volume 184 and possibly more than one storage array 174-178.
The servers 124-128 (FIG. 2) send the access requests to one of the I/O modules 164 or 166, which routes, or distributes, by means of a router 186 included therein, the access requests through the switched fabrics 180 to array controllers 188 in the storage arrays 174-178. The array controllers 188 control access to the data in the data volumes 184 in the banks 182 of storage devices, so the array controllers 188 write data to and read data from the storage devices in response to the access requests. The array controllers 188 also return responses to the access requests through the switched fabrics 180 to the I/O modules 164 and 166.
Since one logical volume 122 (FIG. 2) may be made of more than one physical data volume 184, an access request from a server 124-128 (FIG. 2) directed to a logical volume 122 may be interpreted, or converted, by the receiving I/O module 164 or 166 into "sub" access requests, or actual data access commands, directed to the individual data volumes 184. The I/O module 164 or 166 sends the access commands to the array controller(s) 188 that has control over the data volumes 184 that make up the logical volume 122. Thus, the array controller 188 responds to the entire access request or to its "portion" of the access request. Afterwards, the I/O module 164 or 166 assembles the responses, if necessary, from each array controller 188 into a single response that is transferred back to the server 124-128. In this manner, the server 124-128 issues the access request to a logical volume 122 that is represented to the server 124-128 as a single volume and receives back a response that is represented as being from the single volume. The actual processing of the access request is, thus, "transparent" to the server 124-128.
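The fan-out of one access request into per-data-volume access commands, and the assembly of the partial responses into a single response, might look like the following sketch; handle_access_request, send_to_controller and the dictionary keys are assumptions for illustration only.

# Illustrative only: an I/O module fanning one access request out to the
# array controllers that own the component data volumes, then folding the
# partial responses back into the single response returned to the server.
def handle_access_request(request, data_volumes, send_to_controller):
    sub_commands = []
    for dv in data_volumes:
        sub_commands.append({
            "controller": dv["array_controller"],   # owner of this portion
            "data_volume": dv["data_volume"],
            "operation": request["operation"],      # e.g. "read" or "write"
            # Splitting the request's block range across the data volumes is
            # omitted here for brevity.
        })
    partial = [send_to_controller(cmd) for cmd in sub_commands]
    # Assemble the partial responses so the logical volume still appears to
    # the server as a single volume.
    return {
        "ok": all(p["ok"] for p in partial),
        "data": b"".join(p.get("data", b"") for p in partial),
    }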
The I/O modules 164 and 166 preferably include a variety of the network interfaces 148-154 (see also FIG. 2) which connect to the SAN fabrics 140-146 (FIG. 2). Therefore, the servers 124-128 (FIG. 2) use a conventional transfer protocol determined by the type of SAN fabric 140-146 to which they are attached. The I/O modules 164 and 166, however, convert the transfer protocol used by the servers 124-128 into the transfer protocol (e.g. Fibre Channel) that is used within the virtual storage device 134. In this manner, the data storage system 120 (FIG. 2) enables the additional flexibility of allowing the servers 124-128 to use a variety of transfer protocols and SAN fabrics 140-146, including, but not limited to, file-level and block-level transfer protocols.
Additionally, each I/O module 164 and 166 preferably includes the same set of network interfaces 148-154, so multiple data transfer paths can be established from any server 124-128 through the connected SAN fabric 140-146 to any I/O module 164 or 166. Thus, each server 124-128 can make use of both of the host bus adapters 138 (FIG. 2) contained therein to send access requests to the virtual storage device 134. In this manner, the servers 124-128 can maximize their data transfer capability.
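A server's alternation between its host bus adapters 138 can be pictured as a simple round-robin selection over the available data transfer paths; this is only a sketch, and the path names and issue_request function are placeholders.

import itertools

# Hypothetical server-side path selection: alternate access requests across
# the available host bus adapters (and hence SAN fabrics), independently of
# which array controller will handle them inside the virtual storage device.
available_paths = ["hba_a_fibre_channel", "hba_b_infiniband"]
_next_path = itertools.cycle(available_paths)

def issue_request(access_request, send):
    path = next(_next_path)          # whichever host bus adapter is up next
    return send(path, access_request)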
The I/O modules 164 and 166 are configured with information, in accord with the local identifier, identifying the same array controller 188 to receive the access commands distributed from the I/O modules 164 and 166, i.e. the array controller 188 that has control over the relevant data volumes 184 in each storage array 174-178. Therefore, the I/O modules 164 and 166 distribute the access commands to the same array controller 188 for each data volume 184. In this manner, unnecessary transfer of control, or ownership, of the data volumes 184 between the array controllers 188 (i.e. "thrashing") is avoided.
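The "uniform" routing that avoids thrashing can be sketched as every I/O module consulting the same ownership table before distributing an access command; the table and names below are hypothetical.

# Illustrative ownership table shared by all I/O devices: access commands for
# a given data volume are always sent to the same array controller, so
# control of the data volume does not ping-pong ("thrash") between the
# redundant controllers of its storage array.
ownership = {
    "dv_a": "controller_188_a",    # owning controller in storage array 174
    "dv_b": "controller_188_b",    # owning controller in storage array 176
}

def controller_for(data_volume):
    return ownership[data_volume]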
The NAS appliance 168, the snapshot/backup appliance 170 and the RD appliance 172 also each include one of the routers 186 for routing capabilities similar to those described above for the I/O modules 164 and 166. The NAS appliance 168 may include a set of the network interfaces 148, 152 and 154 for connecting through the SAN fabrics 140, 144 and 146 (FIG. 2) to a variety of devices (not shown) that require conventional "file level" data storage, as opposed to conventional "block level" data storage supplied by the I/O modules 164 and 166. Alternatively, the NAS appliance 168 may not have any of the network interfaces 148, 152 or 154, but may access the variety of devices through the switched fabrics 180 and the I/O modules 164 and 166.
The snapshot/backup appliance 170 communicates with the storage arrays 174-178 through the internal switched fabrics 180 and with the backup storage system 156 (FIG. 2) through the internal switched fabrics 180 and the I/O modules 164 and 166. The snapshot/backup appliance 170 preferably forms a conventional "snapshot" of the data volumes 184 in the storage arrays 174-178 in order to copy, or backup, the data volumes 184 to the backup storage system 156. With the backup data (not shown), the data volumes 184 can be restored in the event of loss of data.
The RD appliance 172 communicates with the remote data facility 158 (FIG. 2) through the internal switched fabrics 180 and the I/O modules 164 and 166. The RD appliance 172 maintains a mirrored copy of the data volumes 184 in the remote storage 160 (FIG. 2), so that the remote data facility 158 can take over data storage requirements for the client devices 130 in the event of a failure of the data storage system 120 (FIG. 2), such as a power failure.
In each of the appliances 168, 170 and 172, the routing functions of the router 186 distribute the access commands to the array controllers 188 that have control over the relevant data volumes 184, so the connected devices (the backup storage system 156 and the remote data facility 158) do not need to have information regarding the individual data volumes 184. Therefore, the data volumes 184 are presented to the connected devices 156 and 158 as the single logical volumes 122 (FIG. 2), as described above, and the appliances 168-172 handle the distribution of the access commands to the array controllers 188.
The remote manager 136 (FIG. 2) controls the creation of the data volumes 184 (FIG. 3) and the logical volumes 122 (FIG. 2) and the configuration of the I/O devices 164-172 (FIG. 3) according to a procedure 190 shown in FIG. 4. The procedure 190 starts at step 192. At step 194, the logical volume 122 is created with its component data volumes 184. The storage space is set aside, or reserved, in the storage devices (not shown) in the storage arrays 174-178 (FIG. 3) for the data volumes 184 that make up the logical volume 122. Generally, the logical volume 122 and the data volumes 184 are created based on attributes, or performance parameters, required by the user of the logical volume 122. Such performance parameters typically include size, transaction rate, bandwidth and RAID level, among others.
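The performance parameters used in step 194 might be captured in a specification of the following form; the field names and the toy placement function are assumptions for illustration, not the remote manager's actual interface.

# Hypothetical performance parameters supplied when the remote manager
# creates a logical volume (step 194).
logical_volume_spec = {
    "size_gb": 500,                # capacity reserved across the storage arrays
    "transactions_per_s": 10_000,  # required transaction rate (I/Os per second)
    "bandwidth_mb_s": 200,         # required sustained bandwidth
    "raid_level": 5,               # RAID level applied within each storage array
}

# A toy split of the requested capacity into component data volumes, one per
# storage array; real placement would also weigh the other parameters.
def split_into_data_volumes(spec, storage_arrays):
    per_array = spec["size_gb"] / len(storage_arrays)
    return [{"storage_array": a, "size_gb": per_array} for a in storage_arrays]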
The logical volume 122 (FIG. 2) is mapped (step 196) to a server, global identifier, LUN combination. The server 124-128 (FIG. 2) is one which will issue the access requests to the logical volume. The global identifier identifies the logical volume 122 to the server 124-128. The logical volume 122 is also mapped (step 198) to the internal identifier, a logical unit number that identifies the logical volume 122 and its component data volumes 184 (FIG. 3) and actual storage arrays 174-178 (FIG. 3) and storage devices that make up the logical volume 122.
The I/O devices 164-172 (FIG. 3) that are to use, or have access to, the logical volume 122 (FIG. 2) are configured (step 200) to map the server, global identifier and LUN combination to the internal identifier. With this configuration (e.g. a routing table), the I/ O modules 164 and 166 can interpret, or convert, the access requests received from the server 124-128 (FIG. 2) into the specific access commands for the array controllers 188 (FIG. 3) that have control over the relevant data volumes 184 (FIG. 3). The appliances 168-172 can do similarly for the backup storage system 156 (FIG. 2), the remote data facility 158 (FIG. 2) or other device (not shown). The procedure 190 ends at step 201.
The servers 124-128 (FIG. 2) configure themselves to use the logical volumes 122 (FIG. 2) according to a procedure 202 shown in FIG. 5. The procedure 202 starts at step 203. The server 124-128 discovers (step 204) the port identifier and LUN (i.e. the global identifier) by conventional discovery software that queries the I/O modules 164 and 166 (FIG. 3) for the logical volumes 122. The host file system (not shown) of the server 124-128 is formatted (step 205) on the target logical volumes 122 indicated by the global identifier. The data storage system 120 (FIG. 2) is then ready to begin servicing access requests from the server 124-128 to the virtual storage device 134 (FIGS. 2 and 3). The procedure 202 ends at step 206.
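A server-side configuration pass along the lines of procedure 202 could look like the following sketch; query_io_module and format_filesystem stand in for the conventional discovery and formatting steps and are not APIs defined here.

# Illustrative discovery loop: the server asks each I/O module which
# (port identifier, LUN) pairs -- i.e. global identifiers -- it exposes,
# records them, and formats its host file system on the new volumes.
def configure_server(io_modules, query_io_module, format_filesystem):
    discovered = []
    for module in io_modules:
        for port_id, lun in query_io_module(module):   # step 204: discovery
            discovered.append((port_id, lun))
            format_filesystem(port_id, lun)            # step 205: format host file system
    return discovered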
When one of the servers 124-128 (FIG. 2) issues an access request to one of the I/O modules 164 or 166 (FIG. 3) directed to one of the logical volumes 122 (FIG. 2), the I/O module 164 or 166 responds according to a procedure 208 shown in FIG. 6. The procedure 208 starts at step 210. At step 212, the access request is received from the server 124-128. The server, global identifier and LUN combination is identified at step 214 from the access request, since the server 124-128 includes this information in the access request. The server, global identifier and LUN combination is used to look up (step 216) the internal identifier(s) for the data volume(s) 184 (FIG. 3). In this manner, the data volumes 184 that form the logical volume 122 are identified along with the storage array(s) 174-178 and the individual storage devices (not shown) in the banks 182 (FIG. 3) of storage devices and the spaces (not shown) therein that are used to form the data volumes 184. The array controller(s) 188 (FIG. 3) that currently has control over the data volumes 184 is also identified at step 216.
The access request received from the server 124-128 (FIG. 2) is interpreted, or converted, (step 218) into the access command(s) for the array controller(s) 188 (FIG. 3) that has control over each data volume 184. The access command(s) is distributed, or sent, (step 220) through the switched fabrics 180 (FIG. 3) to the array controller(s) 188. The array controller(s) 188 and storage devices respond in a conventional manner to carry out the access request, and the response(s) is received (step 222) back from the array controller(s) 188. The return responses are collected and assembled (step 224), if necessary, into the single response. The single response is sent (step 226) to the server 124-128. The procedure 208 ends at step 228.
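The numbered steps of procedure 208 can be summarized in one short routine; every parameter of procedure_208 below is a stand-in supplied by the caller, not an interface defined by this description.

# Hypothetical walk-through of procedure 208 inside an I/O module.
def procedure_208(access_request, lookup, convert, send, assemble, reply):
    combo = (access_request["server"],               # step 214: identify the
             access_request["global_identifier"],    # server / global identifier /
             access_request["lun"])                  # LUN combination
    internal = lookup(combo)                         # step 216: internal identifier(s)
    commands = convert(access_request, internal)     # step 218: access command(s)
    responses = [send(cmd) for cmd in commands]      # steps 220-222: distribute, collect
    single = assemble(responses)                     # step 224: assemble single response
    return reply(access_request["server"], single)   # step 226: return to the server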
The NAS appliance 168 (FIG. 3) responds in a similar manner to that described above with reference to FIG. 6 upon receiving a file level access request. The snapshot appliance 170 (FIG. 3) and the RD appliance 172 (FIG. 3) also function similarly, except that they typically do not receive access requests from the backup storage system 156 (FIG. 2) or the remote data facility 158 (FIG. 2), respectively. Instead, the snapshot appliance 170 and the RD appliance 172 internally initiate access requests to perform the snapshot, data backup and remote mirroring functions.
An advantage of the data storage system 120 (FIG. 2) is that certain errors can be detected and corrective measures taken within the virtual storage device 134 (FIGS. 2 and 3) transparent to, or without the involvement of, the servers 124-128 (FIG. 2), so the servers 124-128 can continue to service the data storage requirements of the client devices 130 (FIG. 2) without being interrupted to correct the errors. An exemplary error recovery procedure 230 performed by the I/O devices 164-172 (FIG. 3) is shown in FIG. 7. The procedure 230 starts at step 232. At step 234, one of the I/O devices 164-172 encounters an error when attempting to access one of the data volumes 184 (FIG. 3) on one of the storage arrays 174-178 (FIG. 3) and determines the likely source of the error through conventional error detection techniques. For example, the source of the error may be a component of the internal switched fabric 180 (FIG. 3) through which the I/O device 164-172 sent the access command to the storage array 174-178, the array controller 188 (FIG. 3) of the storage array 174-178 to which the I/O device 164-172 sent the access command, one of the storage devices (not shown) in the storage array 174-178, etc.
The I/O device 164-172 (FIG. 3) is reconfigured (step 236) with information for the local identifier and a different data transfer path, i.e. a different switched fabric 180 and/or array controller 188. Thus, the access commands are effectively rerouted for the data volume 184 through the new data transfer path, thereby avoiding the source of the error. The other I/O devices 164-172 are informed (step 238) of the new data transfer path, so the other I/O devices 164-172 can reroute the access commands they receive that are directed to the same data volume 184 and avoid the same error. The remote manager 136 (FIG. 2) is also informed (step 240) of the new data transfer path and of the error, so the system administrator can be notified to take corrective action to replace faulty equipment and/or to prevent the recurrence of the error. The server 124-128, however, is not informed of the changes that took place in the virtual storage device 134, since the internal functioning of the virtual storage device 134 is transparent to the servers 124-128. The procedure 230 ends at step 242.
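The rerouting of procedure 230 might be sketched as follows; the recover function and its arguments are hypothetical, and the servers 124-128 deliberately do not appear in it, since the change is internal to the virtual storage device.

# Illustrative error recovery: on a failed access, the detecting I/O device
# switches the affected data volume to an alternate fabric/controller path,
# tells its peer I/O devices to do the same, and notifies the remote manager.
def recover(data_volume, failed_path, alternate_paths, peer_io_devices, manager_log):
    new_path = next(p for p in alternate_paths if p != failed_path)  # step 236
    for peer in peer_io_devices:                                     # step 238: inform peers
        peer.setdefault("routes", {})[data_volume] = new_path
    manager_log.append({"data_volume": data_volume,                  # step 240: inform manager
                        "failed_path": failed_path,
                        "new_path": new_path})
    return new_path   # the servers are never notified (transparency)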
The above-described invention enables flexibility in the creation and management of logical volumes on storage arrays according to prescribed parameters, such as transaction rate (I/Os per second) and bandwidth, for the performance of the logical volumes. The above-described invention also has the advantage of allowing improved flexibility in the operation of the servers 124-128 (FIG. 2) without degrading the performance of the storage arrays 174-178 (FIG. 3), due to the ability of the I/O devices 164-172 (FIG. 3) to reroute the access requests. Since the functions, components and architecture within the virtual storage device 134 (FIGS. 2 and 3) are transparent to the servers 124-128, the servers 124-128 are not restricted thereby and have little effect thereon. Therefore, the servers 124-128 may use any available resources, such as the host bus adapters 138 (FIG. 2) and data transfer paths, without respect to the internal configuration of the virtual storage device 134. Likewise, the I/O devices 164-172 uniformly use a preferred resource, such as one of the array controllers 188 (FIG. 3), without respect to the data transfer path selected by the servers 124-128 for a "uniform" routing of the access requests to the array controllers 188. The "transparency" of the virtual storage device 134 to the servers 124-128 also allows changes to the components of the virtual storage device 134 to be made without involvement of the servers 124-128. Therefore, the upgrading and scaling of components in the virtual storage device 134 are performed more easily.
Presently preferred embodiments of the invention and its improvements have been described with a degree of particularity. This description has been made by way of preferred example. It should be understood that the scope of the present invention is defined by the following claims, and should not be unnecessarily limited by the detailed description of the preferred embodiments set forth above.

Claims (15)

The invention claimed is:
1. A method for handling access to data storage in a storage system having a plurality of storage arrays and a plurality of input/output (I/O) devices, the storage arrays contain data in a logical volume on behalf of a server, the logical volume is formed of a plurality of portions, the portions of the logical volume are contained in the storage arrays, the server sends access requests to the I/O devices for access to the logical volume, the I/O devices distribute the access requests to a plurality of array controllers configured with the storage arrays, the array controllers control access to the storage arrays and the portions of the logical volume, comprising the steps of:
creating the logical volume with the portions thereof distributed across the storage arrays;
representing the logical volume at the server as a single volume;
representing the logical volume at the I/O devices as a plurality of data volumes corresponding to the portions distributed across storage arrays;
forming an access request at the server for access to the logical volume;
selecting a data transfer path from the server to one of the I/O devices, independently of the array controllers that have control over the portions of the logical volume;
sending the access request through the selected data transfer path to the one I/O device;
selecting at least one of the array controllers to receive the access request, independently of the selected data transfer path from the server to the one I/O device; and
sending the access request from the one I/O device to the selected array controller instructing the selected array controller to respond to the access request.
2. A method as defined in claim 1 comprising the further step of:
after creating the logical volume distributed across the storage arrays, controlling each portion of the logical volume by only one of the array controllers at a time.
3. A method as defined in claim 2 comprising the further step of:
switching control over at least one of the portions of the logical volume between two of the array controllers.
4. A method for handling access to data storage in a storage system in which a plurality of storage arrays contain data in a logical volume on behalf of a server, the server sends access requests through a plurality of input/output (I/O) devices to a plurality of array controllers, the array controllers are configured with the storage arrays and have control over the storage arrays and the logical volume, comprising the steps of:
forming an access request at the server for access to the logical volume;
selecting a data transfer path from the server to one of the I/O devices independently of the array controllers that have control over the logical volume;
sending the access request through the selected data transfer path to the one I/O device;
selecting array controllers that are to receive portions of the access request, independently of the selected data transfer path from the server to the one I/O device; and
distributing the access request from the one I/O device to the selected array controllers instructing each selected array controller to respond to one of the portions of the access request.
5. A method as defined in claim 4 comprising the further steps of:
before forming the access request at the server, representing the logical volume at the server as a single volume;
representing the logical volume at the one I/O device as more than one data volume distributed across the storage arrays;
referencing the represented single volume in the access request;
after sending the access request through the selected data transfer path to the one I/O device, converting the access request into a plurality of access commands for access to the more than one data volumes distributed across the storage arrays; and
distributing the access commands from the one I/O device to the selected array controllers instructing each selected array controller to respond to one of the access commands.
6. A method as defined in claim 5 comprising the further steps of:
after distributing the access commands from the one I/O device to the selected array controllers, responding to the access commands by the selected array controllers accessing predetermined portions of the logical volume on the storage arrays;
sending a plurality of responses to the one I/O device from the selected array controllers;
assembling the plurality of responses into a single response at the one I/O device; and
sending the single response from the one I/O device to the server.
7. A data storage system comprising:
a plurality of storage arrays configured to form a logical volume, each of the storage arrays containing a portion of the logical volume, the logical volume containing data and being distributed across more than one of the storage arrays;
a plurality of array controllers configured with and connected to the storage arrays to control access to the storage arrays, each portion of the logical volume and the data, each portion of the logical volume being under control of only one of the array controllers at a time, control of each portion of the logical volume being changeable to another one of the array controllers;
a server from which access requests are sent to the array controllers through multiple transfer paths for access to the logical volume and the data contained on the logical volume, the server containing information that represents the logical volume as a single volume, the server being operative to select one of the transfer paths independently of which array controllers have control over the portions of the logical volume; and
a plurality of input/output (I/O) devices connected between the server and the array controllers and containing information that represents the logical volume as more than one data volume distributed among the plurality of storage arrays, the I/O devices being operative to receive the access requests from the server, to determine which array controllers currently have control over the portions of the logical volume on the storage array and to send the access request to the determined array controller, independently of the transfer path selected by the server.
8. A data storage system comprising:
a plurality of storage arrays configured to form a logical volume, each storage array containing a portion of the logical volume, the logical volume containing data;
a plurality of array controllers configured with and connected to the storage arrays to control access to the storage arrays, each portion of the logical volume and the data, each portion of the logical volume being under control of only one of the array controllers, control of each portion of the logical volume being changeable to another one of the array controllers;
a server from which access requests are sent to the array controllers through multiple transfer paths for access to the logical volume and the data contained on the logical volume, the server being operative to select one of the transfer paths independently of which array controllers have control over the portions of the logical volume; and
a plurality of input/output (I/O) devices connected to the server and the array controllers and being operative to receive the access requests from the server, to determine which array controllers currently have control over the portions of the logical volume on the storage arrays and to distribute the access requests to the determined array controllers, independently of the transfer path selected by the server, instructing each determined array controller to respond to a corresponding portion of the received access request.
9. A data storage system as defined in claim 8 wherein:
the server contains information representing the logical volume as a single volume;
the I/O devices contain information representing the logical volume as more than one data volume distributed among the storage arrays;
the server is further operative to reference the single volume in the access request;
the I/O devices are further operative to generate more than one access command from the received access request and to distribute the access commands to the determined array controllers, each access command being directed to a corresponding one of the data volumes distributed among the storage arrays, the more than one access commands correspond to the portions of the received access request that instruct each determined array controller to respond;
each determined array controller includes programming under which the determined array controller is operative to respond to a corresponding one of the access commands generated from the received access request by accessing the corresponding data volume on the storage arrays and sending one of a plurality of responses to the I/O device that distributed the access commands to the determined array controllers; and
the I/O devices are further operative to assemble the plurality of responses into a single response and to send the single response to the server.
10. A data storage system, comprising:
a plurality of storage arrays configured for storing data in a logical volume having portions distributed among the plurality of storage arrays, wherein each storage array includes a bank of storage devices and a storage array controller configured for controlling access to the storage devices;
a communication fabric through which requests are transferred to the storage array controller of each of the storage arrays, wherein the requests are used to access the logical volume through the storage array controller of each of the storage arrays; and
a plurality of I/O modules communicatively coupled between the communication fabric and the storage array controller of each of the storage arrays and configured for receiving the requests from a host system and configured for distributing the requests to the storage array controllers such that the storage array controller of each of the storage arrays accesses a corresponding portion of the logical volume.
11. The data storage system of claim 10, wherein the storage array controller of each of the storage arrays comprises a RAID storage array controller configured for managing a corresponding portion of the logical volume according to a RAID management technique.
12. The data storage system of claim 10, wherein the communication fabric comprises a SAN switching fabric.
13. The data storage system of claim 12, wherein the SAN switching fabric is selected from a group consisting of Fibre Channel, SCSI, Ethernet and Gigabit Ethernet.
14. The data storage system of claim 10, wherein each storage array controller is further configured for sending responses to the I/O modules.
15. The data storage system of claim 10, wherein the I/O modules are further configured for assembling the responses from each storage array controller into a single response and for sending the single response through the communication fabric.
US09/874,515 2001-06-06 2001-06-06 Uniform routing of storage access requests through redundant array controllers Expired - Lifetime US6732104B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/874,515 US6732104B1 (en) 2001-06-06 2001-06-06 Uniform routing of storage access requests through redundant array controllers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/874,515 US6732104B1 (en) 2001-06-06 2001-06-06 Uniform routing of storage access requests through redundant array controllers

Publications (1)

Publication Number Publication Date
US6732104B1 true US6732104B1 (en) 2004-05-04

Family

ID=32177049

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/874,515 Expired - Lifetime US6732104B1 (en) 2001-06-06 2001-06-06 Uniform routing of storage access requests through redundant array controllers

Country Status (1)

Country Link
US (1) US6732104B1 (en)

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020069245A1 (en) * 2000-10-13 2002-06-06 Han-Gyoo Kim Disk system adapted to be directly attached to network
US20030005119A1 (en) * 2001-06-28 2003-01-02 Intersan, Inc., A Delaware Corporation Automated creation of application data paths in storage area networks
US20030014569A1 (en) * 2001-07-16 2003-01-16 Han-Gyoo Kim Scheme for dynamically connecting I/O devices through network
US20030023665A1 (en) * 2001-07-27 2003-01-30 Naoto Matsunami Storage system having a plurality of controllers
US20030056063A1 (en) * 2001-09-17 2003-03-20 Hochmuth Roland M. System and method for providing secure access to network logical storage partitions
US20030088591A1 (en) * 2001-10-31 2003-05-08 Seagate Technology Llc Data storage device with deterministic caching and retention capabilities to effect file level data transfers over a network
US20030115447A1 (en) * 2001-12-18 2003-06-19 Duc Pham Network media access architecture and methods for secure storage
US20030229690A1 (en) * 2002-06-11 2003-12-11 Hitachi, Ltd. Secure storage system
US20030229778A1 (en) * 2002-04-19 2003-12-11 Oesterreicher Richard T. Flexible streaming hardware
US20040006635A1 (en) * 2002-04-19 2004-01-08 Oesterreicher Richard T. Hybrid streaming platform
US20040006636A1 (en) * 2002-04-19 2004-01-08 Oesterreicher Richard T. Optimized digital media delivery engine
US20040010605A1 (en) * 2002-07-09 2004-01-15 Hiroshi Furukawa Storage device band control apparatus, method, and program
US20040049564A1 (en) * 2002-09-09 2004-03-11 Chan Ng Method and apparatus for network storage flow control
US20040167972A1 (en) * 2003-02-25 2004-08-26 Nick Demmon Apparatus and method for providing dynamic and automated assignment of data logical unit numbers
US20050086427A1 (en) * 2003-10-20 2005-04-21 Robert Fozard Systems and methods for storage filing
US6889309B1 (en) * 2002-04-15 2005-05-03 Emc Corporation Method and apparatus for implementing an enterprise virtual storage system
US20050120025A1 (en) * 2003-10-27 2005-06-02 Andres Rodriguez Policy-based management of a redundant array of independent nodes
US20050149682A1 (en) * 2001-10-09 2005-07-07 Han-Gyoo Kim Virtual multiple removable media jukebox
US20050193017A1 (en) * 2004-02-19 2005-09-01 Han-Gyoo Kim Portable multimedia player/recorder that accesses data contents from and writes to networked device
US20050193189A1 (en) * 2004-02-17 2005-09-01 Han-Gyoo Kim Device and method for booting an operating system for a computer from a passive directly attached network device
US20050210084A1 (en) * 2004-03-16 2005-09-22 Goldick Jonathan S Systems and methods for transparent movement of file services in a clustered environment
US20060010341A1 (en) * 2004-07-09 2006-01-12 Shoji Kodama Method and apparatus for disk array based I/O routing and multi-layered external storage linkage
US20060026219A1 (en) * 2004-07-29 2006-02-02 Orenstein Jack A Metadata Management for fixed content distributed data storage
US20060045130A1 (en) * 2004-07-22 2006-03-02 Han-Gyoo Kim Low-level communication layers and device employing same
US20060069884A1 (en) * 2004-02-27 2006-03-30 Han-Gyoo Kim Universal network to device bridge chip that enables network directly attached device
US20060067356A1 (en) * 2004-08-23 2006-03-30 Han-Gyoo Kim Method and apparatus for network direct attached storage
US20060155805A1 (en) * 1999-09-01 2006-07-13 Netkingcall, Co., Ltd. Scalable server architecture based on asymmetric 3-way TCP
US20060230218A1 (en) * 2005-04-11 2006-10-12 Emulex Design & Manufacturing Corporation Method and apparatus for SATA tunneling over fibre channel
US20060242312A1 (en) * 2005-04-11 2006-10-26 Emulex Design & Manufacturing Corporation Tunneling SATA targets through fibre channel
US20070008988A1 (en) * 2004-08-23 2007-01-11 Han-Gyoo Kim Enhanced network direct attached storage controller
US20070061446A1 (en) * 2005-08-26 2007-03-15 Toshiaki Matsuo Storage management system and method
US20070124407A1 (en) * 2005-11-29 2007-05-31 Lsi Logic Corporation Systems and method for simple scale-out storage clusters
EP1916595A2 (en) 2006-10-25 2008-04-30 Hitachi, Ltd. Computer system, computer and method for managing performance based on i/o division ratio
US7457880B1 (en) 2003-09-26 2008-11-25 Ximeta Technology, Inc. System using a single host to receive and redirect all file access commands for shared data storage device from other hosts on a network
US20090150608A1 (en) * 2005-05-24 2009-06-11 Masataka Innan Storage system and operation method of storage system
US7606986B1 (en) * 2004-08-30 2009-10-20 Symantec Operating Corporation System and method for resolving SAN fabric partitions
US20090307377A1 (en) * 2008-06-09 2009-12-10 Anderson Gary D Arrangements for I/O Control in a Virtualized System
US20090307396A1 (en) * 2008-06-09 2009-12-10 International Business Machines Corporation Hypervisor to I/O Stack Conduit in Virtual Real Memory
US20100131731A1 (en) * 2005-09-05 2010-05-27 Yasutomo Yamamoto Control method of device in storage system for virtualization
US20100131950A1 (en) * 2008-11-27 2010-05-27 Hitachi, Ltd. Storage system and virtual interface management method
US7849257B1 (en) 2005-01-06 2010-12-07 Zhe Khi Pak Method and apparatus for storing and retrieving data
US20100318579A1 (en) * 2006-08-02 2010-12-16 Hitachi, Ltd. Control Device for Storage System Capable of Acting as a Constituent Element of Virtualization Storage System
US20110289348A1 (en) * 2004-02-04 2011-11-24 Hitachi, Ltd. Anomaly notification control in disk array
US20120290641A1 (en) * 2003-03-19 2012-11-15 Hitachi, Ltd. File Storage Service System, File Management Device, File Management Method, ID Denotative NAS Server and File Reading Method
US20140122816A1 (en) * 2012-10-29 2014-05-01 International Business Machines Corporation Switching between mirrored volumes
US8966172B2 (en) 2011-11-15 2015-02-24 Pavilion Data Systems, Inc. Processor agnostic data storage in a PCIE based shared storage enviroment
US8983911B2 (en) 2011-06-20 2015-03-17 Microsoft Technology Licensing, Llc Storage media abstraction for uniform data storage
US20150277769A1 (en) * 2014-03-28 2015-10-01 Emc Corporation Scale-out storage in a virtualized storage system
US20160080255A1 (en) * 2014-09-17 2016-03-17 Netapp, Inc. Method and system for setting up routing in a clustered storage system
US9565269B2 (en) 2014-11-04 2017-02-07 Pavilion Data Systems, Inc. Non-volatile memory express over ethernet
US9652182B2 (en) 2012-01-31 2017-05-16 Pavilion Data Systems, Inc. Shareable virtual non-volatile storage device for a server
US9712619B2 (en) 2014-11-04 2017-07-18 Pavilion Data Systems, Inc. Virtual non-volatile memory express drive

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5918229A (en) * 1996-11-22 1999-06-29 Mangosoft Corporation Structured data storage using globally addressable memory
US6145028A (en) * 1997-12-11 2000-11-07 Ncr Corporation Enhanced multi-pathing to an array of storage devices
US6438714B1 (en) * 1999-03-31 2002-08-20 International Business Machines Corporation Method and apparatus for testing large arrays of storage devices
US6343324B1 (en) * 1999-09-13 2002-01-29 International Business Machines Corporation Method and system for controlling access share storage devices in a network environment by configuring host-to-volume mapping data structures in the controller memory for granting and denying access to the devices

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
2001/0049773, Author(s): Bhavsar.

Cited By (114)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060155805A1 (en) * 1999-09-01 2006-07-13 Netkingcall, Co., Ltd. Scalable server architecture based on asymmetric 3-way TCP
US7483967B2 (en) 1999-09-01 2009-01-27 Ximeta Technology, Inc. Scalable server architecture based on asymmetric 3-way TCP
US7870225B2 (en) 2000-10-13 2011-01-11 Zhe Khi Pak Disk system adapted to be directly attached to network
US7792923B2 (en) * 2000-10-13 2010-09-07 Zhe Khi Pak Disk system adapted to be directly attached to network
US7849153B2 (en) 2000-10-13 2010-12-07 Zhe Khi Pak Disk system adapted to be directly attached
US20020069245A1 (en) * 2000-10-13 2002-06-06 Han-Gyoo Kim Disk system adapted to be directly attached to network
US20060010287A1 (en) * 2000-10-13 2006-01-12 Han-Gyoo Kim Disk system adapted to be directly attached
US20030005119A1 (en) * 2001-06-28 2003-01-02 Intersan, Inc., A Delaware Corporation Automated creation of application data paths in storage area networks
US7343410B2 (en) * 2001-06-28 2008-03-11 Finisar Corporation Automated creation of application data paths in storage area networks
US20030014569A1 (en) * 2001-07-16 2003-01-16 Han-Gyoo Kim Scheme for dynamically connecting I/O devices through network
US7783761B2 (en) 2001-07-16 2010-08-24 Zhe Khi Pak Scheme for dynamically connecting I/O devices through network
US20030023665A1 (en) * 2001-07-27 2003-01-30 Naoto Matsunami Storage system having a plurality of controllers
US7216148B2 (en) * 2001-07-27 2007-05-08 Hitachi, Ltd. Storage system having a plurality of controllers
US7260656B2 (en) 2001-07-27 2007-08-21 Hitachi, Ltd. Storage system having a plurality of controllers
US7500069B2 (en) * 2001-09-17 2009-03-03 Hewlett-Packard Development Company, L.P. System and method for providing secure access to network logical storage partitions
US20030056063A1 (en) * 2001-09-17 2003-03-20 Hochmuth Roland M. System and method for providing secure access to network logical storage partitions
US20050149682A1 (en) * 2001-10-09 2005-07-07 Han-Gyoo Kim Virtual multiple removable media jukebox
US20030088591A1 (en) * 2001-10-31 2003-05-08 Seagate Technology Llc Data storage device with deterministic caching and retention capabilities to effect file level data transfers over a network
US7124152B2 (en) * 2001-10-31 2006-10-17 Seagate Technology Llc Data storage device with deterministic caching and retention capabilities to effect file level data transfers over a network
US20030115447A1 (en) * 2001-12-18 2003-06-19 Duc Pham Network media access architecture and methods for secure storage
US6889309B1 (en) * 2002-04-15 2005-05-03 Emc Corporation Method and apparatus for implementing an enterprise virtual storage system
US7899924B2 (en) 2002-04-19 2011-03-01 Oesterreicher Richard T Flexible streaming hardware
US20030229778A1 (en) * 2002-04-19 2003-12-11 Oesterreicher Richard T. Flexible streaming hardware
US20040006636A1 (en) * 2002-04-19 2004-01-08 Oesterreicher Richard T. Optimized digital media delivery engine
US20040006635A1 (en) * 2002-04-19 2004-01-08 Oesterreicher Richard T. Hybrid streaming platform
US20030229690A1 (en) * 2002-06-11 2003-12-11 Hitachi, Ltd. Secure storage system
US7346670B2 (en) * 2002-06-11 2008-03-18 Hitachi, Ltd. Secure storage system
US20040010605A1 (en) * 2002-07-09 2004-01-15 Hiroshi Furukawa Storage device band control apparatus, method, and program
US7260634B2 (en) * 2002-07-09 2007-08-21 Hitachi, Ltd. Storage device band control apparatus, method, and program
US7725568B2 (en) * 2002-09-09 2010-05-25 Netapp, Inc. Method and apparatus for network storage flow control
US20040049564A1 (en) * 2002-09-09 2004-03-11 Chan Ng Method and apparatus for network storage flow control
US7356574B2 (en) * 2003-02-25 2008-04-08 Hewlett-Packard Development Company, L.P. Apparatus and method for providing dynamic and automated assignment of data logical unit numbers
US20040167972A1 (en) * 2003-02-25 2004-08-26 Nick Demmon Apparatus and method for providing dynamic and automated assignment of data logical unit numbers
US8700573B2 (en) * 2003-03-19 2014-04-15 Hitachi, Ltd. File storage service system, file management device, file management method, ID denotative NAS server and file reading method
US20120290641A1 (en) * 2003-03-19 2012-11-15 Hitachi, Ltd. File Storage Service System, File Management Device, File Management Method, ID Denotative NAS Server and File Reading Method
US20090043971A1 (en) * 2003-09-26 2009-02-12 Ximeta Technology, Inc. Data integrity for data storage devices shared by multiple hosts via a network
US7457880B1 (en) 2003-09-26 2008-11-25 Ximeta Technology, Inc. System using a single host to receive and redirect all file access commands for shared data storage device from other hosts on a network
US20050086427A1 (en) * 2003-10-20 2005-04-21 Robert Fozard Systems and methods for storage filing
US7155466B2 (en) * 2003-10-27 2006-12-26 Archivas, Inc. Policy-based management of a redundant array of independent nodes
US7657586B2 (en) 2003-10-27 2010-02-02 Archivas, Inc. Policy-based management of a redundant array of independent nodes
US20050120025A1 (en) * 2003-10-27 2005-06-02 Andres Rodriguez Policy-based management of a redundant array of independent nodes
US20070094316A1 (en) * 2003-10-27 2007-04-26 Andres Rodriguez Policy-based management of a redundant array of independent nodes
US8365013B2 (en) * 2004-02-04 2013-01-29 Hitachi, Ltd. Anomaly notification control in disk array
US20110289348A1 (en) * 2004-02-04 2011-11-24 Hitachi, Ltd. Anomaly notification control in disk array
US20050193189A1 (en) * 2004-02-17 2005-09-01 Han-Gyoo Kim Device and method for booting an operating system for a computer from a passive directly attached network device
US7664836B2 (en) 2004-02-17 2010-02-16 Zhe Khi Pak Device and method for booting an operation system for a computer from a passive directly attached network device
US20050193017A1 (en) * 2004-02-19 2005-09-01 Han-Gyoo Kim Portable multimedia player/recorder that accesses data contents from and writes to networked device
US20060069884A1 (en) * 2004-02-27 2006-03-30 Han-Gyoo Kim Universal network to device bridge chip that enables network directly attached device
US20050210084A1 (en) * 2004-03-16 2005-09-22 Goldick Jonathan S Systems and methods for transparent movement of file services in a clustered environment
US7577688B2 (en) 2004-03-16 2009-08-18 Onstor, Inc. Systems and methods for transparent movement of file services in a clustered environment
US7131027B2 (en) * 2004-07-09 2006-10-31 Hitachi, Ltd. Method and apparatus for disk array based I/O routing and multi-layered external storage linkage
US20060010341A1 (en) * 2004-07-09 2006-01-12 Shoji Kodama Method and apparatus for disk array based I/O routing and multi-layered external storage linkage
US7746900B2 (en) 2004-07-22 2010-06-29 Zhe Khi Pak Low-level communication layers and device employing same
US20060045130A1 (en) * 2004-07-22 2006-03-02 Han-Gyoo Kim Low-level communication layers and device employing same
US20060026219A1 (en) * 2004-07-29 2006-02-02 Orenstein Jack A Metadata Management for fixed content distributed data storage
US7657581B2 (en) * 2004-07-29 2010-02-02 Archivas, Inc. Metadata management for fixed content distributed data storage
US20070008988A1 (en) * 2004-08-23 2007-01-11 Han-Gyoo Kim Enhanced network direct attached storage controller
US7860943B2 (en) 2004-08-23 2010-12-28 Zhe Khi Pak Enhanced network direct attached storage controller
US20060067356A1 (en) * 2004-08-23 2006-03-30 Han-Gyoo Kim Method and apparatus for network direct attached storage
US7606986B1 (en) * 2004-08-30 2009-10-20 Symantec Operating Corporation System and method for resolving SAN fabric partitions
US7849257B1 (en) 2005-01-06 2010-12-07 Zhe Khi Pak Method and apparatus for storing and retrieving data
WO2006110845A2 (en) * 2005-04-11 2006-10-19 Emulex Design & Manufacturing Corporation Method and apparatus for sata tunneling over fibre channel
US7853741B2 (en) 2005-04-11 2010-12-14 Emulex Design & Manufacturing Corporation Tunneling SATA targets through fibre channel
US7743178B2 (en) * 2005-04-11 2010-06-22 Emulex Design & Manufacturing Corporation Method and apparatus for SATA tunneling over fibre channel
US20060230218A1 (en) * 2005-04-11 2006-10-12 Emulex Design & Manufacturing Corporation Method and apparatus for SATA tunneling over fibre channel
WO2006110845A3 (en) * 2005-04-11 2009-04-16 Emulex Design & Mfg Corp Method and apparatus for sata tunneling over fibre channel
US20060242312A1 (en) * 2005-04-11 2006-10-26 Emulex Design & Manufacturing Corporation Tunneling SATA targets through fibre channel
US8484425B2 (en) 2005-05-24 2013-07-09 Hitachi, Ltd. Storage system and operation method of storage system including first and second virtualization devices
US20090150608A1 (en) * 2005-05-24 2009-06-11 Masataka Innan Storage system and operation method of storage system
EP2975513A1 (en) * 2005-05-24 2016-01-20 Hitachi Ltd. Storage system and operation method of storage system
US20100274963A1 (en) * 2005-05-24 2010-10-28 Hitachi, Ltd. Storage system and operation method of storage system
US20130275690A1 (en) * 2005-05-24 2013-10-17 Hitachi, Ltd. Storage system and operation method of storage system
US8180979B2 (en) * 2005-05-24 2012-05-15 Hitachi, Ltd. Storage system and operation method of storage system
US7953942B2 (en) 2005-05-24 2011-05-31 Hitachi, Ltd. Storage system and operation method of storage system
US7509443B2 (en) * 2005-08-26 2009-03-24 Hitachi, Ltd. Storage management system and method using performance values to obtain optimal input/output paths within a storage network
US20070061446A1 (en) * 2005-08-26 2007-03-15 Toshiaki Matsuo Storage management system and method
US20100131731A1 (en) * 2005-09-05 2010-05-27 Yasutomo Yamamoto Control method of device in storage system for virtualization
US8214615B2 (en) 2005-09-05 2012-07-03 Hitachi, Ltd. Control method of device in storage system for virtualization
US8694749B2 (en) 2005-09-05 2014-04-08 Hitachi, Ltd. Control method of device in storage system for virtualization
US9037671B2 (en) 2005-11-29 2015-05-19 Netapp, Inc. System and method for simple scale-out storage clusters
US8595313B2 (en) * 2005-11-29 2013-11-26 Netapp. Inc. Systems and method for simple scale-out storage clusters
US20070124407A1 (en) * 2005-11-29 2007-05-31 Lsi Logic Corporation Systems and method for simple scale-out storage clusters
US10140045B2 (en) * 2006-08-02 2018-11-27 Hitachi, Ltd. Control device for storage system capable of acting as a constituent element of virtualization storage system
US9898221B2 (en) * 2006-08-02 2018-02-20 Hitachi, Ltd. Control device for storage system capable of acting as a constituent element of virtualization storage system
US20160054950A1 (en) * 2006-08-02 2016-02-25 Hitachi, Ltd. Control device for storage system capable of acting as a constituent element of virtualization storage system
US20100318579A1 (en) * 2006-08-02 2010-12-16 Hitachi, Ltd. Control Device for Storage System Capable of Acting as a Constituent Element of Virtualization Storage System
US20080104356A1 (en) * 2006-10-25 2008-05-01 Takayuki Nagai Computer system, computer and method for managing performance based on i/o division ratio
US8171241B2 (en) 2006-10-25 2012-05-01 Hitachi, Ltd. Computer system, computer and method for managing performance based on I/O division ratio
EP1916595A2 (en) 2006-10-25 2008-04-30 Hitachi, Ltd. Computer system, computer and method for managing performance based on i/o division ratio
EP1916595A3 (en) * 2006-10-25 2010-07-07 Hitachi, Ltd. Computer system, computer and method for managing performance based on i/o division ratio
US9208003B2 (en) * 2008-06-09 2015-12-08 International Business Machines Corporation Hypervisor to I/O stack conduit in virtual real memory
US10628209B2 (en) 2008-06-09 2020-04-21 International Business Machines Corporation Virtual machine monitor to I/O stack conduit in virtual real memory
US10360060B2 (en) 2008-06-09 2019-07-23 International Business Machines Corporation Virtual machine monitor to I/O stack conduit in virtual real memory
US20090307396A1 (en) * 2008-06-09 2009-12-10 International Business Machines Corporation Hypervisor to I/O Stack Conduit in Virtual Real Memory
US20090307377A1 (en) * 2008-06-09 2009-12-10 Anderson Gary D Arrangements for I/O Control in a Virtualized System
US8099522B2 (en) * 2008-06-09 2012-01-17 International Business Machines Corporation Arrangements for I/O control in a virtualized system
US9910691B2 (en) 2008-06-09 2018-03-06 International Business Machines Corporation Hypervisor to I/O stack conduit in virtual real memory
US8387044B2 (en) * 2008-11-27 2013-02-26 Hitachi, Ltd. Storage system and virtual interface management method using physical interface identifiers and virtual interface identifiers to facilitate setting of assignments between a host computer and a storage apparatus
US20100131950A1 (en) * 2008-11-27 2010-05-27 Hitachi, Ltd. Storage system and virtual interface management method
US8983911B2 (en) 2011-06-20 2015-03-17 Microsoft Technology Licensing, Llc Storage media abstraction for uniform data storage
US10303649B2 (en) 2011-06-20 2019-05-28 Microsoft Technology Licensing, Llc Storage media abstraction for uniform data storage
US8966172B2 (en) 2011-11-15 2015-02-24 Pavilion Data Systems, Inc. Processor agnostic data storage in a PCIE based shared storage enviroment
US9285995B2 (en) 2011-11-15 2016-03-15 Pavilion Data Systems, Inc. Processor agnostic data storage in a PCIE based shared storage environment
US9720598B2 (en) 2011-11-15 2017-08-01 Pavilion Data Systems, Inc. Storage array having multiple controllers
US9652182B2 (en) 2012-01-31 2017-05-16 Pavilion Data Systems, Inc. Shareable virtual non-volatile storage device for a server
US20140122816A1 (en) * 2012-10-29 2014-05-01 International Business Machines Corporation Switching between mirrored volumes
US9098466B2 (en) * 2012-10-29 2015-08-04 International Business Machines Corporation Switching between mirrored volumes
US20150277769A1 (en) * 2014-03-28 2015-10-01 Emc Corporation Scale-out storage in a virtualized storage system
US20160080255A1 (en) * 2014-09-17 2016-03-17 Netapp, Inc. Method and system for setting up routing in a clustered storage system
US9712619B2 (en) 2014-11-04 2017-07-18 Pavilion Data Systems, Inc. Virtual non-volatile memory express drive
US9936024B2 (en) 2014-11-04 2018-04-03 Pavilion Data Systems, Inc. Storage sever with hot plug and unplug capabilities
US10079889B1 (en) 2014-11-04 2018-09-18 Pavilion Data Systems, Inc. Remotely accessible solid state drive
US9565269B2 (en) 2014-11-04 2017-02-07 Pavilion Data Systems, Inc. Non-volatile memory express over ethernet
US10348830B1 (en) 2014-11-04 2019-07-09 Pavilion Data Systems, Inc. Virtual non-volatile memory express drive

Similar Documents

Publication Publication Date Title
US6732104B1 (en) Uniform routing of storage access requests through redundant array controllers
US6757753B1 (en) Uniform routing of storage access requests through redundant array controllers
KR100644011B1 (en) Storage domain management system
US6640278B1 (en) Method for configuration and management of storage resources in a storage network
US6553408B1 (en) Virtual device architecture having memory for storing lists of driver modules
US7734712B1 (en) Method and system for identifying storage devices
US7162658B2 (en) System and method for providing automatic data restoration after a storage device failure
US7383381B1 (en) Systems and methods for configuring a storage virtualization environment
US7447939B1 (en) Systems and methods for performing quiescence in a storage virtualization environment
US6598174B1 (en) Method and apparatus for storage unit replacement in non-redundant array
US7236987B1 (en) Systems and methods for providing a storage virtualization environment
US6571354B1 (en) Method and apparatus for storage unit replacement according to array priority
US6538669B1 (en) Graphical user interface for configuration of a storage system
US7290168B1 (en) Systems and methods for providing a multi-path network switch system
US6950914B2 (en) Storage system
US7519769B1 (en) Scalable storage network virtualization
US11474704B2 (en) Target path selection for storage controllers
JP2008529167A (en) Collaborative shared storage architecture
JP2004518217A (en) Remote mirroring in switching environment
EP4139802B1 (en) Methods for managing input-ouput operations in zone translation layer architecture and devices thereof
US20070027989A1 (en) Management of storage resource devices
US7231503B2 (en) Reconfiguring logical settings in a storage system
US20230325102A1 (en) Methods for handling storage devices with different zone sizes and devices thereof
JP2004355638A (en) Computer system and device assigning method therefor
US10768834B2 (en) Methods for managing group objects with different service level objectives for an application and devices thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: LSI LOGIC CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WEBER, BRET S.;REEL/FRAME:011881/0947

Effective date: 20010530

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: NETAPP, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LSI LOGIC CORPORATION;REEL/FRAME:026661/0205

Effective date: 20110506

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12