US20080215596A1 - Selecting storage clusters to use to access storage - Google Patents
Selecting storage clusters to use to access storage
- Publication number
- US20080215596A1
- Authority
- US
- United States
- Prior art keywords
- storage
- storage cluster
- hosts
- cluster
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
- G06F3/0635—Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0605—Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0617—Improving the reliability of storage systems in relation to availability
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Abstract
Provided are a method, system and program for selecting storage clusters to use to access storage. Input/Output (I/O) requests are transferred to a first storage cluster over a network to access storage. The storage may be additionally accessed via a second storage cluster over the network and both the first and second storage clusters are capable of accessing the storage. An unavailability of a first storage cluster is detected when the second storage cluster is available. A request is transmitted to hosts over the network to use the second storage cluster to access the storage. Hosts receiving the transmitted request send I/O requests to the storage via the second storage cluster if the second storage cluster is available.
Description
- This application is a continuation of U.S. application Ser. No. 11/286,847, filed Nov. 22, 2005, which application is incorporated herein by reference in its entirety.
- 1. Field of the Invention
- The present invention relates to a method, system, and program for selecting storage clusters to use to access storage.
- 2. Description of the Related Art
- Host systems in a storage network may communicate with a storage controller through multiple paths. The storage controller may comprise separate storage clusters, where each storage cluster is capable of accessing the storage and provides redundancy for accessing the storage. If a storage cluster fails, then the host may fail over to using the other storage cluster to access the storage.
- In certain systems, if a storage cluster receives Input/Output (I/O) requests directed to a storage location, such as a track or volume, that is the subject of I/O requests pending at another storage cluster, then the storage cluster receiving the I/O requests will transfer them to the other storage cluster to minimize the extent to which different storage clusters apply I/Os to the same storage location. Data consistency errors may result if separate storage clusters write data in an inconsistent manner to the same storage location. This process, in which one storage cluster transfers I/O requests to another to consolidate I/O requests to a given storage location at a single storage cluster, requires that I/O requests to the clusters involved in the transfer be quiesced while the transfer occurs. This transfer of I/O requests and writes between storage clusters degrades storage controller performance.
- There is a need in the art for improved techniques for managing how hosts may select which of multiple storage clusters to use to access a storage.
- Provided are a method, system and program for selecting storage clusters to use to access storage. Input/Output (I/O) requests are transferred to a first storage cluster over a network to access storage. The storage may be additionally accessed via a second storage cluster over the network and both the first and second storage clusters are capable of accessing the storage. An unavailability of a first storage cluster is detected when the second storage cluster is available. A request is transmitted to hosts over the network to use the second storage cluster to access the storage. Hosts receiving the transmitted request send I/O requests to the storage via the second storage cluster if the second storage cluster is available.
- Further provided are a method, system and program for processing a network topology to determine hosts connections to first and second storage clusters, wherein the hosts may access a storage through the first and second storage clusters. Selection is made of one of the first and second storage clusters to which all the hosts have access in response to determining that all the hosts have access to the selected storage cluster. Selection is made of one of the first and second storage clusters to which less than all the hosts have access based on a selection policy in response to determining that less than all the hosts have access to the first and second storage clusters. A message is sent to the hosts having access to the selected storage cluster to use the selected storage cluster to access the storage.
- FIG. 1 illustrates an embodiment of a network computing environment.
- FIGS. 2a and 2b illustrate embodiments of how paths may connect hosts to storage clusters.
- FIGS. 3 and 4 illustrate an embodiment of operations a host performs to select a storage cluster to use to access the storage in response to detecting a failure of the current storage cluster used to access the storage.
- FIG. 5 illustrates an embodiment of operations a host performs in response to receiving a request from another host to switch to using a different storage cluster to access the storage.
- FIGS. 6 and 7 illustrate an embodiment of operations to select one storage cluster for hosts to use to access a storage.
- FIG. 8 illustrates an embodiment of host priority information maintained for the hosts and used to select one storage cluster in the operations of FIG. 7.
- FIG. 1 illustrates an embodiment of a network computing environment. A storage controller 2 receives Input/Output (I/O) requests from host systems over a network 6 directed toward storages configured with one or more volumes 10a, 10b (e.g., Logical Unit Numbers, Logical Devices, etc.). The storage controller 2 includes two clusters, each including a cache. The clusters receive I/O requests from the hosts and buffer the requests and write data in their respective caches when processing I/O requests directed to the storages. Each cluster includes storage management software executed to process host I/O requests, and a bus 26 provides a communication interface to enable communication between the clusters.
- The hosts each include an I/O manager program to select which of the storage clusters to use to communicate I/O requests over the network 6. In certain embodiments, the environment may further include a manager system 28 including a network manager program 30 to coordinate host use of the storage clusters.
- The hosts and the manager system 28 may communicate over an out-of-band network 32 with respect to the network 6. The hosts may communicate storage network 6 topology information to the manager system 28 over the out-of-band network 32, and the manager system 28 may communicate with the hosts over the out-of-band network 32 to coordinate host use of the storage clusters. Alternatively, the hosts, manager system 28, and storage controller 2 may communicate I/O requests and coordination related information over a single network, e.g., network 6.
- The storage controller 2 may comprise suitable storage controllers or servers known in the art, such as the International Business Machines (IBM®) Enterprise Storage Server® (ESS) (IBM and Enterprise Storage Server are registered trademarks of IBM®). Alternatively, the storage controller 2 may comprise a lower-end storage server as opposed to a high-end enterprise storage server. The clusters may be in the same storage controller 2 as shown in FIG. 1 or in different storage controllers. The storage network 6 may comprise a Storage Area Network (SAN), Local Area Network (LAN), Intranet, the Internet, Wide Area Network (WAN), etc. The out-of-band network 32 may be separate from the storage network 6, and may use network technology such as LAN.
- The hosts may select which storage cluster to use to access the storages 8a . . . 8n. Each host may include at least two adaptors, each providing a separate path to one of the storage clusters, so that if the path to one storage cluster fails, the host may access the storage 8a . . . 8n over the other path and adaptor. Each adaptor may include multiple ports providing multiple end points of access.
- FIG. 1 shows that each cluster includes its own cache.
- FIGS. 2a and 2b illustrate different configurations of how the hosts and clusters of FIG. 1 may connect. FIG. 2a illustrates one configuration in which each of the hosts connects to both storage clusters, such that each storage cluster is accessible to every host.
- FIG. 2b illustrates an alternative configuration in which each host connects to only one storage cluster, such that a host can access the storage only through the storage cluster to which it connects.
- In one embodiment, either the storage controller 2 or the storage management software may designate one storage cluster as the preferred storage cluster, such that the hosts use the preferred storage cluster to access the storage when it is available and use the other storage cluster when the preferred storage cluster is unavailable.
- FIGS. 1, 2a, and 2b show a certain number of elements, such as two sets of volumes 10a, 10b, two storages, one storage controller 2, a certain number of hosts, etc. However, additional embodiments may have any number of the illustrated devices, such as sets of volumes, storages, clusters, host adaptors, switches, etc., and actual implementations are not limited to the specific number of components illustrated in the figures.
- FIG. 3 illustrates an embodiment of operations implemented in the I/O manager of the hosts to select a storage cluster to use to access the storage in response to detecting a failure of the storage cluster currently used to access the storage. Upon detecting the unavailability of the current storage cluster, the I/O manager determines whether the other storage cluster is available. If the other storage cluster is available over the network 6, then the I/O manager uses that storage cluster to access the storage and sends messages to the other hosts over the out-of-band network 32, via a broadcast or directly addressed messages, instructing the hosts to use the indicated available storage cluster.
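The FIG. 3 failover operations can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation: the `is_available` probe and `broadcast` callback stand in for the availability detection and the out-of-band messaging, and all names are hypothetical:

```python
def select_cluster(current, other, is_available, broadcast):
    """Sketch of the FIG. 3 logic: keep the current storage cluster while it
    is available; otherwise switch to the other cluster and notify the hosts."""
    if is_available(current):
        return current                    # current cluster still usable
    if is_available(other):
        broadcast(f"use {other}")         # out-of-band message to the other hosts
        return other                      # route subsequent I/O requests here
    raise IOError("no storage cluster available to access the storage")
```

For example, a host whose current cluster has failed would call `select_cluster("cluster1", "cluster2", probe, send)` and begin directing I/O to whichever cluster is returned.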
- FIG. 4 illustrates an additional embodiment of operations performed by the I/O manager upon detecting the unavailability of the storage cluster used to access the storage. If the other storage cluster is available, the I/O manager sends messages to the hosts over the out-of-band network 32, via a broadcast or directly addressed messages, requesting permission to switch to an indicated available storage cluster. In response to receiving (at block 158) responses from the hosts, the I/O manager determines whether to switch to using the indicated storage cluster.
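The FIG. 4 permission exchange might look as follows in outline. The sketch assumes, per claim 4, that the switch proceeds only when every responding host grants permission; `can_access` is a hypothetical callback standing in for each host's grant/deny response over the out-of-band network:

```python
def request_switch(target, hosts, can_access):
    """Sketch of the FIG. 4 logic: poll each host for permission to switch
    to the `target` storage cluster; a host denies permission when it
    cannot access the indicated cluster."""
    responses = {h: can_access(h, target) for h in hosts}  # grant/deny votes
    return all(responses.values()), responses              # switch only on unanimity
```

A host that cannot reach the indicated cluster vetoes the switch, so the requesting I/O manager may instead fail its I/O requests rather than strand that host.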
- In one embodiment, the hosts receiving the request may respond to grant or deny permission to switch to the indicated storage cluster. A host may deny permission if, for instance, that host cannot access the indicated storage cluster and switching would leave it unable to access the storage. In this way, the I/O manager requesting the switch learns from the other hosts whether switching to the indicated storage cluster would adversely affect certain hosts.
- If (at block 162) the I/O manager determines from the received responses to switch, then the I/O manager accesses the storage through the available storage cluster and sends messages to the hosts over the out-of-band network 32 to use the indicated storage cluster. If the determination is made not to switch, the I/O manager may fail I/O requests to the storage rather than use the available storage cluster.
- FIG. 5 illustrates an embodiment of operations performed by an I/O manager of a host receiving a message to switch to an indicated storage cluster, sent by an I/O manager performing the operations of FIG. 3 or 4. In response to receiving (at block 200) the message to switch to use an indicated storage cluster, the I/O manager determines whether the host can access the indicated storage cluster, which may comprise the preferred storage cluster, and, if so, uses the indicated storage cluster to transmit subsequent I/O requests to the storage.
- The described operations of FIGS. 3, 4, and 5 operate to have as many hosts as possible use the same storage cluster to access the storage, even though certain hosts may be capable of accessing the storage through different storage clusters. By consolidating host I/O requests at a single storage cluster, the operations of FIGS. 3 and 4 seek to minimize the need for the storage clusters to transfer I/O requests between each other.
- FIG. 6 illustrates an embodiment of operations performed by the network manager 30 to assign a common storage cluster for the hosts to use to access the storage. The network manager 30 may initiate the operations (at block 250) to select a storage cluster in response to determining that a host cannot access the storage through the storage cluster it is currently using. The network manager 30 receives (at block 252) from the hosts information on the network topology, i.e., the paths connecting the hosts to other network 6 elements, e.g., switches, hosts, storages, etc. The network manager 30 may also gather information on the network topology by accessing switches in the network 6. Alternatively, topology information on the hosts and network 6 may be gathered by agent programs (not shown) executing in the hosts and communicated to the network manager 30 via an in-band network 6 or the out-of-band network 32. Agents may also handle additional communications between the manager 30 and the hosts.
- The network manager 30 processes (at block 254) the network topology to determine the host connections to the storage clusters, i.e., which hosts can access which storage clusters. If all the hosts can access one of the storage clusters, the network manager 30 sends (at block 258) a message to the hosts to use that storage cluster to access the storage. If less than all the hosts can access either storage cluster, the network manager 30 applies a selection policy to select one of the storage clusters for the hosts to use. FIG. 7 illustrates an embodiment of a selection policy that uses priority weights for the hosts. The network manager 30 then sends (at block 262) a message to the hosts having access to the selected storage cluster to use the selected storage cluster to access the storage, and may also send (at block 264) a message to the hosts unable to access the selected storage cluster to indicate which storage cluster was selected for accessing the storage.
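The block 254-260 decision can be sketched as follows. The `access` mapping (host name to the set of clusters that host can reach) and the fallback policy are illustrative assumptions; FIG. 7's priority-weight policy could be substituted for the simple host-count fallback used here:

```python
def selection_policy(access, clusters):
    # Fallback policy sketch: pick the cluster reachable by the most hosts,
    # breaking ties by cluster name for determinism.
    return max(sorted(clusters), key=lambda c: sum(c in a for a in access.values()))

def choose_cluster(access):
    """Sketch of the FIG. 6 logic: if some cluster is reachable by every
    host, use it; otherwise fall back to a selection policy."""
    clusters = set().union(*access.values())
    common = sorted(c for c in clusters if all(c in a for a in access.values()))
    if common:
        return common[0]                  # a cluster all hosts can access
    return selection_policy(access, clusters)
```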
- FIG. 7 illustrates an embodiment of operations of a selection policy the network manager 30 may use to select one of the storage clusters when less than all of the hosts can access each storage cluster. Upon invoking (at block 300) the selection policy at block 260 in FIG. 6, the network manager 30 determines (at block 302) the hosts 4a, 4b . . . 4n capable of accessing each of the storage clusters. The network manager 30 then determines (at block 304) the priority weights for the determined hosts capable of accessing each storage cluster. FIG. 8 provides an embodiment of host priority information 330 the network manager 30 maintains for each host, identified in field 332, indicating a relative priority weight 334 for the identified host. A host that is executing higher priority applications, such as mission critical applications, or servicing more critical or important clients, may be assigned a higher priority weight 334. Returning to FIG. 7, for each storage cluster, the network manager 30 sums (at block 306) the determined weights of the hosts capable of accessing that storage cluster, and the network manager 30 selects (at block 308) the storage cluster having the greatest sum of host priority weights.
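The blocks 302-308 computation reduces to a weighted tally. In this sketch the `access` mapping and `weight` table (playing the role of FIG. 8's priority weight field 334) are hypothetical inputs:

```python
def weighted_select(access, weight):
    """Sketch of the FIG. 7 policy: sum the priority weights of the hosts
    that can access each storage cluster and pick the greatest sum."""
    totals = {}
    for host, clusters in access.items():
        for c in clusters:
            totals[c] = totals.get(c, 0) + weight[host]   # block 306
    return max(sorted(totals), key=totals.get)            # block 308, ties by name
```

A single host running mission critical applications can thus outweigh several lower-priority hosts that can only reach the other cluster.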
- Alternative calculation techniques may be used to determine the storage cluster to select based on the priority weights of the hosts capable of accessing each storage cluster.
- Described embodiments provide techniques for hosts or a storage manager to select one of a plurality of storage clusters the hosts may use to access the storage, so that host I/O requests are consolidated at a single storage cluster and the storage clusters need not transfer I/O requests between one another.
- The described operations may be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The described operations may be implemented as code maintained in a “computer readable medium”, where a processor may read and execute the code from the computer readable medium. A computer readable medium may comprise media such as magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, DVDs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, Flash Memory, firmware, programmable logic, etc.), etc. The code implementing the described operations may further be implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.). Still further, the code implementing the described operations may be implemented in “transmission signals”, where transmission signals may propagate through space or through a transmission media, such as an optical fiber, copper wire, etc. The transmission signals in which the code or logic is encoded may further comprise a wireless signal, satellite transmission, radio waves, infrared signals, Bluetooth, etc. The transmission signals in which the code or logic is encoded are capable of being transmitted by a transmitting station and received by a receiving station, where the code or logic encoded in the transmission signal may be decoded and stored in hardware or a computer readable medium at the receiving and transmitting stations or devices. An “article of manufacture” comprises computer readable medium, hardware logic, and/or transmission signals in which code may be implemented.
A device in which the code implementing the described embodiments of operations is encoded may comprise a computer readable medium or hardware logic. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the present invention, and that the article of manufacture may comprise suitable information bearing medium known in the art.
- In described embodiments, the I/O manager or network manager 30 selects a storage cluster for the hosts to use to access the volumes 10a, 10b accessible through the storage clusters. In additional embodiments, the selection of a storage cluster may be made with respect to different volumes 10a, 10b in the storage, such that certain hosts may access certain volumes 10a, 10b through a selected storage cluster (selected according to the operations of FIGS. 4, 6, and 7), but not be able to access other volumes when a storage cluster providing access to certain volumes 10a, 10b is not available to the host.
- In described embodiments, the techniques for selecting a storage cluster for hosts to use were described for storage clusters that are managed so that a received write operation is transferred to another storage cluster already having pending I/O requests to the same storage location. In additional embodiments, the described operations for selecting a storage cluster may be used with storage clusters managed in a different manner.
storage cluster - The terms “an embodiment”, “embodiment”, “embodiments”, “the embodiment”, “the embodiments”, “one or more embodiments”, “some embodiments”, and “one embodiment” mean “one or more (but not all) embodiments of the present invention(s)” unless expressly specified otherwise.
- The terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly specified otherwise.
- The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise.
- The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
- Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries.
- A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary a variety of optional components are described to illustrate the wide variety of possible embodiments of the present invention.
- Further, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
- When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the present invention need not include the device itself.
- The illustrated operations of FIGS. 3, 4, 5, 6, and 7 show certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified or removed. Moreover, steps may be added to the above described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.
- The foregoing description of various embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. The above specification, examples and data provide a complete description of the manufacture and use of the composition of the invention. Since many embodiments of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.
Claims (9)
1. An article of manufacture comprising a computer readable storage medium including code capable of causing operations to be performed with respect to a first storage cluster, second storage cluster, and hosts over a network, wherein a storage may be accessed via the first and second storage clusters, and wherein the code is implemented in one of the hosts to perform operations, the operations comprising:
transferring input/output (I/O) requests to the first storage cluster over the network to access the storage;
detecting an unavailability of the first storage cluster when the second storage cluster is available; and
transmitting a request to the hosts over the network to use the second storage cluster to access the storage, wherein the hosts receiving the transmitted request send I/O requests to the storage via the second storage cluster.
2. The article of manufacture of claim 1, wherein the operations further comprise:
sending messages to hosts over the network requesting permission to switch from using the first storage cluster to the second storage cluster;
receiving responses from the hosts to the messages indicating whether the hosts provide permission to switch from the first to the second storage cluster; and
determining from the received responses whether to switch from using the first to the second storage cluster to send I/O requests to the storage.
3. The article of manufacture of claim 2, wherein the selection is not made to use the second storage cluster and the requests are not transmitted to the hosts if the determination is made to not switch from using the first storage cluster to using the second storage cluster based on the received responses.
4. The article of manufacture of claim 2, wherein the determination is made to switch from using the first storage cluster to the second storage cluster if all hosts receiving the message respond indicating permission to switch from the first storage cluster to the second storage cluster.
5. The article of manufacture of claim 2, wherein the operations further comprise:
failing I/O requests to the storage in response to determining not to switch from using the first storage cluster to using the second storage cluster.
6. A system in communication with hosts, a first storage cluster, and a second storage cluster over a network, wherein a storage may be accessed through the first and second storage clusters, wherein the system comprises one of the hosts external to the first and second storage clusters, comprising:
a processor; and
a computer readable storage medium including code executed by the processor to perform operations, the operations comprising:
transferring input/output (I/O) requests to the first storage cluster over the network to access the storage;
detecting an unavailability of the first storage cluster when the second storage cluster is available; and
transmitting a request to the hosts over the network to use the second storage cluster to access the storage, wherein the hosts receiving the transmitted request send I/O requests to the storage via the second storage cluster.
7. The system of claim 6, wherein the operations further comprise:
sending messages to the hosts over the network requesting permission to switch from using the first storage cluster to the second storage cluster;
receiving responses from the hosts to the messages indicating whether the hosts provide permission to switch from the first to the second storage cluster; and
determining from the received responses whether to switch from using the first to the second storage cluster to send I/O requests to the storage.
8. The system of claim 7, wherein selection is not made to use the second storage cluster and the requests are not transmitted to the hosts if the determination is made to not switch from using the first storage cluster to using the second storage cluster based on the received responses.
9. The system of claim 7, wherein the operations further comprise:
failing I/O requests to the storage in response to determining not to switch from using the first storage cluster to using the second storage cluster.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/107,687 US20080215596A1 (en) | 2005-11-22 | 2008-04-22 | Selecting storage clusters to use to access storage |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/286,847 US7380074B2 (en) | 2005-11-22 | 2005-11-22 | Selecting storage clusters to use to access storage |
US12/107,687 US20080215596A1 (en) | 2005-11-22 | 2008-04-22 | Selecting storage clusters to use to access storage |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/286,847 Continuation US7380074B2 (en) | 2005-11-22 | 2005-11-22 | Selecting storage clusters to use to access storage |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080215596A1 true US20080215596A1 (en) | 2008-09-04 |
Family
ID=38054820
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/286,847 Active 2026-07-20 US7380074B2 (en) | 2005-11-22 | 2005-11-22 | Selecting storage clusters to use to access storage |
US12/107,687 Abandoned US20080215596A1 (en) | 2005-11-22 | 2008-04-22 | Selecting storage clusters to use to access storage |
US12/107,693 Active 2026-06-22 US7730267B2 (en) | 2005-11-22 | 2008-04-22 | Selecting storage clusters to use to access storage |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/286,847 Active 2026-07-20 US7380074B2 (en) | 2005-11-22 | 2005-11-22 | Selecting storage clusters to use to access storage |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/107,693 Active 2026-06-22 US7730267B2 (en) | 2005-11-22 | 2008-04-22 | Selecting storage clusters to use to access storage |
Country Status (2)
Country | Link |
---|---|
US (3) | US7380074B2 (en) |
CN (1) | CN100568881C (en) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007272357A (en) * | 2006-03-30 | 2007-10-18 | Toshiba Corp | Storage cluster system, data processing method and program |
US7711683B1 (en) | 2006-11-30 | 2010-05-04 | Netapp, Inc. | Method and system for maintaining disk location via homeness |
US7613947B1 (en) * | 2006-11-30 | 2009-11-03 | Netapp, Inc. | System and method for storage takeover |
US8417895B1 (en) | 2008-09-30 | 2013-04-09 | Violin Memory Inc. | System for maintaining coherency during offline changes to storage media |
US8442059B1 (en) | 2008-09-30 | 2013-05-14 | Gridiron Systems, Inc. | Storage proxy with virtual ports configuration |
US8838850B2 (en) * | 2008-11-17 | 2014-09-16 | Violin Memory, Inc. | Cluster control protocol |
US8788758B1 (en) | 2008-11-04 | 2014-07-22 | Violin Memory Inc | Least profitability used caching scheme |
US8443150B1 (en) | 2008-11-04 | 2013-05-14 | Violin Memory Inc. | Efficient reloading of data into cache resource |
US8667366B1 (en) | 2009-04-17 | 2014-03-04 | Violin Memory, Inc. | Efficient use of physical address space for data overflow and validation |
US8650362B2 (en) | 2009-04-17 | 2014-02-11 | Violin Memory Inc. | System for increasing utilization of storage media |
US8713252B1 (en) | 2009-05-06 | 2014-04-29 | Violin Memory, Inc. | Transactional consistency scheme |
US8402198B1 (en) | 2009-06-03 | 2013-03-19 | Violin Memory, Inc. | Mapping engine for a storage device |
US9069676B2 (en) | 2009-06-03 | 2015-06-30 | Violin Memory, Inc. | Mapping engine for a storage device |
US8402246B1 (en) | 2009-08-28 | 2013-03-19 | Violin Memory, Inc. | Alignment adjustment in a tiered storage system |
US8949565B2 (en) * | 2009-12-27 | 2015-02-03 | Intel Corporation | Virtual and hidden service partition and dynamic enhanced third party data store |
US8832384B1 (en) | 2010-07-29 | 2014-09-09 | Violin Memory, Inc. | Reassembling abstracted memory accesses for prefetching |
US8959288B1 (en) | 2010-07-29 | 2015-02-17 | Violin Memory, Inc. | Identifying invalid cache data |
US8954808B1 (en) * | 2010-11-30 | 2015-02-10 | Symantec Corporation | Systems and methods for performing input/output path failovers |
US8972689B1 (en) | 2011-02-02 | 2015-03-03 | Violin Memory, Inc. | Apparatus, method and system for using real-time performance feedback for modeling and improving access to solid state media |
US8635416B1 (en) | 2011-03-02 | 2014-01-21 | Violin Memory Inc. | Apparatus, method and system for using shadow drives for alternative drive commands |
US8732520B2 (en) | 2011-04-06 | 2014-05-20 | Lsi Corporation | Clustered array controller for global redundancy in a SAN |
US8738823B2 (en) * | 2012-10-16 | 2014-05-27 | International Business Machines Corporation | Quiescing input/output (I/O) requests to subsets of logical addresses in a storage for a requested operation |
CN103778025A (en) * | 2012-10-22 | 2014-05-07 | 深圳市腾讯计算机系统有限公司 | Storage method and storage device |
CN103634388B (en) * | 2013-11-22 | 2017-06-20 | 华为技术有限公司 | Method for handling restart of a controller in a storage server, and related device and communication system |
US10437510B2 (en) | 2015-02-03 | 2019-10-08 | Netapp Inc. | Monitoring storage cluster elements |
US20160239394A1 (en) * | 2015-02-13 | 2016-08-18 | Netapp, Inc. | Methods for improving management of input or output operations in a network storage environment with a failure and devices thereof |
US10515027B2 (en) * | 2017-10-25 | 2019-12-24 | Hewlett Packard Enterprise Development Lp | Storage device sharing through queue transfer |
CN109725829B (en) * | 2017-10-27 | 2021-11-05 | 伊姆西Ip控股有限责任公司 | System and method for end-to-end QoS solution for data storage system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6343324B1 (en) * | 1999-09-13 | 2002-01-29 | International Business Machines Corporation | Method and system for controlling access share storage devices in a network environment by configuring host-to-volume mapping data structures in the controller memory for granting and denying access to the devices |
US6587970B1 (en) * | 2000-03-22 | 2003-07-01 | Emc Corporation | Method and apparatus for performing site failover |
US20040003171A1 (en) * | 2002-06-27 | 2004-01-01 | Basham Robert Beverley | Virtual sequential data storage (VSDS) system with router conducting data between hosts and physical storage bypassing VSDS controller |
US20040054866A1 (en) * | 1998-06-29 | 2004-03-18 | Blumenau Steven M. | Mapping of hosts to logical storage units and data storage ports in a data processing system |
US20040059830A1 (en) * | 2002-09-17 | 2004-03-25 | Sockeye Networks, Inc. | Network address space clustering employing topological groupings, distance measurements and structural generalization |
US6738818B1 (en) * | 1999-12-27 | 2004-05-18 | Intel Corporation | Centralized technique for assigning I/O controllers to hosts in a cluster |
US6820171B1 (en) * | 2000-06-30 | 2004-11-16 | Lsi Logic Corporation | Methods and structures for an extensible RAID storage architecture |
US20050010682A1 (en) * | 2001-09-07 | 2005-01-13 | Shai Amir | Load balancing method for exchanging data between multiple hosts and storage entities, in ip based storage area network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR0127029B1 (en) * | 1994-10-27 | 1998-04-01 | 김광호 | Memory card and recording, reproducing, and erasing method thereof |
- 2005
- 2005-11-22 US US11/286,847 patent/US7380074B2/en active Active
- 2006
- 2006-10-12 CN CN200610132177.2A patent/CN100568881C/en active Active
- 2008
- 2008-04-22 US US12/107,687 patent/US20080215596A1/en not_active Abandoned
- 2008-04-22 US US12/107,693 patent/US7730267B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US7380074B2 (en) | 2008-05-27 |
US20080215827A1 (en) | 2008-09-04 |
CN100568881C (en) | 2009-12-09 |
CN1972312A (en) | 2007-05-30 |
US7730267B2 (en) | 2010-06-01 |
US20070118706A1 (en) | 2007-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7380074B2 (en) | Selecting storage clusters to use to access storage | |
US6360306B1 (en) | Relocation of suspended data to a remote site in a distributed storage system | |
US8699322B1 (en) | Port identifier management for path failover in cluster environments | |
US8738821B2 (en) | Selecting a path comprising ports on primary and secondary clusters to use to transmit data at a primary volume to a secondary volume | |
US7069468B1 (en) | System and method for re-allocating storage area network resources | |
US6996741B1 (en) | System and method for redundant communication between redundant controllers | |
US7937617B1 (en) | Automatic clusterwide fail-back | |
US7003688B1 (en) | System and method for a reserved memory area shared by all redundant storage controllers | |
US6883065B1 (en) | System and method for a redundant communication channel via storage area network back-end | |
US8117169B2 (en) | Performing scheduled backups of a backup node associated with a plurality of agent nodes | |
US7127633B1 (en) | System and method to failover storage area network targets from one interface to another | |
US7536508B2 (en) | System and method for sharing SATA drives in active-active RAID controller system | |
US7484039B2 (en) | Method and apparatus for implementing a grid storage system | |
US8707085B2 (en) | High availability data storage systems and methods | |
US7839788B2 (en) | Systems and methods for load balancing storage system requests in a multi-path environment based on transfer speed of the multiple paths | |
US20060059226A1 (en) | Information handling system and method for clustering with internal cross coupled storage | |
GB2366048A (en) | Selecting a preferred path to a storage device | |
US7650463B2 (en) | System and method for RAID recovery arbitration in shared disk applications | |
US7752340B1 (en) | Atomic command retry in a data storage system | |
US6851023B2 (en) | Method and system for configuring RAID subsystems with block I/O commands and block I/O path | |
WO2005031577A1 (en) | Logical partitioning in redundant storage systems |
US20050278539A1 (en) | Reserve/release control method | |
US20070156879A1 (en) | Considering remote end point performance to select a remote end point to use to transmit a task | |
US7715378B1 (en) | Error notification and forced retry in a data storage system | |
US20080276255A1 (en) | Alternate Communication Path Between ESSNI Server and CEC |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |
|
AS | Assignment |
Owner name: GOOGLE INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:027463/0594 Effective date: 20111228 |
|
AS | Assignment |
Owner name: GOOGLE LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044142/0357 Effective date: 20170929 |