US20030065780A1 - Data storage system having data restore by swapping logical units - Google Patents

Data storage system having data restore by swapping logical units

Info

Publication number
US20030065780A1
US20030065780A1 US10/259,237 US25923702A
Authority
US
United States
Prior art keywords
logical unit
client
further including
data
logical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/259,237
Inventor
Charles Maurer
Sujit Naik
Ananthan Pillai
Thomas Dings
Michael Wright
John Stockenberg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EMC Corp
Original Assignee
EMC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EMC Corp filed Critical EMC Corp
Priority to US10/259,237
Assigned to EMC CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: WRIGHT, MICHAEL H., DINGS, THOMAS L., MAURER, CHARLES F., III, NAIK, SUJIT SURESH, PILLAI, ANANTHAN K., STOCKENBERG, JOHN E.
Publication of US20030065780A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1456Hardware arrangements for backup
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1458Management of the backup or restore process
    • G06F11/1469Backup restoration techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/80Database-specific techniques

Definitions

  • the invention relates generally to managing data in a data storage environment, and more particularly to a system and method for managing replication of data distributed over one or more computer systems.
  • computer systems which process and store large amounts of data typically include one or more processors in communication with a shared data storage system in which the data is stored.
  • the data storage system may include one or more storage devices, usually of a fairly robust nature and useful for storage spanning various temporal requirements, e.g. disk drives.
  • the one or more processors perform their respective operations using the storage system.
  • the computer systems also can include a backup storage system in communication with the primary processor and the data storage system. Often the connection between the one or more processors and the backup storage system is through a network in which case the processor is sometimes referred to as a “backup client.”
  • the backup storage system can include a backup storage device (such as tape storage or any other storage mechanism), together with a system for placing data into the storage device and recovering the data from that storage device.
  • the client copies data from the shared storage system across the network to the backup storage system.
  • an actual data file may be communicated over the network to the backup storage device.
  • the shared storage system corresponds to the actual physical storage.
  • For the client to write the backup data over the network to the backup storage system, the client first converts the backup data into file data; i.e., the client retrieves the data from the physical storage system level, and converts the data into application level format (e.g. a file) through a logical volume manager level, a file system level and the application level.
  • the backup storage system can take the application level data file, and convert it to its appropriate file system level format for the backup storage system. The data can then be converted through the logical volume manager level and into physical storage.
  • EMC Data Manager is capable of such backup and restore over a network, as described in numerous publications available from EMC of Hopkinton, Mass., including the EDM User Guide (Network) “Basic EDM Product Manual”.
  • a backup storage architecture in which a direct connection is established between the shared storage system and the backup storage system was conceived.
  • Such a system is described in U.S. Pat. No. 6,047,294, assigned to assignee of the present invention, and entitled Logical Restore from a Physical Backup in Computer Storage System and herein incorporated by reference.
  • the present invention is a system and method for management of data replicated across one or more computer systems.
  • the method of this invention allows management of data that may be replicated across one or more computer systems.
  • the method includes the computer-executed steps of establishing one or more mirrored copies of data that are copies of one or more volumes of data that are part of a first volume group on a first computer system.
  • the mirrored copies of data are separated or split from the respective one or more volumes of data.
  • Steps include the discovering of logical information related to the one or more volumes of data that are part of the volume group on the first computer system.
  • a map is created from the discovered information to map logical information to physical devices on the first computer system. Then a duplicate of the one or more mirrored copies of data is mounted on the second computer system by using the map to create a second volume group that is substantially identical to the first volume group.
  • the invention includes a system for carrying out method steps.
  • the invention includes a program product for carrying out method steps.
  • a storage system performs a restore by utilizing logical volume or unit swapping.
  • FIG. 1 is a block diagram of a data storage network including host computer systems, a data storage system, and a backup system, and also including logic for enabling the method of the present invention;
  • FIG. 2 is an exemplary representation of a computer-readable medium encoded with the logic of FIG. 1 for enabling the method of the present invention
  • FIG. 3 is a schematic representation of the data storage network of FIG. 1 in which the invention may be configured to operate with standard and BCV devices for implementing the method of this invention;
  • FIG. 4 is a representation of an embodiment of the logic of FIG. 1 and showing a preferred functional structure
  • FIG. 5 is a representation of a general overview of the method steps of this invention.
  • FIG. 6 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5;
  • FIG. 7 is another flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5;
  • FIG. 8 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5;
  • FIG. 9 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5;
  • FIG. 10 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5;
  • FIG. 11 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention which are generally part of the steps shown in FIG. 5;
  • FIG. 12 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5;
  • FIG. 13 is a schematic block diagram of a storage system having logical unit swapping in accordance with the present invention shown in an initial configuration
  • FIG. 14 is a schematic block diagram of the storage system of FIG. 13 shown during creation of a logical unit mirror.
  • FIG. 15 is a schematic block diagram of the storage system of FIG. 13 shown after creation of the mirror.
  • FIG. 16 is a schematic block diagram of the storage system of FIG. 13 shown with logical unit swapping
  • FIG. 17 is a schematic block diagram of the storage system of FIG. 13 shown after restoration of data from the mirror of FIG. 13;
  • FIG. 18 is a flow diagram showing an exemplary top level sequence of steps for implementing logical unit swapping in a storage array in accordance with the present invention
  • FIG. 19 is a flow diagram showing further details for a portion of the flow diagram of FIG. 18;
  • FIG. 20 is a flow diagram showing a further portion of the flow diagram of FIG. 18;
  • FIG. 21 is a flow diagram showing additional details for a portion of the flow diagram of FIG. 20.
  • FIG. 22 is a flow diagram showing further details for a portion of the flow diagram of FIG. 20.
  • the methods and apparatus of the present invention are intended for use with data storage systems, such as the Symmetrix Integrated Cache Disk Array system available from EMC Corporation of Hopkinton, Mass. Specifically, this invention is directed to methods and apparatus for use in systems of this type that include transferring a mirrored set of data from a standard device to a redundant device for use in applications such as backup or error recovery, but which is not limited to such applications.
  • the methods and apparatus of this invention may take the form, at least partially, of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, random access or read only-memory, or any other machine-readable storage medium.
  • When the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
  • the methods and apparatus of the present invention may also be embodied in the form of program code that is transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via any other form of transmission.
  • When the program code is received and loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
  • When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to specific logic circuits.
  • the logic for carrying out the method is embodied as part of the system described below beginning with reference to FIG. 1.
  • One aspect of the invention is embodied as a method that is described below with detailed specificity in reference to FIGS. 5 - 12 .
  • Referring now to FIG. 1, a data storage network 100 in which the invention is particularly useful includes a data storage system 119, host computer systems 113 a and 113 b, and backup system 200.
  • the data storage system is a Symmetrix Integrated Cache Disk Array available from EMC Corporation of Hopkinton, Mass.
  • Such a data storage system and its implementation are fully described in U.S. Pat. No. 6,101,497 issued Aug. 8, 2000, and also in U.S. Pat. No. 5,206,939 issued Apr. 27, 1993, each of which is assigned to EMC, the assignee of this invention, and each of which is hereby incorporated by reference. Consequently, the following discussion makes only general references to the operation of such systems.
  • the invention is useful in an environment wherein replicating to a local volume denoted as a business continuance volume (BCV) is employed (FIG. 2).
  • Such a local system, which employs mirroring for allowing access to production volumes while performing backup, is also described in the '497 patent incorporated herein.
  • the data storage system 119 includes a system memory 114 and sets or pluralities 115 and 116 of multiple data storage devices or data stores.
  • the system memory 114 can comprise a buffer or cache memory; the storage devices in the pluralities 115 and 116 can comprise disk storage devices, optical storage devices and the like. However, in a preferred embodiment the storage devices are disk storage devices.
  • the sets 115 and 116 represent an array of storage devices in any of a variety of known configurations.
  • a host adapter (HA) 117 provides communications between the host system 113 and the system memory 114 ; disk adapters (DA) 120 and 121 provide pathways between the system memory 114 and the storage device pluralities 115 and 116 .
  • a bus 122 interconnects the system memory 114 , the host adapters 117 and 118 and the disk adapters 120 and 121 .
  • Each system memory 114 and 141 is used by various elements within the respective systems to transfer information and interact between the respective host adapters and disk adapters.
  • a backup storage system 200 is connected to the data storage system 119 .
  • the backup storage system is preferably an EMC Data Manager (EDM) connected to the data storage system as described in Symmetrix Connect User Guide, P/N 200-113-591, Rev. C, December 1997, available from EMC Corporation of Hopkinton, Mass.
  • the direct connection between the shared storage system and the backup storage system may be provided as a high-speed data channel 123 such as a SCSI cable or one or more fiber-channel cables. In this system, a user may be permitted to backup data over the network or the direct connection.
  • Backup system 200 includes a backup/restore server 202 , Logic 206 as part of the server, and a tape library unit 204 that may include tape medium (not shown) and a robotic picker mechanism (also not shown) as is available on the preferred EDM system.
  • Logic 206 is installed and becomes part of the EDM for carrying out the method of this invention and the EDM becomes at least part of a system for carrying out the invention.
  • Logic 206 is preferably embodied as software for carrying out the methods of this invention and is preferably included at least as part of a backup/restore server 202 in communication with the data storage system 119 through an adapter 132 (e.g., a SCSI adapter) along communication path 123 .
  • Substantially identical logic may also be installed as software on any host computer system such as 113 a or 113 b, shown as logic 206 a and 206 b, respectively.
  • the software is Unix-based and daemons are launched by the software for execution where needed on the backup system or host computers. The daemons on each of these computers communicate through sockets.
  • the Logic of this invention in a preferred embodiment is computer program code in the Perl programming language. As shown in FIG. 2, it may be carried out from a computer-readable medium such as CD-ROM 198 encoded with Logic 206 that acts in cooperation with normal computer electronic memory as is known in the art.
  • Perl is a Unix-based language (see, e.g., Programming Perl, 2nd Edition by Larry Wall, Randal L. Schwartz, and Tom Christiansen, published by O'Reilly and Associates). Nevertheless, one skilled in the computer arts will recognize that the logic, which may be implemented interchangeably as hardware or software, may be implemented in various fashions in accordance with the teachings presented now.
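  • As a hedged illustration of the daemon-and-socket arrangement described above, the following minimal Perl sketch shows a per-host worker daemon that listens on a TCP socket and dispatches one-line commands. The port number, command names, and handlers are invented for illustration and are not taken from the EDM product.

```perl
#!/usr/bin/perl
# Minimal sketch of a per-host worker daemon; port and command names are illustrative only.
use strict;
use warnings;
use IO::Socket::INET;

my $listener = IO::Socket::INET->new(
    LocalPort => 7400,     # hypothetical port
    Listen    => 5,
    Reuse     => 1,
) or die "cannot listen: $!";

my %dispatch = (
    DISCOVER => sub { "discovery stub\n" },   # would map logical to physical devices
    SPLIT    => sub { "split stub\n"     },   # would split BCVs from standard volumes
);

while (my $peer = $listener->accept) {
    chomp(my $request = <$peer> // '');
    my $handler = $dispatch{$request};
    print {$peer} $handler ? $handler->() : "ERROR unknown command\n";
    close $peer;
}
```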
  • the data storage system 119 operates in response to commands from one or more computer or host systems, such as the host systems 113 a and 113 b, that are each connected via a host adapter, such as host adapters 117 and 118 .
  • the host adapters 117 and 118 transfer commands to a command buffer that is part of system memory 114 .
  • the command buffer stores data structures and write requests that the disk adapters generate.
  • the disk adapters such as the disk adapters 120 or 121 , respond by effecting a corresponding operation using the information in a command buffer.
  • the selected disk adapter then initiates a data operation. Reading operations transfer data from the storage devices to the system memory 114 through a corresponding disk adapter and subsequently transfer data from the system memory 114 to the corresponding host adapter, such as host adapter 117 when the host system 113 a initiates the data reading operation.
  • the computer systems 113 a and 113 b may be any conventional computing system, each having an operating system, such as a system available from Sun Microsystems, and running the Solaris operating system (a version of Unix), an HP system running HP-UX (a Hewlett-Packard client, running a Hewlett-Packard version of the Unix operating system) or an IBM system running the AIX operating system (an IBM version of Unix) or any other system with an associated operating system such as the WINDOWS NT operating system.
  • a physical disk is formatted into a “physical volume” for use by management software such as Logical Volume Manager (LVM) software available from EMC.
  • Each physical volume is split up into discrete chunks, called physical partitions or physical extents. Physical volumes are combined into a “volume group.”
  • a volume group is thus a collection of disks, treated as one large storage area.
  • a “logical volume” consists of some number of physical partitions/extents, allocated from a single volume group.
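  • To make the volume-group terminology above concrete, the following Perl sketch models the hierarchy of physical volumes, a volume group, and logical volumes allocated from physical extents. All names and sizes are invented for illustration.

```perl
use strict;
use warnings;

# Illustrative model only: a volume group collects physical volumes, and each
# logical volume is allocated from physical extents of that single group.
my %volume_group = (
    name             => 'vg_app01',                # hypothetical names throughout
    physical_volumes => [ 'hdisk1', 'hdisk2' ],
    extent_size_mb   => 32,
    logical_volumes  => {
        lv_data => { extents => 256, mount_point => '/u01' },
        lv_logs => { extents => 64,  mount_point => '/u02' },
    },
);

for my $lv (sort keys %{ $volume_group{logical_volumes} }) {
    my $info = $volume_group{logical_volumes}{$lv};
    printf "%s: %d MB, mounted at %s\n",
        $lv, $info->{extents} * $volume_group{extent_size_mb}, $info->{mount_point};
}
```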
  • A filesystem is, simply stated, a structure or a collection of files. In Unix, “filesystem” can refer to two very distinct things: the directory tree or the arrangement of files on disk partitions.
  • the data storage system 119 creates a mirror image (copy or replication) of a source or standard volume.
  • a mirror is denoted as a business continuance volume (BCV), also referred to in general terms as a mirrored disk, and in such a context specifically as a BCV device. If data on the standard volume changes, the same changes are immediately applied to the mirrored disk.
  • the preferred Symmetrix data storage system 119 isolates the mirrored version of the disk and no further changes are applied to the mirrored volume. After a split is complete, the primary disk can continue to change but the mirror maintains the point-in-time data that existed at the time of the split.
  • Mirrors can be “synchronized” in either direction (i.e., from the BCV to the standard or vice versa). For example, changes from the standard volume that occurred after a split to the mirror can be applied to the BCV or mirrored disk. This brings the mirrored disk current with the standard. If you synchronize in the other direction you can make the primary disk match the mirror. This is often the final step during a restore.
  • A BCV device and its corresponding BCV volume or volumes are more readily understood in terms of data sets stored in logical volumes, and this view is useful for understanding the present invention.
  • any given logical volume may be stored on a portion or all of one physical disk drive or on two or more disk drives.
  • disk adapter 120 (FIG. 1) controls the operations of a series of physical disks 115 that are shown in FIG. 3 in terms of logical volumes 212 .
  • the segmentation or hypering of physical disks into logical volumes is well known in the art.
  • a disk adapter interfaces logical volumes 214 to the data storage system bus 122 (FIG. 1).
  • Each of these volumes 214 is defined as a Business Continuation Volume and is designated a BCV device.
  • BCV device comprises a standard disk controller and related disk storage devices as shown in FIG. 1 especially configured to independently support applications and processes.
  • the use of these BCV devices enables a host such as host 113 a, described from here on as the “source” host computer system to utilize instantaneous copies of the data in the standard volumes 212 .
  • Although the invention has particular advantages when the target and source host computer systems are separate, distinct computers, there may also be advantages in having the two combined together.
  • the target and source computer may be integrally combined as one computer.
  • host 113 a may continue online transaction processing (such as database transaction processing) or other processing without any impact or load on the volumes 212 , while their respective mirror images on BCV's 214 are used to back up data in cooperation with backup system 200 .
  • the BCV's may be established for use on another host substantially automatically under control of a computer program, rather than requiring intervention of an operator all along the way. The advantages and details associated with such an operation are described below.
  • the direction of data flow for backup is from the data storage system 119 to the backup system 200 as represented by arrow 211 .
  • the direction of data flow for restore is to the data storage system (opposite from arrow 211 ), but the BCV's may be mounted on another host other than the one originally established in accordance with the method of this invention.
  • the EDM normally offers several options for controlling mirror behavior before and after a backup or restore, which are incorporated with this invention and are therefore discussed now at a high level. (Further detail about such known polices may be found in a white paper available from EMC: Robert Boudrie and David Dysert, EMC Best Practices: Symmetrix Connect and File Level Granularity. )
  • mirror management can be configured to do any of the following:
  • the invention includes a method for managing data that may be replicated across one or more computer systems.
  • the method is carried out in the above-described environment by the Logic of this invention, which in a preferred embodiment is a program code in the Perl programming language as mentioned above.
  • the method includes the computer-executed steps of establishing one or more mirrored copies of data (BCV's) that are copies of one or more volumes of data (Standard Volumes).
  • BCV's are established in a conventional manner as described in the incorporated '497 patent.
  • the BCV's are separated or split from the respective one or more volumes of data in a conventional manner, which is also described in the incorporated '497 patent.
  • the Standard volumes are part of a volume group on the source computer system 113 a that has an operating system 210 a (FIG. 3).
  • the operating system is preferably a Unix operating system, such as Solaris from Sun Microsystems of California, AIX from IBM of New York, or HP-UX from Hewlett Packard of California.
  • the method further includes discovering logical information related to the Standard volumes that are part of the volume group on the source computer system 113 a .
  • a map of the logical information to physical devices on the source computer system is created, preferably in the form of a flat file that may be converted into a tree structure for fast verification of the logical information. That map is used to build a substantially identical logical configuration on the target computer system 113 b, preferably after the logical information has been verified by using a tree structure configuration of the logical information.
  • the logical configuration is used to mount a duplicate of the BCV's on the target computer system (denoted as mounted target BCV's).
  • the newly mounted target BCV's then become part of a second volume group on the target computer system 113 b that has an operating system 210 b.
  • the operating system is preferably a Unix operating system, such as Solaris from Sun Microsystems of California, AIX from IBM of New York, or HP-UX from Hewlett Packard of California.
  • the invention is particularly useful when data on the standard volumes and BCV's represents data related to an application 208 a and/or application 208 b, and particularly if the application is a database, such as an Oracle database available from Oracle Corporation of Redwood, Calif.
  • the logic 206 includes program code that enables certain functions and may be thought of as code modules, although the code may or may not be actually structured or compartmentalized in modular form, i.e., this illustrated concept is more logical than physical.
  • D/M module 300 serves a discovery/mapping function
  • E/S module 302 serves an establish/split function
  • B/M module 304 serves a build/mount function
  • B/R module 306 serves a backup/restore function
  • D/C module 308 serves a dismount/cleanup function. Any of the functions may be accomplished by calling a procedure for running such a function as part of the data storage system and the backup system.
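  • The five functions above suggest a simple dispatch structure. The sketch below is an assumption about how such logic might route a requested phase to its handler; the subroutine names and dispatch table are illustrative and do not reflect the actual layout of logic 206.

```perl
use strict;
use warnings;

# Hypothetical handlers for the five functional modules of FIG. 4.
sub discover_and_map   { print "discover/map logical-to-physical devices on the source host\n" }
sub establish_or_split { print "establish or split BCVs per the mirror policy\n" }
sub build_and_mount    { print "build volume group, logical volume, and filesystem on the target\n" }
sub backup_or_restore  { print "back up or restore the exported BCV data\n" }
sub dismount_cleanup   { print "remove target objects and re-establish the BCVs\n" }

my %module = (
    'D/M' => \&discover_and_map,
    'E/S' => \&establish_or_split,
    'B/M' => \&build_and_mount,
    'B/R' => \&backup_or_restore,
    'D/C' => \&dismount_cleanup,
);

# Run the phases in the order of the overview in FIG. 5.
$module{$_}->() for 'D/M', 'E/S', 'B/M', 'B/R', 'D/C';
```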
  • the discovery/mapping function discovers and maps logical to physical devices on the source host 113 a, and includes such information as physical and logical volumes, volume groups, and file system information.
  • the establish/split function establishes BCV's or splits such from standard volumes, depending on the pre- and post-mirror policies in effect on source host 113 a.
  • the build/mount function substantially exports the BCV's established on the source host 113 a to the target host 113 b. It creates volume group, logical volume, and file system objects on the target host computer system.
  • the backup/restore function performs backup of the target host BCV data that has been exported or migrated from the source host.
  • the dismount/cleanup function removes all volume group, logical volume, and filesystem objects from the target host.
  • FIG. 5 shows an overview of the entire process.
  • the logic 206 maps logical to physical devices on the source host.
  • the logic establishes BCV's or splits them from the standard volumes (which may be accomplished by a call to another function on the data storage system) in accordance with the mirror policy in effect at the source host.
  • In step 404 the logic builds and mounts on the target host so that the BCV's are exported or migrated to the target host.
  • Step 406 is a step for Backup and/or Restore, as described in more detail below.
  • Step 408 is a cleanup step in which all volume group, logical volume, and filesystem objects are removed from the target server.
  • FIG. 6 is an overview of the steps of the mapping and discovery process. Such processing begins in step 500 .
  • the filesystem is discovered on the source host in step 502 .
  • the logical volume is discovered in step 504 .
  • the volume group information is discovered on the source host in step 506 .
  • the map is created preferably as a flat file because that is an efficient data structure for compiling and using the information.
  • the method uses a data storage system input file.
  • the input file is a three-column file that contains a list of the standard and BCV device serial numbers containing the data and the data copies respectively, and the physical address of the BCV devices.
  • Mapping file (for IBM AIX): The .std mapping file is generated by a Unix-based call with the -std option flag.
  • the .std mapping file is a multi-columned file of information about the Standard devices. The columns may include: 1. Device Serial Number (from the Data Storage System); 2. Physical Address (e.g., hdisk1); 3. Volume Group; 4. Logical Volume Name; 5. Volume Group Partition Size; 6. File Type; 7. Mount Point; 8. Logical Volume Partition Size; 9. Logical Volume source journal log; 10. Logical Volume number of devices striped over; 11. Logical Volume Stripe Size.
  • Mapping file (for HP-UX): The .std mapping file is generated by a Unix-based call with the -std option flag.
  • the .std mapping file is a multi-columned file of information about the Standard devices. The columns may include: 1. Device Serial Number (from the data storage system input file); 2. Physical Address (e.g., c0d0t1); 3. Volume Group; 4. Logical Volume Name; 5. Logical Volume Number; 6. File Type; 7. Mount Point.
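  • As a sketch of how such a multi-column mapping file might be consumed, the following Perl reads one whitespace-delimited record per line into a hash keyed by device serial number, using the AIX column order listed above. The delimiter, field names, and file name are assumptions.

```perl
use strict;
use warnings;

# Assumed whitespace-delimited .std mapping records, AIX column order as listed above.
my @columns = qw(serial physical_address volume_group logical_volume vg_partition_size
                 file_type mount_point lv_partition_size journal_log stripe_width stripe_size);

my %standard_device;
open my $fh, '<', 'host.std' or die "cannot open mapping file: $!";   # hypothetical file name
while (my $line = <$fh>) {
    next if $line =~ /^\s*(#|$)/;      # skip comments and blank lines
    my %record;
    @record{@columns} = split ' ', $line;
    $standard_device{ $record{serial} } = \%record;
}
close $fh;

printf "%s -> %s in %s, mounted at %s\n",
    $_->{serial}, $_->{physical_address}, $_->{volume_group}, $_->{mount_point}
    for values %standard_device;
```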
  • step 600 uses the flat file to create a tree structure.
  • This structure is preferably built by a unix function call from information in the mapping files described above. It may be built on both the target host computer system and the source host computer system. It is referred to as a tree because the Volume group information may be placed as the root of the tree and the branches represent the device information within the group and the logical volumes within the group.
  • It is used in step 602 to verify the accuracy of the map file before the map file is sent to the target host.
  • the tree is converted to a map preferably as a flat file in step 604 . This flat file map is then sent back to the target in step 606 .
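  • A hedged sketch of the tree step: starting from parsed mapping records such as those produced by the previous sketch, the code groups records under their volume group as the root, with device and logical-volume branches, and then flattens the tree back into flat-file lines for the target host. The structure and field names are illustrative assumptions.

```perl
use strict;
use warnings;

# Build a tree: the volume group is the root; devices and logical volumes are branches.
sub build_tree {
    my (@records) = @_;
    my %tree;
    for my $r (@records) {
        my $vg = $r->{volume_group} or die "record without a volume group\n";  # verification
        push @{ $tree{$vg}{devices} }, $r->{physical_address};
        push @{ $tree{$vg}{logical_volumes} },
            { name => $r->{logical_volume}, mount_point => $r->{mount_point} };
    }
    return \%tree;
}

# Flatten the verified tree back into flat-file lines to send to the target host.
sub flatten_tree {
    my ($tree) = @_;
    my @lines;
    for my $vg (sort keys %$tree) {
        push @lines, join(' ', $vg, $_->{name}, $_->{mount_point})
            for @{ $tree->{$vg}{logical_volumes} };
    }
    return @lines;
}

my $tree = build_tree(
    { volume_group => 'vg01', physical_address => 'hdisk1',
      logical_volume => 'lv_data', mount_point => '/u01' },   # sample record, invented values
);
print "$_\n" for flatten_tree($tree);
```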
  • The process of establishing/splitting with a backup system is started in step 700.
  • the mirror policy is checked in step 702 .
  • An inquiry is posed in step 704 to determine if BCV's are established in accordance with the mirror policy. If the answer is no then BCV's are established in step 706 .
  • the BCV's are split from the source host in step 708 .
  • the BCV's are made not ready to the host in step 710 .
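  • The establish/split decision of FIG. 7 can be sketched as follows. The helper subroutines stand in for calls to the data storage system and are hypothetical; they are not actual TimeFinder or SYMCLI command syntax.

```perl
use strict;
use warnings;

# Hypothetical wrappers around storage-system calls; not real TimeFinder/SYMCLI syntax.
sub bcv_is_established { my ($pair) = @_; return 0 }   # stub: would query the mirror state
sub establish_bcv      { my ($pair) = @_; print "establish $pair->{std} -> $pair->{bcv}\n" }
sub split_bcv          { my ($pair) = @_; print "split $pair->{bcv} from $pair->{std}\n" }
sub make_not_ready     { my ($pair) = @_; print "set $pair->{bcv} not-ready to the source host\n" }

sub establish_and_split {
    my ($pair, $mirror_policy) = @_;
    # Steps 704/706: establish only if the policy expects an established mirror and it is not.
    establish_bcv($pair)
        if $mirror_policy eq 'established' && !bcv_is_established($pair);
    split_bcv($pair);        # step 708: split the BCV from the source host's standard volume
    make_not_ready($pair);   # step 710: make the BCV not ready to the source host
}

establish_and_split({ std => 'STD-0042', bcv => 'BCV-0042' }, 'established');  # invented names
```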
  • The process of building/mounting logical information so that the BCV's can be mounted on the target is begun in step 800.
  • the volume groups are created on the target in step 802.
  • Logical volumes are created on the target in step 804 .
  • the filesystem is created on the target in step 806 .
  • the device mount may now be completed with this logical information related to the BCV's on the target host in step 808 .
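  • A minimal dry-run sketch of the build/mount phase of FIG. 8. The command names printed below are placeholders, since the real volume group, logical volume, and filesystem commands differ by Unix flavor (e.g., AIX versus HP-UX), and the device and mount-point names are invented.

```perl
use strict;
use warnings;

# Dry-run sketch of the build/mount phase; the printed command names are placeholders
# because the actual commands (e.g., mkvg/vgcreate, mklv/lvcreate, crfs/newfs) vary by platform.
my $dry_run = 1;

sub run {
    my (@cmd) = @_;
    print "would run: @cmd\n";
    return if $dry_run;
    system(@cmd) == 0 or die "command failed: @cmd\n";
}

my %target = ( volume_group => 'vg01', logical_volume => 'lv_data',
               device => '/dev/rdsk/c1t2d0', mount_point => '/u01' );   # invented names

run('create-volume-group',   $target{volume_group},   $target{device});                     # step 802
run('create-logical-volume', $target{logical_volume}, $target{volume_group});               # step 804
run('create-filesystem',     "/dev/$target{volume_group}/$target{logical_volume}");         # step 806
run('mount', "/dev/$target{volume_group}/$target{logical_volume}", $target{mount_point});   # step 808
```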
  • the newly mounted target BCV's may now be backed up in step 900 .
  • the application is then shut down on the target in step 902 .
  • If the software application on the target host and the source host is a database, information related to the data may also be backed up, with the effect that essentially the entire database is backed up.
  • Important information from the database includes any transactional data performed by the database operations, and related control files, table spaces, and archives/redo logs.
  • Control files contain important information in the Oracle database, including information that describes the instance where the datafiles and log files reside.
  • Datafiles may be files on the operating system filesystem.
  • a related term is tablespace that is the lowest logical layer of the Oracle data structure.
  • the tablespace consists of one or more datafiles.
  • the tablespace is important in that it provides the finest granularity for laying out data across datafiles.
  • In the database there are archive files known as redo log files or simply as the redo log. This is where information that will be used in a restore operation is kept. Without the redo log files a system failure would render the data unrecoverable. When a log switch occurs, the log records in the filled redo log file are copied to an archive log file if archiving is enabled.
  • Step 1002 poses an inquiry to determine if the restore is to be from the BCV's on the target or tape. In accordance with the answer the standard volumes are synchronized or restored from the target mounted BCV's or tape, respectively in steps 1004 or 1006 .
  • Step 1008 begins the notification and cleanup steps that are generally described in FIG. 12.
  • the cleanup/dismount process begins in step 1100 as shown in FIG. 12.
  • the BCV's are dismounted from the target in step 1102. This may be accomplished, for example, with the UNIX umount command.
  • the cleanup is completed in step 1108 .
  • the BCV's are re-established on the source (i.e., made ready to the host) in step 1108.
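  • The dismount/cleanup phase of FIG. 12 might be sketched as below. Only umount is a real, generally available Unix command; the object-removal and re-establish helpers are stand-ins for platform- and array-specific calls, and the names are invented.

```perl
use strict;
use warnings;

# Hypothetical helpers; only umount below names a real, generally available command.
sub remove_target_objects { print "remove volume group, logical volume, and filesystem objects\n" }
sub reestablish_bcv       { my ($pair) = @_; print "re-establish $pair->{bcv} to $pair->{std}\n" }

sub dismount_and_cleanup {
    my ($mount_point, $pair) = @_;
    print "would run: umount $mount_point\n";   # step 1102, shown rather than executed
    remove_target_objects();                    # cleanup of target-host objects
    reestablish_bcv($pair);                     # step 1108: make the BCVs ready to the source again
}

dismount_and_cleanup('/u01', { std => 'STD-0042', bcv => 'BCV-0042' });   # invented names
```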
  • In another aspect of the invention, a data storage system includes a storage array having logical volumes or units that can be accessed by one or more clients via a switch.
  • a first logical unit can be replicated to create a copy, i.e., a mirrored BCV, of the first logical unit within the storage array.
  • the mirrors are split so that write operations to disk no longer affect the copy.
  • the storage array can provide access to the copy of the first logical unit by the client by swapping the logical unit accessed by the host.
  • the client and/or client application is not aware that the first logical unit, e.g., the original or source logical unit, is no longer being accessed. If desired, a restore can be performed from the copy to the first logical unit, and application access to the first logical unit can be provided after mirror synchronization for the restore is complete.
  • FIG. 13 shows an exemplary system 1200 including a storage array 1202 having a series of logical units 1204 a -N and a client 1206 that can access the storage array 1202 via a switch 1208 , such as a Fiber Channel Switch.
  • the client 1206, such as a Unix-based workstation, includes an adapter 1206 a, a disk 1206 b, and an application 1206 c, e.g., an Oracle database.
  • the storage array 1202 includes a host/disk adapter 1203 for providing access to the logical volumes or units, as described above.
  • the client 1206 and/or application 1206 c access the first logical unit 1204 a as described above. That is, the first logical unit 1204 a is presented to the client 1206 as a SCSI device having SCSI attributes: a port, e.g., Port 0; a target, e.g., target 0; and a LUN address, e.g., LUN 0.
  • a copy of the first logical unit 1204 a can be created on a second logical unit 1204 b of the storage array 1202 . That is, mirror synchronization occurs and subsequent writes to the first logical unit 1204 a are also updated on the second logical unit 1204 b. As shown in FIG. 15, at a given time, after synchronization, the mirror is split so that writes to the first logical unit 1204 a are no longer made to the copy on the second logical unit 1204 b.
  • the first logical unit 1204 a contains the same data as the client disk 1206 b, for example, and the second logical unit 1204 b is a point-in-time copy of the first logical unit 1204 a that can be restored to the first logical unit 1204 a at a later time.
  • the storage array 1202 provides access to the second logical unit 1204 b, i.e., the copy, instead of the first logical unit 1204 a without the client's knowledge. That is, the storage array 1202 swaps client access from the first logical unit 1204 a to the second logical unit 1204 b, as described more fully below. With this arrangement, a disk-based “instant restore” by logical unit swapping can be provided.
  • the first logical unit 1204 a which can be provided as a new disk, can be restored from the second logical unit 1204 b after client application access to disk 1206 b is stopped during the mirror synchronization.
  • the storage array 1202 can optionally again provide a connection to the first logical unit 1204 a for the client 1206 and retain the copy on the second logical unit 1204 b.
  • the first logical unit 1204 a is now the restored contents of the copy on the second logical unit 1204 b, which is available for subsequent restore operations as desired.
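  • The swap itself can be pictured as re-pointing the address the client sees (port/target/LUN) from one internal logical unit to another while the client's SCSI address stays fixed. The Perl sketch below models that presentation table in memory; the structure and names are illustrative and do not represent an actual array API.

```perl
use strict;
use warnings;

# Illustrative in-memory model of the array's presentation table: the client's
# SCSI address (port/target/LUN) stays fixed while the backing logical unit changes.
my %presentation = (
    'port0:target0:lun0' => 'logical_unit_1204a',   # what client 1206 sees initially
);

sub swap_logical_unit {
    my ($address, $new_unit) = @_;
    my $old_unit = $presentation{$address};
    $presentation{$address} = $new_unit;    # the client keeps addressing the same port/target/LUN
    return $old_unit;
}

my $previous = swap_logical_unit('port0:target0:lun0', 'logical_unit_1204b');
print "client still addresses port0:target0:lun0, now backed by ",
      "$presentation{'port0:target0:lun0'} (was $previous)\n";
```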
  • FIG. 18 shows a high level flow diagram having an exemplary sequence of steps for implementing data restores by logical unit swapping in accordance with the present invention.
  • a solve for storage module, which is described in FIG. 19, obtains host, storage set, logical unit (LUN) information, and the like, that is associated with the restore.
  • the data replication for a first logical unit is performed in step 2002 .
  • In step 2004 the data is restored, as described in detail below.
  • FIG. 19 shows further details of the solve for storage module of FIG. 18.
  • the solve for storage module obtains information for the various storage types, such as the original host device, storage set, and LUN, which define the STD and the BCV.
  • For the operating system object (OSO), the underlying filesystem is identified.
  • the physical device (raw device) node(s) is discovered in step 2104 .
  • In step 2108 the storage array details are obtained, such as WWN, subsystem name, and any other information that would be required to identify and communicate with the particular storage array.
  • In step 2110 the system determines the storage array device unit name to which the physical name corresponds. This converts a host physical device (e.g. /dev/rdsk/c?t?d?s?) to the corresponding LUN on the array.
  • In step 2112 the system checks the STD unit type to see if it can be replicated.
  • the logical unit can contain data in one of the following formats: JBOD (just a bunch of disks), RAID 0 or RAID 1 or RAID 0 +1. Unless otherwise specified, RAID 1 is assumed in this description. The unit should not be a partition of a storage set.
  • RAID 0: Redundant Array of Independent Disks, level 0.
  • RAID 1 refers to a storageset, which is known as a mirrorset, of at least two physical disks that provide an independent copy of the virtual disk.
  • RAID 0 +1 refers to a storageset that stripes data across an array of disks and mirrors the data to a BCV.
  • In step 2114 the system identifies possible “quick” replication disks if the STD unit is RAID 1 or RAID 0 +1.
  • three or more member mirrors indicate that there is at least one mirror that can be used for a quick replication. Without three member mirrors, the replication will take longer.
  • In step 2116 the system obtains storageset details for the STD unit. Exemplary information of interest includes a list of disk members for the stripeset, mirrorset or striped mirrorset, and the size of each member.
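  • Step 2110's translation from a host physical device to an array unit begins by decomposing a Solaris-style raw device name. The regular expression below shows that decomposition; the lookup from the decomposed address to an array unit name is stubbed, since in practice it would come from an array-specific API such as a SymAPI call, and the unit naming shown is invented.

```perl
use strict;
use warnings;

# Decompose a Solaris raw device path of the form /dev/rdsk/c<ctrl>t<target>d<disk>s<slice>.
sub parse_raw_device {
    my ($path) = @_;
    my ($ctrl, $target, $disk, $slice) =
        $path =~ m{^/dev/rdsk/c(\d+)t(\d+)d(\d+)s(\d+)$}
        or die "not a recognizable raw device path: $path\n";
    return { controller => $ctrl, target => $target, disk => $disk, slice => $slice };
}

# Stub: the real mapping to an array unit would come from an array API call (e.g., SymAPI).
sub array_unit_for { my ($addr) = @_; return 'D' . $addr->{disk} }   # invented unit naming

my $addr = parse_raw_device('/dev/rdsk/c0t1d3s2');
printf "controller %d, target %d, disk %d, slice %d -> array unit %s\n",
    @{$addr}{qw(controller target disk slice)}, array_unit_for($addr);
```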
  • FIG. 20 shows a top level sequence of steps for implementing a restore with logical unit swapping in accordance with the present invention.
  • the system prepares for the replication.
  • preparation includes adding a replication disk to the STD mirrorset if the mirrorset has less than a predetermined number, e.g., three, members. If a mirrorset has three or more members when the replication is run, one of the disks already in the mirrorset is selected for its solution. Exemplary commands are set forth below;
  • the system then checks the state of the STD mirrorset members and waits until all members are in a normalized (synchronized) state.
  • In step 2202 the system executes the desired replication.
  • the cloned BCV disk is split or reduced from the STD mirror set using a split or reduce command, for example.
  • the system immediately replaces the reduced disks with additional disks so that the mirrorset is always in a position for a quick replication.
  • a quick replication refers to a process in which the split can occur without having to wait for normalization.
  • the system then creates a one-member BCV mirrorset from the reduced disk.
  • the BCV mirrorset is then initialized so that it is not destroyed and the name of the just-created mirrorset is saved.
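  • The RAID 1 replication sequence above (wait for normalization, reduce a member, replace it, and form a one-member BCV mirrorset) is sketched below. The mirrorset-manipulation subroutines are placeholders for controller commands, not actual command syntax, and the mirrorset and disk names are invented.

```perl
use strict;
use warnings;

# Placeholder controller operations for the RAID 1 replication of FIG. 20.
sub members_normalized { my ($mirrorset) = @_; return 1 }   # stub: would poll member state
sub reduce_member      { my ($mirrorset, $disk) = @_; print "reduce $disk from $mirrorset\n" }
sub add_member         { my ($mirrorset, $disk) = @_; print "add $disk to $mirrorset\n" }
sub create_mirrorset   { my ($name, @disks) = @_; print "create $name from @disks\n"; return $name }

sub replicate_raid1 {
    my ($std_mirrorset, $clone_disk, $spare_disk) = @_;
    sleep 1 until members_normalized($std_mirrorset);    # step 2200: wait for a synchronized state
    reduce_member($std_mirrorset, $clone_disk);           # step 2202: split the clone out
    add_member($std_mirrorset, $spare_disk);              # keep the set ready for a quick replication
    return create_mirrorset('BCV_M1', $clone_disk);       # one-member BCV mirrorset; caller saves the name
}

my $bcv_name = replicate_raid1('STD_M1', 'disk30100', 'disk30200');   # invented mirrorset/disk names
print "saved BCV mirrorset name: $bcv_name\n";
```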
  • the system executes the mount, further details of which are described below in connection with FIG. 21.
  • In step 2300 the system obtains mount host connection information. It is assumed that the mount host has a valid connection(s). The system also obtains mount host HBA information, such as the WWN(s). In general, the host will have multiple connections if it has multiple HBAs. The mount host information can be determined by the connection data.
  • In step 2302 the system defines the BCV LUN for the mount host connection(s). It is understood that the BCV LUN is assigned a LUN based on the offset value of the target connection(s). Offset control refers to the LUN number of the unit that will be made available to the host. In one embodiment, the initial visibility of the LUN is to no device.
  • client connections have decimal offsets for logical units. A logical unit number in an offset for a client has that client “see” the storage defined by that logical unit.
  • the system assigns the BCV LUN to a mount host connection(s).
  • For hosts with multi-paths, such as SecurePath or other host-based device failover software, the system assigns the LUN to all paths involved in the multi-path of the host where the system is to mount the replica.
  • the system should also be aware of the mode in which the controller is configured, e.g., transparent failover or multiple bus failover.
  • the system can also consider the connection offset in order to determine the LUNs that are visible on the connection to the HBA.
  • the preferred path can also be considered.
  • the system makes dynamic BCV LUNs visible on the mount host OS in step 2306. In an exemplary embodiment, this occurs without rebooting the mount host, with the host OS supporting so-called bus rescans.
  • the BCV LUN should be mounted as a specific node. Alternatively, the system can determine the node assigned to the LUN.
  • the system imports volume groups and mounts the filesystem.
  • Preparation for the replication is the same as that described for RAID 1 above except a) that there will be more than one disk to add, since it is a stripe and each column of the stripe needs to have the mirror disk added; and b) there will be more than one set of mirrors to check, since it is a stripe and each column of the stripe needs to have its mirrors normalized.
  • the BCV disks from the striped mirror set are reduced and the mirrors in a striped mirrorset are reduced in one operation.
  • a BCV stripeset is created from the reduced disks. The order of the disks should be the same as the stripeset members that the clone disks came from.
  • the remaining replication steps are substantially similar to those of RAID 1 except that a) the system initializes the replication or clone stripeset that was just created, and b) the system saves the name of the just-created stripeset.
  • the execute mount procedure is also substantially similar to RAID 1 except that the system defines the LUN from the previously created clone stripeset.
  • Replication of a filesystem on a stripeset is similar to that described above for RAID 1 and RAID 0 formats.
  • the STD unit is a stripeset of two or more disks. There are no mirrors of the unit, so the system creates a temporary striped mirrorset or RAID 0 +1 to create a replication or clone of the stripeset. After the cloning has taken place, the system deletes any temporary containers that were created.
  • the system converts a STD stripeset to a striped mirrorset. Each disk member of the stripeset is temporarily turned into a mirrorset. Execution of the replication is substantially similar to RAID 0 +1 and execution of the mount is substantially similar to RAID 1 .
  • FIG. 22 shows an exemplary implementation of restoring a RAID 1 replication with logical unit swapping in accordance with the present invention.
  • the below description assumes that the unit being restored to already exists.
  • The information can come from catalog information created at replication time and can be managed by an instant restore daemon (IRD).
  • filesystems will be created as necessary. It is understood that the raw device is how the OS perceives the LUN. Filesystems are built on top of raw devices. An application can use a raw device or filesystem to store data.
  • In step 2400 the system discovers, for the physical device node(s), the filesystem name being restored to. If the replication was a RAW device, an instant restore daemon (IRD) provides the device node to which the system is restoring.
  • the system determines the STD storage type, e.g., Symmetrix. Storage array details for the STD physical device are obtained in step 2402 . Exemplary details include world wide name (WWN), subsystem name, and any other information that would be required to communicate to the particular storage array.
  • In step 2406 the system determines the array device unit name to which the STD physical name corresponds. This converts a host physical device (e.g. /dev/rdsk/c?t?d?s? for a Solaris system raw device) to the corresponding LUN on the array. In one embodiment, information from a SymAPI call is used.
  • In step 2408 the system obtains STD mirrorset details for the unit, such as a list of disk members for the mirrorset, the size of each member (which may be different), and the status of the mirrorset, and fails if a restore is not possible.
  • the filesystem is unmounted in step 2410 , such as by a function call.
  • In step 2412 the system disables volume groups residing on discovered nodes, and in step 2414 the system renders the STD device “not ready” on the host OS.
  • the “not ready” status of the STD device is preferred since the system may delete the unit.
  • the STD is made “not ready” after disabling access to the LUN (step 2416 ).
  • In step 2416 the system saves the STD LUN information and the STD mirrorset name.
  • In step 2418 the system disables host access to the STD LUN, and in step 2420 the system deletes the unit(s) associated with the STD LUN. The system then deletes the STD mirrorset in step 2422. It is understood that the mirrorset will be recreated after the restore.
  • One member of the STD mirrorset is added to the BCV mirrorset in step 2424 .
  • the BCV mirrorset has the data the system is to restore from.
  • the system can reuse the spindles/disks that belonged to the units that were just deleted.
  • the user should be responsible for creating a LUN with the same RAID configuration and same number of members as the replication or clone.
  • there is still at least one copy of the STD device until the restore from the BCV has completed. This copy is preserved from steps 2420 and 2422 and can be retrieved if the restore fails.
  • In step 2426 the system waits during the synchronization process for the BCV mirrorset to normalize.
  • In step 2428 the system reduces the STD disk from the BCV mirrorset. In one embodiment, the system reduces the disks that were added to the mirrorset.
  • the STD mirrorset is then re-created in step 2430 , such as by using the restored STD disk and the other members of original STD mirrorset.
  • Steps 2426 through 2434 are unnecessary to perform a so-called instant restore, for which waiting, normalization, and preservation of the BCV are not needed.
  • The system then waits for the STD mirrorset to normalize in step 2432. Note that it may be possible to skip this step since the data has been restored, but it is currently unprotected by mirrors while the synchronization is taking place. It may be possible to initiate the mirror synchronization process and then continue.
  • In step 2434 the restored mirrorset is initialized so that it is not destroyed.
  • In step 2436 the STD LUN is re-created using the re-created STD mirrorset.
  • In step 2438 the system assigns the STD LUN to a connection(s). In one embodiment, the system assigns the same unit numbers that were discovered previously. It is assumed that assigning the same unit/LUN number will make it accessible to the same OS node.
  • In step 2440 the OS recognizes the new LUN.
  • In step 2442 the filesystem is mounted or a logical volume is created to complete the restore.
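  • The restore-with-swap sequence of FIG. 22 reduces to the ordered calls sketched below. Each subroutine call is a named placeholder for the step it annotates rather than a real controller or OS interface, and the mirrorset, LUN, and mount-point names are invented.

```perl
use strict;
use warnings;

# Placeholder steps for the RAID 1 restore of FIG. 22; the numbers refer to the flow diagram.
sub step { my ($n, $what) = @_; print "step $n: $what\n" }

sub restore_raid1_with_swap {
    my (%p) = @_;    # std/bcv mirrorset names, STD LUN, and mount point (all invented)
    step(2410, "unmount filesystem $p{mount_point}");
    step(2412, "disable volume groups on the discovered nodes");
    step(2414, "set the STD device not-ready on the host OS");
    step(2418, "disable host access to STD LUN $p{std_lun}");
    step(2420, "delete the unit(s) for $p{std_lun} and delete $p{std_mirrorset}");
    step(2424, "add one STD member to $p{bcv_mirrorset}");
    step(2426, "wait for $p{bcv_mirrorset} to normalize");
    step(2428, "reduce the restored STD disk from $p{bcv_mirrorset}");
    step(2430, "re-create $p{std_mirrorset} from the restored disk and the remaining members");
    step(2436, "re-create STD LUN $p{std_lun} on the re-created mirrorset");
    step(2438, "assign $p{std_lun} to the original connection(s)");
    step(2442, "mount $p{mount_point} to complete the restore");
}

restore_raid1_with_swap(std_mirrorset => 'STD_M1', bcv_mirrorset => 'BCV_M1',
                        std_lun => 'D1', mount_point => '/u01');
```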
  • the saved STD stripeset information can include the stripeset name and the stripeset disk member order, which can be re-created after the restore.
  • the system makes each BCV stripeset member into a one-member mirrorset. This creates a striped mirrorset (RAID 0 +1) from the BCV stripeset.
  • the system adds each member of the STD stripeset to the corresponding BCV mirrorset.
  • the BCV striped mirrorset has the data from which the restoration is performed.
  • the system can reuse the spindles/disks that belonged to the units of the STD stripeset that was just deleted. Note that unlike the case of a RAID 1 restore, there are no valid copies of any STD device once the restore from the BCV has started.
  • RAID 0 +1 (striped mirrors) replication is similar to a RAID 0 restoration with some differences discussed below. It is assumed that the unit being restored to already exists. For RAW backup and replication, the systems may need the device node to exist just as it was when the replica was taken. Filesystems will be created as necessary. In the equivalent of RAID 1 step 2424 , the system adds one member of the STD stripeset to the corresponding BCV mirrorset, which contains the data from which the restoration is performed. The system can reuse the spindles/disks that belonged to the units of the STD stripeset that were just deleted.

Abstract

This invention is a system and method for managing replication of data distributed over one or more computer systems. A data storage system can perform computer-executed steps of establishing one or more mirrored copies of data that are copies of one or more volumes of standard data (e.g. a database) on a first computer system. In one embodiment, the mirrored data is restored using logical volume swapping.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • The present application claims the benefit of U.S. application Ser. No. 09/894,422, filed on Jun. 28, 2001, which is incorporated herein by reference.[0001]
  • A portion of the disclosure of this patent document contains command formats and other computer language listings, all of which are subject to copyright protection. The copyright owner, EMC Corporation, has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. [0002]
  • FIELD OF THE INVENTION
  • The invention relates generally to managing data in a data storage environment, and more particularly to a system and method for managing replication of data distributed over one or more computer systems. [0003]
  • BACKGROUND OF THE INVENTION
  • As is known in the art, computer systems which process and store large amounts of data typically include one or more processors in communication with a shared data storage system in which the data is stored. The data storage system may include one or more storage devices, usually of a fairly robust nature and useful for storage spanning various temporal requirements, e.g. disk drives. The one or more processors perform their respective operations using the storage system. To minimize the chance of data loss, the computer systems also can include a backup storage system in communication with the primary processor and the data storage system. Often the connection between the one or more processors and the backup storage system is through a network in which case the processor is sometimes referred to as a “backup client.”[0004]
  • The backup storage system can include a backup storage device (such as tape storage or any other storage mechanism), together with a system for placing data into the storage device and recovering the data from that storage device. To perform a backup, the client copies data from the shared storage system across the network to the backup storage system. Thus, an actual data file may be communicated over the network to the backup storage device. [0005]
  • The shared storage system corresponds to the actual physical storage. For the client to write the backup data over the network to the backup storage system, the client first converts the backup data into file data; i.e., the client retrieves the data from the physical storage system level, and converts the data into application level format (e.g. a file) through a logical volume manager level, a file system level and the application level. When the backup storage device receives the data file, the backup storage system can take the application level data file, and convert it to its appropriate file system level format for the backup storage system. The data can then be converted through the logical volume manager level and into physical storage. [0006]
  • The EMC Data Manager (EDM) is capable of such backup and restore over a network, as described in numerous publications available from EMC of Hopkinton, Mass., including the EDM User Guide (Network) “Basic EDM Product Manual”. For performance improvements, a backup storage architecture in which a direct connection is established between the shared storage system and the backup storage system was conceived. Such a system is described in U.S. Pat. No. 6,047,294, assigned to assignee of the present invention, and entitled Logical Restore from a Physical Backup in Computer Storage System and herein incorporated by reference. [0007]
  • Today much of the data processing and storage environment is dedicated to the needs of supporting and storing large databases, which only get larger. Although data storage systems, such as the EMC Symmetrix Integrated Cache Disk Array, and some of its supporting software such as TimeFinder have made general advancements in the data storage art through the advanced use of disk mirroring, much of the capability of such technology is beyond the grasp of most entities. This is because of an ever-increasing shortage of skilled computer professionals. Typically, an entity such as a company might employ or contract a data storage administrator to take care of data storage needs, a database programmer to take care of database needs, and general network administrators and other information technology professionals to take care of general computing needs. [0008]
  • If one of these skilled professionals leaves or is difficult to hire, then the task of storing a database and taking care of its backup and restore needs may be neglected or never happen in the first place. What is needed is a computer-based tool, such as a system or program, that could automate many of these tasks and reduce the complexity so that such a wide array or depth of skill sets is not needed. Further, it would be an advantage if such a tool provided solutions for disaster recovery of data. [0009]
  • Prior art systems have allowed for restoration of source or standard data from replicated copies, but there has been no straightforward, simple way to get logical information related to the source so that another computer could take over the role of a failed computer (i.e., serve as a surrogate for the failed computer). There is a long-felt need for a technique to enable extraction of such logical information in a straightforward, non-complex, and fast manner so that a surrogate computer could work with replicated copies in substantially the same manner as the original source computer that had operated with standard data. This would be an advancement in the art with particular relevance in the field of disaster recovery. [0010]
  • SUMMARY OF THE INVENTION
  • The present invention is a system and method for management of data replicated across one or more computer systems. [0011]
  • The method of this invention allows management of data that may be replicated across one or more computer systems. The method includes the computer-executed steps of establishing one or more mirrored copies of data that are copies of one or more volumes of data that are part of a first volume group on a first computer system. The mirrored copies of data are separated or split from the respective one or more volumes of data. Steps include the discovering of logical information related to the one or more volumes of data that are part of the volume group on the first computer system. [0012]
  • A map is created from the discovered information to map logical information to physical devices on the first computer system. Then a duplicate of the one or more mirrored copies of data is mounted on the second computer system by using the map to create a second volume group that is substantially identical to the first volume group. [0013]
  • In an alternative embodiment, the invention includes a system for carrying out method steps. In another alterative embodiment, the invention includes a program product for carrying out method steps. [0014]
  • In a further aspect of the invention, a storage system performs a restore by utilizing logical volume or unit swapping.[0015]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and further advantages of the present invention may be better understood by referring to the following description taken in conjunction with the accompanying drawings in which: [0016]
  • FIG. 1 is a block diagram of a data storage network including host computer systems, a data storage system, and a backup system, and also including logic for enabling the method of the present invention; [0017]
  • FIG. 2 is an exemplary representation of a computer-readable medium encoded with the logic of FIG. 1 for enabling the method of the present invention; [0018]
  • FIG. 3 is a schematic representation of the data storage network of FIG. 1 in which the invention may be configured to operate with standard and BCV devices for implementing the method of this invention; [0019]
  • FIG. 4 is a representation of an embodiment of the logic of FIG. 1 and showing a preferred functional structure; [0020]
  • FIG. 5 is a representation of a general overview of the method steps of this invention; [0021]
  • FIG. 6 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5; [0022]
  • FIG. 7 is another flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5; [0023]
  • FIG. 8 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5; [0024]
  • FIG. 9 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5; [0025]
  • FIG. 10 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5; [0026]
  • FIG. 11 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention which are generally part of the steps shown in FIG. 5; and [0027]
  • FIG. 12 is a flow logic diagram illustrating some method steps of the invention carried out by the logic of this invention that are generally part of the steps shown in FIG. 5; [0028]
  • FIG. 13 is a schematic block diagram of a storage system having logical unit swapping in accordance with the present invention shown in an initial configuration; [0029]
  • FIG. 14 is a schematic block diagram of the storage system of FIG. 13 shown during creation of a logical unit mirror; [0030]
  • FIG. 15 is a schematic block diagram of the storage system of FIG. 13 shown after creation of the mirror; [0031]
  • FIG. 16 is a schematic block diagram of the storage system of FIG. 13 shown with logical unit swapping; [0032]
  • FIG. 17 is a schematic block diagram of the storage system of FIG. 13 shown after restoration of data from the mirror of FIG. 13; [0033]
  • FIG. 18 is a flow diagram showing an exemplary top level sequence of steps for implementing logical unit swapping in a storage array in accordance with the present invention; [0034]
  • FIG. 19 is a flow diagram showing further details for a portion of the flow diagram of FIG. 18; [0035]
  • FIG. 20 is a flow diagram showing a further portion of the flow diagram of FIG. 18; [0036]
  • FIG. 21 is a flow diagram showing additional details for a portion of the flow diagram of FIG. 20; and [0037]
  • FIG. 22 is a flow diagram showing further details for a portion of the flow diagram of FIG. 20.[0038]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • The methods and apparatus of the present invention are intended for use with data storage systems, such as the Symmetrix Integrated Cache Disk Array system available from EMC Corporation of Hopkinton, Mass. Specifically, this invention is directed to methods and apparatus for use in systems of this type that include transferring a mirrored set of data from a standard device to a redundant device for use in applications such as backup or error recovery, but which is not limited to such applications. [0039]
  • The methods and apparatus of this invention may take the form, at least partially, of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, random-access or read-only memory, or any other machine-readable storage medium. When the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. The methods and apparatus of the present invention may also be embodied in the form of program code that is transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via any other form of transmission, and may be implemented such that, when the program code is received and loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to specific logic circuits. [0040]
  • The logic for carrying out the method is embodied as part of the system described below beginning with reference to FIG. 1. One aspect of the invention is embodied as a method that is described below with detailed specificity in reference to FIGS. 5-12. [0041]
  • Data Storage Environment Including Logic for This Invention
  • Referring now to FIG. 1, reference is made to a data storage network 100 in which the invention is particularly useful and which includes a data storage system 119, host computer systems 113 a and 113 b, and a backup system 200. [0042]
  • In a preferred embodiment the data storage system is a Symmetrix Integrated Cache Disk Array available from EMC Corporation of Hopkinton, Mass. Such a data storage system and its implementation is fully described in U.S. Pat. No. 6,101,497 issued Aug. 8, 2000, and also in U.S. Pat. No. 5,206,939 issued Apr. 27, 1993, each of which is assigned to EMC, the assignee of this invention, and each of which is hereby incorporated by reference. Consequently, the following discussion makes only general references to the operation of such systems. [0043]
  • The invention is useful in an environment wherein replicating to a local volume denoted as a business continuance volume (BCV) is employed (FIG. 2). Such a local system which employs mirroring for allowing access to production volumes while performing backup is also described in the '497 patent incorporated herein. [0044]
  • The data storage system 119 includes a system memory 114 and sets or pluralities 115 and 116 of multiple data storage devices or data stores. The system memory 114 can comprise a buffer or cache memory; the storage devices in the pluralities 115 and 116 can comprise disk storage devices, optical storage devices and the like. However, in a preferred embodiment the storage devices are disk storage devices. The sets 115 and 116 represent an array of storage devices in any of a variety of known configurations. [0045]
  • A host adapter (HA) 117 provides communications between the host system 113 and the system memory 114; disk adapters (DA) 120 and 121 provide pathways between the system memory 114 and the storage device pluralities 115 and 116. A bus 122 interconnects the system memory 114, the host adapters 117 and 118 and the disk adapters 120 and 121. Each system memory 114 and 141 is used by various elements within the respective systems to transfer information and interact between the respective host adapters and disk adapters. [0046]
  • A backup storage system 200 is connected to the data storage system 119. The backup storage system is preferably an EMC Data Manager (EDM) connected to the data storage system as described in Symmetrix Connect User Guide, P/N 200-113-591, Rev. C, December 1997, available from EMC Corporation of Hopkinton, Mass. The direct connection between the shared storage system and the backup storage system may be provided as a high-speed data channel 123 such as a SCSI cable or one or more fiber-channel cables. In this system, a user may be permitted to backup data over the network or the direct connection. [0047]
  • Backup system 200 includes a backup/restore server 202, Logic 206 as part of the server, and a tape library unit 204 that may include tape medium (not shown) and a robotic picker mechanism (also not shown) as is available on the preferred EDM system. [0048]
  • Logic 206 is installed and becomes part of the EDM for carrying out the method of this invention and the EDM becomes at least part of a system for carrying out the invention. Logic 206 is preferably embodied as software for carrying out the methods of this invention and is preferably included at least as part of a backup/restore server 202 in communication with the data storage system 119 through an adapter 132 (e.g., a SCSI adapter) along communication path 123. Substantially identical logic may also be installed as software on any host computer system such as 113 a or 113 b, shown as logic 206 a and 206 b, respectively. In a preferred embodiment the software is Unix-based and daemons are launched by the software for execution where needed on the backup system, or host computers. The daemons on each of these computers communicate through sockets. [0049]
  • The Logic of this invention, in a preferred embodiment, is computer program code in the Perl programming language. As shown in FIG. 2, it may be carried out from a computer-readable medium such as CD-ROM 198 encoded with Logic 206 that acts in cooperation with normal computer electronic memory as is known in the art. Perl is a Unix-based language (see e.g. Programming Perl, 2nd Edition by Larry Wall, Randal L. Schwartz, and Tom Christiansen, published by O'Reilly and Associates). Nevertheless, one skilled in the computer arts will recognize that the logic, which may be implemented interchangeably as hardware or software, may be implemented in various fashions in accordance with the teachings presented now. [0050]
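  • By way of illustration only, the following minimal Perl sketch shows one process sending a message to a listening daemon over a socket, in the general manner of the daemon communication described above. The port number, message text, and process structure are arbitrary examples for this sketch and do not represent the actual protocol of the preferred embodiment.

    #!/usr/bin/perl
    # Minimal sketch: a forked listener and a client exchanging one line over a socket.
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $pid = fork();
    die "fork: $!\n" unless defined $pid;

    if ($pid == 0) {                                    # child: a listening daemon
        my $server = IO::Socket::INET->new(LocalPort => 7777, Listen => 1, Reuse => 1)
            or die "listen: $!\n";
        my $conn = $server->accept;
        print "daemon received: ", scalar <$conn>;      # read one request line
        exit 0;
    }

    sleep 1;                                            # give the listener time to start
    my $client = IO::Socket::INET->new(PeerAddr => 'localhost', PeerPort => 7777)
        or die "connect: $!\n";
    print $client "split BCVs for testvg1\n";           # an example request
    close $client;
    waitpid($pid, 0);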
  • Generally speaking, the data storage system 119 operates in response to commands from one or more computer or host systems, such as the host systems 113 a and 113 b, that are each connected via a host adapter, such as host adapters 117 and 118. The host adapters 117 and 118 transfer commands to a command buffer that is part of system memory 114. The command buffer stores data structures and write requests that the disk adapters generate. The disk adapters, such as the disk adapters 120 or 121, respond by effecting a corresponding operation using the information in a command buffer. The selected disk adapter then initiates a data operation. Reading operations transfer data from the storage devices to the system memory 114 through a corresponding disk adapter and subsequently transfer data from the system memory 114 to the corresponding host adapter, such as host adapter 117 when the host system 113 a initiates the data writing operation. [0051]
  • The computer systems 113 a and 113 b may each be any conventional computing system having an operating system, such as a system available from Sun Microsystems running the Solaris operating system (a version of Unix), an HP system running HP-UX (a Hewlett-Packard client, running a Hewlett-Packard version of the Unix operating system) or an IBM system running the AIX operating system (an IBM version of Unix) or any other system with an associated operating system such as the WINDOWS NT operating system. [0052]
  • A short description of concepts useful for understanding this invention and known in the art is now given. A physical disk is formatted into a “physical volume” for use by the management software such as Logical Volume Manager (LVM) software available from EMC. Each physical volume is split up into discrete chunks, called physical partitions or physical extents. Physical volumes are combined into a “volume group.” A volume group is thus a collection of disks, treated as one large storage area. A “logical volume” consists of some number of physical partitions/extents, allocated from a single volume group. A “filesystem” is, simply stated, a structure or a collection of files. In Unix, filesystem can refer to two very distinct things: the directory tree or the arrangement of files on disk partitions. [0053]
  • Below is a short description of other useful terminology which may be understood in more detail with reference to the incorporated '497 patent. When a mirror is “established” the data storage system 119 creates a mirror image (copy or replication) of a source or standard volume. When using the preferred Symmetrix such a mirror is denoted as a business continuance volume (BCV), also referred to in general terms as a mirrored disk, and in such a context specifically as a BCV device. If data on the standard volume changes, the same changes are immediately applied to the mirrored disk. When a mirror is “split” the preferred Symmetrix data storage system 119 isolates the mirrored version of the disk and no further changes are applied to the mirrored volume. After a split is complete, the primary disk can continue to change but the mirror maintains the point-in-time data that existed at the time of the split. [0054]
  • Mirrors can be “synchronized” in either direction (i.e., from the BCV to the standard or vice versa). For example, changes from the standard volume that occurred after a split to the mirror can be applied to the BCV or mirrored disk. This brings the mirrored disk current with the standard. If you synchronize in the other direction you can make the primary disk match the mirror. This is often the final step during a restore. [0055]
  • The operation of a BCV device and its corresponding BCV volume or volumes is more readily understood in terms of data sets stored in logical volumes and is useful for understanding the present invention. As known, any given logical volume may be stored on a portion or all of one physical disk drive or on two or more disk drives. [0056]
  • Referring to FIG. 3, in this particular embodiment, disk adapter 120 (FIG. 1) controls the operations of a series of physical disks 115 that are shown in FIG. 3 in terms of logical volumes 212. The segmentation or hypering of physical disks into logical volumes is well known in the art. [0057]
  • Similarly a disk adapter interfaces logical volumes 214 to the data storage system bus 122 (FIG. 1). Each of these volumes 214 is defined as a Business Continuation Volume and is designated a BCV device. The concept of BCV's is described in detail in the incorporated '497 patent and so will be only generally described herein. Each BCV device comprises a standard disk controller and related disk storage devices as shown in FIG. 1 especially configured to independently support applications and processes. The use of these BCV devices enables a host such as host 113 a, described from here on as the “source” host computer system, to utilize instantaneous copies of the data in the standard volumes 212. In conventional operations there typically will be at least one BCV volume assigned to each host device that will operate on a data set concurrently. However, as will be explained below, this invention, in particular logic 206 and its counterparts 206 a and 206 b, adds additional function so that the BCV volumes established for use by one host may be used by another host, such as host 113 b, described from here on as the “target” host computer system. [0058]
  • Although the invention has particular advantages when the target and source host computer system are separate distinct computers, there may also be advantages in having the two combined together. Thus, the target and source computer may be integrally combined as one computer. [0059]
  • Referring again to FIG. 3, host 113 a may continue online transaction processing (such as database transaction processing) or other processing without any impact or load on the volumes 212, while their respective mirror images on BCV's 214 are used to back up data in cooperation with backup system 200. However, using the Logic of this invention, the BCV's may be established for use on another host substantially automatically under control of a computer program, rather than requiring intervention of an operator all along the way. The advantages and details associated with such an operation are described below. [0060]
  • The direction of data flow for backup is from the data storage system 119 to the backup system 200 as represented by arrow 211. The direction of data flow for restore is to the data storage system (opposite from arrow 211), but the BCV's may be mounted on a host other than the one on which they were originally established, in accordance with the method of this invention. [0061]
  • The EDM normally offers several options for controlling mirror behavior before and after a backup or restore, which are incorporated with this invention and are therefore discussed now at a high level. (Further detail about such known policies may be found in a white paper available from EMC: Robert Boudrie and David Dysert, EMC Best Practices: Symmetrix Connect and File Level Granularity.) [0062]
  • Pre-Backup Mirror Policy
  • Bring Mirrors Down—This option expects to find the mirrors established and it will split the mirrors automatically before the backup. If the mirrors are down already, the backup will fail and report an error. The error is designed to prevent the system from backing up mirrors that are in an unexpected state. [0063]
  • Verify Mirrors are Down—This option expects to find the mirrors split and it will leave them split and perform the backup. If the mirrors are established at the time of backup, the backup will fail and report an error. This error is designed to ensure that the backup is taken for the specific point in time that the mirrored data represents. [0064]
  • Bring Mirrors Down if Needed—This option checks whether the mirrors are established or split and it will split the mirrors if they are established. If you select this option, the backup will not fail regardless of the state of the mirrors. [0065]
  • Bring Mirrors Down after Establishing—This option checks the mirrors and if they are not established, the EDM first establishes the mirror to ensure that it is an exact copy of data on the primary volume. Then the EDM splits the mirrors to perform the backup. [0066]
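  • For illustration only, the following minimal Perl sketch summarizes the pre-backup policy behavior described above as a simple decision routine; the subroutine name, the policy labels, and the returned action strings are hypothetical and are not part of the EDM interface.

    #!/usr/bin/perl
    # Sketch: choose a pre-backup mirror action from a policy and the current mirror state.
    use strict;
    use warnings;

    sub apply_pre_backup_policy {
        my ($policy, $established) = @_;        # $established: true if mirrors are established
        if ($policy eq 'bring_down') {
            die "backup fails: mirrors already split\n" unless $established;
            return 'split';                     # split the mirrors before the backup
        }
        if ($policy eq 'verify_down') {
            die "backup fails: mirrors still established\n" if $established;
            return 'leave_split';               # back up the existing point-in-time copy
        }
        if ($policy eq 'bring_down_if_needed') {
            return $established ? 'split' : 'leave_split';
        }
        if ($policy eq 'bring_down_after_establishing') {
            return $established ? 'split' : 'establish_then_split';
        }
        die "unknown policy: $policy\n";
    }

    print apply_pre_backup_policy('bring_down_if_needed', 1), "\n";   # prints "split"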
  • Post-Backup Mirror Policy
  • During post-backup processing, mirror management can be configured to do any of the following: [0067]
  • Bring Mirrors Up—After the backup is complete, the EDM automatically resynchronizes the mirror to the primary disk. [0068]
  • Leave Mirrors Down—After the backup is complete, the EDM leaves the mirrors split from the primary disk. [0069]
  • Leave Mirrors as Found—After the backup is complete, the EDM resynchronizes the mirrors to the primary disk if they were established to begin with. If not, the EDM leaves the mirrors split. [0070]
  • The invention includes a method for managing data that may be replicated across one or more computer systems. The method is carried out in the above-described environment by the Logic of this invention, which in a preferred embodiment is a program code in the Perl programming language as mentioned above. [0071]
  • The method includes the computer-executed steps of establishing one or more mirrored copies of data (BCV's) that are copies of one or more volumes of data (Standard Volumes). The BCV's are established in a conventional manner as described in the incorporated '497 patent. The BCV's are separated or split from the respective one or more volumes of data in a conventional manner which is also described in the incorporated '497 patent. [0072]
  • The Standard volumes are part of a volume group on the source computer system 113 a that has an operating system 210 a (FIG. 3). The operating system is preferably a Unix operating system, such as Solaris from Sun Microsystems of California, AIX from IBM of New York, or HP-UX from Hewlett Packard of California. [0073]
  • The method further includes discovering logical information related to the Standard volumes that are part of the volume group on the source computer system 113 a. A map of the logical information to physical devices on the source computer system is created, preferably in the form of a flat file that may be converted into a tree structure for fast verification of the logical information. That map is used to build a substantially identical logical configuration on the target computer system 113 b, preferably after the logical information has been verified by using a tree structure configuration of the logical information. [0074]
  • The logical configuration is used to mount a duplicate of the BCV's on the target computer system (denoted as mounted target BCV's). The newly mounted target BCV's then become part of a second volume group on the target computer system 113 b that has an operating system 210 b. The operating system is preferably a Unix operating system, such as Solaris from Sun Microsystems of California, AIX from IBM of New York, or HP-UX from Hewlett Packard of California. [0075]
  • The invention is particularly useful when data on the standard volumes and BCV's represents data related to an application 208 a and/or application 208 b, and especially when the application is a database, such as an Oracle database available from Oracle Corporation of Redwood, Calif. [0076]
  • Referring to FIG. 4, the logic 206 includes program code that enables certain functions and may be thought of as code modules, although the code may or may not be actually structured or compartmentalized in modular form, i.e., this illustrated concept is more logical than physical. Accordingly, D/M module 300 serves a discovery/mapping function; E/S module 302 serves an establish/split function; B/M module 304 serves a build/mount function; B/R module 306 serves a backup/restore function; and D/C module 308 serves a dismount/cleanup function. Any of the functions may be accomplished by calling a procedure for running such a function as part of the data storage system and the backup system. [0077]
  • The discovery/mapping function discovers and maps logical to physical devices on the source host 113 a, and includes such information as physical and logical volumes, volume groups, and file system information. The establish/split function establishes BCV's or splits such from standard volumes, depending on the pre- and post-mirror policies in effect on source host 113 a. [0078]
  • The build/mount function substantially exports the BCV's established on the source host 113 a to the target host 113 b. It creates volume group, logical volume, and file system objects on the target host computer system. [0079]
  • The backup/restore function performs backup of the target host BCV data that has been exported or migrated from the source host. The dismount/cleanup function removes all volume group, logical volume, and filesystem objects from the target host. [0080]
  • Method Steps of the Invention
  • Now for a better understanding of the method steps of this invention the steps are described in detail with reference to FIGS. 5-12. [0081]
  • FIG. 5 shows an overview of the entire process. In step 400 the logic 206 maps logical to physical devices on the source host. In step 402, the logic establishes or splits standard volumes to BCV's (which may be accomplished by a call to another function on the data storage system) in accordance with the mirror policy in effect at the source host. In step 404, the logic builds and mounts on the target host so that the BCV's are exported or migrated to the target host. Step 406 is a step for backup and/or restore, as described in more detail below. Step 408 is a cleanup step in which all volume group, logical volume, and filesystem objects are removed from the target server. [0082]
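  • A minimal Perl sketch of this overall flow is shown below for illustration; each subroutine is only a stub standing in for the corresponding module of FIG. 4, and the host names are arbitrary placeholders.

    #!/usr/bin/perl
    # Sketch of the top-level flow of FIG. 5 (steps 400-408) using stub subroutines.
    use strict;
    use warnings;

    sub discover_and_map     { print "step 400: map logical to physical devices on $_[0]\n"; return {} }
    sub establish_or_split   { print "step 402: establish/split BCV's per the mirror policy\n" }
    sub build_and_mount      { print "step 404: build volume groups and mount BCV's on $_[0]\n" }
    sub backup_or_restore    { print "step 406: backup and/or restore\n" }
    sub dismount_and_cleanup { print "step 408: remove VG/LV/filesystem objects from $_[0]\n" }

    my ($source, $target) = ('source_host', 'target_host');
    my $map = discover_and_map($source);
    establish_or_split($source);
    build_and_mount($target, $map);
    backup_or_restore($target);
    dismount_and_cleanup($target);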
  • FIG. 6 is an overview of the steps of the mapping and discovery process. Such processing begins in step 500. The filesystem is discovered on the source host in step 502. The logical volume is discovered in step 504. The volume group information is discovered on the source host in step 506. In step 508, the map is created preferably as a flat file because that is an efficient data structure for compiling and using the information. [0083]
  • For mapping purposes, in general, the method uses a data storage system input file. Preferably the input file is a three-column file that contains a list of the standard and BCV device serial numbers containing the data and the data copies respectively, and the physical address of the BCV devices. [0084]
  • The following is an example of this file: [0085]
    TABLE 1
    Example of data storage system input file
    Standard (STD) Device BCV Dev
    7902C000 790B4000
    7902D000 790B5000
    7902E000 790B6000
    7902F00  790B7000
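  • A minimal Perl sketch that reads a file in the two-column layout shown in Table 1 into a standard-to-BCV lookup table is given below for illustration; the in-line data rows simply repeat the example rows above, and the variable names are arbitrary.

    #!/usr/bin/perl
    # Sketch: read STD/BCV device pairs in the layout of Table 1.
    use strict;
    use warnings;

    my %bcv_for;                                  # STD device serial => BCV device serial
    while (my $line = <DATA>) {
        next if $line =~ /^\s*(#|$)/;             # skip comments and blank lines
        my ($std, $bcv) = split ' ', $line;
        $bcv_for{$std} = $bcv if defined $bcv;
    }
    printf "%s -> %s\n", $_, $bcv_for{$_} for sort keys %bcv_for;

    __DATA__
    7902C000 790B4000
    7902D000 790B5000
    7902E000 790B6000
    7902F00  790B7000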
  • An example of how such a map is created in the preferred embodiment for each of the preferred operating systems (Solaris, AIX, and HP-UX) is now shown in Tables 2-4, respectively. [0086]
    TABLE 2
    Mapping information for Sun Solaris:
    Mapping file (for SUN Solaris)
    The .std Mapping file is generated by a Unix-based call with the -std
    option flag. The .std Mapping file is a multi-columned file of information
    about the Standard devices
    The columns may include:
    1. Device Serial Number-from the Data Storage system input file
    2. Physical Address (i.e., c0d0t1)
    3. Volume Group
    4. Logical Volume Name
    5. File Type
    6. Mount Point
    7. Serial Number
    8. Device Type
  • The following is an example of a flat file using such information for a Solaris operating system: [0087]
  • 3701F000 clt0d0s2 testvg1 vol01 ufs /mir1 947015961.1105.sunmir2 sliced [0088]
  • 3701F000 clt0d0s2 testvg1 vol02 ufs /mir1 947015961.1105.sunmir2 sliced [0089]
  • 37020000 clt0d1s2 testvg1 vol01 ufs /mir1 947015961.1105.sunmir2 sliced [0090]
  • 37020000 clt0d1s2 testvg1 vol02 ufs /mir1 947015961.1105.sunmir2 sliced [0091]
    TABLE 3
    Mapping information for IBM AIX:
    Mapping file (for IBM AIX)
    The .std Mapping file is generated by a Unix-based call with the -std
    option flag. The .std Mapping file is a multi-columned file of information
    about the Standard devices
    The columns may include:
     1. Device Serial Number-from the Data Storage System
     2. Physical Address (i.e., hdisk1)
     3. Volume Group
     4. Logical Volume Name
     5. Volume Group Partition Size
     6. File Type
     7. Mount Point
     8. Logical Volume Partition size
     9. Logical Volume source journal log
    10. Logical Volume number of devices striped over
    11. Logical Volume Stripe size
  • The following is an example of a flat file using such information for an AIX operating system: [0092]
  • 37006000 hdisk1 testvg2-2 testvg2-lv01 4 jfs /testvg2/mntpt1 25 loglv02 N/A N/A [0093]
  • 37006000 hdisk1 testvg2-2 testvg2-lv02 4 jfs /testvg2/mntpt1/mntpt2 25 loglv02 N/A N/A [0094]
  • 37006000 hdisk1 testvg2-2 testvg2-lv03 4 jfs /testvg2-3 25 loglv02 N/A N/A [0095]
  • 37006000 hdisk1 testvg2-2 testvg2-lv04 4 jfs /testvg2-4 25 loglv02 N/A N/A [0096]
    TABLE 4
    Mapping information for HP-UX:
    Mapping file (for HP-UX)
    The .std Mapping file is generated by a Unix-based call with the -std
    option flag. The .std Mapping file is a multi-columned file of information
    about the Standard devices
    The columns may include:
    1. Device Serial Number-from the data storage system input file
    2. Physical Address (i.e., c0d0t1)
    3. Volume Group
    4. Logical Volume Name
    5. Logical Volume Number
    6. File Type
    7. Mount Point
  • The following is an example of a flat file using such information for an HP-UX operating system: [0097]
  • 7903A000 c3t8d2 vgedm2 lvt8d2 1 vxfs /t8d2 [0098]
  • 7903B000 c3t8d3 vgedm2 lvt8d3 2 vxfs /t8d3 [0099]
  • 7903C000 c3t8d4 vgedm2 lvt8d4 3 vxfs /t8d4 [0100]
  • 7903D000 c3t8d5 vgedm2 lvt8d5 4 vxfs /t8d5 [0101]
  • Referring now to FIG. 7, step 600 uses the flat file to create a tree structure. This structure is preferably built by a Unix function call from information in the mapping files described above. It may be built on both the target host computer system and the source host computer system. It is referred to as a tree because the volume group information may be placed as the root of the tree and the branches represent the device information within the group and the logical volumes within the group. It is used in step 602 to verify the accuracy of the map file before the map file is sent to the target host. The tree is converted to a map preferably as a flat file in step 604. This flat file map is then sent back to the target in step 606. [0102]
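  • For illustration, a minimal Perl sketch of such a tree is shown below. It builds a volume-group-rooted structure from a simplified subset of the Solaris-style mapping columns of Table 2 and then flattens it back into map lines; the column subset, data rows, and variable names are illustrative assumptions only.

    #!/usr/bin/perl
    # Sketch: build a volume-group tree from flat mapping lines, then re-emit flat records.
    use strict;
    use warnings;

    my %tree;    # volume group => { devices => { dev => serial }, lvs => { lv => [fstype, mount] } }
    while (my $line = <DATA>) {
        next unless $line =~ /\S/;
        my ($serial, $dev, $vg, $lv, $fstype, $mnt) = split ' ', $line;
        $tree{$vg}{devices}{$dev} = $serial;             # branch: physical devices in the group
        $tree{$vg}{lvs}{$lv} = [ $fstype, $mnt ];        # branch: logical volumes in the group
    }
    for my $vg (sort keys %tree) {                       # walk the tree, re-emit flat map lines
        for my $dev (sort keys %{ $tree{$vg}{devices} }) {
            for my $lv (sort keys %{ $tree{$vg}{lvs} }) {
                my ($fstype, $mnt) = @{ $tree{$vg}{lvs}{$lv} };
                print join(' ', $tree{$vg}{devices}{$dev}, $dev, $vg, $lv, $fstype, $mnt), "\n";
            }
        }
    }

    __DATA__
    3701F000 clt0d0s2 testvg1 vol01 ufs /mir1
    3701F000 clt0d0s2 testvg1 vol02 ufs /mir1
    37020000 clt0d1s2 testvg1 vol01 ufs /mir1
    37020000 clt0d1s2 testvg1 vol02 ufs /mir1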
  • Referring to FIG. 8, the process of establishing/splitting with a backup system is started in step 700. The mirror policy is checked in step 702. An inquiry is posed in step 704 to determine if BCV's are established in accordance with the mirror policy. If the answer is no, then BCV's are established in step 706. The BCV's are split from the source host in step 708. The BCV's are made not ready to the host in step 710. [0103]
  • Referring to FIG. 9, the process of beginning to build/mount logical information so the BCV's can be mounted on the target is begun in step 800. The volume groups are created on the target in step 802. Logical volumes are created on the target in step 804. The filesystem is created on the target in step 806. The device mount may now be completed with this logical information related to the BCV's on the target host in step 808. [0104]
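  • A minimal Perl sketch of this build/mount sequence is shown below for illustration. The actual operating system commands differ among Solaris, AIX, and HP-UX, so the run_cmd stub and the command strings are placeholders rather than real platform commands, and the volume group, logical volume, and device names are taken only from the earlier examples.

    #!/usr/bin/perl
    # Sketch of the build/mount flow of FIG. 9 (steps 802-808) using placeholder commands.
    use strict;
    use warnings;

    sub run_cmd { my ($cmd) = @_; print "would run: $cmd\n" }   # stub; a real tool would use system()

    sub build_and_mount {
        my ($vg, $devices, $lvs) = @_;
        run_cmd("create volume group $vg from @$devices");          # step 802
        for my $lv (@$lvs) {
            run_cmd("create logical volume $lv->{name} in $vg");    # step 804
            run_cmd("create filesystem on $vg/$lv->{name}");        # step 806
            run_cmd("mount /dev/$vg/$lv->{name} $lv->{mount}");     # step 808
        }
    }

    build_and_mount('testvg1', [ 'clt0d0s2' ],
        [ { name => 'vol01', mount => '/mir1' }, { name => 'vol02', mount => '/mir1' } ]);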
  • Referring to FIG. 10, the newly mounted target BCV's may now be backed up in step 900. The application is then shut down on the target in step 902. Following the backup of the target BCV's, cleanup steps as described in FIG. 12 and notification take place in step 904. [0105]
  • If the software application on the target host and the source host is a database, then information related to the data may also be backed up, with the effect that essentially the entire database is backed up. Important information from the database includes any transactional data performed by the database operations, and related control files, table spaces, and archives/redo logs. [0106]
  • Regarding databases, this and other terminology is now discussed. The terminology is described with reference to an Oracle database because that is the preferred embodiment, but one skilled in the art will recognize that other databases may be used with this invention. [0107]
  • Control files contain important information in the Oracle database, including information that describes the instance where the datafiles and log files reside. Datafiles may be files on the operating system filesystem. A related term is tablespace, which is the lowest logical layer of the Oracle data structure. The tablespace consists of one or more datafiles. The tablespace is important in that it provides the finest granularity for laying out data across datafiles. [0108]
  • In the database there are archive files known as redo log files or simply as the redo log. This is where information that will be used in a restore operation is kept. Without the redo log files a system failure would render the data unrecoverable. When a log switch occurs, the log records in the filled redo log file are copied to an archive log file if archiving is enabled. [0109]
  • Referring now to FIG. 11, the process for restoring source standard volumes is shown beginning at step 1000. Step 1002 poses an inquiry to determine if the restore is to be from the BCV's on the target or from tape. In accordance with the answer, the standard volumes are synchronized or restored from the target mounted BCV's or tape, respectively, in steps 1004 or 1006. Step 1008 begins the notification and cleanup steps that are generally described in FIG. 12. [0110]
  • The cleanup/dismount process begins in step 1100 as shown in FIG. 12. The BCV's are dismounted from the target in step 1102. This may be accomplished for example with the UNIX umount command. The objects related to volume group, logical volume, and filesystem are removed from the target in steps 1104 and 1106. The cleanup is completed in step 1108. The BCV's are re-established on the source (i.e., made ready to the host) in step 1108. [0111]
  • In another aspect of the invention, a data storage system includes a storage array having logical volumes or units that can be accessed by one or more clients via a switch. A first logical unit can be replicated to create a copy, i.e., a mirrored BCV, of the first logical unit within the storage array. At a given time, the mirrors are split so that write operations to disk no longer affect the copy. In the case where the first logical unit is no longer accessible, such as due to disk failure, the storage array can provide access to the copy of the first logical unit by the client by swapping the logical unit accessed by the host. In one embodiment, the client and/or client application is not aware that the first logical unit, e.g., the original or source logical unit, is no longer being accessed. If desired, a restore can be performed from the copy to the first logical unit and application access to the first logical unit can be provided after mirror synchronization for the restore is complete. [0112]
  • FIG. 13 shows an exemplary system 1200 including a storage array 1202 having a series of logical units 1204 a-N and a client 1206 that can access the storage array 1202 via a switch 1208, such as a Fiber Channel switch. In an exemplary embodiment, the client 1206, such as a Unix-based workstation, includes an adapter 1206 a, a disk 1206 b, and an application 1206 c, e.g., an Oracle database. The storage array 1202 includes a host/disk adapter 1203 for providing access to the logical volumes or units, as described above. [0113]
  • In an initial configuration, the client 1206 and/or application 1206 c access the first logical unit 1204 a as described above. That is, the first logical unit 1204 a is presented to the client 1206 as a SCSI device having SCSI attributes: a port, e.g., Port 0, a target, e.g., target 0, and a LUN address, e.g., LUN 0. [0114]
  • As shown in FIG. 14, a copy of the first logical unit 1204 a can be created on a second logical unit 1204 b of the storage array 1202. That is, mirror synchronization occurs and subsequent writes to the first logical unit 1204 a are also updated on the second logical unit 1204 b. As shown in FIG. 15, at a given time, after synchronization, the mirror is split so that writes to the first logical unit 1204 a are no longer made to the copy on the second logical unit 1204 b. At this point, the first logical unit 1204 a contains the same data as the client disk 1206 b, for example, and the second logical unit 1204 b is a point-in-time copy of the first logical unit 1204 a that can be restored to the first logical unit 1204 a at a later time. [0115]
  • Due to some type of disk failure or application failure in the application 1206 c, for example, the first logical unit 1204 a may no longer be available/reliable so that it would be desirable to access data from the copy on the second logical unit 1204 b, as shown in FIG. 16. In one particular embodiment, the storage array 1202 provides access to the second logical unit 1204 b, i.e., the copy, instead of the first logical unit 1204 a without the client's knowledge. That is, the storage array 1202 swaps client access from the first logical unit 1204 a to the second logical unit 1204 b, as described more fully below. With this arrangement, a disk-based “instant restore” by logical unit swapping can be provided. [0116]
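  • For illustration only, a minimal Perl sketch of the swap itself is shown below: the client keeps addressing the same LUN number while the array remaps that LUN from the failed first unit to its point-in-time copy. The data structure and unit names are illustrative assumptions, not an actual array interface.

    #!/usr/bin/perl
    # Sketch: remap a client-visible LUN from the first logical unit to its split mirror.
    use strict;
    use warnings;

    my %lun_map = ( 0 => 'unit_1204a' );     # client-visible LUN 0 -> first logical unit

    sub swap_unit {
        my ($lun, $new_unit) = @_;
        my $old = $lun_map{$lun};
        $lun_map{$lun} = $new_unit;          # the client still sees LUN 0; only the backing unit changes
        return $old;
    }

    swap_unit(0, 'unit_1204b');              # first unit failed; present the split mirror instead
    print "LUN 0 is now backed by $lun_map{0}\n";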
  • The first logical unit 1204 a, which can be provided as a new disk, can be restored from the second logical unit 1204 b after client application access to disk 1206 b is stopped during the mirror synchronization. [0117]
  • As shown in FIG. 17, the storage array 1202 can optionally again provide a connection to the first logical unit 1204 a for the client 1206 and retain the copy on the second logical unit 1204 b. The first logical unit 1204 a is now the restored contents of the copy on the second logical unit 1204 b, which is available for subsequent restore operations as desired. [0118]
  • FIG. 18 shows a high level flow diagram having an exemplary sequence of steps for implementing data restores by logical unit swapping in accordance with the present invention. In step 2000, a solve for storage module, which is described in FIG. 19, obtains host, storage set, logical unit (LUN) information, and the like, that is associated with the restore. The data replication for a first logical unit is performed in step 2002. In step 2004, the data is restored, as described in detail below. [0119]
  • FIG. 19 shows further details of the solve for storage module of FIG. 18. The solve for storage module obtains information for various storage types, such as the original host device, storage set, and LUN, which define the STD and the BCV. In step 2102, for each OSO (operating system object) that is a file in a file system or raw device, the underlying filesystem is identified. For each filesystem name, the physical device (raw device) node(s) is discovered in step 2104. In step 2106, the type of storage the physical device resides on, e.g., Symmetrix, is determined. [0120]
  • In step 2108, the storage array details are obtained, such as WWN, subsystem name, and any other information that would be required to identify and communicate to the particular storage array. In step 2110, the system determines the storage array device unit name to which the physical name corresponds. This converts a host physical device (e.g. /dev/rdsk/c?t?d?s?) to the corresponding LUN on the array. In step 2112, the system checks the STD unit type to see if it can be replicated. In general, the logical unit can contain data in one of the following formats: JBOD (just a bunch of disks), RAID 0, RAID 1, or RAID 0+1. Unless otherwise specified, RAID 1 is assumed in this description. The unit should not be a partition of a storage set. [0121]
  • These formats are well known to one of ordinary skill in the art. In general, RAID 0 (Redundant Array of Independent Disks, level 0) refers to a storageset, which is known as a stripeset, that includes striped data across an array of disks. A single logical disk can span a number of physical disks. Note that RAID 0 does not provide redundancy. RAID 1 refers to a storageset, which is known as a mirrorset, of at least two physical disks that provide an independent copy of the virtual disk. RAID 0+1 refers to a storageset that stripes data across an array of disks and mirrors the data to a BCV. [0122]
  • In step 2114, the system identifies possible “quick” replication disks if the STD unit is RAID 1 or RAID 0+1. In an exemplary embodiment, three or more member mirrors indicates that there is at least one mirror that can be used for a quick replication. Without three member mirrors, the replication will take longer. In step 2116, the system obtains storageset details for the STD unit. Exemplary information of interest includes a list of disk members for the stripeset, mirrorset or striped mirrorset, and the size of each member. [0123]
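  • The checks of steps 2112-2114 can be summarized by the following Perl sketch, offered only as an illustration; the format labels, thresholds, and returned plan names follow the text above but the subroutine itself is a hypothetical helper.

    #!/usr/bin/perl
    # Sketch: decide whether a STD unit can be replicated and whether a quick replication is possible.
    use strict;
    use warnings;

    sub replication_plan {
        my ($format, $mirror_members) = @_;       # e.g. ('RAID 1', 3)
        return 'not_replicable'
            unless $format =~ /^(JBOD|RAID 0|RAID 1|RAID 0\+1)$/;
        if ($format eq 'RAID 1' or $format eq 'RAID 0+1') {
            # three or more mirror members allow a split without a fresh synchronization
            return $mirror_members >= 3 ? 'quick' : 'add_member_then_split';
        }
        return 'build_temporary_mirror';          # RAID 0 / JBOD: no mirror exists to split yet
    }

    print replication_plan('RAID 1', 3), "\n";    # quick
    print replication_plan('RAID 0', 2), "\n";    # build_temporary_mirror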
  • FIG. 20 shows a top level sequence of steps for implementing a restore with logical unit swapping in accordance with the present invention. In step 2200, the system prepares for the replication. In an exemplary embodiment, preparation includes adding a replication disk to the STD mirrorset if the mirrorset has less than a predetermined number, e.g., three, members. If a mirrorset has three or more members when the replication is run, one of the disks already in the mirrorset is selected for the replication. Exemplary commands are set forth below: [0124]
  • SET <MIRRORSETNAME> MEMBERSHIP=<CURRENT_MEMBERS>+1 [0125]-[0126]
  • SET <MIRRORSETNAME> NOPOLICY [0127]
  • SET <MIRRORSETNAME> REPLACE=<CLONE_DISK> [0128]
  • SET <MIRRORSETNAME> POLICY=<PREVIOUS POLICY> [0129]
  • The system then checks the state of the STD mirrorset members and waits until all members are in a normalized (synchronized) state. [0130]
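  • For illustration, the preparation of step 2200 can be sketched in Perl as follows; run_cli and member_states are hypothetical stubs standing in for the array's command interface and status query, and the mirrorset, disk, and policy names are placeholders.

    #!/usr/bin/perl
    # Sketch: add a clone member if the STD mirrorset has fewer than three members,
    # then wait until all members report a normalized (synchronized) state.
    use strict;
    use warnings;

    sub run_cli       { print "CLI> $_[0]\n" }                     # stub; a real tool would send this to the controller
    sub member_states { return ('NORMAL', 'NORMAL', 'NORMAL') }    # stub status query

    sub prepare_mirrorset {
        my ($name, $members, $clone_disk, $prev_policy) = @_;
        if ($members < 3) {
            run_cli("SET $name MEMBERSHIP=" . ($members + 1));
            run_cli("SET $name NOPOLICY");
            run_cli("SET $name REPLACE=$clone_disk");
            run_cli("SET $name POLICY=$prev_policy");
        }
        sleep 5 while grep { $_ ne 'NORMAL' } member_states($name);  # poll until normalized
    }

    prepare_mirrorset('STD_MIRROR1', 2, 'DISK10100', 'PREVIOUS_POLICY');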
  • In step 2202, the system executes the desired replication. First, the cloned BCV disk is split or reduced from the STD mirror set using a split or reduce command, for example. Alternatively, the system immediately replaces the reduced disks with additional disks so that the mirrorset is always in a position for a quick replication. It is understood that a quick replication refers to a process in which the split can occur without having to wait for normalization. The system then creates a one-member BCV mirrorset from the reduced disk. The BCV mirrorset is then initialized so that it is not destroyed and the name of the just-created mirrorset is saved. In step 2204, the system executes the mount, further details of which are described below in connection with FIG. 21. [0131]
  • Referring now to FIG. 21, in step 2300, the system obtains mount host connection information. It is assumed that the mount host has a valid connection(s). The system also obtains mount host HBA information, such as the WWN(s). In general, the host will have multiple connections if it has multiple HBAs. The mount host information can be determined by the connection data. [0132]
  • In step 2302, the system defines the BCV LUN for mount host connection(s). It is understood that the BCV LUN is assigned a LUN based on the offset value of the target connection(s). Offset control refers to the LUN number of the unit that will be made available to the host. In one embodiment, the initial visibility of the LUN is to no device. In an exemplary embodiment, client connections have decimal offsets for logical units. Placing a logical unit number in an offset for a client has that client “see” the storage defined by that logical unit. [0133]
  • The system, in step 2304, then assigns the BCV LUN to a mount host connection(s). For hosts with multi-paths, such as SecurePath or other host-based device failover software, the system assigns the LUN to all of the paths involved in the multi-path of the host where the system is to mount the replica. The system should also be aware of the mode in which the controller is configured, e.g., transparent failover or multiple bus failover. The system can also consider the connection offset in order to determine the LUNs that are visible on the connection to the HBA. The preferred path can also be considered. [0134]
  • The system makes dynamic BCV LUNs visible on the mount host OS in step 2306. In an exemplary embodiment, this occurs without rebooting the mount host, with the host OS supporting so-called bus rescans. The BCV LUN should be mounted as a specific node. Alternatively, the system can determine the node assigned to the LUN. In step 2308, the system imports volume groups and mounts the filesystem. [0135]
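  • The offset-based LUN presentation of steps 2302-2304 can be illustrated with the following Perl sketch; the connection names, offset values, unit number, and the assign stub are assumptions made only for this example, and the arithmetic simply treats the host-visible LUN as the array unit number minus the connection's offset.

    #!/usr/bin/perl
    # Sketch: present a BCV unit on every path of a multi-path host using connection offsets.
    use strict;
    use warnings;

    my %offset = ( host_a_path1 => 100, host_a_path2 => 100 );   # decimal unit offsets per connection

    sub assign { my ($conn, $lun) = @_; print "$conn sees the BCV at LUN $lun\n" }   # stub

    sub present_bcv {
        my ($unit_number, @paths) = @_;
        for my $conn (@paths) {                        # multi-path: assign on all paths
            my $lun = $unit_number - $offset{$conn};   # host-visible LUN relative to the offset
            assign($conn, $lun) if $lun >= 0;          # a negative value means the unit is not visible here
        }
    }

    present_bcv(101, 'host_a_path1', 'host_a_path2');  # both paths see the BCV at LUN 1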
  • It will be readily apparent to one of ordinary skill in the art that the invention is applicable to a range of other storage configurations including other RAID configurations. For example, steps to handle a RAID 0+1 configuration are similar to those described above for RAID 1. In this case, there is a restoration of a replication of a filesystem built on a striped mirrorset (RAID 0+1). It is assumed that the Solve for Storage routine has run successfully. A striped mirrorset is a stripeset where each of the members of the stripe has one or more mirrors. [0136]
  • Preparation for the replication is the same as that described for RAID 1 above except a) that there will be more than one disk to add, since it is a stripe and each column of the stripe needs to have the mirror disk added; and b) there will be more than one set of mirrors to check, since it is a stripe and each column of the stripe needs to have its mirrors normalized. To execute the replication, the BCV disks from the striped mirror set are reduced and the mirrors in a striped mirrorset are reduced in one operation. A BCV stripeset is created from the reduced disks. The order of the disks should be the same as the stripeset members that the clone disks came from. The remaining replication steps are substantially similar to those of RAID 1 except that a) the system initializes the replication or clone stripeset that was just created, and b) the system saves the name of the just-created stripeset. The execute mount procedure is also substantially similar to RAID 1 except that the system defines the LUN from the previously created clone stripeset. [0137]
  • Replication of a filesystem on a stripeset (RAID 0) is similar to that described above for RAID 1 and RAID 0+1 formats. For RAID 0 replications, the STD unit is a stripeset of two or more disks. There are no mirrors of the unit, so the system creates a temporary striped mirrorset or RAID 0+1 to create a replication or clone of the stripeset. After the cloning has taken place, the system deletes any temporary containers that were created. To prepare the RAID 0 replication, the system converts a STD stripeset to a striped mirrorset. Each disk member of the stripeset is temporarily turned into a mirrorset. Execution of the replication is substantially similar to RAID 0+1 and execution of the mount is substantially similar to RAID 1. [0138]
  • FIG. 22 shows an exemplary implementation of restoring a RAID 1 replication with logical unit swapping in accordance with the present invention. The below description assumes that the unit being restored to already exists. For RAW (raw device) backup and replication, the system needs the device node to exist just as it was when the replica was taken. The information can come from catalog information created at replication time and can be managed by an instant restore daemon (IRD). In general, filesystems will be created as necessary. It is understood that the raw device is how the OS perceives the LUN. Filesystems are built on top of raw devices. An application can use a raw device or filesystem to store data. [0139]
  • In step 2400, the system discovers, for the physical device node(s), the filesystem name being restored to. If the replication was a RAW device, an instant restore daemon (IRD) provides the device node to which the system is restoring. In step 2402, the system determines the STD storage type, e.g., Symmetrix. Storage array details for the STD physical device are obtained in step 2404. Exemplary details include world wide name (WWN), subsystem name, and any other information that would be required to communicate to the particular storage array. In step 2406, the system determines the array device unit name to which the STD physical name corresponds. This converts a host physical device (e.g. /dev/rdsk/c?t?d?s? for a Solaris system raw device) to the corresponding LUN on the array. In one embodiment, information from a SymAPI call is used. [0140]
  • In step 2408, the system obtains STD mirrorset details for the unit, such as a list of disk members for the mirrorset, the size of each member (may be different), and the status of the mirrorset, and fails if a restore is not possible. The filesystem is unmounted in step 2410, such as by a function call. In step 2412, the system disables volume groups residing on discovered nodes and in step 2414, the system renders the STD device “not ready” on the host OS. The “not ready” status of the STD device is preferred since the system may delete the unit. In one embodiment, the STD is made “not ready” after disabling access to the LUN (step 2416). [0141]
  • In step 2416, the system saves the STD LUN information and the STD mirrorset name. In step 2418, the system disables host access to the STD LUN and in step 2420 the system deletes the unit(s) associated with the STD LUN. The system then deletes the STD mirrorset in step 2422. It is understood that the mirrorset will be recreated after the restore. [0142]
  • One member of the STD mirrorset is added to the BCV mirrorset in step 2424. The BCV mirrorset has the data the system is to restore from. The system can reuse the spindles/disks that belonged to the units that were just deleted. Note that the user should be responsible for creating a LUN with the same RAID configuration and same number of members as the replication or clone. Also note that in the case of a RAID 1 or RAID 0+1 restore, there is still at least one copy of the STD device until the restore from the BCV has completed. This copy is preserved from steps 2420 and 2422 and can be retrieved if the restore fails. [0143]
  • In step 2426, the system waits during the synchronization process for the BCV mirrorset to normalize. In step 2428, the system reduces the STD disk from the BCV mirrorset. In one embodiment, the system reduces the disks that were added to the mirrorset. The STD mirrorset is then re-created in step 2430, such as by using the restored STD disk and the other members of the original STD mirrorset. [0144]
  • It is understood that steps 2426 through 2434 are unnecessary to perform a so-called instant restore, for which waiting, normalization, and preservation of the BCV are not needed. [0145]
  • The system then waits for the STD mirrorset to normalize in step 2432. Note that it may be possible to skip this step since the data has been restored, but it is currently unprotected by mirrors while the synchronization is taking place. It may be possible to initiate the mirror synchronization process and then continue. In step 2434, the restored mirrorset is initialized so that it is not destroyed. In step 2436, the STD LUN is re-created using the re-created STD mirrorset. [0146]
  • In step 2438, the system assigns the STD LUN to a connection(s). In one embodiment, the system assigns the same unit numbers that were discovered previously. It is assumed that assigning the same unit/LUN number will make it accessible to the same OS node. In step 2440, the OS recognizes the new LUN. In step 2442, the filesystem is mounted or a logical volume is created to complete the restore. [0147]
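  • For illustration, the save and re-create of the STD LUN in steps 2416, 2436, and 2438 can be sketched in Perl as follows; the unit, mirrorset, and connection names are placeholders, and the two subroutines are hypothetical stubs rather than an actual array API.

    #!/usr/bin/perl
    # Sketch: save the STD unit's LUN and connection assignments before it is deleted,
    # then re-create the unit with the same numbers so the same OS node sees the restored data.
    use strict;
    use warnings;

    my %saved;    # unit name => { lun => ..., connections => [...] }

    sub save_std_lun {
        my ($unit, $lun, @conns) = @_;
        $saved{$unit} = { lun => $lun, connections => [ @conns ] };              # step 2416
    }

    sub recreate_std_lun {
        my ($unit, $mirrorset) = @_;
        my $info = $saved{$unit} or die "no saved LUN information for $unit\n";
        print "create unit $unit (LUN $info->{lun}) on $mirrorset\n";            # step 2436
        print "assign $unit to $_ as LUN $info->{lun}\n"
            for @{ $info->{connections} };                                       # step 2438
    }

    save_std_lun('STD_UNIT1', 5, 'host_a_path1', 'host_a_path2');
    recreate_std_lun('STD_UNIT1', 'STD_MIRROR1');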
  • It is understood that replication in other configurations, such as RAID 0 and RAID 0+1, is well within the scope of the invention. For a RAID 0 (non-mirrored stripes) restoration, it can be assumed that the unit being restored to already exists. For RAW backup and replication, the system may need the device node to exist just as it was when the replica was taken. Filesystems will be created as necessary. [0148]
  • Some of the differences between a RAID 0 restoration and a RAID 1 restoration are described below in conjunction with the comparable RAID 1 step in FIG. 22. Referring to step 2416, in the RAID 0 restoration, the saved STD stripeset information can include the stripeset name and the stripeset disk member order, which can be re-created after the restore. In contrast to step 2424, in a RAID 0 format the system makes each BCV stripeset member into a one-member mirrorset. This creates a striped mirrorset (RAID 0+1) from the BCV stripeset. In the equivalent of step 2426, for a RAID 0 restoration, the system adds each member of the STD stripeset to the corresponding BCV mirrorset. The BCV striped mirrorset has the data from which the restoration is performed. Here the system can reuse the spindles/disks that belonged to the units of the STD stripeset that was just deleted. Note that unlike the case of a RAID 1 restore, there are no valid copies of any STD device once the restore from the BCV has started. [0149]
  • In the step comparable to step 2430 of the RAID 1 restoration process, in the RAID 0 restoration the STD stripeset is re-created using the restored STD disk and the other members of the original STD stripeset. The order of disks added to the stripeset should be noted. The restored stripeset is then initialized so that it will not be destroyed, the STD LUN is re-created, and the STD LUN is assigned to a connection. The comparable steps of 2440 and 2442 are then performed. [0150]
  • Restoration of a RAID 0+1 (striped mirrors) replication is similar to a RAID 0 restoration with some differences discussed below. It is assumed that the unit being restored to already exists. For RAW backup and replication, the system may need the device node to exist just as it was when the replica was taken. Filesystems will be created as necessary. In the equivalent of RAID 1 step 2424, the system adds one member of the STD stripeset to the corresponding BCV mirrorset, which contains the data from which the restoration is performed. The system can reuse the spindles/disks that belonged to the units of the STD stripeset that were just deleted. Note that like the case of a RAID 1 restore, there is a valid copy of each STD device once the restore from the BCV has started. The remaining steps are substantially similar to those described above in conjunction with RAID 0 and/or RAID 1 and will be readily apparent to one of ordinary skill in the art in view of the description contained herein. [0151]
  • It is understood that the invention is applicable to a variety of known storage systems including Symmetrix and Clarion systems by EMC Corporation and HP/Compaq StorageWorks systems, such as the HSG80 system. [0152]
  • A system and method has been described for managing data that may be replicated across one or more computer systems. Having described a preferred embodiment of the present invention, it may occur to skilled artisans to incorporate these concepts into other embodiments. Nevertheless, this invention should not be limited to the disclosed embodiment, but rather only by the spirit and scope of the following claims and their equivalents. One skilled in the art will appreciate further features and advantages of the invention based on the above-described embodiments. All publications and references cited herein are expressly incorporated herein by reference in their entirety.[0153]

Claims (27)

What is claimed is:
1. A method of performing a data restore by swapping logical units, comprising:
providing a first logical unit in a storage array to a client;
creating a mirror of the first logical unit on a second logical unit of the storage array;
splitting the mirror from the first logical unit; and
providing the second logical unit to the client without notifying the client such that the client can access the mirrored data.
2. The method according to claim 1, further including recreating the first logical unit from the second logical unit.
3. The method according to claim 1, further including
restoring the mirrored data from the second logical unit to the first logical unit; and
returning client access to the first logical unit.
4. The method according to claim 3, further including waiting for synchronization of the mirror of the second logical unit.
5. The method according to claim 3, further including determining characteristics of a physical storage device corresponding to the first logical unit.
6. The method according to claim 5, wherein the storage device characteristics include world wide name and subsystem name.
7. The method according to claim 1, further including creating the mirror of the first logical unit in a format selected from the group consisting of JBOD, RAID 0, RAID 0+1, and RAID 1.
8. The method according to claim 1, further including selecting the second logical unit from a plurality of logical units in the storage array based on visibility to the client.
9. The method according to claim 1, further including converting a host physical device location to an identification of the first logical unit.
10. The method according to claim 9, further including unmounting a filesystem associated with the first logical unit.
11. The method according to claim 10, further including disabling client access to the first logical unit.
12. The method according to claim 1, wherein the storage array corresponds to one of a Symmetrix™ system, a Clarion™ system, and an HSG80™ system.
13. A data storage system, comprising computer-executable logic that enables the method steps of:
providing a first logical unit in a storage array to a client;
creating a mirror of the first logical unit on a second logical unit of the storage array;
splitting the mirror from the first logical unit; and
providing the second logical unit to the client without notifying the client such that the client can access the mirrored data.
14. The system according to claim 13, further including recreating the first logical unit from the second logical unit.
15. The system according to claim 13, further including
restoring the mirrored data from the second logical unit to the first logical unit; and
returning client access to the first logical unit.
16. The system according to claim 15, further including waiting for synchronization of the mirror of the second logical unit.
17. The system according to claim 15, further including determining characteristics of a physical storage device corresponding to the first logical unit.
18. The system according to claim 17, wherein the storage device characteristics include world wide name and subsystem name.
19. The system according to claim 13, further including creating the mirror of the first logical unit in a format selected from the group consisting of JBOD, RAID 0, RAID 0+1, and RAID 1.
20. The system according to claim 13, further including selecting the second logical unit from a plurality of logical units in the storage array based on visibility to the client.
21. The system according to claim 13, further including converting a host physical device location to an identification of the first logical unit.
22. The system according to claim 21, further including unmounting a filesystem associated with the first logical unit.
23. The system according to claim 22, further including disabling client access to the first logical unit.
24. The system according to claim 13, further including creating a map of logical information on the first logical unit to one or more physical devices.
25. The system according to claim 13, wherein the storage array corresponds to one of a Symmetrix™ system, a Clarion™ system, and an HSG80™ system.
26. A computer readable medium for use with a data storage system comprising code to enable the steps of:
providing a first logical unit in a storage array to a client;
creating a mirror of the first logical unit on a second logical unit of the storage array;
splitting the mirror from the first logical unit; and
providing the second logical unit to the client without notifying the client such that the client can access the mirrored data.
27. The medium according to claim 26, further including
restoring the mirrored data from the second logical unit to the first logical unit; and
returning client access to the first logical unit.
US10/259,237 2001-06-28 2002-09-27 Data storage system having data restore by swapping logical units Abandoned US20030065780A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/259,237 US20030065780A1 (en) 2001-06-28 2002-09-27 Data storage system having data restore by swapping logical units

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/894,422 US7613806B2 (en) 2001-06-28 2001-06-28 System and method for managing replication sets of data distributed over one or more computer systems
US10/259,237 US20030065780A1 (en) 2001-06-28 2002-09-27 Data storage system having data restore by swapping logical units

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/894,422 Continuation-In-Part US7613806B2 (en) 2001-06-28 2001-06-28 System and method for managing replication sets of data distributed over one or more computer systems

Publications (1)

Publication Number Publication Date
US20030065780A1 true US20030065780A1 (en) 2003-04-03

Family

ID=25403060

Family Applications (5)

Application Number Title Priority Date Filing Date
US09/894,422 Expired - Fee Related US7613806B2 (en) 2001-06-28 2001-06-28 System and method for managing replication sets of data distributed over one or more computer systems
US09/946,078 Expired - Lifetime US7096250B2 (en) 2001-06-28 2001-09-04 Information replication system having enhanced error detection and recovery
US09/946,007 Expired - Lifetime US7076685B2 (en) 2001-06-28 2001-09-04 Information replication system mounting partial database replications
US10/259,237 Abandoned US20030065780A1 (en) 2001-06-28 2002-09-27 Data storage system having data restore by swapping logical units
US11/383,574 Abandoned US20060200698A1 (en) 2001-06-28 2006-05-16 Information replication system mounting partial database replications

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US09/894,422 Expired - Fee Related US7613806B2 (en) 2001-06-28 2001-06-28 System and method for managing replication sets of data distributed over one or more computer systems
US09/946,078 Expired - Lifetime US7096250B2 (en) 2001-06-28 2001-09-04 Information replication system having enhanced error detection and recovery
US09/946,007 Expired - Lifetime US7076685B2 (en) 2001-06-28 2001-09-04 Information replication system mounting partial database replications

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/383,574 Abandoned US20060200698A1 (en) 2001-06-28 2006-05-16 Information replication system mounting partial database replications

Country Status (2)

Country Link
US (5) US7613806B2 (en)
JP (1) JP4744804B2 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040199607A1 (en) * 2001-12-21 2004-10-07 Network Appliance, Inc. Reconfiguration of storage system including multiple mass storage devices
US20040215637A1 (en) * 2003-04-11 2004-10-28 Kenichi Kitamura Method and data processing system with data replication
US20040267801A1 (en) * 2003-06-26 2004-12-30 Dunsmore Silas W. Method and apparatus for exchanging sub-hierarchical structures within a hierarchical file system
US20050086249A1 (en) * 2003-10-16 2005-04-21 International Business Machines Corporation Method for logical volume conversions
US20050229031A1 (en) * 2004-03-30 2005-10-13 Alexei Kojenov Method, system, and program for restoring data to a file
US20060230216A1 (en) * 2005-03-23 2006-10-12 International Business Machines Corporation Data processing system and method
US7246258B2 (en) 2004-04-28 2007-07-17 Lenovo (Singapore) Pte. Ltd. Minimizing resynchronization time after backup system failures in an appliance-based business continuance architecture
US20070288526A1 (en) * 2006-06-08 2007-12-13 Emc Corporation Method and apparatus for processing a database replica
US20070294319A1 (en) * 2006-06-08 2007-12-20 Emc Corporation Method and apparatus for processing a database replica
US7444420B1 (en) * 2001-06-07 2008-10-28 Emc Corporation Apparatus and method for mirroring and restoring data
US7516537B1 (en) 2003-04-04 2009-04-14 Network Appliance, Inc. Method for converting a standalone network storage system into a disk drive storage enclosure
US20100042802A1 (en) * 2008-08-15 2010-02-18 International Business Machines Corporation Management of recycling bin for thinly-provisioned logical volumes
US20100174683A1 (en) * 2009-01-08 2010-07-08 Bryan Wayne Freeman Individual object restore
US8082411B1 (en) * 2008-04-30 2011-12-20 Netapp, Inc. Method and system for logical unit substitution
US20120151136A1 (en) * 2010-12-13 2012-06-14 International Business Machines Corporation Instant data restoration
US8484425B2 (en) * 2005-05-24 2013-07-09 Hitachi, Ltd. Storage system and operation method of storage system including first and second virtualization devices
US9098466B2 (en) 2012-10-29 2015-08-04 International Business Machines Corporation Switching between mirrored volumes
US9424263B1 (en) * 2010-03-09 2016-08-23 Hitachi Data Systems Engineering UK Limited Multi-tiered filesystem
US20210096758A1 (en) * 2019-10-01 2021-04-01 Limited Liability Company "Peerf" Method of constructing a file system based on a hierarchy of nodes

Families Citing this family (102)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6910053B1 (en) * 1999-06-18 2005-06-21 Sap Aktiengesellschaft Method for data maintenance in a network of partially replicated database systems
US7035880B1 (en) 1999-07-14 2006-04-25 Commvault Systems, Inc. Modular backup and retrieval system used in conjunction with a storage area network
US7389311B1 (en) 1999-07-15 2008-06-17 Commvault Systems, Inc. Modular backup and retrieval system
US7395282B1 (en) 1999-07-15 2008-07-01 Commvault Systems, Inc. Hierarchical backup and retrieval system
US7155481B2 (en) 2000-01-31 2006-12-26 Commvault Systems, Inc. Email attachment management in a computer system
US7003641B2 (en) 2000-01-31 2006-02-21 Commvault Systems, Inc. Logical view with granular access to exchange data managed by a modular data and storage management system
US6658436B2 (en) 2000-01-31 2003-12-02 Commvault Systems, Inc. Logical view and access to data managed by a modular data and storage management system
US7613806B2 (en) * 2001-06-28 2009-11-03 Emc Corporation System and method for managing replication sets of data distributed over one or more computer systems
US7039669B1 (en) * 2001-09-28 2006-05-02 Oracle Corporation Techniques for adding a master in a distributed database without suspending database operations at extant master sites
JP4108973B2 (en) * 2001-12-26 2008-06-25 株式会社日立製作所 Backup system
US7313598B1 (en) * 2002-06-13 2007-12-25 Cisco Technology, Inc. Method and apparatus for partial replication of directory information in a distributed environment
US7844577B2 (en) * 2002-07-15 2010-11-30 Symantec Corporation System and method for maintaining a backup storage system for a computer system
US7467129B1 (en) * 2002-09-06 2008-12-16 Kawasaki Microelectronics, Inc. Method and apparatus for latency and power efficient database searches
US8402001B1 (en) * 2002-10-08 2013-03-19 Symantec Operating Corporation System and method for archiving data
US7200625B2 (en) * 2002-10-18 2007-04-03 Taiwan Semiconductor Manufacturing Co., Ltd. System and method to enhance availability of a relational database
JP2004157637A (en) * 2002-11-05 2004-06-03 Hitachi Ltd Storage management method
US7649880B2 (en) 2002-11-12 2010-01-19 Mark Adams Systems and methods for deriving storage area commands
US8005918B2 (en) 2002-11-12 2011-08-23 Rateze Remote Mgmt. L.L.C. Data storage devices having IP capable partitions
JP2006506847A (en) 2002-11-12 2006-02-23 ゼテーラ・コーポレイシヨン Communication protocol, system and method
US7170890B2 (en) 2002-12-16 2007-01-30 Zetera Corporation Electrical devices with improved communication
JP4387116B2 (en) 2003-02-28 2009-12-16 株式会社日立製作所 Storage system control method and storage system
US20040254964A1 (en) * 2003-06-12 2004-12-16 Shoji Kodama Data replication with rollback
US7454569B2 (en) 2003-06-25 2008-11-18 Commvault Systems, Inc. Hierarchical system and method for performing storage operations in a computer network
US7987157B1 (en) * 2003-07-18 2011-07-26 Symantec Operating Corporation Low-impact refresh mechanism for production databases
US7873684B2 (en) * 2003-08-14 2011-01-18 Oracle International Corporation Automatic and dynamic provisioning of databases
US20050108186A1 (en) * 2003-10-31 2005-05-19 Eric Anderson Textual filesystem interface method and apparatus
US7546324B2 (en) 2003-11-13 2009-06-09 Commvault Systems, Inc. Systems and methods for performing storage operations using network attached storage
JP2005190106A (en) * 2003-12-25 2005-07-14 Hitachi Ltd Storage control subsystem for managing logical volume
US7921082B2 (en) * 2004-01-23 2011-04-05 Lsi Corporation File recovery under linux operating system
US8311974B2 (en) 2004-02-20 2012-11-13 Oracle International Corporation Modularized extraction, transformation, and loading for a database
US7266656B2 (en) 2004-04-28 2007-09-04 International Business Machines Corporation Minimizing system downtime through intelligent data caching in an appliance-based business continuance architecture
US7571173B2 (en) * 2004-05-14 2009-08-04 Oracle International Corporation Cross-platform transportable database
US8554806B2 (en) * 2004-05-14 2013-10-08 Oracle International Corporation Cross platform transportable tablespaces
US20060080507A1 (en) * 2004-05-18 2006-04-13 Tyndall John F System and method for unit attention handling
US8949395B2 (en) 2004-06-01 2015-02-03 Inmage Systems, Inc. Systems and methods of event driven recovery management
US20060026367A1 (en) * 2004-07-27 2006-02-02 Sanjoy Das Storage task coordination apparatus method and system
US7299376B2 (en) 2004-08-25 2007-11-20 International Business Machines Corporation Apparatus, system, and method for verifying backup data
GB2418273A (en) * 2004-09-18 2006-03-22 Hewlett Packard Development Co An array of discs with stripes and mirroring
US7627099B2 (en) * 2004-10-08 2009-12-01 At&T Intellectual Property I, L.P. System and method for providing a backup-restore solution for active-standby service management systems
US7596585B2 (en) * 2004-11-03 2009-09-29 Honeywell International Inc. Object replication using information quality of service
US8059539B2 (en) * 2004-12-29 2011-11-15 Hewlett-Packard Development Company, L.P. Link throughput enhancer
US7210060B2 (en) 2004-12-30 2007-04-24 Emc Corporation Systems and methods for restoring data
US20060182050A1 (en) * 2005-01-28 2006-08-17 Hewlett-Packard Development Company, L.P. Storage replication system with data tracking
US8862852B2 (en) * 2005-02-03 2014-10-14 International Business Machines Corporation Apparatus and method to selectively provide information to one or more computing devices
US7702850B2 (en) 2005-03-14 2010-04-20 Thomas Earl Ludwig Topology independent storage arrays and methods
US7620981B2 (en) 2005-05-26 2009-11-17 Charles William Frank Virtual devices and virtual bus tunnels, modules and methods
US20070005669A1 (en) * 2005-06-09 2007-01-04 Mueller Christoph K Method and system for automated disk i/o optimization of restored databases
US8819092B2 (en) 2005-08-16 2014-08-26 Rateze Remote Mgmt. L.L.C. Disaggregated resources and access methods
US7743214B2 (en) 2005-08-16 2010-06-22 Mark Adams Generating storage system commands
US20070180151A1 (en) * 2005-09-20 2007-08-02 Honeywell International Inc. Model driven message processing
US9270532B2 (en) * 2005-10-06 2016-02-23 Rateze Remote Mgmt. L.L.C. Resource command messages and methods
US7610314B2 (en) * 2005-10-07 2009-10-27 Oracle International Corporation Online tablespace recovery for export
US7958322B2 (en) * 2005-10-25 2011-06-07 Waratek Pty Ltd Multiple machine architecture with overhead reduction
US7543125B2 (en) 2005-12-19 2009-06-02 Commvault Systems, Inc. System and method for performing time-flexible calendric storage operations
JP5165206B2 (en) 2006-03-17 2013-03-21 富士通株式会社 Backup system and backup method
US7924881B2 (en) 2006-04-10 2011-04-12 Rateze Remote Mgmt. L.L.C. Datagram identifier management
US8838528B2 (en) * 2006-05-22 2014-09-16 Inmage Systems, Inc. Coalescing and capturing data between events prior to and after a temporal window
US7636868B2 (en) * 2006-06-27 2009-12-22 Microsoft Corporation Data replication in a distributed system
WO2008018969A1 (en) * 2006-08-04 2008-02-14 Parallel Computers Technology, Inc. Apparatus and method of optimizing database clustering with zero transaction loss
US8135685B2 (en) * 2006-09-18 2012-03-13 Emc Corporation Information classification
US8612570B1 (en) 2006-09-18 2013-12-17 Emc Corporation Data classification and management using tap network architecture
US7725670B2 (en) * 2006-10-02 2010-05-25 Novell, Inc. System and method of imaging a memory module while in functional operation
US8909599B2 (en) 2006-11-16 2014-12-09 Oracle International Corporation Efficient migration of binary XML across databases
US8719809B2 (en) 2006-12-22 2014-05-06 Commvault Systems, Inc. Point in time rollback and un-installation of software
US8548964B1 (en) 2007-09-28 2013-10-01 Emc Corporation Delegation of data classification using common language
US8522248B1 (en) 2007-09-28 2013-08-27 Emc Corporation Monitoring delegated operations in information management systems
US9323901B1 (en) 2007-09-28 2016-04-26 Emc Corporation Data classification for digital rights management
US9461890B1 (en) 2007-09-28 2016-10-04 Emc Corporation Delegation of data management policy in an information management system
US8868720B1 (en) * 2007-09-28 2014-10-21 Emc Corporation Delegation of discovery functions in information management system
US9141658B1 (en) 2007-09-28 2015-09-22 Emc Corporation Data classification and management for risk mitigation
US7865468B2 (en) * 2008-02-29 2011-01-04 International Business Machines Corporation Prefetching remote files on local disk space
US8370302B2 (en) * 2009-06-02 2013-02-05 Hitachi, Ltd. Method and apparatus for block based volume backup
US8676759B1 (en) * 2009-09-30 2014-03-18 Sonicwall, Inc. Continuous data backup using real time delta storage
US8818962B2 (en) * 2010-05-26 2014-08-26 International Business Machines Corporation Proactive detection of data inconsistencies in a storage system point-in-time copy of data
US8417672B2 (en) * 2010-10-11 2013-04-09 Microsoft Corporation Item level recovery
US9021198B1 (en) 2011-01-20 2015-04-28 Commvault Systems, Inc. System and method for sharing SAN storage
US8615488B2 (en) 2011-07-01 2013-12-24 International Business Machines Corporation Physical replication of database subset to improve availability and reduce resource cost in a cloud environment
US9195631B1 (en) 2012-03-26 2015-11-24 Emc Corporation Providing historical data to an event-based analysis engine
US9354762B1 (en) * 2012-06-26 2016-05-31 Emc International Company Simplifying rules generation for an event-based analysis engine by allowing a user to combine related objects in a rule
US8949168B1 (en) 2012-06-27 2015-02-03 Emc International Company Managing a memory of an event-based analysis engine
US9430125B1 (en) 2012-06-27 2016-08-30 Emc International Company Simplifying rules generation for an event-based analysis engine
US9405815B1 (en) 2012-10-02 2016-08-02 Amazon Technologies, Inc. Data recovery in a distributed computing environment
US8914329B1 (en) * 2012-12-24 2014-12-16 Emc Corporation Automated time-based testing method for distributed system
US9098804B1 (en) 2012-12-27 2015-08-04 Emc International Company Using data aggregation to manage a memory for an event-based analysis engine
US9110965B1 (en) * 2013-03-06 2015-08-18 Symantec Corporation Systems and methods for disaster recovery from binary large objects
US9805016B2 (en) * 2013-10-22 2017-10-31 Microsoft Technology Licensing, Llc Techniques to present a dynamic formula bar in a spreadsheet
CN104881418B (en) * 2014-02-28 2018-12-04 阿里巴巴集团控股有限公司 The method and apparatus in the quick recycling rollback space for MySQL
US9558255B2 (en) 2014-03-11 2017-01-31 International Business Machines Corporation Managing replication configuration availability
US9558078B2 (en) 2014-10-28 2017-01-31 Microsoft Technology Licensing, Llc Point in time database restore from storage snapshots
US9880907B2 (en) 2015-03-31 2018-01-30 International Business Machines Corporation System, method, and computer program product for dynamic volume mounting in a system maintaining synchronous copy objects
KR101717360B1 (en) * 2015-07-30 2017-03-16 엘에스산전 주식회사 Apparatus and method for managing of data base in energy management system
US10423588B2 (en) * 2015-08-25 2019-09-24 International Business Machines Corporation Orchestrated disaster recovery
EP3156905A1 (en) * 2015-10-15 2017-04-19 Softthinks System for managing backup strategies
WO2017094096A1 (en) * 2015-12-01 2017-06-08 株式会社野村総合研究所 Transaction processing system and transaction control method
CN109117314B (en) * 2015-12-18 2021-04-16 福建随行软件有限公司 Anti-misoperation data rapid recovery method and system
US10534675B2 (en) * 2016-09-30 2020-01-14 International Business Machines Corporation ACL based open transactions in replication environment
EP3548161B1 (en) * 2016-12-05 2021-04-21 Volvo Truck Corporation An air processing assembly comprising a ptfe based oleophobic membrane and method for drying compressed air for a vehicle
US11023454B2 (en) 2018-12-11 2021-06-01 EMC IP Holding Company LLC Checking data integrity in incremental journaling
US11347687B2 (en) * 2018-12-19 2022-05-31 EMC IP Holding Company LLC Incremental inline journaling in a journaled file system
US10929432B2 (en) * 2019-01-23 2021-02-23 EMC IP Holding Company LLC System and method for intelligent data-load balancing for backups
JP6881480B2 (en) * 2019-01-28 2021-06-02 株式会社安川電機 Industrial equipment management system, industrial equipment management method, and program
CN112565627B (en) * 2020-11-30 2023-02-03 天津津航计算技术研究所 Multi-channel video centralized display design method based on bitmap superposition

Family Cites Families (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US647294A (en) * 1900-01-23 1900-04-10 Hattie C Cropley Hot-water bag.
US5497492A (en) * 1990-09-04 1996-03-05 Microsoft Corporation System and method for loading an operating system through use of a fire system
US5206939A (en) * 1990-09-24 1993-04-27 Emc Corporation System and method for disk mapping and data retrieval
US5987627A (en) * 1992-05-13 1999-11-16 Rawlings, Iii; Joseph H. Methods and apparatus for high-speed mass storage access in a computer system
US6604118B2 (en) * 1998-07-31 2003-08-05 Network Appliance, Inc. File system image transfer
US5632012A (en) * 1993-11-24 1997-05-20 Storage Technology Corporation Disk scrubbing system
US5907672A (en) * 1995-10-04 1999-05-25 Stac, Inc. System for backing up computer disk volumes with error remapping of flawed memory addresses
US5778395A (en) * 1995-10-23 1998-07-07 Stac, Inc. System for backing up files from disk volumes on multiple nodes of a computer network
US5852715A (en) * 1996-03-19 1998-12-22 Emc Corporation System for currently updating database by one host and reading the database by different host for the purpose of implementing decision support functions
US6101497A (en) * 1996-05-31 2000-08-08 Emc Corporation Method and apparatus for independent and simultaneous access to a common data set
US6023584A (en) * 1997-01-03 2000-02-08 Ncr Corporation Installation of computer programs using disk mirroring
US5950230A (en) * 1997-05-28 1999-09-07 International Business Machines Corporation RAID array configuration synchronization at power on
US6216211B1 (en) * 1997-06-13 2001-04-10 International Business Machines Corporation Method and apparatus for accessing mirrored logical volumes
US6058455A (en) 1997-07-02 2000-05-02 International Business Corporation RAID system having a selectable unattended mode of operation with conditional and hierarchical automatic re-configuration
US6205527B1 (en) * 1998-02-24 2001-03-20 Adaptec, Inc. Intelligent backup and restoring system and method for implementing the same
US6047294A (en) 1998-03-31 2000-04-04 Emc Corp Logical restore from a physical backup in a computer storage system
US6178427B1 (en) * 1998-05-07 2001-01-23 Platinum Technology Ip, Inc. Method of mirroring log datasets using both log file data and live log data including gaps between the two data logs
US6119131A (en) * 1998-06-12 2000-09-12 Microsoft Corporation Persistent volume mount points
US6366986B1 (en) * 1998-06-30 2002-04-02 Emc Corporation Method and apparatus for differential backup in a computer storage system
US6574591B1 (en) * 1998-07-31 2003-06-03 Network Appliance, Inc. File systems image transfer between dissimilar file systems
US6493726B1 (en) * 1998-12-29 2002-12-10 Oracle Corporation Performing 2-phase commit with delayed forget
US6920537B2 (en) * 1998-12-31 2005-07-19 Emc Corporation Apparatus and methods for copying, backing up and restoring logical objects in a computer storage system by transferring blocks out of order or in parallel
US6185666B1 (en) * 1999-09-11 2001-02-06 Powerquest Corporation Merging computer partitions
US6493729B2 (en) * 1999-09-23 2002-12-10 International Business Machines Corporation Method and system to administer mirrored filesystems
US6751658B1 (en) * 1999-10-18 2004-06-15 Apple Computer, Inc. Providing a reliable operating system for clients of a net-booted environment
US6421688B1 (en) * 1999-10-20 2002-07-16 Parallel Computers Technology, Inc. Method and apparatus for database fault tolerance with instant transaction replication using off-the-shelf database servers and low bandwidth networks
US20020103889A1 (en) * 2000-02-11 2002-08-01 Thomas Markson Virtual storage layer approach for dynamically associating computer storage with processing hosts
US6714980B1 (en) * 2000-02-11 2004-03-30 Terraspring, Inc. Backup and restore of data associated with a host in a dynamically changing virtual server farm without involvement of a server that uses an associated storage device
JP2001337790A (en) * 2000-05-24 2001-12-07 Hitachi Ltd Storage unit and its hierarchical management control method
US6708265B1 (en) * 2000-06-27 2004-03-16 Emc Corporation Method and apparatus for moving accesses to logical entities from one storage element to another storage element in a computer storage system
US6535891B1 (en) * 2000-09-26 2003-03-18 Emc Corporation Method and apparatus for indentifying accesses to a repository of logical objects stored on a storage system based upon information identifying accesses to physical storage locations
US7010584B1 (en) * 2000-11-09 2006-03-07 International Business Machines Corporation Changing the operating system in a computer operation without substantial interruption of operations through the use of a surrogate computer
US6594744B1 (en) * 2000-12-11 2003-07-15 Lsi Logic Corporation Managing a snapshot volume or one or more checkpoint volumes with multiple point-in-time images in a single repository
US20020073082A1 (en) * 2000-12-12 2002-06-13 Edouard Duvillier System modification processing technique implemented on an information storage and retrieval system
JP2002189570A (en) * 2000-12-20 2002-07-05 Hitachi Ltd Duplex method for storage system, and storage system
US6681290B2 (en) * 2001-01-29 2004-01-20 International Business Machines Corporation Physical data layout to reduce seeks in a raid system
US6691139B2 (en) * 2001-01-31 2004-02-10 Hewlett-Packard Development Co., Ltd. Recreation of archives at a disaster recovery site
US6968347B2 (en) * 2001-02-28 2005-11-22 Emc Corporation Data recovery method and apparatus
US6779063B2 (en) * 2001-04-09 2004-08-17 Hitachi, Ltd. Direct access storage system having plural interfaces which permit receipt of block and file I/O requests
US6742138B1 (en) * 2001-06-12 2004-05-25 Emc Corporation Data recovery method and apparatus
US6714953B2 (en) * 2001-06-21 2004-03-30 International Business Machines Corporation System and method for managing file export information
US7613806B2 (en) * 2001-06-28 2009-11-03 Emc Corporation System and method for managing replication sets of data distributed over one or more computer systems

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5513192A (en) * 1992-08-28 1996-04-30 Sun Microsystems, Inc. Fault tolerant disk drive system with error detection and correction
US5742792A (en) * 1993-04-23 1998-04-21 Emc Corporation Remote data mirroring
US5928367A (en) * 1995-01-06 1999-07-27 Hewlett-Packard Company Mirrored memory dual controller disk storage system
US5784548A (en) * 1996-03-08 1998-07-21 Mylex Corporation Modular mirrored cache memory battery backup system
US5889935A (en) * 1996-05-28 1999-03-30 Emc Corporation Disaster control features for remote data mirroring
US6757698B2 (en) * 1999-04-14 2004-06-29 Iomega Corporation Method and apparatus for automatically synchronizing data from a host computer to two or more backup data storage locations
US6567811B1 (en) * 1999-07-15 2003-05-20 International Business Machines Corporation Method and system to merge volume groups on a UNIX-based computer system
US6691245B1 (en) * 2000-10-10 2004-02-10 Lsi Logic Corporation Data storage with host-initiated synchronization and fail-over of remote mirror

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7444420B1 (en) * 2001-06-07 2008-10-28 Emc Corporation Apparatus and method for mirroring and restoring data
US20040199607A1 (en) * 2001-12-21 2004-10-07 Network Appliance, Inc. Reconfiguration of storage system including multiple mass storage devices
US7506127B2 (en) * 2001-12-21 2009-03-17 Network Appliance, Inc. Reconfiguration of storage system including multiple mass storage devices
US7516537B1 (en) 2003-04-04 2009-04-14 Network Appliance, Inc. Method for converting a standalone network storage system into a disk drive storage enclosure
US20040215637A1 (en) * 2003-04-11 2004-10-28 Kenichi Kitamura Method and data processing system with data replication
US20090144291A1 (en) * 2003-04-11 2009-06-04 Kenichi Kitamura Method and data processing system with data replication
US8117167B2 (en) * 2003-04-11 2012-02-14 Hitachi, Ltd. Method and data processing system with data replication
US7487162B2 (en) * 2003-04-11 2009-02-03 Hitachi, Ltd. Method and data processing system with data replication
US8028006B2 (en) * 2003-06-26 2011-09-27 Standoffbysoft Llc Method and apparatus for exchanging sub-hierarchical structures within a hierarchical file system
US7401092B2 (en) * 2003-06-26 2008-07-15 Standbysoft Llc Method and apparatus for exchanging sub-hierarchical structures within a hierarchical file system
US20080313217A1 (en) * 2003-06-26 2008-12-18 Standbysoft Llc Method and apparatus for exchanging sub-hierarchical structures within a hierarchical file system
US20040267801A1 (en) * 2003-06-26 2004-12-30 Dunsmore Silas W. Method and apparatus for exchanging sub-hierarchical structures within a hierarchical file system
US20050086249A1 (en) * 2003-10-16 2005-04-21 International Business Machines Corporation Method for logical volume conversions
US8086572B2 (en) * 2004-03-30 2011-12-27 International Business Machines Corporation Method, system, and program for restoring data to a file
US20050229031A1 (en) * 2004-03-30 2005-10-13 Alexei Kojenov Method, system, and program for restoring data to a file
US7246258B2 (en) 2004-04-28 2007-07-17 Lenovo (Singapore) Pte. Ltd. Minimizing resynchronization time after backup system failures in an appliance-based business continuance architecture
US7836215B2 (en) * 2005-03-23 2010-11-16 International Business Machines Corporation Method for providing high performance storage devices
US20060230216A1 (en) * 2005-03-23 2006-10-12 International Business Machines Corporation Data processing system and method
US8484425B2 (en) * 2005-05-24 2013-07-09 Hitachi, Ltd. Storage system and operation method of storage system including first and second virtualization devices
US20130275690A1 (en) * 2005-05-24 2013-10-17 Hitachi, Ltd. Storage system and operation method of storage system
US20070294319A1 (en) * 2006-06-08 2007-12-20 Emc Corporation Method and apparatus for processing a database replica
US20070288526A1 (en) * 2006-06-08 2007-12-13 Emc Corporation Method and apparatus for processing a database replica
US8082411B1 (en) * 2008-04-30 2011-12-20 Netapp, Inc. Method and system for logical unit substitution
US8595461B2 (en) 2008-08-15 2013-11-26 International Business Machines Corporation Management of recycling bin for thinly-provisioned logical volumes
US8347059B2 (en) * 2008-08-15 2013-01-01 International Business Machines Corporation Management of recycling bin for thinly-provisioned logical volumes
US20100042802A1 (en) * 2008-08-15 2010-02-18 International Business Machines Corporation Management of recycling bin for thinly-provisioned logical volumes
US8285680B2 (en) * 2009-01-08 2012-10-09 International Business Machines Corporation Individual object restore
US20100174683A1 (en) * 2009-01-08 2010-07-08 Bryan Wayne Freeman Individual object restore
US9424263B1 (en) * 2010-03-09 2016-08-23 Hitachi Data Systems Engineering UK Limited Multi-tiered filesystem
US20120254535A1 (en) * 2010-12-13 2012-10-04 International Business Machines Corporation Instant data restoration
US20120151136A1 (en) * 2010-12-13 2012-06-14 International Business Machines Corporation Instant data restoration
US9971656B2 (en) * 2010-12-13 2018-05-15 International Business Machines Corporation Instant data restoration
US9983946B2 (en) * 2010-12-13 2018-05-29 International Business Machines Corporation Instant data restoration
US9098466B2 (en) 2012-10-29 2015-08-04 International Business Machines Corporation Switching between mirrored volumes
US20210096758A1 (en) * 2019-10-01 2021-04-01 Limited Liability Company "Peerf" Method of constructing a file system based on a hierarchy of nodes
US11803313B2 (en) * 2019-10-01 2023-10-31 Limited Liability Company “Peerf” Method of constructing a file system based on a hierarchy of nodes

Also Published As

Publication number Publication date
JP2004535638A (en) 2004-11-25
US7076685B2 (en) 2006-07-11
US20030005120A1 (en) 2003-01-02
US7096250B2 (en) 2006-08-22
US20060200698A1 (en) 2006-09-07
US7613806B2 (en) 2009-11-03
US20030172158A1 (en) 2003-09-11
US20030172157A1 (en) 2003-09-11
JP4744804B2 (en) 2011-08-10

Similar Documents

Publication Publication Date Title
US20030065780A1 (en) Data storage system having data restore by swapping logical units
US6978282B1 (en) Information replication system having automated replication storage
US6035412A (en) RDF-based and MMF-based backups
US7103713B2 (en) Storage system, device and method using copy-on-write for synchronous remote copy
US6631477B1 (en) Host system for mass storage business continuance volumes
KR100604242B1 (en) File server storage arrangement
US6389459B1 (en) Virtualized storage devices for network disk mirroring applications
JP5068081B2 (en) Management apparatus and management method
US6981114B1 (en) Snapshot reconstruction from an existing snapshot and one or more modification logs
US7444420B1 (en) Apparatus and method for mirroring and restoring data
US8726072B1 (en) System and method for improving cluster performance using an operation thread for passive nodes
US6247103B1 (en) Host storage management control of outboard data movement using push-pull operations
US20070174580A1 (en) Scalable storage architecture
US6393537B1 (en) Host storage management control of outboard data movement
US7509535B1 (en) System and method for managing failover in a data storage environment
US7725669B1 (en) Backup and restore operations using coherency groups for ISB protocol systems
US7584339B1 (en) Remote backup and restore operations for ISB protocol systems
US7702757B2 (en) Method, apparatus and program storage device for providing control to a networked storage architecture
US7487310B1 (en) Rotation policy for SAN copy sessions of ISB protocol systems
US7555674B1 (en) Replication machine and method of disaster recovery for computers
JP2003015933A (en) File level remote copy method for storage device
US7587565B1 (en) Generating automated and scheduled SAN copy sessions for ISB protocol systems
CN112380050A (en) Method for using snapshot in system backup
US11307933B2 (en) Automated targetless snapshots
Dell

Legal Events

Date Code Title Description
AS Assignment

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MAURER, CHARES F., III;NAIK, SUJIT SURESH;PILLAI, ANANTHAN K.;AND OTHERS;REEL/FRAME:013529/0341;SIGNING DATES FROM 20021010 TO 20021108

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION