WO1998053400A1 - Apparatus and method for backup of a disk storage system - Google Patents

Apparatus and method for backup of a disk storage system

Info

Publication number
WO1998053400A1
WO1998053400A1 (PCT/US1998/009887)
Authority
WO
WIPO (PCT)
Prior art keywords
data
backup
peripheral
disk peripheral
disk
Application number
PCT/US1998/009887
Other languages
French (fr)
Inventor
Richard J. Clifton
Sanjoy Chatterjee
John P. Larson
Joseph R. Richart
Cyril E. Sagan
Original Assignee
Data General Corporation
Application filed by Data General Corporation
Priority to JP55044398A (JP3792258B2)
Priority to CA002286356A (CA2286356A1)
Priority to DE69831944T (DE69831944T2)
Priority to AU75726/98A (AU7572698A)
Priority to EP98923431A (EP0983548B1)
Publication of WO1998053400A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1458Management of the backup or restore process
    • G06F11/1466Management of the backup or restore process to make the backup process non-disruptive

Abstract

A backup system and method provides for creation of a reconciled snapshot backup image of a database while the database, residing on a disk array system, is in use by users. A backup computer running a commercial backup utility is connected between the array system and a tape storage system. While the backup is underway, write requests to the database are suspended until the data currently in those data blocks is copied and stored in an original data cache. The disk system address of the copied block and a pointer to the location of the block in the cache are stored in a map. The backup utility incrementally reads portions of the database from the disk system and forwards those portions to the tape system. Prior to each portion being forwarded to the tape system, all data blocks in the portion which have an address that corresponds to the address of a block in the cache are discarded and replaced with the data from the cache for that address.

Description

APPARATUS AND METHOD FOR BACKUP OF A DISK STORAGE SYSTEM
BACKGROUND OF THE INVENTION Field of the Invention
The present invention relates generally to data backup systems for use with computers and more particularly to an apparatus and method for backing up a storage system to tape while the storage system is in use.
Description of the Prior Art
Data processing systems commonly use one or more disk drives for storage of data and programs. The host computer will retrieve from the disk system the particular information currently required and will send to the disk system for storage new or updated information or data or programs which the host may require in the future, but does not require in internal main memory at that time.
Many organizations, such as large businesses and governmental entities, have extremely large databases of information which they need to have readily available for rapid access and modification. These databases may in some circumstances equal or exceed one terabyte of data and require large data storage systems containing multiple disk drives or arrays of disk drives organized into a single large logical memory system. Often there is a host processor, or server, that is dedicated solely or primarily to handling database transactions. The various users of the database transmit their database requests to the database server and receive the requested data from the database server via a network.
Organizations using such large databases typically need to create backups of their databases from time to time for business, legal or archival reasons. Also, while modern disk systems are, in general, highly reliable devices, some organizations may desire to have their database backed up as protection against the possibility of a storage system failure.
It is, therefore, a common practice to periodically perform a backup of part or all of the data on the disk system. Typically this is done by copying onto magnetic tapes the data desired to be backed up. The tapes are then retained for a period of time, as determined by the system user, and can be used to restore the system to a known condition, if necessary.
A number of commercial utility programs are available for performing backup operations. Typically, these utilities are intended to run on the database server. In some cases, the utility can be run on another computer system which communicates with the database server via the LAN. This has drawbacks in that backing up a terabyte database over a LAN would be very slow and, whether the backup utility is running on the database server or on another computer on the LAN, the participation of the database server during the backup process is required. Involving the database server will divert processing power away from the primary tasks of the server and may either degrade the response time to system users or lengthen the time required to complete the backup.
Another problem in the prior art is that organizations generally desire a "snapshot" of their database as it exists at a certain point in time. One means of ensuring data consistency during the backup is to restrict users from having access to the data during the backup operation. Since the backup for extremely large databases can sometimes take hours, it is often unacceptable to the organizations for their databases to be unavailable to their users for the duration of the backup.
Prior art systems have been developed in an effort to resolve this problem and allow users to continue to write to the database while the backup is in progress. For example, U.S. Patent 5,535,381 discloses a system wherein a copy-on-write (COW) technique is used to save "original" data to separate buffers prior to execution of a write command. The COW data is stored on the backup tape in separate tape records, so the image stored on the tape is not a duplicate of the original image and requires reintegration to recreate the original data image.
U.S. Patent 5,487,160 discloses another COW technique wherein original data is stored on a spare disk drive until the backup is complete and then the contents of the spare drive are transferred to the backup tape in bulk. Here again, the image on the backup tape is fragmented and requires a reintegration process to reconstitute the original image.
The present invention resolves these problems and drawbacks by allowing users unrestricted access to the system during the backup process while creating a snapshot backup image on tape that does not require reconstruction.
SUMMARY OF THE INVENTION
The present invention relates to a method and apparatus for backing up a database or storage system onto a tape backup system using commercially available backup software utility programs.
It is an object of the invention to create a backup tape containing a snapshot of the database.
It is another object of the invention to allow rapid, consistent backup of the database while users continue to have access.
It is a feature of the invention that a separate backup appliance is used to handle the transfer of the database from the disk system to the tape system. It is another feature of the invention that blocks of the original database data are stored temporarily and reintegrated into the database image prior to transfer to the backup tape system.
It is an advantage of the invention that standard commercial backup utility programs can be employed.
It is another advantage of the invention that the host processor does not have to participate in the backup.
It is a further advantage that the backup image on tape does not require later reintegration or reconstruction.
Other features and advantages of the present invention will be understood by those of ordinary skill in the art after referring to the detailed description of the preferred embodiment and drawings herein.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a data processing system for performing a backup according to the invention.
Fig. 2 depicts the functional organization of the data processing system of Fig. 1.
Fig. 3 depicts the flow of COW 231.
Fig. 4 depicts the flow of ODCU 243.
Fig. 5 depicts the flow of SRU 242.
Fig. 6 depicts the organization of MM 245.
DESCRIPTION OF THE PREFERRED EMBODIMENT The preferred embodiment and method will be discussed in terms of backing up a large database residing in a disk storage system to a tape peripheral. It will be understood by those of ordinary skill in the art that the apparatus and techniques described herein are not limited to database backups, but could readily be adapted to perform a backup of any selected portion of the information in a storage system or to perform a complete backup of all information in storage 130.
Referring to Fig. 1, multiple users 101 are connected to server 120 via local area network 110. Storage 130 communicates with server 120 via bus 162, either SCSI or fibre channel. In a preferred embodiment of a system wherein database 131 is on the order of a terabyte of data, server 120 could be, for example, a Data General AViiON Model 6600 computer running the Microsoft NT operating system and storage 130 could be one or more intelligent RAID disk arrays, for example, Data General CLARiiON disk arrays, organized as a single logical disk storage. Storage 130 contains one or more internal processors, shown collectively in Fig. 1 as storage processor (SP) 132. SP 132 is capable of executing programs and responding to commands from server 120. SP 132 provides the capability for storage 130 to perform tasks, such as copy-on-write discussed below, without the participation of server 120. Storage 130 stores various data and programs, including database 131, used by server 120. Database 131 could be provided by any one of several commercial database vendors, such as Oracle, Microsoft or Informix.
Backup Appliance (BA) 140 is connected to storage 130, Backup Storage Unit (BSU) 150 and LAN 110. BA 140 can be any computer of the user's choice that is able to connect to and operate with storage 130, BSU 150 and LAN 110 and has sufficient processing capability to run the backup utility and handle the backup data. Typically, BA 140 can be a smaller computer than server 120, for example a Data General AViiON Model 3600. BA 140 is connected to LAN 110 via a standard network connection 164 and to storage 130 and BSU 150 via SCSI or fibre channel buses 163 and 165. BA 140 can be, but does not have to be, running the same operating system software as server 120.
BA 140 includes processor 142 and memory system 141, comprised of RAM memory 144 and disk drive 143. As will be discussed in detail below, memory system 141 will temporarily store original data blocks from storage 130 and related information. BSU 150 could be any large capacity tape system, for example a Data General DLT Array system.
In the embodiment of Fig. 1, BU 241 will communicate with server 120 via LAN 110 to coordinate preparation for and initiation of the backup process. Immediately prior to initiation of the backup, the database must be placed in a quiescent state by server 120. This typically requires temporarily precluding new database transactions until all pending transactions have concluded and all necessary database information has been written from server 120 to storage 130. This ensures that the database is in a stable, consistent state. The interruption of normal database processing is typically accomplished quickly and with minimal disruption for the users. As soon as the system is in the quiescent state, the backup can be started and users can be allowed to continue to use the system normally while the backup occurs.
Looking now at Fig. 2, the functional organization of a preferred embodiment of a backup system according to the invention is depicted. Storage 130 contains copy-on-write (COW) program 231 for execution by SP 132 during the backup operation. COW 231 monitors for write commands to storage 130 from server 120 and, if a write command is detected, suspends the execution of the write until a copy is made of the data currently at that memory address in storage 130. In the embodiment of storage 130 described herein, storage 130 data is addressed, copied and moved in blocks of 512 bytes.
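The copy-on-write step just described is simple enough to capture in a few lines. The following is a minimal sketch under the assumption of an in-memory block store; the class and function names are invented for illustration, since the patent specifies only the behavior (suspend the write, copy the original 512-byte blocks, forward them to the appliance, then let the write proceed):

```python
BLOCK_SIZE = 512

class CowStorage:
    """Toy model of storage 130 with COW 231 active during a backup."""

    def __init__(self, num_blocks, send_to_appliance):
        self.blocks = [bytes(BLOCK_SIZE) for _ in range(num_blocks)]
        self.backup_active = False
        self.send_to_appliance = send_to_appliance  # delivers COW blocks to BA 140

    def write(self, first_block, data):
        """Write len(data)//512 blocks starting at first_block (data length
        is assumed to be a multiple of the 512-byte block size)."""
        n = len(data) // BLOCK_SIZE
        if self.backup_active:
            # Suspend the write: copy every addressed block (step 340) and
            # send the originals to the backup appliance (step 350).
            originals = [(first_block + i, self.blocks[first_block + i])
                         for i in range(n)]
            self.send_to_appliance(originals)
        # Only after the copy is the host's write executed.
        for i in range(n):
            self.blocks[first_block + i] = data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
```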
Modified Map (MM) 245 and Original Data Cache (ODC) 246 represent areas of memory system 141. In a preferred embodiment, MM 245 and ODC 246 will be files residing on disk drive 143 and MM 245 will be memory mapped to increase access speed. As will be discussed in more detail below, the function of ODC 246 is to temporarily hold blocks of "original data" copied from storage 130 until such time as they are needed to construct the snapshot database image to be stored to BSU 150.
MM 245 will contain one entry for each block stored in ODC 246 during the backup. Transferring the associated block from ODC 246 to BSU 150 or overwriting the block in ODC 246 will not cause the MM 245 entry to be removed. The contents and organization of MM 245 are shown in Fig. 6 and discussed below.
BU 241, Special Read Utility (SRU) 242, and Original Data Cache Utility (ODCU) 243 are programs running on processor 142. BU 241 is a commercial backup utility of the system user's choice, for example, Legato Networker or Cheyenne ARCserver. The operation of SRU 242 and ODCU 243 is transparent to BU 241 and BU 241 need not be modified for use in the disclosed system. BU 241 believes that it is communicating directly with storage 130 and that it is in complete control of the backup.
SRU 242 and ODCU 243 communicate with and control MM 245 and ODC 246. SRU 242 performs the functions of (1) forwarding data read requests from BU 241 to storage 130, (2) receiving the requested data back from storage 130, (3) placing the received data in a RAM 144 buffer while the storage 130 addresses of the received blocks are compared with MM 245 to determine if any data blocks previously read from those addresses are already in ODC 246, (4) if one or more such previously read blocks are present in ODC 246, substituting those blocks from ODC 246 for the blocks currently in the buffer from the same addresses, (5) modifying MM 245 to indicate that those blocks have been transferred to BU 241, and (6) forwarding the contents of the reconciled buffer to BU 241.
In a preferred embodiment, SRU 242 will also filter the write commands from BU 241. Write commands to BA 140 subsystems will be allowed, but attempts by BU 241 to write to storage 130, should any occur, during the backup process will not be allowed.
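The filtering rule reduces to a one-line predicate. The sketch below is illustrative only; the Command type and its field values are assumptions, not anything defined by the patent:

```python
from dataclasses import dataclass

@dataclass
class Command:
    kind: str    # "read" or "write"
    target: str  # e.g. "storage130", "bsu150", "ba140-disk"

def allowed_during_backup(cmd: Command) -> bool:
    """SRU 242's write filter: writes may go to BA 140 subsystems,
    but never to storage 130 while the backup is in progress."""
    return not (cmd.kind == "write" and cmd.target == "storage130")
```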
ODCU 243 performs the functions of (1) monitoring for COW data transfers from storage 130, (2) determining if the COW blocks received from storage 130 are part of the area of storage 130 being backed up, (3) if so, comparing the addresses of each of the received COW blocks with the contents of MM 245 to determine if an entry already exists for any of the received backup blocks, (4) creating an entry in MM 245 for each original COW block received; and (5) storing original COW blocks in ODC 246.
Fig. 6 depicts the functional organization of MM 245. At backup initiation, MM 245 will be empty. Once the backup is started, each write command from server 120 to a data block in storage 130 will cause COW 231 to first copy the existing data from that address and send it as a COW block to ODCU 243. When ODCU 243 receives a COW block from database 131, and the original data from that block is not already in ODC 246, ODCU 243 will create an entry in MM 245 for that block. During the course of the backup, MM 245 will accumulate a plurality of entries 1-N, each entry representing one original data block copied from storage 130 to ODC 246 during the backup. Only the first COW block received from an address will be stored in ODC 246. Subsequent COW blocks with the same address will be ignored by ODCU 243.
Each MM 245 entry 1-N comprises four fields of information. In the embodiment described herein, the storage address of each block in storage 130 is made up of a unique identifier of the specific physical or logical component within storage 130 on which that particular data block resides and an offset to the specific 512 byte block within the component. These two address components are stored in fields 610 and 620 of MM 245 and provide a unique identifier for each block of data in storage 130. Field 630 contains the state of the associated data block. Each block in ODC 246 will be in one of two states: read by BU 241 or not yet read by BU 241. Finally, field 640 contains the offset into ODC 246 where the associated data block is located.
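Transcribed into code, one MM 245 entry might look like the sketch below. The field numbers in the comments refer to Fig. 6; the Python names, and the choice of a dictionary keyed on the two address components, are illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class BlockState(Enum):          # field 630
    NOT_YET_READ = 0             # still waiting to be read by BU 241
    READ_BY_BU = 1               # already substituted into a read buffer

@dataclass
class MapEntry:
    component_id: int            # field 610: component within storage 130
    block_offset: int            # field 620: 512-byte block offset in that component
    state: BlockState            # field 630: read / not yet read by BU 241
    odc_offset: int              # field 640: location of the block in ODC 246

# Fields 610 and 620 together uniquely identify a storage 130 block, so a
# dict keyed on that pair gives constant-time existence checks:
modified_map: dict[tuple[int, int], MapEntry] = {}
```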
Once database 131 is placed in a quiescent condition, the backup operation is initiated. Fig. 3 illustrates the operation of COW 231. At step 310, COW 231 monitors all commands from server 120 to storage 130 to detect write commands. If a write command to storage 130 is detected at step 320, the command is suspended at step 330 and the relevant data is copied at step 340. A write command from server 120 could address a single data block or a number of data blocks. All blocks addressed by the write command are copied at step 340. The copied data block or blocks are sent to BA 140 at step 350. After the block or blocks have been copied, the write command from server 120 can be performed.
As is well understood by those in the art, COW 231 can be designed with additional functionality, if desired, depending on the available processing capacity and memory capacity of SP 132. In the embodiment described herein, a relatively straightforward COW 231 technique is discussed. COW 231 makes a copy of all writes to storage 130 and suspends the entire write until all blocks are copied. In an alternative embodiment, COW 231 could divide a relatively large write request, i.e., a write request affecting a relatively large number of blocks, into a number of smaller incremental writes. COW 231 could then suspend each incremental write only until the portion of the blocks addressed by that incremental write are copied. In another embodiment, COW 231 could maintain a map of the specific addresses to be backed up and only make a copy when writes are intended for those specific memory areas. In yet another alternative embodiment, COW 231 could maintain a table of the addresses of blocks in storage 130 that have already been copied and only perform copy-on-write for those blocks which have not previously been copied during the backup.
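The last variant, remembering which blocks have already been copied, amounts to a set-membership test in front of the copy. A hedged sketch, in which `copied` and the `in_backup_area` predicate are hypothetical helpers maintained by COW 231:

```python
copied: set[int] = set()  # block addresses already copied during this backup

def needs_copy(block_addr: int, in_backup_area) -> bool:
    """Copy-on-write is required only for a block inside the backed-up
    area that has not yet been copied during this backup."""
    if not in_backup_area(block_addr) or block_addr in copied:
        return False
    copied.add(block_addr)
    return True
```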
Fig. 4 illustrates the manner in which ODCU 243 handles the COW blocks sent to BA 140 at step 350. At backup initiation, ODCU 243 initiates step 410 monitoring for COW blocks from storage 130. If one or more blocks are received from storage 130 at step 420, the addresses of the received blocks are checked at step 430 to determine if the received data is part of the portion of storage 130 that is being backed up. If the received data is not part of the area of storage 130 being backed up, the blocks are ignored at step 440 and ODCU 243 continues its monitoring. If the received data is part of the area of storage 130 being backed up, MM 245 is checked to determine if an entry for the unique identifier of any of the received blocks already exists in MM 245, meaning that the original data from that block has already been stored in ODC 246. If an entry for a block address already exists in MM 245, then the data just received represents a subsequent write to the same block during the backup. In this case, the received block is ignored and step 410 monitoring continues.
If the address of a received block is not found in MM 245 at step 460, then a new MM 245 entry is created at step 470 for that block. For performing a backup of a single database, it is unnecessary to retain data blocks in ODC 246 after they have been backed up. Therefore, ODCU 243 can reuse the ODC offset of a block that has already been moved to BSU 150, as indicated by state 630, if any such offset is available. If no previously used ODC offset is available, ODCU 243 will assign a new ODC offset. As an alternative embodiment, reuse of ODC offsets can be disallowed, thereby ensuring that each entry in MM 245 is assigned a unique ODC offset during the backup and all ODC data blocks are retained in ODC 246. Finally, the received block is stored in ODC 246 at step 480.
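Continuing the hypothetical structures from the MM 245 sketch above, the Fig. 4 flow, including offset reuse, could be modeled as follows; `free_offsets` is an assumed free list that the read path (see below) fills as blocks reach tape:

```python
odc: dict[int, bytes] = {}     # stand-in for the ODC 246 file, keyed by offset
free_offsets: list[int] = []   # offsets whose blocks have already gone to BSU 150
next_offset = 0

def handle_cow_block(key: tuple[int, int], data: bytes, in_backup_area) -> None:
    global next_offset
    if not in_backup_area(key):          # steps 430/440: ignore foreign blocks
        return
    if key in modified_map:              # step 460: a later write to the same block
        return
    # Reuse a drained slot if one exists, otherwise grow the cache.
    off = free_offsets.pop() if free_offsets else next_offset
    if off == next_offset:
        next_offset += 1
    modified_map[key] = MapEntry(key[0], key[1],
                                 BlockState.NOT_YET_READ, off)  # step 470
    odc[off] = data                      # step 480
```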
Fig. 5 illustrates the operation of SRU 242. As indicated above, from the viewpoint of BU 241, BU 241 is issuing read commands to storage 130 and receiving back the requested data which it in turn sends to BSU 150. In fact, SRU 242 is interposed between BU 241 and storage 130. At step 510, SRU 242 monitors for read commands from BU 241. If a read is not directed to storage 130, the read command is passed on to its target for execution at step 530. Read commands directed to storage 130 are passed to storage 130 at step 540; storage 130 will retrieve the requested data and return it to the backup appliance.
The commercial utility employed as BU 241 will generally control and select the specific quantity of data it will request from storage 130 with each read request. As discussed above, in a preferred embodiment, storage 130 uses a 512 byte data block as its standard data unit for manipulation and addressing. Typically, the amount of data requested by BU 241 with each read will be substantially larger than 512 bytes. For example, BU 241 might request 64K bytes with each read command. SRU 242 will forward the read request to storage 130 and, when storage 130 returns the requested data, will place the returned quantity of data in a buffer in RAM 144. At this point, some of the data blocks in the buffer may be blocks that were modified after the backup was initiated. To ensure the backup creates a true snapshot, any such modified data blocks in the buffer must be replaced with the corresponding original data blocks from ODC 246 before the requested data is supplied to BU 241.
Therefore, at step 560 the unique address in storage 130 of each block in the buffer is compared with the addresses stored in MM 245 to determine if original data for that address is available in ODC 246. At step 570, each block in the buffer that has an address that matches an address stored in MM 245 is discarded and replaced with the original data block from ODC 246. At step 580, the MM 245 state field of each original data block that was placed in the buffer is updated to reflect that the block has been read out of ODC 246 in preparation for being backed up to BSU 150. The change in block status in step 580 acts as an indication to ODCU 243 that the area occupied by that block in ODC 246 can be overwritten and used for storage of another block. At step 590, the contents of the buffer, which now consist only of original data, are forwarded to BU 241 for transfer to BSU 150.
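Using the same hypothetical structures, the reconciliation pass over one returned buffer (steps 560-590) reduces to a per-block map lookup. Appending the freed offset to `free_offsets` models the step 580 signal that the ODC 246 slot may be overwritten:

```python
def reconcile_buffer(blocks: list[tuple[tuple[int, int], bytes]]) -> list[bytes]:
    """blocks: (key, data) pairs as returned by storage 130 for one read."""
    out = []
    for key, data in blocks:
        entry = modified_map.get(key)               # step 560: consult MM 245
        if entry is not None and entry.state is BlockState.NOT_YET_READ:
            data = odc[entry.odc_offset]            # step 570: substitute original
            entry.state = BlockState.READ_BY_BU     # step 580: mark as read
            free_offsets.append(entry.odc_offset)   # slot may now be reused
        out.append(data)                            # step 590: goes to BU 241
    return out
```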
The preferred embodiment described above provides a means of producing a snapshot backup of database 131 using a separate backup appliance that does not require server 120 to participate in the backup process and requires only COW capability from storage 130. It will be understood that the functions of MM 245 and ODC 246 could be implemented to reside in storage 130 and the SRU 242 and ODCU 243 utilities could be implemented to run on SP 132 instead of processor 142. In this alternative implementation, the storage of COW data blocks and the reconciliation of COW data and data read in response to BU 241 read requests would occur entirely within storage 130. This implementation would reduce the processing demand on processor 142, reduce or remove the requirement for disk 143, and reduce traffic on bus 163, but would have the effects of increasing the workload of SP 132 and requiring storage 130 to accommodate storage of MM 245 and ODC 246.
The particular embodiment above is to be considered in all respects as illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than by the foregoing description.

Claims

We claim:
1. In a data processing system having a host computer connected to a disk peripheral, a backup computer connected to the host computer and the disk peripheral, and a tape peripheral connected to the backup computer, a method for creating a snapshot image of a desired portion of the data residing on the disk peripheral comprising the steps of:
(a) reading a block of the desired data from the disk peripheral;
(b) determining if a block of data previously read from the same disk peripheral address as the block read in step (a) is available in an original data cache;
(c) if so, transferring the previously read block of data to the tape peripheral;
(d) if not, transferring the block of data read in step (a) to the tape peripheral; and
(e) repeating steps (a) - (d) until the entire desired portion has been transferred to the tape peripheral.
2. In a data processing system having a host computer connected to a disk peripheral, a backup computer connected to the host computer and the disk peripheral and a tape peripheral connected to the backup computer, a method for creating a snapshot image of a desired portion of the data residing on the disk peripheral comprising the steps of: (a) reading a plurality of blocks of the desired data from the disk peripheral;
(b) placing the plurality of blocks in a buffer;
(c) for each block placed in the buffer in step (b), performing the steps of: determining if a block of data previously read from the same disk peripheral address is available in an original data cache; and if so, replacing the block of data currently in the buffer with the previously read block of data;
(d) transferring the contents of the buffer to the tape peripheral;
(e) repeating steps (a) - (d) until the desired portion has been transferred from the disk peripheral to the tape peripheral.
3. In a data processing system having a host computer connected to a disk peripheral, a backup computer connected to the host computer and the disk peripheral and a tape peripheral connected to the backup computer, a method for creating a snapshot image of a desired portion of the data residing on the disk peripheral comprising the steps of: (a) monitoring for a write command from the host computer to the disk peripheral; (b) if a write command is detected, performing the steps of:
(i) suspending the write command,
(ii) copying the data from the disk peripheral addresses to which the write command is directed,
(iii) storing the data copied at step (ii) in an original data cache, and
(iv) executing the write;
(c) reading at least one block of the desired data from the disk peripheral;
(d) for each block read in step (c), performing the steps of: determining if a block of data previously read from the same disk peripheral address is available in the original data cache; and if so, transferring the previously read block of data to the tape peripheral; if not, transferring the block of data read in step (c) to the tape peripheral; and (e) repeating steps (a) - (d) until the desired portion of data has been transferred from the disk peripheral to the tape peripheral.
4. In a data processing system having a host computer, a disk peripheral having copy-on-write capability and connected to the host computer, and a backup computer connected to the host computer and the disk peripheral, a copy-on-write method comprising the steps of:
(a) monitoring for a write command from the host computer to the disk peripheral;
(b) if a write command is detected, performing the steps of:
(i) suspending the write command,
(ii) copying the data from the addresses to which the write command is directed,
(iii) for each data block copied in step (ii), performing the steps of:
(1) comparing the address of the block with a list of addresses of blocks previously stored in an original data cache;
(2) if the address of the block is on the list, discarding the block;
(3) if the address of the block is not on the list, storing the block in the original data cache and adding the address of the block to the list; and
(iv) executing the write.
5. The method of claim 4 wherein step b(iii) (3) includes the additional step of storing a pointer to the location of the block in the backup computer.
6. The method of claim 4 wherein step b(iii) (3) includes the additional step of storing a state indicator for the block, said state indicator indicating whether the block has been read from the backup computer.
7. A data processing system comprising: a host computer; a disk peripheral connected to the host computer and having an internal processor; a tape peripheral; a backup computer, connected to the disk peripheral and the tape peripheral, for making a snapshot copy on the tape peripheral of at least a portion of the data on the disk peripheral while the host computer continues to have access to the data on the disk peripheral.
8. A backup system for use with a data processing system having a disk peripheral with copy-on-write capability and a host computer connected to the disk peripheral, said backup system comprising: a tape peripheral for storing a backup copy of at least a portion of the information on the disk peripheral; and a backup computer, connected to the disk peripheral and the tape peripheral, the backup computer having means for receiving copy-on-write data from the disk peripheral, means for storing the copy-on-write data in the backup computer, means for requesting backup data from the disk peripheral, means for receiving backup data from the disk peripheral in response to a request, means for transferring data to the tape peripheral, and means for selecting either the received backup data or the stored copy-on-write data to be transferred to the tape peripheral.
9. A backup computer system for use with a tape peripheral, a disk peripheral having copy-on-write capability and a host computer connected to the disk peripheral, the backup computer system comprising: a backup computer, connected to the disk peripheral and the tape peripheral, the backup computer having means for receiving copy-on-write data from the disk peripheral, means for storing the copy-on-write data in the backup computer, means for requesting backup data from the disk peripheral, means for receiving backup data from the disk peripheral in response to a request, means for transferring data to the tape peripheral, and means for selecting either the received backup data or the stored copy-on-write data to be transferred to the tape peripheral.
10. A data storage and backup system for use with a host computer, said system comprising: a disk peripheral connected to the host computer and having copy-on-write capability; a tape peripheral for storing a backup copy of at least a portion of the information on the disk peripheral; a backup computer, connected to the disk peripheral and the tape peripheral, the backup computer having means for receiving copy-on-write data from the disk peripheral, means for storing the copy-on-write data in the backup computer, means for requesting backup data from the disk peripheral, means for receiving backup data from the disk peripheral in response to a request, means for transferring data to the tape peripheral, and means for selecting either the received backup data or the stored copy-on-write data to be transferred to the tape peripheral.
11. A backup system for use with a tape peripheral, a disk peripheral with copy-on-write capability and a host computer connected to the disk peripheral, the backup system comprising: a processor for executing utility programs; data storage means; an original data cache in the data storage means; a data cache utility having means for receiving copy-on-write (COW) data from the disk peripheral and means for storing the COW data in the original data cache; a backup utility having means for issuing read commands to the disk peripheral, means for receiving data from the disk peripheral and means for sending data to the tape peripheral; and a read utility having means for receiving data from the disk peripheral in response to a read command from the backup utility, means for comparing the disk peripheral address of received data with disk peripheral addresses of COW data, means for providing data to the backup utility, and means for selecting the data to be provided to the backup utility from either the data received from the disk peripheral or the COW data stored in the original data cache.
12. The data processing system of claim 7 wherein the backup computer includes an original data cache; a data cache utility having means for receiving copy-on-write (COW) data from the disk peripheral and means for storing the COW data in the original data cache; a backup utility having means for issuing read commands to the disk peripheral, means for receiving data from the disk peripheral and means for sending data to the tape peripheral; and a read utility having means for receiving data from the disk peripheral in response to a read command from the backup utility; means for comparing the disk peripheral address of received data with disk peripheral addresses of COW data; means for providing data to the backup utility; and means for selecting the data to be provided to the backup utility from either the data received from the disk peripheral or the COW data stored in the original data cache.
13. The system of claim 11 or 12 wherein the backup computer further comprises: a map of the original data cache; means for storing in the map the disk peripheral address of the COW data; means for assigning and storing in the map a pointer to the COW data in the original data cache; and means for checking the map for stored addresses matching the addresses of data received from the disk peripheral.
14. The data processing system of claim 13, wherein, for each address requested by the backup utility, the means for selecting selects the received data if no corresponding address is found in the map and selects the original data cache data if a corresponding address is found in the map.
15. The apparatus of claim 13 wherein the data cache utility further includes means for checking the map prior to storing COW data in the original data cache and storing the data in the original data cache only if the map indicates that data from the same disk peripheral address is not already stored in the original data cache.
16. The apparatus of claim 14 wherein the data cache utility further includes means for storing a state field in the map for each COW entry, said state field indicating that the corresponding COW data in the original data cache either has been or has not been selected by the read utility selecting means.
17. The apparatus of claim 16 wherein the read utility further includes means for modifying the contents of the state field when the read utility selects the COW data associated with that state field from the original data cache.
18. The apparatus of claim 13 wherein the means for assigning the pointer includes means for reassigning a previously assigned pointer if the state field associated with the previously assigned pointer indicates that the COW data associated with that pointer has been selected by the read utility selecting means.
PCT/US1998/009887 1997-05-19 1998-05-14 Apparatus and method for backup of a disk storage system WO1998053400A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP55044398A JP3792258B2 (en) 1997-05-19 1998-05-14 Disk storage system backup apparatus and method
CA002286356A CA2286356A1 (en) 1997-05-19 1998-05-14 Apparatus and method for backup of a disk storage system
DE69831944T DE69831944T2 (en) 1997-05-19 1998-05-14 DEVICE AND METHOD FOR SECURING A PLATE STORAGE SYSTEM
AU75726/98A AU7572698A (en) 1997-05-19 1998-05-14 Apparatus and method for backup of a disk storage system
EP98923431A EP0983548B1 (en) 1997-05-19 1998-05-14 Apparatus and method for backup of a disk storage system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/858,231 US6081875A (en) 1997-05-19 1997-05-19 Apparatus and method for backup of a disk storage system
US08/858,231 1997-05-19

Publications (1)

Publication Number Publication Date
WO1998053400A1 1998-11-26

Family

ID=25327810

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/009887 WO1998053400A1 (en) 1997-05-19 1998-05-14 Apparatus and method for backup of a disk storage system

Country Status (7)

Country Link
US (1) US6081875A (en)
EP (1) EP0983548B1 (en)
JP (1) JP3792258B2 (en)
AU (1) AU7572698A (en)
CA (1) CA2286356A1 (en)
DE (1) DE69831944T2 (en)
WO (1) WO1998053400A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6182198B1 (en) * 1998-06-05 2001-01-30 International Business Machines Corporation Method and apparatus for providing a disc drive snapshot backup while allowing normal drive read, write, and buffering operations
US7165145B2 (en) 2003-07-02 2007-01-16 Falconstor Software, Inc. System and method to protect data stored in a storage system
JP2010244532A (en) * 2009-04-02 2010-10-28 Lsi Corp System to reduce drive overhead using mirrored cache volume in storage array

Families Citing this family (157)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6631477B1 (en) * 1998-03-13 2003-10-07 Emc Corporation Host system for mass storage business continuance volumes
US6353878B1 (en) 1998-08-13 2002-03-05 Emc Corporation Remote control of backup media in a secondary storage subsystem through access to a primary storage subsystem
US6269431B1 (en) 1998-08-13 2001-07-31 Emc Corporation Virtual storage and block level direct access of secondary storage for recovery of backup data
US6345346B1 (en) * 1999-02-26 2002-02-05 Voom Technologies Substantially instantaneous storage restoration for non-computer forensics applications
JP4282030B2 (en) * 1999-06-03 2009-06-17 株式会社日立製作所 Data duplex control method and duplex storage subsystem
JP3693226B2 (en) * 1999-06-30 2005-09-07 矢崎総業株式会社 Microcomputer backup device and automotive power window control device
US6594780B1 (en) 1999-10-19 2003-07-15 Inasoft, Inc. Operating system and data protection
US7337360B2 (en) * 1999-10-19 2008-02-26 Idocrase Investments Llc Stored memory recovery system
US6810396B1 (en) 2000-03-09 2004-10-26 Emc Corporation Managed access of a backup storage system coupled to a network
US6938039B1 (en) 2000-06-30 2005-08-30 Emc Corporation Concurrent file across at a target file server during migration of file systems between file servers using a network file system access protocol
US6457109B1 (en) * 2000-08-18 2002-09-24 Storage Technology Corporation Method and apparatus for copying data from one storage system to another storage system
US6701456B1 (en) 2000-08-29 2004-03-02 Voom Technologies, Inc. Computer system and method for maintaining an audit record for data restoration
US6640217B1 * 2000-09-19 2003-10-28 Bocada, Inc. Method for extracting and storing records of data backup activity from a plurality of backup devices
US6823336B1 (en) 2000-09-26 2004-11-23 Emc Corporation Data storage system and method for uninterrupted read-only access to a consistent dataset by one host processor concurrent with read-write access by another host processor
US6609126B1 (en) * 2000-11-15 2003-08-19 Appfluent Technology, Inc. System and method for routing database requests to a database and a cache
US6594744B1 (en) * 2000-12-11 2003-07-15 Lsi Logic Corporation Managing a snapshot volume or one or more checkpoint volumes with multiple point-in-time images in a single repository
EP1415425B1 (en) 2001-07-06 2019-06-26 CA, Inc. Systems and methods of information backup
US6640291B2 (en) * 2001-08-10 2003-10-28 Hitachi, Ltd. Apparatus and method for online data migration with remote copy
US6880101B2 (en) * 2001-10-12 2005-04-12 Dell Products L.P. System and method for providing automatic data restoration after a storage device failure
US6799189B2 (en) * 2001-11-15 2004-09-28 Bmc Software, Inc. System and method for creating a series of online snapshots for recovery purposes
US6886021B1 (en) 2001-11-27 2005-04-26 Unisys Corporation Method for tracking audit files spanning multiple tape volumes
US7296125B2 (en) * 2001-11-29 2007-11-13 Emc Corporation Preserving a snapshot of selected data of a mass storage system
US6751715B2 (en) * 2001-12-13 2004-06-15 Lsi Logic Corporation System and method for disabling and recreating a snapshot volume
US6948039B2 (en) * 2001-12-14 2005-09-20 Voom Technologies, Inc. Data backup and restoration using dynamic virtual storage
US6993539B2 (en) 2002-03-19 2006-01-31 Network Appliance, Inc. System and method for determining changes in two snapshots and for transmitting changes to destination snapshot
US7185031B2 (en) * 2002-03-25 2007-02-27 Quantum Corporation Creating a backup volume using a data profile of a host volume
US7185169B2 (en) * 2002-04-26 2007-02-27 Voom Technologies, Inc. Virtual physical drives
US7546364B2 (en) * 2002-05-16 2009-06-09 Emc Corporation Replication of remote copy data for internet protocol (IP) transmission
US6988165B2 (en) * 2002-05-20 2006-01-17 Pervasive Software, Inc. System and method for intelligent write management of disk pages in cache checkpoint operations
US7389313B1 (en) 2002-08-07 2008-06-17 Symantec Operating Corporation System and method for creating a snapshot copy of a database
US7284016B2 (en) * 2002-12-03 2007-10-16 Emc Corporation Client-server protocol for directory access of snapshot file systems in a storage system
JP3974538B2 (en) 2003-02-20 2007-09-12 株式会社日立製作所 Information processing system
JP4165747B2 (en) * 2003-03-20 2008-10-15 株式会社日立製作所 Storage system, control device, and control device program
US7457982B2 (en) * 2003-04-11 2008-11-25 Network Appliance, Inc. Writable virtual disk of read-only snapshot file objects
US7085909B2 (en) * 2003-04-29 2006-08-01 International Business Machines Corporation Method, system and computer program product for implementing copy-on-write of a file
US7111136B2 (en) * 2003-06-26 2006-09-19 Hitachi, Ltd. Method and apparatus for backup and recovery system using storage based journaling
US20050015416A1 (en) 2003-07-16 2005-01-20 Hitachi, Ltd. Method and apparatus for data recovery using storage based journaling
US20050022213A1 (en) 2003-07-25 2005-01-27 Hitachi, Ltd. Method and apparatus for synchronizing applications for data recovery using storage based journaling
US7398422B2 (en) * 2003-06-26 2008-07-08 Hitachi, Ltd. Method and apparatus for data recovery system using storage based journaling
JP4124348B2 (en) 2003-06-27 2008-07-23 株式会社日立製作所 Storage system
US8856927B1 (en) 2003-07-22 2014-10-07 Acronis International Gmbh System and method for using snapshots for rootkit detection
US8074035B1 (en) * 2003-07-22 2011-12-06 Acronis, Inc. System and method for using multivolume snapshots for online data backup
US7467266B2 (en) * 2003-08-05 2008-12-16 International Business Machines Corporation Snapshot management method apparatus and system
US7725760B2 (en) 2003-09-23 2010-05-25 Symantec Operating Corporation Data storage system
US7904428B2 (en) 2003-09-23 2011-03-08 Symantec Corporation Methods and apparatus for recording write requests directed to a data store
US7991748B2 (en) 2003-09-23 2011-08-02 Symantec Corporation Virtual data store creation and use
US7827362B2 (en) 2004-08-24 2010-11-02 Symantec Corporation Systems, apparatus, and methods for processing I/O requests
US7287133B2 (en) 2004-08-24 2007-10-23 Symantec Operating Corporation Systems and methods for providing a modification history for a location within a data store
US7577806B2 (en) 2003-09-23 2009-08-18 Symantec Operating Corporation Systems and methods for time dependent data storage and recovery
US7730222B2 2004-08-24 2010-06-01 Symantec Operating Corporation Processing storage-related I/O requests using binary tree data structures
US20050071380A1 (en) * 2003-09-29 2005-03-31 Micka William F. Apparatus and method to coordinate multiple data storage and retrieval systems
US7222117B1 (en) 2003-11-14 2007-05-22 Advent Software, Inc. Segmented global area database
US7409497B1 (en) 2003-12-02 2008-08-05 Network Appliance, Inc. System and method for efficiently guaranteeing data consistency to clients of a storage system cluster
US7478101B1 2003-12-23 2009-01-13 Network Appliance, Inc. System-independent data format in a mirrored storage system environment and method for using the same
JP2005346610A (en) 2004-06-07 2005-12-15 Hitachi Ltd Storage system and method for acquisition and use of snapshot
JP4421385B2 (en) 2004-06-09 2010-02-24 株式会社日立製作所 Computer system
JP2006011811A (en) * 2004-06-25 2006-01-12 Hitachi Ltd Storage control system and storage control method
US20060015696A1 (en) * 2004-07-15 2006-01-19 Lu Nguyen Integrated storage device
US7225371B2 (en) * 2004-08-03 2007-05-29 International Business Machines Corporation Method and apparatus for storing and retrieving multiple point-in-time consistent data sets
US20080140959A1 (en) * 2004-10-12 2008-06-12 Oqo, Inc. One-touch backup system
US7730277B1 (en) 2004-10-25 2010-06-01 Netapp, Inc. System and method for using pvbn placeholders in a flexible volume of a storage system
US9165003B1 (en) 2004-11-29 2015-10-20 Netapp, Inc. Technique for permitting multiple virtual file systems having the same identifier to be served by a single storage system
US20060123211A1 (en) * 2004-12-08 2006-06-08 International Business Machines Corporation Method for optimizing a snapshot operation on a file basis
US20060143412A1 (en) * 2004-12-28 2006-06-29 Philippe Armangau Snapshot copy facility maintaining read performance and write performance
US7757056B1 (en) 2005-03-16 2010-07-13 Netapp, Inc. System and method for efficiently calculating storage required to split a clone volume
US7617370B2 (en) * 2005-04-29 2009-11-10 Netapp, Inc. Data allocation within a storage system architecture
US7325111B1 (en) 2005-11-01 2008-01-29 Network Appliance, Inc. Method and system for single pass volume scanning for multiple destination mirroring
US8347373B2 (en) 2007-05-08 2013-01-01 Fortinet, Inc. Content filtering of remote file-system access protocols
JP2007199756A (en) * 2006-01-23 2007-08-09 Hitachi Ltd Computer system and data copying method
US8095751B2 (en) 2006-02-28 2012-01-10 International Business Machines Corporation Managing set of target storage volumes for snapshot and tape backups
CN100394393C (en) * 2006-03-10 2008-06-11 四川大学 Information system data consistency detection
CN100353331C (en) * 2006-03-10 2007-12-05 四川大学 Long-distance data fast restoring method of network information system
US7769723B2 (en) * 2006-04-28 2010-08-03 Netapp, Inc. System and method for providing continuous data protection
US8165221B2 (en) 2006-04-28 2012-04-24 Netapp, Inc. System and method for sampling based elimination of duplicate data
TWI316188B (en) * 2006-05-17 2009-10-21 Ind Tech Res Inst Mechanism and method to snapshot data
US9026495B1 (en) 2006-05-26 2015-05-05 Netapp, Inc. System and method for creating and accessing a host-accessible storage entity
US7921077B2 (en) * 2006-06-29 2011-04-05 Netapp, Inc. System and method for managing data deduplication of storage systems utilizing persistent consistency point images
US8412682B2 (en) * 2006-06-29 2013-04-02 Netapp, Inc. System and method for retrieving and using block fingerprints for data deduplication
US8010509B1 (en) 2006-06-30 2011-08-30 Netapp, Inc. System and method for verifying and correcting the consistency of mirrored data sets
US7987167B1 (en) 2006-08-04 2011-07-26 Netapp, Inc. Enabling a clustered namespace with redirection
US7747584B1 (en) 2006-08-22 2010-06-29 Netapp, Inc. System and method for enabling de-duplication in a storage system architecture
US7865741B1 (en) 2006-08-23 2011-01-04 Netapp, Inc. System and method for securely replicating a configuration database of a security appliance
US8116455B1 (en) 2006-09-29 2012-02-14 Netapp, Inc. System and method for securely initializing and booting a security appliance
US7739546B1 (en) 2006-10-20 2010-06-15 Netapp, Inc. System and method for storing and retrieving file system log information in a clustered computer system
US7685178B2 (en) * 2006-10-31 2010-03-23 Netapp, Inc. System and method for examining client generated content stored on a data container exported by a storage system
US8996487B1 (en) 2006-10-31 2015-03-31 Netapp, Inc. System and method for improving the relevance of search results using data container access patterns
US7720889B1 (en) 2006-10-31 2010-05-18 Netapp, Inc. System and method for nearly in-band search indexing
US8423731B1 (en) 2006-10-31 2013-04-16 Netapp, Inc. System and method for automatic scheduling and policy provisioning for information lifecycle management
JP2008134685A (en) * 2006-11-27 2008-06-12 Konica Minolta Business Technologies Inc Nonvolatile memory system and nonvolatile memory control method
US7613947B1 (en) 2006-11-30 2009-11-03 Netapp, Inc. System and method for storage takeover
US7711683B1 (en) 2006-11-30 2010-05-04 Netapp, Inc. Method and system for maintaining disk location via homeness
US8301673B2 (en) * 2006-12-29 2012-10-30 Netapp, Inc. System and method for performing distributed consistency verification of a clustered file system
US7853750B2 (en) * 2007-01-30 2010-12-14 Netapp, Inc. Method and an apparatus to store data patterns
US7631158B2 (en) * 2007-01-31 2009-12-08 Inventec Corporation Disk snapshot method using a copy-on-write table in a user space
US8868495B2 (en) * 2007-02-21 2014-10-21 Netapp, Inc. System and method for indexing user data on storage systems
US7870356B1 (en) 2007-02-22 2011-01-11 Emc Corporation Creation of snapshot copies using a sparse file for keeping a record of changed blocks
US7809908B2 (en) * 2007-02-23 2010-10-05 Inventec Corporation Disk snapshot acquisition method
US8219821B2 (en) 2007-03-27 2012-07-10 Netapp, Inc. System and method for signature based data container recognition
US7653612B1 (en) 2007-03-28 2010-01-26 Emc Corporation Data protection services offload using shallow files
US8510524B1 (en) 2007-03-29 2013-08-13 Netapp, Inc. File system capable of generating snapshots and providing fast sequential read access
US8533410B1 (en) 2007-03-29 2013-09-10 Netapp, Inc. Maintaining snapshot and active file system metadata in an on-disk structure of a file system
US7849057B1 (en) 2007-03-30 2010-12-07 Netapp, Inc. Identifying snapshot membership for blocks based on snapid
US7827350B1 (en) 2007-04-27 2010-11-02 Netapp, Inc. Method and system for promoting a snapshot in a distributed file system
US7882304B2 (en) * 2007-04-27 2011-02-01 Netapp, Inc. System and method for efficient updates of sequential block storage
US8219749B2 (en) * 2007-04-27 2012-07-10 Netapp, Inc. System and method for efficient updates of sequential block storage
US8762345B2 (en) * 2007-05-31 2014-06-24 Netapp, Inc. System and method for accelerating anchor point detection
US8301791B2 (en) * 2007-07-26 2012-10-30 Netapp, Inc. System and method for non-disruptive check of a mirror
US7941406B2 (en) * 2007-08-20 2011-05-10 Novell, Inc. Techniques for snapshotting
US8793226B1 (en) 2007-08-28 2014-07-29 Netapp, Inc. System and method for estimating duplicate data
JP5028196B2 (en) * 2007-09-12 2012-09-19 株式会社リコー Backup / restore device, backup / restore system, and backup / restore method
US7996636B1 (en) 2007-11-06 2011-08-09 Netapp, Inc. Uniquely identifying block context signatures in a storage volume hierarchy
US8380674B1 (en) 2008-01-09 2013-02-19 Netapp, Inc. System and method for migrating lun data between data containers
US8725986B1 (en) 2008-04-18 2014-05-13 Netapp, Inc. System and method for volume block number to disk block number mapping
US8219564B1 (en) 2008-04-29 2012-07-10 Netapp, Inc. Two-dimensional indexes for quick multiple attribute search in a catalog system
US8250043B2 (en) * 2008-08-19 2012-08-21 Netapp, Inc. System and method for compression of partially ordered data sets
US20100228906A1 (en) * 2009-03-06 2010-09-09 Arunprasad Ramiya Mothilal Managing Data in a Non-Volatile Memory System
US8683088B2 (en) * 2009-08-06 2014-03-25 Imation Corp. Peripheral device data integrity
US8745365B2 * 2009-08-06 2014-06-03 Imation Corp. Method and system for secure booting a computer by booting a first operating system from a secure peripheral device and launching a second operating system stored in a secure area in the secure peripheral device on the first operating system
US8458217B1 (en) 2009-08-24 2013-06-04 Advent Software, Inc. Instantly built information space (IBIS)
JP5464003B2 (en) * 2010-03-26 2014-04-09 富士通株式会社 Database management apparatus and database management program
US8639658B1 (en) * 2010-04-21 2014-01-28 Symantec Corporation Cache management for file systems supporting shared blocks
US8566640B2 (en) 2010-07-19 2013-10-22 Veeam Software Ag Systems, methods, and computer program products for instant recovery of image level backups
US8849750B2 (en) 2010-10-13 2014-09-30 International Business Machines Corporation Synchronization for initialization of a remote mirror storage facility
US8402004B2 (en) 2010-11-16 2013-03-19 Actifio, Inc. System and method for creating deduplicated copies of data by tracking temporal relationships among copies and by ingesting difference data
US8843489B2 (en) 2010-11-16 2014-09-23 Actifio, Inc. System and method for managing deduplicated copies of data using temporal relationships among copies
US9858155B2 (en) 2010-11-16 2018-01-02 Actifio, Inc. System and method for managing data with service level agreements that may specify non-uniform copying of data
US8904126B2 (en) 2010-11-16 2014-12-02 Actifio, Inc. System and method for performing a plurality of prescribed data management functions in a manner that reduces redundant access operations to primary storage
US8417674B2 (en) 2010-11-16 2013-04-09 Actifio, Inc. System and method for creating deduplicated copies of data by sending difference data between near-neighbor temporal states
US9244967B2 (en) 2011-08-01 2016-01-26 Actifio, Inc. Incremental copy performance between data stores
US8769350B1 (en) 2011-09-20 2014-07-01 Advent Software, Inc. Multi-writer in-memory non-copying database (MIND) system and method
US8332349B1 (en) 2012-01-06 2012-12-11 Advent Software, Inc. Asynchronous acid event-driven data processing using audit trail tools for transaction systems
US9519549B2 (en) * 2012-01-11 2016-12-13 International Business Machines Corporation Data storage backup with lessened cache pollution
US9495435B2 (en) 2012-06-18 2016-11-15 Actifio, Inc. System and method for intelligent database backup
US9569310B2 (en) * 2013-02-27 2017-02-14 Netapp, Inc. System and method for a scalable crash-consistent snapshot operation
CA2912394A1 (en) 2013-05-14 2014-11-20 Actifio, Inc. Efficient data replication and garbage collection predictions
CN103649901A 2013-07-26 2014-03-19 华为技术有限公司 Data transmission method, data receiving method and storage equipment
US8886671B1 (en) 2013-08-14 2014-11-11 Advent Software, Inc. Multi-tenant in-memory database (MUTED) system and method
US9904603B2 (en) 2013-11-18 2018-02-27 Actifio, Inc. Successive data fingerprinting for copy accuracy assurance
US9009355B1 (en) * 2013-12-17 2015-04-14 Emc Corporation Processing requests to a data store during back up
US9720778B2 (en) 2014-02-14 2017-08-01 Actifio, Inc. Local area network free data movement
US9792187B2 (en) 2014-05-06 2017-10-17 Actifio, Inc. Facilitating test failover using a thin provisioned virtual machine created from a snapshot
WO2015195834A1 (en) 2014-06-17 2015-12-23 Rangasamy Govind Resiliency director
US10042710B2 (en) 2014-09-16 2018-08-07 Actifio, Inc. System and method for multi-hop data backup
US10379963B2 (en) 2014-09-16 2019-08-13 Actifio, Inc. Methods and apparatus for managing a large-scale environment of copy data management appliances
US10445187B2 (en) 2014-12-12 2019-10-15 Actifio, Inc. Searching and indexing of backup data sets
US10055300B2 (en) 2015-01-12 2018-08-21 Actifio, Inc. Disk group based backup
US10282201B2 2015-04-30 2019-05-07 Actifio, Inc. Data provisioning techniques
US10691659B2 (en) 2015-07-01 2020-06-23 Actifio, Inc. Integrating copy data tokens with source code repositories
US10613938B2 (en) 2015-07-01 2020-04-07 Actifio, Inc. Data virtualization using copy data tokens
US10445298B2 (en) 2016-05-18 2019-10-15 Actifio, Inc. Vault to object store
US10476955B2 (en) 2016-06-02 2019-11-12 Actifio, Inc. Streaming and sequential data replication
US11334443B1 (en) * 2017-01-12 2022-05-17 Acronis International Gmbh Trusted data restoration and authentication
US10855554B2 (en) 2017-04-28 2020-12-01 Actifio, Inc. Systems and methods for determining service level agreement compliance
US11403178B2 (en) 2017-09-29 2022-08-02 Google Llc Incremental vault to object store
US11176001B2 (en) 2018-06-08 2021-11-16 Google Llc Automated backup and restore of a disk group
US10740187B1 (en) 2019-01-31 2020-08-11 EMC IP Holding Company LLC Systems and methods of managing and creating snapshots in a cache-based storage system
CN111324620A (en) * 2020-02-18 2020-06-23 中国联合网络通信集团有限公司 Data processing method, device and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2071346A1 (en) * 1991-10-18 1993-04-19 Claus William Mikkelsen Method and means for time zero backup copy of data
US5241668A (en) * 1992-04-20 1993-08-31 International Business Machines Corporation Method and system for automated termination and resumption in a time zero backup copy process
US5379412A (en) * 1992-04-20 1995-01-03 International Business Machines Corporation Method and system for dynamic allocation of buffer storage space during backup copying
US5263154A (en) * 1992-04-20 1993-11-16 International Business Machines Corporation Method and system for incremental time zero backup copying of data
US5241669A (en) * 1992-04-20 1993-08-31 International Business Machines Corporation Method and system for sidefile status polling in a time zero backup copy process
US5241670A (en) * 1992-04-20 1993-08-31 International Business Machines Corporation Method and system for automated backup copy ordering in a time zero backup copy session
US5487160A (en) * 1992-12-04 1996-01-23 At&T Global Information Solutions Company Concurrent image backup for disk storage system
US5535381A (en) * 1993-07-22 1996-07-09 Data General Corporation Apparatus and method for copying and restoring disk files
US5604862A (en) * 1995-03-14 1997-02-18 Network Integrity, Inc. Continuously-snapshotted protection of computer files
US5852715A (en) * 1996-03-19 1998-12-22 Emc Corporation System for currently updating database by one host and reading the database by different host for the purpose of implementing decision support functions

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0332210A2 (en) * 1988-03-11 1989-09-13 Hitachi, Ltd. Backup control method and system in data processing system
EP0566967A2 (en) * 1992-04-20 1993-10-27 International Business Machines Corporation Method and system for time zero backup session security

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6182198B1 (en) * 1998-06-05 2001-01-30 International Business Machines Corporation Method and apparatus for providing a disc drive snapshot backup while allowing normal drive read, write, and buffering operations
US7165145B2 (en) 2003-07-02 2007-01-16 Falconstor Software, Inc. System and method to protect data stored in a storage system
US7418547B2 (en) 2003-07-02 2008-08-26 Falconstor, Inc. System and method to protect data stored in a storage system
US7467259B2 (en) 2003-07-02 2008-12-16 Falcon Stor, Inc. System and method to protect data stored in a storage system
US8041892B2 (en) 2003-07-02 2011-10-18 Falconstor, Inc. System and method to protect data stored in a storage system
JP2010244532A (en) * 2009-04-02 2010-10-28 Lsi Corp System to reduce drive overhead using mirrored cache volume in storage array

Also Published As

Publication number Publication date
AU7572698A (en) 1998-12-11
EP0983548B1 (en) 2005-10-19
JP2001520779A (en) 2001-10-30
DE69831944D1 (en) 2005-11-24
JP3792258B2 (en) 2006-07-05
DE69831944T2 (en) 2006-07-06
EP0983548A1 (en) 2000-03-08
CA2286356A1 (en) 1998-11-26
US6081875A (en) 2000-06-27

Similar Documents

Publication Publication Date Title
US6081875A (en) Apparatus and method for backup of a disk storage system
EP1160654B1 (en) Method for on-line, real time, data migration
US5089958A (en) Fault tolerant computer backup system
US5881311A (en) Data storage subsystem with block based data management
US5720026A (en) Incremental backup system
US7337286B1 (en) Storage control system for restoring a remote data copy
US6397229B1 (en) Storage-controller-managed outboard incremental backup/restore of data
US5568628A (en) Storage control method and apparatus for highly reliable storage controller with multiple cache memories
US6353878B1 (en) Remote control of backup media in a secondary storage subsystem through access to a primary storage subsystem
US5619690A (en) Computer system including a computer which requests an access to a logical address in a secondary storage system with specification of a local address in the secondary storage system
US6269431B1 (en) Virtual storage and block level direct access of secondary storage for recovery of backup data
US8074035B1 (en) System and method for using multivolume snapshots for online data backup
US7047380B2 (en) System and method for using file system snapshots for online data backup
US5497483A (en) Method and system for track transfer control during concurrent copy operations in a data processing storage subsystem
EP1148416B1 (en) Computer system and snapshot data management method
US6366987B1 (en) Computer data storage physical backup and logical restore
EP1942414B1 (en) Snapshot system and method
US8423825B2 (en) Data restoring method and an apparatus using journal data and an identification information
US7266572B2 (en) Restoring virtual devices
US6505273B2 (en) Disk control device and method processing variable-block and fixed-block accesses from host devices
EP0566964B1 (en) Method and system for sidefile status polling in a time zero backup copy process
US5941994A (en) Technique for sharing hot spare drives among multiple subsystems
US7185048B2 (en) Backup processing method
US5909700A (en) Back-up data storage facility incorporating filtering to select data items to be backed up
US20030229651A1 (en) Snapshot acquisition method, storage system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA IL JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref document number: 2286356

Country of ref document: CA

Ref country code: CA

Ref document number: 2286356

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1998923431

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1998923431

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 1998923431

Country of ref document: EP