WO1993008529A1 - Method and means for time zero backup copying of data - Google Patents


Info

Publication number
WO1993008529A1
Authority
WO
WIPO (PCT)
Application number
PCT/EP1992/002127
Other languages
French (fr)
Inventor
Claus William Mikkelsen
Original Assignee
International Business Machines Corporation
Ibm Deutschland Gmbh
Application filed by International Business Machines Corporation, Ibm Deutschland Gmbh filed Critical International Business Machines Corporation
Priority to EP92919444A priority Critical patent/EP0608255A1/en
Publication of WO1993008529A1 publication Critical patent/WO1993008529A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1458Management of the backup or restore process
    • G06F11/1466Management of the backup or restore process to make the backup process non-disruptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation

Definitions

  • Once DFDSS receives the interrupt, it begins emptying the tracks that had accumulated in the separate cache partition. Any tracks read that are not yet ready to be placed onto the output media are considered "out of sequence" and are stored temporarily in a host-memory sidefile.
  • Assume a process invoking the t0 process desires to back up the datasets stored on 100 predetermined DASD tracks. If none of those tracks are changed during the copy process, DFDSS could simply read tracks 1-100 and place them on the output media. In order to permit concurrent updating of the external store while backup copying, it must also be assumed that data stored on one or more of the predetermined tracks has a reasonable expectation of being altered.
  • Tracks read directly from DASD. These are tracks that have not been changed (by an application) after the t0 copy process began.
  • Tracks read from the cache partition. These are the original images of tracks that have been changed after the t0 process began.
  • FIG. 5 covers initialization and SCU backup processing while figure 6 depicts CPU OS processing of sidefiles (asynch processing) and CPU OS management of copy session data transfers (synchronous processing) from the SCU to the output medium.
  • These presentations are supplemented in this section by more detailed flow of control listings for purposes of completeness. These listings are a many to one mapping to the flow diagrams depicted in figures 5 and 6.
  • the initialization process starts with the CPU operating system (OS) receiving a request to backup or copy some amount of data.
  • This phase includes two processes being performed simultaneously, one by the SCU and one by the CPU Operating System.
  • SCU: 1. FOR EVERY UPDATE THAT OCCURS, A CHECK IS MADE TO SEE IF THAT UPDATE IS FOR A VOLUME THAT CURRENTLY HAS A T0 COPY SESSION.
  • the CPU OS flow consists of an asynchronous process and a synchronous process.

Abstract

Backup copying of designated datasets representing point in time consistency may be performed in a CPU on a DASD storage subsystem concurrent with CPU application execution by suspending execution only long enough to form a logical to physical address concordance and thereafter physically backing up the datasets on the storage subsystem on a scheduled or opportunistic basis. Application initiated updates to the uncopied designated datasets are first buffered, sidefiles made of the affected datasets, the updates written through to the storage subsystem, and the sidefiles written to storage in backup copy order as controlled by the concordance.

Description

DESCRIPTION
METHOD AND MEANS FOR TIME ZERO BACKUP COPYING OF DATA
Field of the Invention
This invention relates to maintaining continued availability of datasets in external storage to accessing computer systems (CPUs). More particularly, it relates to backup copying of records in external storage concurrent with a dramatically shortened suspension of CPU application execution occasioned by said copying.
Description of Related Art
A data processing system must be prepared to recover, not only from corruptions of stored data due to noise bursts, software bugs, media defects, and write path errors, but from global events such as CPU power failure. The most common technique to ensure continued availability of data is to make one or more copies of CPU datasets and put them in a safe place. This "backup" process occurs within contexts of storage systems of increasing function.
Applications have executed on CPU's in either a batch (streamed) or interactive (transactional) mode. In batch mode, usually one application at a time executes without interruption. Interactive mode is characterized by interrupt driven multiplicity of applications or transactions.
Backup policies are policies of scheduling. They have a space and a time dimension exemplified by a range of datasets and by frequency of occurrence. A FULL backup imports copying the entire range whether updated or not. An INCREMENTAL backup copies only that portion of the dataset that has changed since the last backup (either full or incremental). The backup copy represents a consistent view of the data as of the time the copy or snap-shot was made.
The higher the backup frequency, the closer the backup copy mirrors the current copy of the data. Considering the large volumes of data, backing up is not a trivial maintenance operation. Thus, the opportunity cost of backing up can be high on a large multiprocessing, multiprogramming facility relative to other processing.
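As an illustration of the policy distinction above, the following sketch (invented for this description, not part of the patent) selects which datasets a backup run would copy under a FULL or an INCREMENTAL policy, given per-dataset change information:

```python
# Illustrative sketch only: all names are invented for this example.

def select_for_backup(datasets, changed_since_last, policy):
    """Return the names of datasets a backup run would copy.

    datasets           -- all dataset names in the backup range
    changed_since_last -- set of names updated since the last backup
    policy             -- "full" copies the entire range whether updated
                          or not; "incremental" copies only what changed
                          since the last (full or incremental) backup
    """
    if policy == "full":
        return list(datasets)
    if policy == "incremental":
        return [d for d in datasets if d in changed_since_last]
    raise ValueError(f"unknown policy: {policy}")
```

Under the incremental policy only the changed portion is copied; the full policy ignores the change flags entirely.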
The Backup Window and Effect Upon Batch and Transactional Processing
When a CPU backs up data in a streamed or batch mode system, every process, task, or application is suspended. By this it is meant that processes supporting streamed or batch mode operations remain suspended for the duration of the copying. The coined term for this event is "backup window". In contrast to batch mode, log based or transaction management applications are processed in the interactive mode. They practically eliminate the "backup window" by concurrently updating an on-line dataset and logging the change. However, the latter is a form of backup copying whose consistency is "fuzzy". That is, it is not a snapshot of the state of a dataset/database at a single point in time. Rather, a log is an event file requiring further processing against said database. The co-pending Wang et al application USSN 07/385,647, filed July 25, 1989, entitled "A Computer Based Method For Dataset Copying Using an Incremental Backup Policy", (IBM Ref. SA9-89-043), illustrates backup in a batch mode system using a modified incremental policy. A modified incremental policy copies only new data or data updates since the last backup. Significantly, applications are suspended during the copying.
As mentioned above, to establish a prior point of consistency in a log based system, it is necessary to "repeat history" by replaying the log from the last checkpoint over the datasets or database of interest. The distinction between batch mode and log based backup is that the backup copy is consistent and speaks as of the time of its last recordation, whereas the log and database require further processing in the event of fault in order to exhibit point in time consistency.
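The "repeat history" step can be reduced to a minimal sketch: replaying logged updates recorded after the last checkpoint over a copy of the database restores a later point of consistency. All names here are illustrative, not drawn from any particular logging product.

```python
# Minimal sketch of log replay from a checkpoint; illustrative names only.

def replay_log(database, log, checkpoint_seq):
    """Apply every logged update after checkpoint_seq, in log order.

    database       -- dict standing in for the database state at the
                      checkpoint
    log            -- list of (sequence_number, key, value) records
    checkpoint_seq -- sequence number already reflected in database
    """
    for seq, key, value in log:
        if seq > checkpoint_seq:
            database[key] = value
    return database
```

Because the log must be reprocessed against the database, reaching point-in-time consistency costs replay time, unlike a batch-mode backup copy that is consistent as of its recordation.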
Gawlick et al, US Pat. 4,507,751, "Method and Apparatus for Logging Journal Data Using a Write Ahead Dataset", issued 3/26/1985, exemplifies a transaction management system where all transactions are recorded on a log on a write-ahead-dataset basis. In this patent, a unit of work is first recorded on the backup medium (log) and then written to its external storage address.
Sidefile Generation in Performing DASD Media Maintenance
The copending application, Anglin et al, "Method and Apparatus for Executing Critical Disk Access Commands", USSN 07/524,206, filed May 16, 1990, (IBM Ref. SA9-90-012), teaches performing media maintenance on selective portions of a tracked cyclically operable magnetic media concurrent with active access to other portions of the DASD media. Anglin's method requires the phased movement of customer data from a target track to an alternate track, diversion of all concurrent access requests to the alternate track or tracks, and completion of maintenance and copyback from the alternate to the target track.
Requests and interrupts occurring prior to executing track to track customer data movement result in the process restarting. Otherwise, requests and interrupts occurring during execution of the data movement view a DEVICE BUSY state. This causes a re-queuing of the requests etc.
Summary of the Invention
It is an object of this invention to devise a method and means for consistent backup copying of records to external storage, and that such copying be concurrent with a drastically shortened suspension of CPU application execution occasioned by said copying.
It is a related object to devise a backup copying method and means susceptible of supporting full, incremental, or mixed backup scheduling policies.
The above objects are satisfied by a method and means that rely upon mapping data to be copied onto the backup copy medium atomically and using sidefiles for buffering any data subset affected by a concurrent update. This allows updates to be concurrently written through to external storage while preserving both the consistency and copyback order. The method of the invention is implemented by backup copying designated datasets in a uniquely identified session. Each session includes session registration and initialization and concurrent copying of the state of the designated datasets as of a predetermined time (t0) while writing through all updates after t0 to the external store. The method includes the steps of (1) writing sidefiles of the affected uncopied portion of the dataset, (2) updating the original data in place on said external store, and (3) copying the sidefiles to the medium in backup copy order.
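The three steps can be sketched as follows. This is a minimal illustration under assumed data structures (a dict standing in for the external store, a set for the uncopied tracks), not the patented implementation itself:

```python
# Illustrative sketch of the three-step t0 copy method; names are invented.

class T0Session:
    def __init__(self, tracks):
        self.disk = dict(tracks)     # track number -> contents (external store)
        self.uncopied = set(tracks)  # tracks not yet read by the backup task
        self.sidefile = {}           # t0 images of tracks updated before copy

    def update(self, trk, data):
        # Step 1: sidefile the t0 image of an affected uncopied track...
        if trk in self.uncopied:
            self.sidefile[trk] = self.disk[trk]
        # Step 2: ...then update the original data in place.
        self.disk[trk] = data

    def backup(self):
        # Step 3: emit tracks in backup copy order, preferring the
        # sidefiled t0 image over the (possibly updated) disk contents.
        out = [(t, self.sidefile.get(t, self.disk[t]))
               for t in sorted(self.uncopied)]
        self.uncopied.clear()
        return out
```

Note how an update written through to the store after t0 never reaches the backup copy: the sidefiled image, frozen at t0, is emitted instead.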
Advantageously, the integrity of the copied dataset is maintained while the period of process suspension is nearly eliminated. Also unlike the aforementioned Anglin reference, the method and means of this invention are directed to a new use of sidefile generation. That is, the difference resides in generating sidefiles of the uncopied portion of a dataset, where the sidefiles facilitate both backing up datasets in ordinary copy order and overlapping backup with updating.
Brief Description of the Drawing
Fig. 1 exhibits a typical multi-processing multi-programming environment according to the prior art where executing processes and applications randomly or sequentially access data from external storage.
Fig. 2 shows a timeline depiction of the backup window among batch or streaming processes according to the prior art. Fig. 3 depicts the near elimination of the backup window as a consequence of the method and means of the invention.
Fig. 4 sets forth a conceptual flow of the t0 backup copy method of the invention.
Figs. 5 and 6 represent the control flow at the external storage control and the CPU operating system levels respectively.
Description of the Preferred Embodiment
Illustrative CPU Environment for Executing the Method of the Invention
The invention can be conveniently practiced in a configuration in which each CPU in a system may be of an IBM/360 or 370 architected CPU type having as an example an IBM MVS operating system. An IBM/360 architected CPU is fully described in Amdahl et al, USP 3,400,371, "Data Processing System", issued on September 3, 1968. A configuration involving CPU's sharing access to external storage is set forth in Luiz et al, USP 4,207,609, "Path Independent Device Reservation and Reconnection in a Multi-CPU and Shared Device Access System", issued June 10, 1980.
An MVS operating system is also described in IBM publication GC28-1150, "MVS/Extended Architecture System Programming Library: System Macros and Facilities", Volume 1. Details of standard MVS or other operating system services such as local lock management, sub-system invocation by interrupt or monitor, and the posting and waiting of tasks is omitted. These OS services are believed well appreciated by those skilled in the art.
Path to Data, Batch and Interactive Modes, and Backup Copying
Referring now to figure 1, there is depicted a multiprocessing, multiprogramming system according to the prior art. Such systems include a plurality of processors (1,3) accessing external storage (21,23,25,27,29) over redundant channel demand/response interfaces (5,7,9). As described in Luiz et al, a CPU process establishes a path to externally stored data in an IBM System 370 and the like through an MVS or other operating system by invoking a START I/O, transferring control to a channel subsystem which reserves a path to the data over which transfers are made. Typically, applications have data dependences and may briefly suspend operations until a fetch or update is completed. During the transfer, the path is locked until the transfer is completed.
Referring now to figure 2, there is shown a timeline depiction of the backup window among batch or streaming processes according to the prior art. That is, at a time just prior to backup, applications are suspended or shut down. The suspension persists until the backup process is completed. Backup termination signifies completion and commitment. By completion it is meant that all the data that was to have been copied was in fact read from the source. By commitment it is meant that all the data to be copied was in fact written to the output media.
Separating Logical Completion from Physical Completion
Referring now to figure 3, there is depicted the near elimination of the backup window as a consequence of the method and means of the invention. Once the backup method of the invention (t0 copy) process starts, the data (as far as the copy is concerned) is "frozen" at that point in time. At that point in time, the copy is said to be "Logically Complete". The committed state, or "Physically Complete" state, will not occur until later.
At the "Logically Complete" point in time, the data is completely usable again by the applications. The time from when the t0 backup is issued until the data is available again is in the low sub-second range. In other words, the total application data outage (backup window) can be measured in milliseconds.
Abnormal Termination
If the t0 backup process abnormally terminates between the point of logical completion and the point of physical completion, then the backup copy is useless and the process needs to be restarted. In this respect the method and means of the invention is vulnerable in a manner similar to the prior art. That is, all backup must be rerun. One limitation is that the time criticality of the snapshot is lost.
Conceptual Aspects
Referring now to figures 4 and 5, there is set forth a conceptual flow of the method of the invention. It should be noted that each backup session is assigned a unique session identification (ID) and comprises an initialization and a backup processing component. While multiple backup sessions may be run concurrently, each session ID, and hence each "snapshot", is unique.
Each CPU includes an operating system having a storage manager component. Typically, an IBM System 370 type CPU running under the MVS operating system would include a storage manager of the data facilities data set services (DFDSS) type as described in Ferro et al, U.S.Pat. 4,855,907, issued Aug. 8, 1989, "Method for Moving VSAM Base Clusters While Maintaining Alternate Indices into the Cluster". DFDSS is also described in the IBM publication GC26-4388, "Data Facility Data Set Services: User's Guide", dated <mm/dd/yyyy>.
Data is logically organized into records and datasets. The real address of the data in external storage is in terms of DASD volumes, tracks, and cylinders. The virtual address of the same is couched in terms of base addresses + offsets and/or extents.
A record may be of the count-key-data format. It may occupy one or more units of real storage. A dataset as a logical collection of multiple records may be stored on contiguous units of real storage or may be dispersed. It follows that if backup proceeds on the dataset level then it is necessary to perform multiple sorts to form inverted indices into real storage.
For purposes of this invention, backup processing is managed at two levels, namely, at the CPU OS resource manager level (fig. 1 - 1, 3) and at the storage control unit level (fig. 1 - 21, 23).
Initialization
Referring again to figures 4 and 5, the initialization process comprises three broad steps responsive to a resource manager (e.g., DFDSS) receiving a request to copy or backup particular data. These steps include sorting datasets, building one or more bit maps, and signalling logical completion to an invoking process at the CPU. The listed or identified datasets are sorted according to the access path elements down to DASD track granularity. Next, bit maps are constructed which correlate the data set and the access path insofar as any one of them is included or excluded from a given copy session. Lastly, the resource manager signals logical completion meaning that updates will be processed against the dataset after only a short delay.
More particularly, the resource manager for storage (DFDSS) receives a request to copy or back up data. Normally, this request is in the form of a list of data sets or a filtered list of data sets. DFDSS maps the request into a list of physical extents by DASD storage volume and by storage control unit (SCU). Next, DFDSS registers the request with each participating SCU. At this point, the session ID is determined and the session is established.
It should be appreciated that DFDSS initializes the session with each SCU by passing all the extents being copied for each volume for each SCU. Each SCU will then build a bitmap for each volume participating in the session. This bitmap will indicate which tracks are part of the t0 copy session. Control is returned to DFDSS. This is the "Logically Complete" point at which the data is again available for use. DFDSS notifies the operating system component such as a scheduler in system managed storage accordingly.
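The extent-to-bitmap initialization just described might be sketched as follows, with a Python set standing in for each per-volume bitmap; the function name and data shapes are assumptions for the example, not taken from DFDSS:

```python
# Illustrative sketch of per-volume session bitmap construction at the SCU.

def build_session_bitmaps(extents_by_volume):
    """extents_by_volume: {volume: [(first_track, last_track), ...]}

    Returns {volume: protected_track_set}, where each set stands in for
    the bitmap marking tracks that belong to the t0 copy session.
    """
    bitmaps = {}
    for vol, extents in extents_by_volume.items():
        tracks = set()
        for first, last in extents:
            tracks.update(range(first, last + 1))  # extents are inclusive
        bitmaps[vol] = tracks
    return bitmaps
```

Once these bitmaps exist, control can return to the resource manager and the session is "Logically Complete"; every later update is checked against them.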
Backup Processing
Following initialization, DFDSS begins reading the tracks requested. While the t0 copy session is active, each SCU monitors all updates. If an update is received, the SCU executes a predetermined algorithm which takes the update into account.
If the update is for a volume NOT in the t0 session, then the update completes normally. On the other hand, if the update is on a volume that is part of the session, then the bitmap is checked to see if that track is protected. If the bit is off (assume this denotes a binary 0), the track is not currently in the copy session and the update completes normally. Significantly, if the track is protected (bit is on), the track is part of the copy session and has not yet been read by DFDSS. In this case, the SCU:
(1) Holds the update.
(2) Stages the track from the device into a separate cache partition (this track contains the data as it existed at the point in time the t0 backup process started).
(3) Allows the update to continue.
(4) If any tracks are contained in the separate cache partition, DFDSS promptly reads those tracks to minimize the effect on normal cache operations.

Referring again to figure 4, the steps of the method are depicted. In this figure, the updates to tracks 4 and 7 cause the unchanged tracks to be staged into the separate cache partition prior to the update completing. DFDSS subsequently reads the tracks from the separate cache partition. Tracks read by DFDSS that are not yet ready to be merged onto the output media are temporarily stored in a host sidefile.
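Steps (1) through (4) above amount to a copy-on-write rule. A minimal Python sketch follows; the function name `handle_update` and the dict representations of the device, bitmap, and cache partition are illustrative stand-ins, not the patent's structures.

```python
def handle_update(track, data, bitmap, device, cache_partition):
    """Copy-on-write sketch of the SCU update path (illustrative only):
    stage a protected track's original image before letting the update
    through, then clear its bitmap bit so later updates pass untouched."""
    if bitmap.get(track, 0) == 1:
        # Track is still protected: the update is held while the unchanged
        # image is staged into the separate cache partition.
        cache_partition[track] = device[track]
        bitmap[track] = 0          # future updates to this track pass through
    device[track] = data           # the (held) update now completes


device = {4: "old4", 5: "old5", 7: "old7"}
bitmap = {4: 1, 7: 1}              # tracks 4 and 7 are in the t0 session
cache = {}
handle_update(4, "new4", bitmap, device, cache)   # protected: staged first
handle_update(5, "new5", bitmap, device, cache)   # not in session: passes through
print(cache)   # {4: 'old4'}
print(device)  # {4: 'new4', 5: 'new5', 7: 'old7'}
```

Note that the point-in-time image of track 4 survives in the cache partition even though the device now holds the updated data, which is exactly the property the backup needs.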
Attention processing is used to ensure that separate cache partitions do not consume inordinate amounts of cache. When an attention is surfaced to the host, the operating system notifies a DFDSS task which then empties the separate cache partition.
In figure 4, random application updates of the data copied by the t0 copy process occur at "A". The original images of these tracks are copied into the separate cache partition. DFDSS reads unchanged tracks from the DASD device at "B". Any tracks changed after the t0 process started are not returned to DFDSS from the device. When tracks are moved at "C" into the separate cache partition as a result of updates, a threshold attention interrupt is surfaced to the host. These interrupts are serviced by the operating system. The operating system issues the appropriate command to the SCU to obtain the reason for the interrupt. If the interrupt is for a specific t0 process, that indication is passed on to DFDSS.
Once DFDSS receives the interrupt, it begins emptying the tracks that have accumulated in the separate cache partition. Any tracks read that are not yet ready to be placed onto the output media are considered "out of sequence" and are stored temporarily in a host-memory sidefile.
As a last measure, data read directly from the DASD device and data stored in the host sidefile are ultimately merged onto the output media in the proper sequence at step "D".
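The merge at step "D" can be pictured as follows. This is a hedged Python sketch in which `merge_to_output` and the dict inputs are invented names, and a single `host_sidefile` dict stands in for the original images accumulated via the sidefile path.

```python
def merge_to_output(track_order, device_reads, host_sidefile):
    """Sketch of step "D" (illustrative): for each track in backup copy
    order, prefer the original image captured in the sidefile if the track
    was updated after t0, else take the unchanged track read from DASD."""
    output = []
    for track in track_order:
        if track in host_sidefile:
            output.append((track, host_sidefile[track]))  # original image
        else:
            output.append((track, device_reads[track]))   # unchanged track
    return output


device_reads = {1: "t1", 2: "t2", 3: "t3"}
sidefile = {2: "t2-original"}        # track 2 was updated after t0 started
print(merge_to_output([1, 2, 3], device_reads, sidefile))
# [(1, 't1'), (2, 't2-original'), (3, 't3')]
```

The output media thus receives every track exactly as it stood at time zero, regardless of which source supplied it.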
Illustrative Example
Referring again to figures 4 and 5, assume that a process invoking the t0 process desires to backup copy the datasets stored on 100 predetermined DASD tracks. If none of those tracks were changed during the copy process, DFDSS could simply read tracks 1-100 and place them on the output media. In order to permit concurrent updating of the external store while backup copying, however, it must be assumed that data stored on one or more of the predetermined tracks has a reasonable expectation of being altered.
Assume further that the process has already begun and that DFDSS has already copied tracks 1-20; it has yet to copy tracks 21-100. If an application or process tries to change track 7, that change is allowed to complete "as usual" since track 7 has already been copied. If, however, an attempt is made to change track 44, that change cannot complete "as usual" since track 44 has not yet been copied. It is necessary to ensure that track 44 is preserved in its original state for the copy. So, prior to updating the uncopied track, a temporary copy of track 44 is retained in a sidefile before the change is allowed to complete. This temporary copy of track 44 is located in a separate cache partition for subsequent retrieval by DFDSS. DFDSS retrieves this track and, at the proper time, places track 44 on the output media. The backup process causes DFDSS to obtain data stored on predetermined tracks from two sources:
(1) Tracks read directly from DASD. These are tracks that have not been changed (by an application) after the t0 copy process began.
(2) Tracks read from the cache partition. These are the original images of tracks that have been changed after the t0 process began.
Since one objective is to minimize the impact on normal cache operations, tracks become available to be read by DFDSS as soon as they are staged into the separate cache partition.
Detailed Logic Flow of Backup Processing
Referring now to figures 5 and 6, there are shown several flow diagrams. Figure 5 covers initialization and SCU backup processing, while figure 6 depicts CPU OS processing of sidefiles (asynchronous processing) and CPU OS management of copy session data transfers (synchronous processing) from the SCU to the output medium. These presentations are supplemented in this section by more detailed flow of control listings for purposes of completeness. These listings are a many-to-one mapping to the flow diagrams depicted in figures 5 and 6.
Initialization Flow Listing
The initialization process starts with the CPU operating system (OS) receiving a request to backup or copy some amount of data. This request is processed according to the following logic:
1. BUILD LIST OF DATA SETS TO BE BACKED UP.
2. SORT LIST OF DATA SETS BY THE DASD VOLUMES THAT THEY RESIDE ON.
3. FIND OUT WHICH VOLUMES BELONG TO WHICH SCUs.
4. NOTIFY EACH SCU IN THE SESSION AND ESTABLISH A SESSION ID UNIQUE ACROSS ALL SCUs.
5. FOR EACH VOLUME ON EACH SCU, NOTIFY WHICH TRACKS ARE PART OF THE T0 COPY SESSION.
A. THE SCU THEN BUILDS A BIT MAP FOR EACH VOLUME IN THE SESSION.
B. IN THE BIT MAP, A "0" INDICATES THAT THE TRACK IS NOT PART OF THE T0 COPY SESSION. A "1" INDICATES THAT THE CORRESPONDING TRACK IS PART OF THE T0 COPY SESSION.
6. CPU OS RETURNS AN INDICATION TO THE INVOKING PROCESS THAT THE "LOGICAL COMPLETE" POINT HAS BEEN REACHED AND THAT THE APPLICATION IS FREE TO USE THE DATA AGAIN.
SCU Flow Listing
This phase includes two processes being performed simultaneously, one by the SCU and one by the CPU Operating System.

SCU

1. FOR EVERY UPDATE THAT OCCURS, A CHECK IS MADE TO SEE IF THAT UPDATE IS FOR A VOLUME THAT CURRENTLY HAS A T0 COPY SESSION.
2. IF THE ANSWER TO #1 IS NO, THE UPDATE COMPLETES NORMALLY.
3. IF THE ANSWER TO #1 IS YES, A CHECK IS MADE AGAINST THE CORRESPONDING BITMAP TO SEE IF THE UPDATE IS TO A TRACK THAT IS PART OF THE T0 COPY SESSION.
4. IF THE ANSWER TO #3 IS NO, THE UPDATE COMPLETES NORMALLY.
5. IF THE ANSWER TO #3 IS YES, THE FOLLOWING STEPS TAKE PLACE:
A. THE UPDATE IS TEMPORARILY HELD
B. THE TRACK THAT IS ABOUT TO BE UPDATED IS COPIED INTO A SIDEFILE AREA IN THE SCU CACHE.
C. THE UPDATE IS ALLOWED TO COMPLETE
D. THE BITMAP ENTRY FOR THAT TRACK IS TURNED OFF, INDICATING THAT THE TRACK IS NO LONGER PART OF THE T0 COPY SESSION. FUTURE UPDATES, THEREFORE, ARE NOT IMPACTED.
E. CHECK IF THE NUMBER OF TRACKS CURRENTLY CONTAINED IN THE SIDEFILE EXCEEDS A PREDEFINED THRESHOLD.
(1) IF IT DOES NOT EXCEED THE THRESHOLD, CONTINUE.
(2) IF IT DOES EXCEED THE THRESHOLD, SURFACE AN ATTENTION TO THE CPU OS INDICATING THAT THE SIDEFILE MUST BE READ (EMPTIED) IMMEDIATELY.
6. ANY READS (FROM DASD) WHICH OCCUR ONLY FROM THE T0 COPY PROCESS IN THE CPU OS RESULT IN THE FOLLOWING STEPS BEING TAKEN:
A. THE DATA TRACKS REQUESTED ARE TRANSFERRED TO THE CPU OS T0 COPY PROCESS.
B. THE CORRESPONDING BIT IN THE BITMAP IS TURNED OFF, INDICATING THAT THE TRACK IS NO LONGER PART OF THE T0 COPY SESSION AS FAR AS THE SCU IS CONCERNED.
7. WHEN ALL THE BITS IN ALL THE BITMAPS IN A SCU (BELONGING TO A SINGLE SESSION) ARE TURNED OFF, THAT SESSION HAS ESSENTIALLY COMPLETED FOR THAT SCU.
CPU OS Flow Listing
The CPU OS flow consists of an asynchronous process and a synchronous process.
ASYNCHRONOUS PROCESS
1. LISTEN FOR AN ATTENTION (any "signal" sent from an SCU to the CPU OS indicative of the occurrence of a predefined event).
2. WHEN AN ATTENTION OCCURS ON AN SCU, START READING DATA FROM THE SCU SIDEFILE UNTIL THAT SIDEFILE IS EMPTY.
3. EACH TRACK READ FROM THE SIDEFILE IS AN "OUT OF SEQUENCE" TRACK AND IS STORED IN A HOST WORKFILE UNTIL IT IS READY TO BE PUT ONTO THE OUTPUT MEDIUM.
4. GOTO #1
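The threshold attention and the asynchronous drain loop above might be sketched as follows; the function names and the plain-dict sidefile and workfile are assumptions for illustration only, not the patent's interfaces.

```python
def attention_needed(sidefile, threshold):
    """SCU side (illustrative): after staging a track, surface an attention
    to the CPU OS once the sidefile depth exceeds a predefined threshold."""
    return len(sidefile) > threshold


def drain_sidefile(scu_sidefile, host_workfile):
    """CPU OS side, asynchronous process (illustrative): on attention, read
    the SCU sidefile until empty; each track read is "out of sequence" and
    is held in a host workfile until it can go onto the output medium."""
    while scu_sidefile:
        track, image = scu_sidefile.popitem()
        host_workfile[track] = image


sidefile = {44: "t44-original", 51: "t51-original"}
workfile = {}
if attention_needed(sidefile, threshold=1):   # 2 tracks > 1: attention raised
    drain_sidefile(sidefile, workfile)
print(sorted(workfile))   # [44, 51]
print(sidefile)           # {}
```

Draining the sidefile promptly is what keeps the separate cache partition from consuming an inordinate share of SCU cache.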
SYNCHRONOUS PROCESS
Recall that the t0 copy process starts reading the data tracks in a designated order.
1. THE T0 COPY PROCESS DETERMINES WHICH TRACKS ARE TO BE READ IN A SINGLE I/O REQUEST
2. THE HOST WORKFILE IS QUERIED TO SEE IF ANY OF THE TRACKS TO BE READ ARE ALREADY IN THE WORKFILE
A. IF THE ANSWER TO #2 IS NO, THE TRACK IS STILL ASSUMED TO EXIST ON THE DASD DEVICE IN AN UNCHANGED STATE
B. IF THE ANSWER TO #2 IS YES, THE READ COMMAND IS ALTERED SO AS TO AVOID READING A TRACK ALREADY READ. THAT IS, THE TRACK HAD BEEN PREVIOUSLY UPDATED AND THE ORIGINAL TRACK WAS STAGED INTO THE SIDEFILE AND SUBSEQUENTLY MOVED TO THE HOST WORKFILE.
3. THE SESSION READ IS ISSUED FOR SOME NUMBER OF TRACKS
A. IF THE SCU INDICATES A SESSION READ WAS ATTEMPTED ON A TRACK NOT CURRENTLY IN THE SESSION, THE CPU OS ASSUMES THE TRACK RESIDES IN THE SCU SIDEFILE OR THE HOST WORKFILE AND THE TRACK IS RECOVERED FROM THERE.
4. DATA OBTAINED FROM #3 IS WRITTEN ONTO THE OUTPUT MEDIUM AFTER BEING MERGED WITH ANY DATA TRACKS OBTAINED FROM STEP #2B
5. IF THERE ARE MORE TRACKS TO READ, GOTO #1
6. ELSE, WHEN ALL TRACKS HAVE BEEN READ AND WRITTEN TO THE OUTPUT MEDIUM:
A. TERMINATE THE SESSION WITH ALL PARTICIPATING SCUS
B. RETURN A "PHYSICAL COMPLETE" SIGNAL TO THE INVOKING PROCESS. THIS INDICATES THAT THE DATA TO BE BACKED UP HAS IN FACT BEEN WRITTEN TO THE OUTPUT MEDIUM
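Putting the synchronous steps together, a compact Python sketch of the read-and-merge loop could look like this; `t0_copy`, the batch size, and the dict-based device and workfile are all illustrative assumptions rather than the patent's actual interfaces.

```python
def t0_copy(track_order, device, host_workfile, batch=4):
    """Illustrative sketch of the synchronous CPU OS loop: batch the
    session reads (step 1), skip tracks already captured in the host
    workfile (step 2B), issue the session read for the rest (step 3),
    and merge both sources onto the output medium in order (step 4)."""
    output = []
    for i in range(0, len(track_order), batch):
        request = track_order[i:i + batch]
        direct = [t for t in request if t not in host_workfile]   # step 2
        staged = {t: device[t] for t in direct}                   # step 3
        for t in request:                                         # step 4
            output.append((t, host_workfile.get(t, staged.get(t))))
    return output   # all tracks written: "physically complete" (step 6)


device = {t: f"t{t}" for t in range(1, 9)}
workfile = {3: "t3-original", 6: "t6-original"}   # updated after t0 started
out = t0_copy(list(range(1, 9)), device, workfile)
print(out[2], out[5])   # (3, 't3-original') (6, 't6-original')
```

Every track appears on the output exactly once and in the designated order; tracks 3 and 6 come from the workfile while the rest are read directly from the device.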
Extensions
Although the invention has been described within the context of an IBM MVS operating system, it may likewise be practiced within any commercially available general purpose operating system such as VM, OS/2, and the like. Also, although DFDSS has been identified as an illustrative external storage resource manager, the invention is operable with any equivalent manager without undue experimentation by the ordinary skilled artisan.
These and other extensions of the invention may be made without departing from the spirit and scope thereof as recited in the appended claims.

Claims

1. A CPU implemented method for backup copying of designated datasets representing point in time data consistency on a storage subsystem attaching said CPU, said backup copying being concurrent with CPU application execution, comprising the steps of:
(a) suspending application execution and forming a dataset logical-to-physical storage subsystem address concordance, and resuming application execution thereafter;
(b) physically backing up the designated datasets on the storage subsystem on a scheduled or opportunistic basis by causing them to be copied from the storage subsystem to the CPU and by the CPU writing them to other storage subsystem locations; and
(c) processing at the storage subsystem any application initiated updates to the uncopied designated datasets by buffering said updates, writing sidefiles of the datasets or portions thereof affected by the updates, writing the updates through to the storage subsystem, and copying the sidefiles to the CPU and at the CPU writing the designated datasets to storage in backup copy order defined by the concordance in the manner of step (b).
2. A CPU implemented method for minimizing the suspension of operations against datasets located on an external store attaching a CPU while said data sets are subject to backup copying, comprising the steps of:
(a) atomically copying a predetermined dataset extent from a first to a second range of locations on said external store so as to create a backup copy on said second range; and
(b) processing any concurrent updates against any uncopied portion of the predetermined dataset extent located on the first range by
(1) writing the uncopied portion affected by the updates to side files,
(2) processing the updates over the first range of locations, and
(3) copying the sidefiles to the second range in ordinary backup copy order.
3. The method according to claim 2, wherein said method further comprises the step of (c) processing updates to the dataset extent in the first range unexceptionally if the update affects that portion of the dataset extent already copied.
4. A method for managing backup copying of datasets concurrent with application execution on a CPU communicatively coupling an external subsystem of tracked cyclic storage devices over at least one defined access path, comprising the steps of:
(a) suspending application execution at said CPU and designating at least one dataset for backup copying and requesting a copy of said designated datasets from the storage subsystem;
(b) forming dataset and device track concordances at the storage subsystem and signalling the CPU of the logical completion of backup copying;
(c) resuming application execution at said CPU responsive to said logical completion signal;
(d) copying designated datasets from the storage subsystem to the CPU on a scheduled or opportunistic basis;
(e) processing any application updates to datasets within the storage subsystem by writing them through unless they are addressed to a portion of a designated dataset uncopied to the CPU, otherwise, buffering the update, writing a sidefile of the uncopied portion of the designated dataset, then writing the update through, and copying the sidefile to the CPU; and
(f) writing accumulated designated datasets and sidefiles by the CPU to other storage subsystem locations asynchronous to steps (d) and (e) in copy order.
5. The method according to claim 4, wherein the concordances of dataset and track locations are manifest in the form of bit maps in which the uncopied designated datasets are attributed a first Boolean value while copied and undesignated datasets are attributed a second Boolean value; and wherein each time a designated dataset as represented by its device track contents is copied to the CPU, the step of changing the counterpart bitmap attribute from a first to a second Boolean value.
6. The method according to claim 4 or 5, wherein the method further comprises the step of:
(g) signalling physical completion of backup copying only when all of the designated datasets and sidefiles have been written to said other storage subsystem locations.
7. The method according to one of claims 4 to 6, wherein step (e) may be modified such that sidefiles are accumulated at the storage subsystem and copied to the CPU only upon the occurrence of a threshold number.
8. In a system formed from a CPU referencing datasets or portions thereof from locations in an external subsystem of tracked cyclic storage devices attaching said CPU, wherein the improvement comprises:
(a) means for atomically copying a predetermined dataset extent from a first to a second range of locations (backup copy) on said external storage subsystem, and
(b) means for processing any concurrent updates against any uncopied portion of the predetermined dataset extent located on the first range by
(1) writing the uncopied portion affected by the updates to side files,
(2) processing the updates over the first range of locations, and
(3) copying the sidefiles to the second range in ordinary backup copy order.
PCT/EP1992/002127 1991-10-18 1992-09-16 Method and means for time zero backup copying of data WO1993008529A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP92919444A EP0608255A1 (en) 1991-10-18 1992-09-16 Method and means for time zero backup copying of data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US78104491A 1991-10-18 1991-10-18
US781,044 1991-10-18

Publications (1)

Publication Number Publication Date
WO1993008529A1 true WO1993008529A1 (en) 1993-04-29

Family

ID=25121497

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP1992/002127 WO1993008529A1 (en) 1991-10-18 1992-09-16 Method and means for time zero backup copying of data

Country Status (6)

Country Link
EP (1) EP0608255A1 (en)
JP (1) JPH05210555A (en)
KR (1) KR950014175B1 (en)
CN (1) CN1025381C (en)
CA (1) CA2071346A1 (en)
WO (1) WO1993008529A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1293461C (en) * 1999-07-30 2007-01-03 神基科技股份有限公司 Method for suspending state of computer system
US7039657B1 (en) * 1999-11-09 2006-05-02 International Business Machines Corporation Method, system, and program for accessing data from storage systems
JP4414381B2 (en) 2005-08-03 2010-02-10 富士通株式会社 File management program, file management apparatus, and file management method
JP2008027163A (en) * 2006-07-20 2008-02-07 Fujitsu Ltd Data recorder, data recording program, and data recording method
CN108228647B (en) * 2016-12-21 2022-05-24 伊姆西Ip控股有限责任公司 Method and apparatus for data copying

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4686620A (en) * 1984-07-26 1987-08-11 American Telephone And Telegraph Company, At&T Bell Laboratories Database backup method
US4752910A (en) * 1984-10-30 1988-06-21 Prime Computer, Inc. Method and apparatus for continuous after-imaging
EP0395563A2 (en) * 1989-04-27 1990-10-31 International Business Machines Corporation Method and apparatus for providing continuous availability of applications in a computer network
EP0410630A2 (en) * 1989-07-25 1991-01-30 International Business Machines Corporation Backup and recovery apparatus for digital computer

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62203248A (en) * 1986-03-03 1987-09-07 Nec Corp Dynamic saving and restoration system for data base
JPH0290341A (en) * 1988-09-28 1990-03-29 Hitachi Ltd Saving system for data base file

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5546534A (en) * 1993-07-19 1996-08-13 Intelligence Quotient International Ltd. Method of operating a computer system
US5675725A (en) * 1993-07-19 1997-10-07 Cheyenne Advanced Technology Limited Computer backup system operable with open files
EP0666536A1 (en) * 1994-01-11 1995-08-09 Hitachi, Ltd. Dump method, controller, and information processing system
US5809542A (en) * 1994-01-11 1998-09-15 Hitachi, Ltd. Dumping method for dumping data to a dump data storage device that manages the the dumping of data updated since a previous dump request
GB2290396A (en) * 1994-07-20 1995-12-20 Intelligence Quotient Int Backing-up shared data
WO1997024667A1 (en) * 1995-12-28 1997-07-10 Ipl Systems, Inc. Dasd storage back-up using out-of-sequence writes
US6081875A (en) * 1997-05-19 2000-06-27 Emc Corporation Apparatus and method for backup of a disk storage system
EP0881570A1 (en) * 1997-05-30 1998-12-02 Atsuro Ogawa Database recovery system
US6785791B2 (en) 2001-03-29 2004-08-31 Fujitsu Limited Method for accessing data being copied between data regions before the copying is completed
US7822922B2 (en) 2004-04-22 2010-10-26 Apple Inc. Accessing data storage systems without waiting for read errors
US8321374B2 (en) 2005-06-21 2012-11-27 Apple Inc. Peer-to-peer N-way syncing in decentralized environment
US8495015B2 (en) 2005-06-21 2013-07-23 Apple Inc. Peer-to-peer syncing in a decentralized environment
US8635209B2 (en) 2005-06-21 2014-01-21 Apple Inc. Peer-to-peer syncing in a decentralized environment
US7797670B2 (en) 2006-04-14 2010-09-14 Apple Inc. Mirrored file system
US8868491B2 (en) 2006-08-04 2014-10-21 Apple Inc. Method and system for using global equivalency sets to identify data during peer-to-peer synchronization
US8250397B2 (en) 2007-01-08 2012-08-21 Apple Inc. N-way synchronization of data
US10176048B2 (en) 2014-02-07 2019-01-08 International Business Machines Corporation Creating a restore copy from a copy of source data in a repository having source data at different point-in-times and reading data from the repository for the restore copy
US10372546B2 (en) 2014-02-07 2019-08-06 International Business Machines Corporation Creating a restore copy from a copy of source data in a repository having source data at different point-in-times
US11150994B2 (en) 2014-02-07 2021-10-19 International Business Machines Corporation Creating a restore copy from a copy of source data in a repository having source data at different point-in-times
US11169958B2 (en) 2014-02-07 2021-11-09 International Business Machines Corporation Using a repository having a full copy of source data and point-in-time information from point-in-time copies of the source data to restore the source data at different points-in-time
US11194667B2 (en) 2014-02-07 2021-12-07 International Business Machines Corporation Creating a restore copy from a copy of a full copy of source data in a repository that is at a different point-in-time than a restore point-in-time of a restore request
US10387446B2 (en) 2014-04-28 2019-08-20 International Business Machines Corporation Merging multiple point-in-time copies into a merged point-in-time copy
US11630839B2 (en) 2014-04-28 2023-04-18 International Business Machines Corporation Merging multiple point-in-time copies into a merged point-in-time copy

Also Published As

Publication number Publication date
CN1071770A (en) 1993-05-05
CA2071346A1 (en) 1993-04-19
KR950014175B1 (en) 1995-11-22
CN1025381C (en) 1994-07-06
JPH05210555A (en) 1993-08-20
KR930008636A (en) 1993-05-21
EP0608255A1 (en) 1994-08-03

Similar Documents

Publication Publication Date Title
USRE37601E1 (en) Method and system for incremental time zero backup copying of data
US5448718A (en) Method and system for time zero backup session security
US5379412A (en) Method and system for dynamic allocation of buffer storage space during backup copying
USRE37364E1 (en) Method and system for sidefile status polling in a time zero backup copy process
US5379398A (en) Method and system for concurrent access during backup copying of data
US5241670A (en) Method and system for automated backup copy ordering in a time zero backup copy session
US5241668A (en) Method and system for automated termination and resumption in a time zero backup copy process
US5497483A (en) Method and system for track transfer control during concurrent copy operations in a data processing storage subsystem
US5375232A (en) Method and system for asynchronous pre-staging of backup copies in a data processing storage subsystem
WO1993008529A1 (en) Method and means for time zero backup copying of data
US5875479A (en) Method and means for making a dual volume level copy in a DASD storage subsystem subject to updating during the copy interval
US8074035B1 (en) System and method for using multivolume snapshots for online data backup
US7318135B1 (en) System and method for using file system snapshots for online data backup
JP3792258B2 (en) Disk storage system backup apparatus and method
US7246211B1 (en) System and method for using file system snapshots for online data backup
JPH0715664B2 (en) How to recover data set
US20060053260A1 (en) Computing system with memory mirroring and snapshot reliability
JPH0736761A (en) On-line copying processing method with high reliability for external memory device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CS DE HU PL RU UA

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
LE32 Later election for international application filed prior to expiration of 19th month from priority date or according to rule 32.2 (b)

Ref country code: UA

WWE Wipo information: entry into national phase

Ref document number: 1992919444

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1992919444

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWR Wipo information: refused in national office

Ref document number: 1992919444

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1992919444

Country of ref document: EP