US5781912A - Recoverable data replication between source site and destination site without distributed transactions - Google Patents


Info

Publication number
US5781912A
Authority
US
United States
Prior art keywords
site
destination site
transaction
destination
changes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/772,003
Inventor
Alan Demers
Sandeep Jain
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle International Corp
Original Assignee
Oracle Corp
Application filed by Oracle Corp
Priority to US08/772,003
Assigned to ORACLE CORPORATION (assignment of assignors interest). Assignors: DEMERS, ALAN; JAIN, SANDEEP
Application granted
Publication of US5781912A
Assigned to ORACLE INTERNATIONAL CORPORATION (assignment of assignors interest). Assignors: ORACLE CORPORATION
Anticipated expiration
Status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1474 Saving, restoring, recovering or retrying in transactions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F11/1658 Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
    • G06F11/1662 Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit the resynchronized component or unit being a persistent storage device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2094 Redundant storage or storage space
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 Data processing: database and file management or data structures
    • Y10S707/99951 File or database maintenance
    • Y10S707/99952 Coherency, e.g. same view to multiple users
    • Y10S707/99953 Recoverability

Definitions

  • The present invention relates to database systems, and more particularly to techniques for propagating changes from one site to another.
  • In many database systems it is desirable to store copies of a particular set of data, such as a relational table, at multiple sites. If users are allowed to update the set of data at one site, the updates must be propagated to the copies at the other sites in order for the copies to remain consistent. The process of propagating the changes is generally referred to as replication.
  • The site at which a change is initially made to a set of replicated data is referred to herein as the source site.
  • The sites to which the change must be propagated are referred to herein as destination sites. If a user is allowed to make changes to copies of a particular table that are at different sites, those sites are source sites with respect to the changes initially made to their copy of the table, and destination sites with respect to the changes initially made to copies of the table at other sites.
  • Replication does not require an entire transaction that is executed at a source site to be re-executed at each of the destination sites. Only the changes made by the transaction to replicated data need to be propagated. Thus, other types of operations, such as read and sort operations, that may have been executed in the original transaction do not have to be re-executed at the destination sites.
  • Row-level replication and column-level replication constitute two distinct styles of replication.
  • The updates performed by an executing transaction are recorded in a deferred transaction queue.
  • The information recorded in the deferred transaction queue includes both the old and the new values for each data item that was updated.
  • Row-level and column-level replication differ with respect to whether old and new values are transmitted for an entire relational row (row-level) or for only a subset of its columns (column-level).
  • The changes recorded in the deferred transaction queue are later propagated to the destination site.
  • The destination site first checks that its current data values agree with the transmitted "old" values. The check may fail, for example, if concurrent changes have been made to the same replicated data at different sites. If the check fails, a conflict is said to have been detected. Various techniques may be used to resolve such conflicts. If no conflict is detected, the current data values at the destination site are replaced with the transmitted "new" values.
  • FIG. 1 illustrates a system in which copies of a table 118 are stored at multiple sites.
  • The system includes three sites 100, 102 and 104.
  • Sites 100, 102 and 104 include disks 106, 108 and 110 that store copies 120, 122 and 124 of table 118, respectively.
  • Database servers 130, 132 and 134 are executing at sites 100, 102 and 104, respectively.
  • Records of the changes made by transactions executed at site 100 are stored in a deferred transaction queue 160 of a replication mechanism 140.
  • Such records are referred to herein as deferred transaction records.
  • The deferred transaction queue 160 will be stored on a non-volatile storage device so that the information contained therein can be recovered after a failure.
  • Replication mechanism 140 includes a dequeue process for each of sites 102 and 104.
  • Dequeue process 150 periodically dequeues all deferred transaction records that (1) involve changes that must be propagated to site 102, and (2) have not previously been dequeued by dequeue process 150.
  • The records dequeued by dequeue process 150 are transmitted in a stream to site 102.
  • The database server 132 at site 102 makes the changes to copy 122 of table 118 after checking to verify that the current values in copy 122 match the "old values" contained in the deferred transaction records.
  • Similarly, dequeue process 152 periodically dequeues all deferred transaction records that (1) involve changes that must be propagated to site 104, and (2) have not previously been dequeued by dequeue process 152.
  • The records dequeued by dequeue process 152 are transmitted in a stream to site 104.
  • The database server 134 at site 104 makes the changes to copy 124 of table 118 after checking to verify that the current values in copy 124 match the "old values" contained in the deferred transaction records.
  • A mechanism must therefore be provided which allows dequeue processes 150 and 152 to distinguish between the deferred transaction records within deferred transaction queue 160 that they have already dequeued, and the deferred transaction records that they have not yet dequeued.
  • In the illustrated system, a single stream connects dequeue processes 150 and 152 to their corresponding destination sites. Efficiency may be improved by establishing multiple streams between the source site and each of the destination sites.
  • However, the replication mechanism has no control over the order in which commands that are sent over one stream are applied at a destination site relative to commands that are sent over a different stream. Therefore, a transmission scheduling mechanism must be provided if commands are to be sent to a destination site over more than one stream.
  • Conventionally, database systems implement replication by executing deferred transactions using two-phase commit techniques.
  • During two-phase commit operations, numerous messages are sent between the source site and each of the destination sites for each transaction to ensure that changes at all sites are made permanent as an atomic event.
  • While the use of two-phase commit techniques ensures that the various databases may be accurately recovered after a failure, the overhead involved in the numerous inter-site messages is significant. Therefore, it is desirable to provide a mechanism that involves less messaging overhead than two-phase commit techniques but which still allows accurate recovery after a failure.
  • A method and system are provided for recovering after a failure in a replication environment.
  • A transaction is executed at a source site that makes changes that must be replicated at a destination site.
  • The changes are made permanent at the source site without the source site being informed as to whether the changes were successfully applied at the destination site.
  • The changes are sent to the destination site.
  • The changes are applied at the destination site. If the changes are successfully applied before the failure, then the changes are made permanent at the destination site and a record is added to a set of records at the destination site. The record indicates that the changes were made permanent at the destination site.
  • The set of records at the destination site is used to determine which changes must be sent from the source site to the destination site after the failure.
  • FIG. 1 is a block diagram of a computer system that includes a replication mechanism.
  • FIG. 2 is a block diagram of a computer system that may be used to implement the present invention.
  • FIG. 3A is a block diagram of a portion of a replication system in which queue batch numbers are used to coordinate dequeuing operations according to an embodiment of the invention.
  • FIG. 3B illustrates the system of FIG. 3A after a stamping operation is performed.
  • FIG. 3C illustrates the system of FIG. 3B after a dequeuing operation is performed.
  • FIG. 3D illustrates the system of FIG. 3C after another stamping operation is performed.
  • FIG. 4 is a block diagram that illustrates propagation mechanisms that propagate transactions using multiple streams per destination site according to an embodiment of the invention.
  • FIG. 5 is a flow chart illustrating the steps used to schedule the transmission of transactions according to an embodiment of the invention.
  • FIG. 6 is a block diagram of a replication system in which the destination site maintains an applied transaction table that may be used in recovery after a failure, according to an embodiment of the invention.
  • Computer system 200 includes a bus 201 or other communication mechanism for communicating information, and a processor 202 coupled with bus 201 for processing information.
  • Computer system 200 further comprises a random access memory (RAM) or other dynamic storage device 204 (referred to as main memory), coupled to bus 201 for storing information and instructions to be executed by processor 202.
  • Main memory 204 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 202.
  • Computer system 200 also comprises a read only memory (ROM) and/or other static storage device 206 coupled to bus 201 for storing static information and instructions for processor 202.
  • Data storage device 207 is coupled to bus 201 for storing information and instructions.
  • A data storage device 207, such as a magnetic disk or optical disk and its corresponding disk drive, can be coupled to computer system 200.
  • Computer system 200 can also be coupled via bus 201 to a display device 221, such as a cathode ray tube (CRT), for displaying information to a computer user.
  • Computer system 200 further includes a keyboard 222 and a cursor control device 223, such as a mouse.
  • The present invention is related to the use of computer system 200 to propagate to other sites changes made to data on disk 207.
  • In one embodiment, replication is performed by computer system 200 in response to processor 202 executing sequences of instructions contained in memory 204.
  • Such instructions may be read into memory 204 from another computer-readable medium, such as data storage device 207.
  • Execution of the sequences of instructions contained in memory 204 causes processor 202 to perform the process steps that will be described hereafter.
  • Hard-wired circuitry may be used in place of or in combination with software instructions to implement the present invention.
  • Thus, the present invention is not limited to any specific combination of hardware circuitry and software.
  • As mentioned above, one phase of the replication process involves placing deferred transaction records into a deferred transaction queue.
  • In one embodiment, the deferred transaction queue is implemented as a relational table, where each deferred transaction record is stored as one or more rows within the table.
  • For example, a transaction record for a given transaction may consist of ten rows within the deferred transaction queue, where each of the ten rows corresponds to an update performed by the transaction and contains an old and new value for the update and an update sequence number that identifies the order in which the update was performed relative to the other updates performed by the transaction.
  • The transaction record also contains a transaction identifier that identifies the transaction and a "prepared time" value that indicates when the transaction finished execution (was "prepared") relative to other transactions.
  • The transaction identifier and the prepared time value of a transaction may be stored, for example, in one of the rows that constitute the transaction record for the transaction. An illustrative sketch of such a table appears below.
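  • A minimal illustrative sketch of such a table is shown below, here created with Python's sqlite3 module; the table and column names are assumptions introduced for illustration and are not taken from the patent.

        import sqlite3

        # Hypothetical layout of a deferred transaction queue implemented as a
        # relational table: one row per update, several rows per transaction.
        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE deferred_txn_queue (
                txn_id        TEXT    NOT NULL,  -- transaction identifier
                prepared_time INTEGER NOT NULL,  -- when the transaction finished execution
                update_seq    INTEGER NOT NULL,  -- order of the update within the transaction
                old_value     TEXT,              -- value before the update
                new_value     TEXT,              -- value after the update
                PRIMARY KEY (txn_id, update_seq)
            )
        """)
        conn.commit()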
  • The process of dequeuing a deferred transaction record for one destination site does not automatically remove the deferred transaction record from the deferred transaction queue, because the deferred transaction record may have to be dequeued for other destination sites.
  • Rather, the deferred transaction record may be removed from the deferred transaction queue by a process that may be entirely independent of the dequeuing processes.
  • In a system with N sites, each deferred transaction record must be dequeued N-1 times (once for each destination site) before it can be deleted from the deferred transaction queue. Because the act of dequeuing a deferred transaction record does not remove the deferred transaction record from the deferred transaction queue, the presence of a deferred transaction record within the deferred transaction queue does not indicate whether the deferred transaction record has been dequeued for any given destination site.
  • For each destination site, a dequeuing process repeatedly performs a dequeuing operation on the deferred transaction queue. During every dequeuing operation, the dequeuing process must only dequeue the deferred transaction records for its destination site that it has not already dequeued. Therefore, a mechanism must be provided for determining which deferred transaction records within the deferred transaction queue have already been dequeued for each of the destination sites.
  • One way to keep track of which deferred transaction records have been dequeued for each destination site involves storing within each deferred transaction record a sequence number that indicates the sequence in which the transaction associated with the deferred transaction record was made permanent ("committed") at the source site. Each dequeuing process then keeps track of the highest sequence number of the records that it has dequeued. At each subsequent pass, the dequeuing process only reads those records with higher sequence numbers than the highest sequence number encountered on the previous pass.
  • When the deferred transaction queue is implemented as a table, the process of dequeuing records from the deferred transaction queue may be implemented by executing a query on the table.
  • For example, a dequeuing process would repeatedly execute the equivalent of an SQL query that selects all deferred transaction records whose sequence numbers are greater than the highest sequence number it encountered on its previous pass; a hedged sketch of such a pass appears below.
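  • The following is a minimal sketch of such a dequeue pass, assuming a DB-API cursor and a seq_num column on the queue table; the query text and column names are illustrative assumptions, not the patent's own code.

        def dequeue_pass(cursor, last_seq_seen):
            """Return records not seen on the previous pass, plus the new
            high-water sequence number (illustrative sketch only)."""
            cursor.execute(
                "SELECT seq_num, txn_id, update_seq, old_value, new_value "
                "FROM deferred_txn_queue WHERE seq_num > ? ORDER BY seq_num",
                (last_seq_seen,),
            )
            rows = cursor.fetchall()
            # Remember the highest sequence number dequeued for the next pass.
            new_last = max((row[0] for row in rows), default=last_seq_seen)
            return rows, new_last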
  • However, a transaction is not considered committed until a deferred transaction record for the transaction is written into the deferred transaction queue. Therefore, a commit time cannot be assigned to a transaction until the deferred transaction record is written into the deferred transaction queue. Consequently, the deferred transaction record that is written into the deferred transaction queue does not contain the true commit time of the corresponding transaction.
  • Instead, a "prepared time" value is stored in the transaction record. Prepared time values indicate the time at which transactions completed execution, not the actual time the transactions committed.
  • In the prepare sequence approach, the prepared time values are used as sequence numbers for the dequeuing technique described above.
  • However, the database system is not able to guarantee that the deferred transaction records of isolated transactions will commit in the order in which the transactions acquire prepare times. Without such a guarantee, deferred transaction records may be written into the deferred transaction queue out of prepare sequence.
  • The fact that deferred transaction records may be written into the deferred transaction queue out of prepare sequence renders the prepare sequence approach unusable. For example, assume that two transactions with sequence numbers S1 < S2 are inserted into the deferred transaction queue out of order. If a dequeue process performs a dequeue operation after the S2 deferred transaction record is inserted and before the S1 deferred transaction record is inserted, then the highest sequence number seen by the dequeue process will be at least S2. When the dequeue process performs a subsequent dequeue operation, the dequeue process will only dequeue deferred transaction records that have sequence numbers greater than S2. The S1 deferred transaction record will be skipped and may never be dequeued by that dequeue process.
  • One approach to avoid the out-of-sequence problem associated with the prepare sequence approach is to prevent transactions from acquiring prepared time values until the deferred transaction records for all transactions that have previously acquired prepared time values are stored in the deferred transaction queue. If transactions cannot acquire prepared time values until the deferred transaction records for all transactions that have previously acquired prepared time values are stored in the deferred transaction queue, then the commit time order will always reflect the prepared time order. Thus, the prepared time may be treated as the commit time.
  • an "enqueue lock” may be used to restrict access to the sequence assignment mechanism. Before a transaction can be assigned a sequence number, the transaction must acquire the enqueue lock. The transaction must then hold the enqueue lock until the deferred transaction record for the transaction is actually written to the deferred transaction queue. This technique effectively makes the sequence number assignment and the insertion of the deferred transaction record an atomic operation. The following steps could be used to implement this technique:
  • According to another approach, a record can be maintained to indicate which deferred transaction records have been dequeued for which sites. For example, a plurality of flags may be stored in each deferred transaction record, where each flag corresponds to a destination site. Initially, all of the flags indicate that the deferred transaction record has not been dequeued.
  • During a dequeue operation, the dequeue process inspects each deferred transaction record to determine whether the flag corresponding to the destination site associated with the dequeue process has been set. If the flag has been set, the deferred transaction record is skipped. If the flag has not been set, the dequeue process dequeues the deferred transaction record.
  • The dequeue process then sets the flag within the deferred transaction record that corresponds to the destination site associated with the dequeue process to indicate that the deferred transaction record has been dequeued for that destination site.
  • Alternatively, a record that indicates which deferred transaction records have been dequeued for each destination site may be maintained external to the deferred transaction queue.
  • For example, each dequeue process may maintain a dequeued transactions table into which the dequeue process inserts a row for each deferred transaction record that it dequeues, where the row identifies the transaction associated with the dequeued deferred transaction record.
  • However, the dequeued transaction table approach also involves a significant amount of overhead. Specifically, a row must be generated and inserted for each destination site for every deferred transaction record. In addition, the dequeue query is expensive in that the entire deferred transaction queue may have to be scanned looking for deferred transaction records that are not recorded in a particular dequeued transaction table.
  • According to one embodiment of the invention, a "queue batch number" column is added to each deferred transaction record in the deferred transaction queue.
  • When a deferred transaction record is initially inserted into the deferred transaction queue, its queue batch value is set to some default value.
  • Before dequeuing, each dequeue process "stamps" the deferred transaction queue by setting the queue batch values in all of the deferred transaction records that have the default queue batch value to a queue batch number that is greater than any queue batch number that has previously been assigned to any deferred transaction record. The dequeue process then dequeues all of the records that have queue batch numbers greater than the queue batch number used by that dequeue process in its previous batch stamping operation. A hedged sketch of such a stamp-and-dequeue pass appears below.
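  • The sketch below illustrates one way such a pass could be written, assuming the table sketched earlier plus a queue_batch column and a single-row batch_counter table (all illustrative assumptions); it is not the patent's own code.

        DEFAULT_BATCH = -5000   # default value assigned to newly inserted records

        def stamp_and_dequeue(cursor, last_batch):
            """Stamp all default-valued records with a fresh queue batch number,
            then dequeue every record whose batch number exceeds this process's
            last batch number (illustrative sketch only)."""
            # Batch stamping: allocate the next batch number and apply it to
            # every record that still holds the default value.
            cursor.execute("UPDATE batch_counter SET value = value + 1")
            new_batch = cursor.execute("SELECT value FROM batch_counter").fetchone()[0]
            cursor.execute(
                "UPDATE deferred_txn_queue SET queue_batch = ? WHERE queue_batch = ?",
                (new_batch, DEFAULT_BATCH),
            )
            # Dequeue: read everything stamped after this process's last pass.
            rows = cursor.execute(
                "SELECT * FROM deferred_txn_queue WHERE queue_batch > ?",
                (last_batch,),
            ).fetchall()
            return rows, new_batch   # the caller records new_batch as its last batch number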
  • The queue batch number stamping technique is illustrated in FIGS. 3A-3D.
  • FIG. 3A illustrates an embodiment of the invention in which a deferred transaction queue 300 is implemented using a table.
  • Deferred transaction records 308 are inserted into deferred transaction queue 300 by a database server after the transactions are prepared at the local (source) site. Prior to insertion into deferred transaction queue 300, these deferred transaction records are assigned the default queue batch value. In the illustrated embodiment, the default queue batch value is -5000.
  • Deferred transaction records for five transactions have been inserted into the deferred transaction queue 300. None of the transactions have yet been dequeued by any dequeue process, and therefore all still contain the default queue batch value.
  • Dequeue process 302 has previously dequeued deferred transaction records with queue batch numbers up to 60, and therefore stores the value "60" as its LAST_BATCH number.
  • Dequeue process 304 has previously dequeued deferred transaction records with queue batch numbers up to 59, and therefore stores the value "59" as its LAST_BATCH number.
  • Prior to performing a dequeue operation, dequeue process 304 performs a batch stamping operation on deferred transaction queue 300.
  • During the batch stamping operation, all deferred transaction records within deferred transaction queue 300 that currently hold the default queue batch number (-5000) are stamped with a higher queue batch number than has previously been assigned to any deferred transaction records.
  • A queue batch counter 306 is used to track the highest previously assigned batch number. Initially, the queue batch counter is set to a value that is greater than the default queue batch number. At the time illustrated in FIG. 3A, the highest previously assigned queue batch value is 60.
  • FIG. 3B illustrates deferred transaction queue 300 after dequeue process 304 has performed a batch stamping operation.
  • During the batch stamping operation, the queue batch counter 306 is incremented, increasing the value of the counter to 61.
  • The deferred transaction records within deferred transaction queue 300 that previously stored the default queue batch value of -5000 now store the new queue batch value of 61.
  • Dequeue process 304 then dequeues all of the deferred transaction records that have queue batch values higher than the highest queue batch value previously used by dequeue process 304.
  • In this example, the LAST_BATCH value of dequeue process 304 is 59, and the five deferred transaction records in deferred transaction queue 300 have queue batch values of 61. Therefore, dequeue process 304 will dequeue all five of the deferred transaction records.
  • FIG. 3C illustrates deferred transaction queue 300 after dequeue process 304 has performed a dequeue operation.
  • The LAST_BATCH value of dequeue process 304 has been updated to reflect that dequeue process 304 has dequeued all deferred transaction records with queue batch values up to 61.
  • New deferred transaction records have been inserted into deferred transaction queue 300 since the batch stamping operation performed by dequeue process 304. These new deferred transaction records have been assigned the default queue batch value. As long as the new deferred transaction records were added after the batch stamping operation, they will not have been dequeued by dequeue process 304, regardless of whether they were inserted before or after the dequeue operation, because dequeue process 304 only dequeued those deferred transaction records with queue batch values greater than 59.
  • Assume that dequeue process 302 now performs a batch stamping operation.
  • Dequeue process 302 increments the queue batch counter to 62, and stamps all of the deferred transaction records that have the default queue batch value with the new queue batch value of 62.
  • FIG. 3D illustrates the state of deferred transaction queue 300 after dequeue process 302 has performed such a batch stamping operation.
  • Dequeue process 302 may then perform a dequeue operation in which dequeue process 302 dequeues all deferred transaction records with queue batch values greater than 60.
  • Thus, dequeue process 302 would dequeue all of the deferred transaction records previously dequeued by dequeue process 304, as well as all of the new deferred transaction records. After the dequeue operation, dequeue process 302 would update its LAST_BATCH value to 62.
  • Dequeue process 304, in contrast, would only dequeue those deferred transaction records with queue batch values greater than the LAST_BATCH value of dequeue process 304. In the illustrated example, the LAST_BATCH value of dequeue process 304 is 61. Therefore, dequeue process 304 would only dequeue those deferred transaction records that it did not dequeue in its previous dequeue operation.
  • Using this technique, dequeue processes can quickly distinguish between deferred transaction records they have already dequeued, and deferred transaction records they have not yet dequeued.
  • In addition, many deferred transaction records can be concurrently written into the deferred transaction queue 300 out of prepared time order without adversely affecting dequeue operations. Therefore, the bottleneck associated with the sequence stamp locking technique described above is avoided.
  • Further, each deferred transaction record is only updated once, not once for every destination site. Specifically, each deferred transaction record is updated only during the first batch stamping operation performed after the record has been inserted into the deferred transaction queue 300, at which point it is stamped with a non-default queue batch number. Therefore, this technique avoids the significant overhead associated with the record flagging techniques described above.
  • According to one embodiment, dequeued transactions are processed sequentially, not as atomic "batches" of transactions.
  • The order in which a transaction is processed is based on both the batch number of the transaction and the prepared time of the transaction. Specifically, transactions are dequeued in <batch number, prepared time> order. Thus, for each dequeue process, transactions with older batch numbers are processed before transactions with newer batch numbers. Within a batch, transactions with older prepared times are processed before transactions with newer prepared times.
  • Consequently, the LAST_BATCH value alone is not enough to indicate which transactions have and have not been processed by a particular dequeuing process.
  • Therefore, a <LAST_BATCH, transaction identifier> value pair is maintained by each dequeue process to indicate the last transaction to be processed by the dequeuing process.
  • The <LAST_BATCH, transaction identifier> value pair for a dequeue process may be used to determine which transactions must still be processed by the dequeue process.
  • In the embodiments described above, the dequeue processes perform batch stamping operations before every dequeue query they perform.
  • However, a batch stamping operation does not need to be performed before a dequeuing query for a given site as long as a batch stamping operation has been performed subsequent to the last dequeuing query for the given site.
  • The actual number of batch stamping operations performed between consecutive dequeue operations for a site will not affect the dequeue query.
  • For example, dequeue process 302 can perform a dequeue query without first performing a batch stamping operation. This is possible because dequeue process 304 performed a batch stamping operation after the last dequeue query performed by dequeue process 302. Under these circumstances, the newly arrived deferred transaction records would not be dequeued by dequeue process 302 until a subsequent dequeue query is performed by dequeue process 302.
  • The present invention is not limited to any particular mechanism for scheduling batch stamping operations relative to dequeue operations.
  • In the embodiments described above, each destination site has a dequeue process and the dequeue processes perform the batch stamping operations.
  • However, each destination site may have more than one dequeue process, and each dequeue process may service more than one destination site.
  • Further, batch stamping operations may be performed by one or more processes executing independently of the dequeue processes, or by recursive transactions initiated by the dequeue processes.
  • Periodically, a process responsible for purging the deferred transaction queue reads the <LAST_BATCH, transaction-id> value pair for each of the destination sites.
  • As mentioned above, the <LAST_BATCH, transaction-id> value pair maintained by each dequeue process indicates the last transaction encountered by that dequeue process.
  • Each dequeue process will maintain its own <LAST_BATCH, transaction-id> value.
  • Of the transactions identified by these value pairs, the transaction with the lowest <batch number, prepared time> value represents the most recent transaction that has been encountered by the dequeue processes for all sites (the "global bookmark").
  • The purging process deletes all deferred transaction records in the deferred transaction queue for transactions that have lower <batch number, prepared time> values than the global bookmark (except for transactions currently marked with the default batch value), since these deferred transaction records have been dequeued for all destination sites for which they need to be dequeued. A hedged sketch of such a purge appears below.
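  • The following is a minimal sketch of such a purge, assuming per-site bookmarks expressed as (queue batch, prepared time) pairs and the table sketched earlier; the names and SQL text are illustrative assumptions.

        DEFAULT_BATCH = -5000

        def purge_deferred_queue(cursor, bookmarks):
            """Delete records already dequeued for every destination site.
            `bookmarks` maps each site to the (queue_batch, prepared_time) of the
            last transaction its dequeue process has encountered; the minimum of
            these values is the global bookmark (illustrative sketch only)."""
            global_batch, global_prepared = min(bookmarks.values())
            cursor.execute(
                "DELETE FROM deferred_txn_queue "
                "WHERE queue_batch <> ? "
                "  AND (queue_batch < ? OR (queue_batch = ? AND prepared_time < ?))",
                (DEFAULT_BATCH, global_batch, global_batch, global_prepared),
            )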
  • Note that a dequeue process may not dequeue some deferred transaction records it encounters because the deferred transaction records do not have to be propagated to the destination site associated with the dequeue process.
  • Nevertheless, the <LAST_BATCH, transaction-id> value for each site is updated based on all deferred transaction records encountered (but not necessarily dequeued) during the dequeue operations. Specifically, each dequeue process updates its <LAST_BATCH, transaction-id> value based on all deferred transaction records it sees during a dequeue operation, including those deferred transaction records that it does not actually dequeue.
  • For example, assume that the <LAST_BATCH, transaction-id> value for a dequeue process associated with a destination site A is <20, 5>.
  • Assume further that, during a dequeue operation, the dequeue process encounters two deferred transaction records with batch numbers higher than 20.
  • The first deferred transaction record is for a transaction TXA, has a queue batch number of 23 and must be dequeued for site A.
  • The second deferred transaction record is for a transaction TXB, has a queue batch number of 25 and does not have to be dequeued for site A.
  • In this case, the dequeue process updates its <LAST_BATCH, transaction-id> value to <25, TXB> after performing the dequeue operation.
  • Consequently, the <LAST_BATCH, transaction-id> value for each site will be updated according to the frequency (F1) at which dequeue operations are performed for that site, not the frequency (F2) at which changes are actually propagated to that site.
  • F1 may be significantly greater than F2.
  • Propagating a transaction to a destination site is performed by causing the destination site to execute operations that make, at the destination site, the changes made by the transaction at the source site.
  • The source site transmits a stream of information to the destination site to cause such operations to be performed.
  • According to one embodiment, the source site sends deferred transactions to a destination site as a sequence of remote procedure calls, essentially as described in U.S. patent application Ser. No. 08/126,586 entitled "Method and Apparatus for Data Replication", filed on Sep. 24, 1993 by Sandeep Jain and Dean Daniels.
  • Deferred transaction boundaries are marked in the stream by special "begin-unit-of-work" and "end-unit-of-work" tokens that contain transaction identifiers.
  • The destination site receives messages on the stream. When it receives the "begin-unit-of-work" token, the destination site starts a local transaction for executing the procedure calls that will follow the "begin-unit-of-work" token. Such transactions are referred to herein as replication transactions.
  • A replication transaction executes the procedure calls specified in the stream until it encounters the next "end-unit-of-work" token. When the "end-unit-of-work" token is encountered, the replication transaction is finished. The destination site continues reading and processing deferred transactions using replication transactions in this manner until the stream is exhausted. A sketch of this processing loop appears below.
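  • The sketch below illustrates how a destination site might process such a token-delimited stream in the variant, described later, in which each replication transaction is committed locally as soon as its "end-unit-of-work" token is seen; the message format and names are assumptions, and the applied-transactions bookkeeping described below is omitted.

        def apply_stream(conn, messages):
            """Apply a token-delimited stream of deferred transactions at the
            destination site.  `messages` is assumed to yield tuples of the form
            ("begin-unit-of-work", txn_id), ("call", procedure, args) or
            ("end-unit-of-work", txn_id)."""
            for message in messages:
                kind = message[0]
                if kind == "begin-unit-of-work":
                    conn.execute("BEGIN")        # start a local replication transaction
                elif kind == "call":
                    _, procedure, args = message
                    procedure(conn, *args)       # execute the propagated procedure call
                elif kind == "end-unit-of-work":
                    conn.commit()                # the replication transaction is finished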
  • When distributed transactions are used to perform replication, a replication transaction enters a "prepared" state when the "end-unit-of-work" token is encountered. The destination site informs the source site that the replication transaction is prepared and awaits a commit instruction from the source site.
  • The two-phase commit operation used by distributed transactions is described in greater detail below. Also described below is an alternative to the use of distributed transactions in which the replication transaction can be committed immediately after it is prepared, without further communication with the source site.
  • If a first transaction has written to a data item that is subsequently written to or read by a second transaction, then all changes made by the first transaction must be made at a destination site before any of the changes made by the second transaction are made at the destination site.
  • In this case, the second transaction is said to "depend on" the first transaction.
  • If the second transaction reads the data item written by the first transaction, the dependency is referred to as a write-read dependency.
  • If the second transaction writes to the data item written by the first transaction, the dependency is referred to as a write-write dependency.
  • Another type of dependency, referred to as a read-write dependency, exists if a first transaction reads a data item that is subsequently written to by a second transaction.
  • However, read-write dependencies are not relevant in the context of replication since only updates, not reads, are propagated to the destination sites.
  • One way to ensure that changes made by a transaction are always applied after the changes made by the transactions on which the transaction depends is to propagate the changes in a sequence based on the batch numbers and the prepared times of the transactions.
  • A single stream can be opened to each destination site.
  • Each process in charge of propagating changes to a destination site introduces the changes into the stream in batch order.
  • The changes within each batch are sorted in prepared time order so that deferred transaction records with earlier prepared times are introduced into the stream prior to deferred transaction records with later prepared times. Since changes are applied at the destination site in the order in which they arrive in the stream, the changes made by each transaction will be made at the destination site after the changes made by any transactions upon which the transaction depends.
  • The prepared-time ordering of the deferred transaction records may be incorporated into the dequeue process.
  • For example, the dequeue query may be written to return records in <queue batch, prepared time> order; a hedged sketch of such a query appears below.
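  • A minimal sketch of such an ordered dequeue query, reusing the illustrative names from the earlier sketches (not the patent's own code):

        def dequeue_in_dependency_order(cursor, last_batch):
            """Dequeue records in <queue batch, prepared time> order so that every
            transaction is read after the transactions on which it may depend
            (illustrative sketch only)."""
            cursor.execute(
                "SELECT * FROM deferred_txn_queue WHERE queue_batch > ? "
                "ORDER BY queue_batch, prepared_time",
                (last_batch,),
            )
            return cursor.fetchall()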
  • The deferred transaction record for a transaction is always written into the deferred transaction queue after the deferred transaction records for the transactions on which it depends. Therefore, it is guaranteed that subsequent batches will not contain transactions on which any of the transactions in the current batch depend.
  • The single-stream propagation technique described above ensures that changes will be applied at the destination sites in the correct order. However, performance is reduced by the fact that only one stream is used to propagate changes to each destination site. According to one embodiment of the invention, multiple streams are used to propagate updates to a single destination site. Because changes sent over one stream may be applied in any order relative to changes sent over another stream, a scheduling mechanism is provided to ensure that changes made by a given transaction will never be applied prior to the changes made by transactions on which the given transaction depends.
  • FIG. 4 illustrates propagation mechanisms 400 and 402 that use multiple streams to propagate transactions to destination sites according to an embodiment of the invention.
  • Propagation mechanisms 400 and 402 propagate transactions to destination sites 404 and 434, respectively.
  • Propagation mechanism 400 includes a scheduler process 412, a scheduler heap 410 and three stream control processes 414, 416 and 418. Each of stream control processes 414, 416 and 418 manages an instance of the streaming protocol used to propagate transactions to destination site 404.
  • Similarly, propagation mechanism 402 includes a scheduler process 422, a scheduler heap 420 and three stream control processes 424, 426 and 428. Each of stream control processes 424, 426 and 428 manages an instance of the streaming protocol used to propagate transactions to destination site 434.
  • As described above, dequeue processes 302 and 304 dequeue deferred transaction records from deferred transaction queue 300.
  • Dequeue processes 302 and 304 insert the dequeued deferred transaction records into scheduler heaps 410 and 420, respectively, and propagation mechanisms 400 and 402 transmit the transactions specified in the deferred transaction records over multiple streams to destination sites 404 and 434, respectively.
  • Dequeue processes 302 and 304 insert the deferred transaction records of each batch into the scheduler heap in an order based on the prepared times of the corresponding transactions, thus ensuring that the deferred transaction record for any given transaction will never be inserted into the scheduler heap before a deferred transaction record of a transaction on which the transaction depends.
  • Scheduler process 412 is responsible for passing the transactions associated with the deferred transaction records in scheduler heap 410 to stream control processes 414, 416 and 418 in a safe manner. To ensure safety, scheduler process 412 cannot pass a transaction to a stream control process if it is possible that the transaction depends on a transaction that (1) has been propagated to destination site 404 using a different stream control process, and (2) is not known to have been committed at the destination site 404.
  • Similarly, scheduler process 412 cannot pass a transaction to a stream control process if it is possible that the transaction depends on a transaction that has not yet been propagated to destination site 404.
  • Scheduler process 412 ensures the safe scheduling of transaction propagation to destination site 404 using the scheduling techniques illustrated in FIG. 5.
  • At step 500, the scheduler process 412 inspects the deferred transaction records in the scheduler heap 410 to identify an unsent deferred transaction record.
  • When an unsent deferred transaction record is identified, scheduler process 412 determines whether the transaction for that deferred transaction record could possibly depend on any transaction associated with any other deferred transaction record in the scheduler heap 410 (step 502). The determination performed by scheduler process 412 during step 502 shall be described in greater detail below.
  • If the transaction associated with the deferred transaction record could depend on any transaction associated with any other deferred transaction record in the scheduler heap 410, then the transaction associated with the deferred transaction record is not passed to any stream control process and control passes back to step 500. If the transaction associated with the deferred transaction record could not possibly depend on any transaction associated with any other deferred transaction record in the scheduler heap 410, then the transaction associated with the deferred transaction record is passed to a stream control process at step 504. The stream control process propagates the transaction to the destination site 404. At step 506, the deferred transaction record is marked as "sent". Control then passes back to step 500.
  • Periodically, the propagation mechanism 400 receives from the destination site 404 messages that indicate which transactions have been committed at the destination site. In response to such messages, the deferred transaction records associated with those transactions are removed from the scheduler heap 410. The removed deferred transaction records no longer prevent the propagation of transactions that depended on the transactions associated with the removed deferred transaction records. A sketch of this scheduling loop appears below.
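  • The sketch below illustrates one pass of this scheduling loop, using assumed data structures (a list of record dicts with 'txn', 'sent' and 'stream' fields, stream objects with a send() method, and a dependency test passed in as a function); it is a simplified illustration, not the patent's implementation.

        def schedule_pass(heap, streams, could_depend_on):
            """One pass over the scheduler heap (compare FIG. 5)."""
            for rec in heap:                                       # step 500: look for an unsent record
                if rec["sent"]:
                    continue
                others = (o for o in heap if o is not rec)
                if any(could_depend_on(rec, o) for o in others):
                    continue                                       # step 502: may depend on a heap entry
                stream = streams[hash(rec["txn"]) % len(streams)]  # stream choice policy is an assumption
                stream.send(rec["txn"])                            # step 504: propagate over that stream
                rec["sent"] = True                                 # step 506: mark the record as "sent"
                rec["stream"] = stream

        def remove_committed(heap, committed_txns):
            """Drop heap entries for transactions the destination reports as
            committed, so transactions that depended on them can be scheduled."""
            heap[:] = [r for r in heap if r["txn"] not in committed_txns]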
  • Similarly, scheduler process 422 passes transactions associated with unsent deferred transaction records in scheduler heap 420 to stream control processes 424, 426 and 428 only when the transaction could not possibly depend on any transaction associated with any other deferred transaction record in scheduler heap 420.
  • In the description above, the scheduler processes 412 and 422 and the dequeue processes 302 and 304 have been described as separate processes. However, the actual division of functionality between processes may vary from implementation to implementation. For example, a single process may be used to perform both the dequeuing and scheduling operations for a given site. Similarly, a single process may be used to perform the dequeuing and scheduling operations for all destination sites. The present invention is not limited to any particular division of functionality between processes.
  • The embodiment illustrated in FIG. 4 includes three stream control processes per destination site.
  • However, the actual number of stream control processes may vary from implementation to implementation. For example, ten streams may be maintained between each source site and each destination site. Alternatively, ten streams may be maintained between the source site and a destination site, while only two streams are maintained between the source site and a different destination site. Further, the number of streams maintained between the source site and destination sites may be dynamically adjusted based on factors such as the currently available communication bandwidth.
  • In the embodiment described above, a transaction is not propagated as long as the transaction may depend on one or more transactions that are not known to have been committed at the destination site.
  • However, under certain conditions a transaction can be safely propagated even when transactions that it may depend on are not known to have been committed at the destination site. Specifically, assume that the scheduler process determines that a transaction TXA cannot possibly depend on any propagated transactions that are not known to have committed, except for two transactions TXB and TXC. If it is known that transactions TXB and TXC were propagated in the same stream, then TXA can be safely propagated in that same stream.
  • To take advantage of this, a record is maintained to indicate which stream was used to propagate each "sent" transaction.
  • Thus, transactions may be propagated to a destination site when (1) all transactions on which they may depend are known to have committed at the destination site, or (2) all transactions on which they may depend which are not known to have committed at the destination site were propagated over the same stream. In the latter case, transactions must be propagated using the same stream as was used to propagate the transactions on which they may depend.
  • As explained above, scheduler processes 412 and 422 must determine whether the transactions associated with unsent deferred transaction records could possibly depend on any transactions associated with the other deferred transaction records stored in the scheduler heap.
  • According to one embodiment, the invention is used in a database system in which a mechanism for approximating dependencies is maintained.
  • The approximation must be "safe" with respect to the true dependency relation. That is, the approximation must always indicate that a transaction TXA depends on another transaction TXB if TXA actually depends on TXB. However, the approximation does not have to be entirely accurate with respect to two transactions where there is no actual dependency. Thus, it is acceptable for there to exist some pair of transactions TXA and TXB such that the approximation indicates that TXA depends on TXB when TXA does not actually depend on TXB.
  • Using such an approximation, the determination at step 502 may be performed by comparing the dependent time value of the transaction associated with the unsent deferred transaction record with the prepare times of the transactions associated with all other deferred transaction records in the scheduler heap. If the dependent time value is less than all prepare time values, then the transaction cannot depend on any of the other transactions associated with the deferred transaction records that are currently in the scheduler heap. Otherwise, it is possible that the transaction depends on one of the other transactions in the scheduler heap.
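  • A minimal sketch of that comparison, written so it could serve as the could_depend_on test in the scheduling sketch above (assuming the heap records also carry dependent_time and prepared_time fields; the field names are assumptions):

        def could_depend_on(unsent_rec, other_rec):
            """Conservative (safe) dependency test used at step 502: the unsent
            transaction can be ruled out as a dependent only if its dependent
            time precedes the other transaction's prepare time."""
            return unsent_rec["dependent_time"] >= other_rec["prepared_time"]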
  • At any given time, the database must show all of the changes made by a transaction, or none of the changes made by the transaction. Consequently, none of the changes made by a transaction are made permanent within a database until the transaction has been fully executed. A transaction is said to "commit" when the changes made by the transaction are made permanent to the database.
  • When replication is performed using distributed transactions, the original transaction at the source site and the propagated transactions at the destination sites are all treated as "child transactions" that form parts of a single "distributed" transaction.
  • All changes made by a distributed transaction must be made permanent at all sites if any of the changes are made permanent at any site.
  • The technique typically employed to ensure this occurs is referred to as two-phase commit.
  • During the first phase of a two-phase commit, the process that is coordinating the distributed transaction sends the child transactions to the sites to which they correspond.
  • The coordinator process will typically be a process executing at the source site.
  • The child transactions are then executed at their respective sites.
  • When a child transaction has been fully executed at a site, the child transaction is said to be "prepared".
  • When a child transaction is prepared, a message is sent from the site back to the coordinating process.
  • When all of the child transactions have been prepared, the second phase of the two-phase commit begins.
  • During the second phase, the coordinator process sends messages to all sites to instruct the sites to commit the child transactions.
  • The sites then send messages back to the coordinating process to indicate that the child transactions have been committed.
  • When all of the child transactions have been committed, the distributed transaction is considered to be committed. If any child transaction fails to be prepared or committed at any site, the coordinator process sends messages to all of the sites to cause all child transactions to be "rolled back", thus removing all changes made by all child transactions of the distributed transaction.
  • The advantage of implementing replication through the use of distributed transactions is that the distributed transactions can be successfully rolled back and reapplied as an atomic unit if a failure occurs during execution.
  • However, performing a two-phase commit imposes a significant delay between the completion of transactions and the time when the transactions are actually committed.
  • Specifically, each distributed transaction requires two round trips of messages between the coordinator and every participating site: prepare, prepared, commit, and committed.
  • The latency imposed by these round trip messages may be unacceptably high.
  • According to one embodiment of the invention, streams of deferred transactions are propagated from a source site to one or more destination sites without the overhead of distributed transactions.
  • Transactions at the source site are committed unilaterally without waiting for any confirmations from the destination sites.
  • Likewise, the destination sites execute and commit replication transactions without reporting back to the source site.
  • However, the source and destination sites store information that allows the status of the replication transactions to be determined after a failure.
  • Specifically, each destination site maintains an applied transactions table and the source site maintains a durable record of which transactions it knows to have committed at the destination site (a "low water mark").
  • When a replication transaction commits at a destination site, an entry for the replication transaction is committed to the applied transaction table.
  • After a failure, the low water mark at the source site and the information contained in the applied transactions table at the destination site may be inspected to determine the status of all transactions that have been propagated to the destination site.
  • If either the low water mark at the source site or the applied transaction table at the destination site indicates that a transaction has been committed at the destination site, then the changes made by that transaction do not have to be propagated again as part of the failure recovery process.
  • If neither the low water mark at the source site nor the applied transaction table at the destination site indicates that a transaction that must be propagated to a destination site has been committed at the destination site, then the transaction will have to be propagated again as part of failure recovery.
  • To ensure that the scheduler heap does not grow indefinitely, entries are periodically deleted from the scheduler heap in response to messages received at the source site from the destination site.
  • The messages contain "committed transactions data" that indicates that one or more transactions that were propagated from the source site have been successfully executed and committed at the destination site.
  • In response to receiving committed transactions data from a destination site, the entries in the scheduler heap for the transactions specified in the committed transactions data are deleted from the scheduler heap.
  • The committed transactions data may be, for example, the transaction sequence number of the last transaction that was committed at the destination site that arrived at the destination site on a particular stream.
  • As mentioned above, the scheduler keeps track of which transactions were sent on which streams. Since the transactions that are propagated on any given stream are processed in order at the destination site, the source site knows that all transactions on that particular stream that preceded the transaction identified in the committed transactions data have also been committed at the destination site. The entries for those transactions are deleted from the scheduler heap along with the entry for the transaction specifically identified in the committed transactions data.
  • Various events may cause a destination site to transmit messages containing committed transactions data. For example, such messages may be sent when a buffer is filled at the destination site, or in response to "flush tokens".
  • A flush token is a token sent on a stream from the source site to the destination site to flush the stream.
  • The destination site responds to the flush token by executing and committing all of the transactions that preceded the flush token on that particular stream, and by sending to the source site committed transaction information that indicates which transactions that have been propagated from the source site on that stream have been committed at the destination site. As mentioned above, this committed transaction information may simply identify the most recently committed transaction from the stream on which the flush token was sent. The source site knows that all transactions that preceded the identified transaction on the stream have also been committed at the destination site.
  • The source site periodically updates the low water mark associated with a destination site based on the committed transaction information received from the destination site.
  • Various mechanisms may be used to determine a low water mark for a destination site based on committed transaction information received from the destination site.
  • According to one embodiment, an ordered list of transactions is maintained for each stream that is being used to send deferred transactions from a source site to a destination site.
  • Each element in an ordered list represents a transaction that was propagated on the stream associated with the ordered list.
  • The order of the elements in the ordered list indicates the order in which the corresponding transactions were propagated on the stream associated with the ordered list.
  • As mentioned above, committed transaction information may identify a transaction that is known to have been committed at the destination site.
  • When such information arrives, a process at the source site removes from the ordered list of the appropriate stream the element that corresponds to the identified transaction, as well as all preceding elements.
  • The low water mark may be determined by inspecting the ordered lists for all streams to a given destination site and identifying the oldest transaction represented on the lists. All transactions older than that transaction have necessarily been committed at the destination site, so data identifying that transaction may be stored as the low water mark for that destination site. A hedged sketch of this bookkeeping appears below.
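  • The sketch below shows one way this per-stream bookkeeping could be kept, assuming each list holds (dequeue sequence number, transaction id) pairs in propagation order; the structures and names are illustrative assumptions.

        def record_commit_and_low_water_mark(stream_lists, stream_id, committed_txn):
            """Trim the per-stream ordered list when the destination reports that
            `committed_txn` (and therefore everything propagated before it on
            that stream) has committed, then return the oldest transaction still
            represented on any stream's list: every transaction older than it is
            known to have committed, so it can be stored as the low water mark."""
            lst = stream_lists[stream_id]
            idx = next(i for i, (_, txn) in enumerate(lst) if txn == committed_txn)
            del lst[: idx + 1]                        # drop the committed txn and its predecessors
            remaining = [lst[0] for lst in stream_lists.values() if lst]
            return min(remaining) if remaining else None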
  • Periodically, the source site sends a "purge" message to the destination site.
  • The purge message indicates the low water mark that is durably stored at the source site.
  • The destination site may then delete from the applied transaction table the entries for all transactions that are older than the transaction associated with the low water mark.
  • According to one embodiment, a purge message is not sent to the destination site unless the low water mark specified in the purge message has been stored on non-volatile memory at the source site. Consequently, the applied transactions table will always identify all transactions that (1) are above the durably stored low water mark and (2) have been propagated to and committed at the destination site.
  • According to one embodiment, the source site maintains a "low water mark table" that contains a low water mark for each destination site.
  • The low water mark for a destination site identifies a transaction T such that every transaction that was dequeued before T is known to have been applied and committed at that destination site. Under certain conditions, some transactions that are later than T in the dequeue sequence may also have committed at the destination site, but this fact may not yet be known at the source site.
  • Maintaining a low water mark table at the source site has the benefit that after a failure, the source site only needs to be informed about the status of transactions that are above the low water mark.
  • each deferred transaction record is given a dequeue sequence number upon being dequeued.
  • dequeue sequence numbers are assigned consecutively as transactions are dequeued. The fact that the sequence numbers are consecutive means that a skip in the sequence indicates the absence of a transaction, rather than just a delay between when transactions were assigned sequence numbers.
  • the dequeue sequence number associated with a transaction is propagated to the destination site with the transaction.
  • the destination site stores the dequeue sequence number of a transaction in the applied transaction table entry for the transaction.
  • FIG. 6 illustrates a replication system in which deferred transactions are propagated from a source site 602 to a destination site 620 over a plurality of propagation streams 622.
  • the scheduler heap 604 at the source site 602 contains entries for the transactions that have been assigned dequeue numbers 33 through 70. In the illustrated example, all of these transactions have been propagated to the destination site 620 over one of the propagation streams 622. Therefore, all of the entries are marked as "sent". It should be noted that the scheduler heap 604 may additionally include any number of unsent entries for transactions that have not yet been propagated.
  • the low water mark stored at the source site 602 for destination site 620 is 33.
  • all entries with dequeue sequence numbers below 33 have been removed from applied transaction table 650.
  • Applied transaction table 650 currently indicates that the transactions associated with dequeue numbers 33, 34, 35, 40 and 53, which are equal to or above the low water mark of 33, have been committed at the destination site 620.
  • During recovery, a source site must determine the status of transactions that have been propagated to each destination site. As explained above, the status of the transactions may be determined based on the low water marks and information in the applied transaction tables of the destination sites. If the applied transaction table at a destination site contains an entry for a transaction or if the transaction falls below the low water mark for that destination site, then the transaction was committed at the destination site prior to the failure. Otherwise, the transaction had not committed at the destination site prior to the failure.
  • Because low water marks are maintained at the source site, the source site only needs to be informed of the transactions that were committed at a destination site that are above the low water mark for the destination site (the "above-the-mark committed transactions"). Therefore, one step in the recovery process is communicating to the source site information that identifies the set of above-the-mark committed transactions.
  • the set of above-the-mark committed transactions is sent from the destination site to the source site as a series of dequeue sequence number ranges.
  • the set of above-the-mark committed transactions is sent from the destination site to the source site in the form of tuples, where each tuple identifies a range of dequeue sequence numbers. For example, assume that destination site 620 had, prior to a failure, committed the transactions propagated from source site 602 with dequeue sequence numbers up to 55, with dequeue sequence numbers from 90 to 200, and with dequeue sequence numbers from 250 to 483.
  • source site 602 sends the low water mark 33 to destination site 620 to request the set of above-the-mark committed transactions that were propagated to destination site 620 from source site 602.
  • destination site 620 sends back to source site 602 the tuples (55, 90), (200, 250) and (483, -). Each tuple delimits a gap between runs of committed dequeue sequence numbers. With this information, the recovery process knows that all transactions that fall within the indicated gaps will have to be re-propagated to destination site 620. A sketch of this encoding also follows this list.
  • the number of tuples that must be sent as committed transaction information is limited to the number of gaps between the dequeue sequence numbers of committed transactions, and the number of gaps is bounded by the size of the scheduler heap. Therefore, if the original scheduler heap was small enough to be stored in dynamic memory prior to the failure, then the committed transaction information should fit in the dynamic memory during recovery.
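The per-stream bookkeeping described in the list above can be sketched as follows. This is a minimal illustration under assumed data structures: the deque-per-stream layout, the function names and the sample sequence numbers are not taken from the patent. It shows how committed transaction information shrinks the ordered lists and how the low water mark falls out of them.

from collections import deque

# One ordered list per stream (oldest first) holding the dequeue sequence numbers
# of transactions that have been propagated but whose commit at the destination
# is not yet known at the source site.
in_flight = {
    "stream_1": deque([33, 36, 41]),
    "stream_2": deque([34, 38, 40]),
    "stream_3": deque([35, 37, 39]),
}

def acknowledge(stream, committed_seq):
    # Committed transaction information names one transaction; every earlier
    # transaction propagated on the same stream is also known to be committed,
    # so all of those elements can be removed from the list.
    lst = in_flight[stream]
    while lst and lst[0] <= committed_seq:
        lst.popleft()

def low_water_mark():
    # The low water mark is the oldest transaction still represented on any of
    # the lists; every transaction dequeued before it is known to be committed.
    heads = [lst[0] for lst in in_flight.values() if lst]
    return min(heads) if heads else None

acknowledge("stream_2", 38)   # destination reports 38 committed (and so 34 as well)
print(low_water_mark())       # -> 33, the oldest transaction not yet acknowledged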
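The tuple encoding used in the recovery example can likewise be sketched in a few lines. The function below is a hypothetical illustration (the names and the empty-table handling are assumptions): it walks the above-the-mark entries of the applied transaction table and emits one tuple per gap in the consecutive dequeue sequence numbers, reproducing the (55, 90), (200, 250), (483, -) tuples of the example.

def gap_tuples(low_water_mark, applied_seq_numbers):
    # Each tuple (a, b) says: a and b are committed, and every dequeue sequence
    # number strictly between them must be re-propagated.  The final tuple
    # (a, None) is the open-ended gap written as (483, -) in the text.
    committed = sorted(s for s in applied_seq_numbers if s >= low_water_mark)
    if not committed:
        return []            # nothing committed above the mark; ignored in this sketch
    tuples = [(prev, nxt) for prev, nxt in zip(committed, committed[1:])
              if nxt != prev + 1]
    tuples.append((committed[-1], None))
    return tuples

# The example from the text: everything up to 55, 90-200 and 250-483 committed.
applied = list(range(33, 56)) + list(range(90, 201)) + list(range(250, 484))
print(gap_tuples(33, applied))   # -> [(55, 90), (200, 250), (483, None)]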

Abstract

A method and system are provided for recovering after a failure in a data replication environment. According to the method, a transaction is executed at a source site that makes changes that must be replicated at a destination site. The changes are made permanent at the source site without the source site being informed as to whether the changes were successfully applied at the destination site. The changes are sent to the destination site. The changes are applied at the destination site. If the changes are successfully applied before the failure, then the changes are made permanent at the destination site and a record is added to a set of records at the destination site. The record indicates that the changes were made permanent at the destination site. After a failure, the set of records at the destination site is used to determine which changes must be sent from the source site to the destination site after the failure.

Description

RELATED APPLICATIONS
The present Application is related to the following Applications: U.S. patent application Ser. No. 08/770,573, entitled "Parallel Queue Propagation," filed by Alan Demers, James Stamos, Sandeep Jain, Brian Oki, and Roger J. Bamford on Dec. 19, 1996; and U.S. patent application Ser. No. 08/769,740, entitled "Dequeuing Using Queue Batch Numbers," filed by Alan Demers, James Stamos, Sandeep Jain, Brian Oki, and Roger J. Bamford on Dec. 19, 1996.
FIELD OF THE INVENTION
The present invention relates to database systems, and more particularly to techniques for propagating changes from one site to another.
BACKGROUND OF THE INVENTION
Under certain conditions, it is desirable to store copies of a particular set of data, such as a relational table, at multiple sites. If users are allowed to update the set of data at one site, the updates must be propagated to the copies at the other sites in order for the copies to remain consistent. The process of propagating the changes is generally referred to as replication.
Various mechanisms have been developed for performing replication. One such mechanism is described in U.S. patent application Ser. No. 08/126,586 entitled "Method and Apparatus for Data Replication", filed on Sep. 24, 1993 by Sandeep Jain and Dean Daniels, now abandoned, the contents of which are incorporated by reference.
The site at which a change is initially made to a set of replicated data is referred to herein as the source site. The sites to which the change must be propagated are referred to herein as destination sites. If a user is allowed to make changes to copies of a particular table that are at different sites, those sites are source sites with respect to the changes initially made to their copy of the table, and destination sites with respect to the changes initially made to copies of the table at other sites.
Replication does not require an entire transaction that is executed at a source site to be re-executed at each of the destination sites. Only the changes made by the transaction to replicated data need to be propagated. Thus, other types of operations, such as read and sort operations, that may have been executed in the original transaction do not have to be re-executed at the destination sites.
Row-level replication and column-level replication constitute two distinct styles of replication. In row-level or column-level replication, the updates performed by an executing transaction are recorded in a deferred transaction queue. The information recorded in the deferred transaction queue includes both the old and the new values for each data item that was updated. Row-level and column-level replication differ with respect to whether old and new values are transmitted for an entire relational row (row-level) or for only a subset of its columns (column-level).
The changes recorded in the deferred transaction queue are propagated to the destination site. The destination site first checks that its current data values agree with the transmitted "old" values. The check may fail, for example, if concurrent changes have been made to the same replicated data at different sites. If the check fails, a conflict is said to have been detected. Various techniques may be used to resolve such conflicts. If no conflict is detected, the current data values at the destination site are replaced with the transmitted "new" values.
Referring to FIG. 1, it illustrates a system in which copies of a table 118 are stored at multiple sites. Specifically, the system includes three sites 100, 102 and 104. Sites 100, 102 and 104 include disks 106, 108 and 110 that store copies 120, 122 and 124 of table 118, respectively. Database servers 130, 132 and 134 are executing at sites 100, 102 and 104, respectively.
Assume that database server 130 executes a transaction that makes changes to copy 120. When execution of the transaction is successfully completed at site 100, a record of the changes made by the transaction is stored in a deferred transaction queue 160 of a replication mechanism 140. Such records are referred to herein as deferred transaction records. Typically, the deferred transaction queue 160 will be stored on a non-volatile storage device so that the information contained therein can be recovered after a failure.
Replication mechanism 140 includes a dequeue process for each of sites 102 and 104. Dequeue process 150 periodically dequeues all deferred transaction records that (1) involve changes that must be propagated to site 102, and (2) have not previously been dequeued by dequeue process 150. The records dequeued by dequeue process 150 are transmitted in a stream to site 102. The database server 132 at site 102 makes the changes to copy 122 of table 118 after checking to verify that the current values in copy 122 match the "old values" contained in the deferred transaction records.
Similarly, dequeue process 152 periodically dequeues all deferred transaction records that (1) involve changes that must be propagated to site 104, and (2) have not previously been dequeued by dequeue process 152. The records dequeued by dequeue process 152 are transmitted in a stream to site 104. The database server 134 at site 104 makes the changes to copy 124 of table 118 after checking to verify that the current values in copy 124 match the "old values" contained in the deferred transaction records.
Various obstacles may impede the efficiency of the replication mechanism 140 illustrated in FIG. 1. For example, a mechanism must be provided which allows dequeue processes 150 and 152 to distinguish between the deferred transaction records within deferred transaction queue 160 that they have already dequeued, and the deferred transaction records that they have not yet dequeued.
Further, a single stream connects dequeue processes 150 and 152 to their corresponding destination sites. Efficiency may be improved by establishing multiple streams between the source site and each of the destination sites. However, there are constraints on the order in which updates must be applied at the destination sites, and the replication mechanism has no control over the order in which commands that are sent over one stream are applied at a destination site relative to commands that are sent over a different stream. Therefore, a transmission scheduling mechanism must be provided if commands are to be sent to a destination site over more than one stream.
Currently, database systems implement replication by executing deferred transactions using two phase commit techniques. During two phase commit operations, numerous messages are sent between the source site and each of the destination sites for each transaction to ensure that changes at all sites are made permanent as an atomic event. While the use of two phase commit techniques ensures that the various databases may be accurately recovered after a failure, the overhead involved in the numerous inter-site messages is significant. Therefore, it is desirable to provide a mechanism that involves less messaging overhead than two phase commit techniques but which still allows accurate recovery after a failure.
SUMMARY OF THE INVENTION
A method and system are provided for recovering after a failure in a replication environment. According to the method, a transaction is executed at a source site that makes changes that must be replicated at a destination site. The changes are made permanent at the source site without the source site being informed as to whether the changes were successfully applied at the destination site.
The changes are sent to the destination site. The changes are applied at the destination site. If the changes are successfully applied before the failure, then the changes are made permanent at the destination site and a record is added to a set of records at the destination site. The record indicates that the changes were made permanent at the destination site. After a failure, the set of records at the destination site is used to determine which changes must be sent from the source site to the destination site after the failure.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements and in which:
FIG. 1 is a block diagram of a computer system that includes a replication mechanism;
FIG. 2 is a block diagram of a computer system that may be used to implement the present invention;
FIG. 3A is a block diagram of a portion of a replication system in which queue batch numbers are used to coordinate dequeuing operations according to an embodiment of the invention;
FIG. 3B illustrates the system of FIG. 3A after a stamping operation is performed;
FIG. 3C illustrates the system of FIG. 3B after a dequeuing operation is performed;
FIG. 3D illustrates the system of FIG. 3C after another stamping operation is performed;
FIG. 4 is a block diagram that illustrates propagation mechanisms that propagate transactions using multiple streams per destination site according to an embodiment of the invention;
FIG. 5 is a flow chart illustrating the steps used to schedule the transmission of transactions according to an embodiment of the invention; and
FIG. 6 is a block diagram of a replication system in which the destination site maintains an applied transaction table that may be used in recovery after a failure, according to an embodiment of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
A method and apparatus for replicating data at multiple sites is described. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
HARDWARE OVERVIEW
Referring to FIG. 2, it is a block diagram of a computer system 200 upon which an embodiment of the present invention can be implemented. Computer system 200 includes a bus 201 or other communication mechanism for communicating information, and a processor 202 coupled with bus 201 for processing information. Computer system 200 further comprises a random access memory (RAM) or other dynamic storage device 204 (referred to as main memory), coupled to bus 201 for storing information and instructions to be executed by processor 202. Main memory 204 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 202. Computer system 200 also comprises a read only memory (ROM) and/or other static storage device 206 coupled to bus 201 for storing static information and instructions for processor 202. Data storage device 207 is coupled to bus 201 for storing information and instructions.
A data storage device 207 such as a magnetic disk or optical disk and its corresponding disk drive can be coupled to computer system 200. Computer system 200 can also be coupled via bus 201 to a display device 221, such as a cathode ray tube (CRT), for displaying information to a computer user. Computer system 200 further includes a keyboard 222 and a cursor control device 223, such as a mouse.
The present invention is related to the use of computer system 200 to propagate to other sites changes made to data on disk 207. According to one embodiment, replication is performed by computer system 200 in response to processor 202 executing sequences of instructions contained in memory 204. Such instructions may be read into memory 204 from another computer-readable medium, such as data storage device 207. Execution of the sequences of instructions contained in memory 204 causes processor 202 to perform the process steps that will be described hereafter. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the present invention. Thus, the present invention is not limited to any specific combination of hardware circuitry and software.
DEQUEUING TECHNIQUES
As mentioned above, one phase of the replication process involves placing deferred transaction records into a deferred transaction queue. According to one embodiment, the deferred transaction queue is implemented as a relational table, where each deferred transaction record is stored as one or more rows within the table.
For example, a transaction record for a given transaction may consist of ten rows within the deferred transaction queue, where each of the ten rows corresponds to an update performed by the transaction and contains an old and new value for the update and an update sequence number that identifies the order in which the update was performed relative to the other updates performed by the transaction. The transaction record also contains a transaction identifier that identifies the transaction and a "prepared time" value that indicates when the transaction finished execution (was "prepared") relative to other transactions. The transaction identifier and the prepared time value of a transaction may be stored, for example, in one of the rows that constitute the transaction record for the transaction.
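As an illustration only, the rows just described might be modelled as follows; the field names are hypothetical assumptions, and a real implementation would store them as columns of the queue table rather than as an in-memory object.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class DeferredQueueRow:
    # One row of the deferred transaction queue.  A deferred transaction record
    # is the set of rows sharing the same transaction_id.
    transaction_id: str                  # identifies the deferred transaction
    update_seq: int                      # order of this update within the transaction
    old_value: Any                       # value before the update, used for conflict checks
    new_value: Any                       # value after the update
    prepared_time: Optional[int] = None  # carried on one row of the record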
The process of dequeuing a deferred transaction record for one destination site does not automatically remove the deferred transaction record from the deferred transaction queue because the deferred transaction record may have to be dequeued for other destination sites. Once a deferred transaction record has been dequeued for all destination sites, the deferred transaction record may be removed from the deferred transaction queue by a process that may be entirely independent of the dequeuing processes.
For example, in a replication environment consisting of N sites, each deferred transaction record must be dequeued N-1 times (once for each destination site) before it can be deleted from the deferred transaction queue. Because the act of dequeuing a deferred transaction record does not remove the deferred transaction record from the deferred transaction queue, the presence of a deferred transaction record within the deferred transaction queue does not indicate whether the deferred transaction record has been dequeued for any given destination site.
For each destination site, a dequeuing process repeatedly performs a dequeuing operation on the deferred transaction queue. During every dequeuing operation the dequeuing process performs, it must only dequeue the deferred transaction records for its destination site that it has not already dequeued. Therefore, a mechanism must be provided for determining which deferred transaction records within the deferred transaction queue have already been dequeued for each of the destination sites.
THE PREPARE SEQUENCE APPROACH
One way to keep track of which deferred transaction records have been dequeued for each destination site involves storing within each deferred transaction record a sequence number that indicates the sequence in which the transaction associated with the deferred transaction record was made permanent ("committed") at the source site. Each dequeuing process then keeps track of the highest sequence number of the records that it has dequeued. At each subsequent pass, the dequeuing process only reads those records with higher sequence numbers than the highest sequence number encountered on the previous pass.
When the deferred transaction queue is implemented using a relational table, the process of dequeuing records from the deferred transaction queue may be implemented by executing a query on the table. To implement the prepare sequence approach described above, a dequeuing process would repeatedly execute the equivalent of the SQL query:
select * from queue_table where sequence_number > last_sequence_number order by sequence_number;
Generally, a transaction is not considered committed until a deferred transaction record for the transaction is written into the deferred transaction queue. Therefore, a commit time cannot be assigned to a transaction until the deferred transaction record is written into the deferred transaction queue. Consequently, the deferred transaction record that is written into the deferred transaction queue does not contain the true commit time of the corresponding transaction. In place of the commit time, a "prepared time value" is stored in the transaction record. Prepared time values indicate the time at which transactions completed execution, not the actual time the transactions committed.
Because the transaction records do not contain actual commit times, the prepared time values are used as sequence numbers for the dequeuing technique described above. However, the database system is not able to guarantee that the deferred transaction records of isolated transactions will commit in the order in which the transactions acquire prepare times. Without such a guarantee, deferred transaction records may be written into the deferred transaction queue out of prepare sequence.
The possibility that deferred transaction records may be written into the deferred transaction queue out of prepare sequence renders the prepare sequence approach unusable. For example, assume that two transactions with sequence numbers S1<S2 are inserted into the deferred transaction queue out of order. If a dequeue process performs a dequeue operation after the S2 deferred transaction record is inserted and before the S1 deferred transaction record is inserted, then the highest sequence number seen by the dequeue process will be at least S2. When the dequeue process performs a subsequent dequeue operation, the dequeue process will only dequeue deferred transaction records that have sequence numbers greater than S2. The S1 deferred transaction record will be skipped and may never be dequeued by that dequeue process.
SEQUENCE STAMP LOCKING
One approach to avoid the out-of-sequence problem associated with the prepare sequence approach is to prevent transactions from acquiring prepared time values until the deferred transaction records for all transactions that have previously acquired prepared time values are stored in the deferred transaction queue. If transactions cannot acquire prepared time values until the deferred transaction records for all transactions that have previously acquired prepared time values are stored in the deferred transaction queue, then the commit time order will always reflect the prepared time order. Thus, the prepared time may be treated as the commit time.
For example, an "enqueue lock" may be used to restrict access to the sequence assignment mechanism. Before a transaction can be assigned a sequence number, the transaction must acquire the enqueue lock. The transaction must then hold the enqueue lock until the deferred transaction record for the transaction is actually written to the deferred transaction queue. This technique effectively makes the sequence number assignment and the insertion of the deferred transaction record an atomic operation. The following steps could be used to implement this technique:
begin transaction
perform transaction operations
acquire enqueue lock
acquire sequence number
insert deferred transaction record into deferred transaction queue
commit and release enqueue lock
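A minimal sketch of the steps above, using an in-process lock to stand in for the database enqueue lock; the structures and names are illustrative assumptions, not the patented implementation.

import threading

enqueue_lock = threading.Lock()   # stands in for the database enqueue lock
next_sequence = 0
deferred_queue = []               # stands in for the deferred transaction queue

def finish_transaction(transaction_record):
    # Sequence assignment and queue insertion happen as one atomic step, so the
    # record can never reach the queue out of sequence order, at the cost of
    # serializing the tail end of every committing transaction.
    global next_sequence
    with enqueue_lock:                               # acquire enqueue lock
        next_sequence += 1                           # acquire sequence number
        transaction_record["sequence"] = next_sequence
        deferred_queue.append(transaction_record)    # insert into the deferred transaction queue
        # the commit would be issued here, before the lock is released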
While this technique avoids the out-of-sequence problems associated with the prepare sequence approach, it also creates a bottleneck in transaction processing. Specifically, when numerous concurrent processes complete execution at the same time, one will acquire the enqueue lock and the others will have to await their turn. Thus, while the transactions may be executing in parallel to take full advantage of the processing power of the hardware on which they are executing, they will have to be processed serially upon completion.
RECORD FLAGGING
To avoid the bottleneck associated with the stamp locking process, a record can be maintained to indicate which deferred transaction records have been dequeued for which sites. For example, a plurality of flags may be stored in each deferred transaction record, where each flag corresponds to a destination site. Initially, all of the flags indicate that the deferred transaction record has not been dequeued. During each dequeue pass, the dequeue process inspects each deferred transaction record to determine whether the flag corresponding to the destination site associated with the dequeue process has been set. If the flag has been set, the deferred transaction record is skipped. If the flag has not been set, the dequeue process dequeues the deferred transaction record. When a dequeue process dequeues the deferred transaction record, the dequeue process sets the flag within the deferred transaction record that corresponds to the destination site associated with the dequeue process to indicate that the deferred transaction record has been dequeued for that destination site.
Unfortunately, the record flagging approach has the disadvantage that each deferred transaction record will be updated once for each destination site. This disadvantage is significant because updates involve a relatively large amount of overhead and there may be a large number of destination sites.
As an alternative to using flags within the deferred transaction records, a record that indicates which deferred transaction records have been dequeued for each destination site may be maintained external to the deferred transaction queue. For example, each dequeue process may maintain a dequeued transactions table into which the dequeue process inserts a row for each deferred transaction record that it dequeues, where the row identifies the transaction associated with the dequeued deferred transaction record.
However, the dequeued transaction table approach also involves a significant amount of overhead. Specifically, a row must be generated and inserted for each destination site for every deferred transaction record. In addition, the dequeue query is expensive in that the entire deferred transaction queue may have to be scanned looking for deferred transaction records that are not recorded in a particular dequeued transaction table.
QUEUE BATCH NUMBERS
According to an embodiment of the invention, a "queue batch number" column is added to each deferred transaction record in the deferred transaction queue. When a deferred transaction record is initially inserted into the queue, the queue batch value is set to some default value. Before dequeuing deferred transaction records, each dequeue process "stamps" the deferred transaction queue by setting the queue batch values in all of the deferred transaction records that have the default queue batch value to a queue batch number that is greater than any queue batch number that has previously been assigned to any deferred transaction record. The dequeue process then dequeues all of the records that have queue batch numbers greater than the queue batch number used by that dequeue process in its previous batch stamping operation.
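A sketch of the stamping and dequeuing steps just described, using in-memory structures in place of the queue table. The record layout, the counter's starting value and the function names are assumptions made for illustration; in practice the stamp would be an UPDATE and the dequeue a SELECT against the queue table. The concrete numbers mirror the example of FIGS. 3A and 3B discussed below.

DEFAULT_BATCH = -5000      # default queue batch value, as in the figures below
batch_counter = 60         # highest queue batch number assigned so far
queue = []                 # the deferred transaction queue, one dict per record

def stamp():
    # Stamp every record still holding the default value with a queue batch
    # number greater than any previously assigned one.
    global batch_counter
    batch_counter += 1
    for rec in queue:
        if rec["batch"] == DEFAULT_BATCH:
            rec["batch"] = batch_counter
    return batch_counter

def dequeue_for_site(last_batch):
    # Stamp, then pick up every record whose queue batch number is above this
    # site's LAST_BATCH value, in <batch number, prepared time> order.
    new_last_batch = stamp()
    picked = [rec for rec in queue
              if rec["batch"] != DEFAULT_BATCH and rec["batch"] > last_batch]
    picked.sort(key=lambda rec: (rec["batch"], rec["prepared_time"]))
    return picked, new_last_batch

# The situation of FIG. 3A: five freshly inserted records, counter at 60, and
# dequeue process 304 holding a LAST_BATCH value of 59.
queue.extend({"txn": t, "batch": DEFAULT_BATCH, "prepared_time": t} for t in range(1, 6))
records, last_batch_304 = dequeue_for_site(59)
print(len(records), last_batch_304)   # -> 5 61, matching FIG. 3B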
The queue batch number stamping technique is illustrated in FIGS. 3A-3D. Referring to FIG. 3A, it illustrates an embodiment of the invention in which a deferred transaction queue 300 is implemented using a table. Deferred transaction records 308 are inserted into deferred transaction queue 300 by a database server after the transactions are prepared at the local (source) site. Prior to insertion into deferred transaction queue 300, these deferred transaction records are assigned the default queue batch value. In the illustrated embodiment, the default queue batch value is -5000.
At the time illustrated in FIG. 3A, deferred transaction records for five transactions have been inserted into the deferred transaction queue 300. None of the transactions have yet been dequeued by any dequeue process, and therefore all still contain the default queue batch value. Dequeue process 302 has previously dequeued deferred transaction records with queue batch numbers up to 60, and therefore stores the value "60" as its LAST_BATCH number. Dequeue process 304 has previously dequeued deferred transaction records with queue batch numbers up to 59, and therefore stores the value "59" as its LAST_BATCH number.
Prior to performing a dequeue operation, dequeue process 304 performs a batch stamping operation on deferred transaction queue 300. During the batch stamping operation, all deferred transaction records within deferred transaction queue 300 that currently hold the default queue batch number (-5000) are stamped with a higher queue batch number than has previously been assigned to any deferred transaction records. To ensure that the new queue batch number is higher than any previously assigned queue batch number, a queue batch counter 306 is used to track the highest previously assigned batch number. Initially, the queue batch counter is set to a value that is greater than the default queue batch number. At the time illustrated in FIG. 3A, the highest previously assigned queue batch value is 60.
FIG. 3B illustrates deferred transaction queue 300 after dequeue process 304 has performed a batch stamping operation. The queue batch counter 306 is incremented, increasing the value of the counter to 61. The deferred transaction records within deferred transaction queue 300 that previously stored the default queue batch value of -5000 now store the new queue batch value of 61. After the batch stamping operation, dequeue process 304 dequeues all of the deferred transaction records that have queue batch values that are higher than the highest queue batch value previously used by dequeue process 304. At the time illustrated in FIG. 3B, the LAST_BATCH value of dequeue process 304 is 59, and the five deferred transaction records in deferred transaction queue 300 have queue batch values of 61. Therefore, dequeue process 304 will dequeue all five of the deferred transaction records.
FIG. 3C illustrates deferred transaction queue 300 after dequeue process 304 has performed a dequeue operation. The LAST_BATCH value of dequeue process 304 has been updated to reflect that dequeue process 304 has dequeued all deferred transaction records with queue batch values up to 61.
At the time illustrated in FIG. 3C, five new deferred transaction records have been inserted into deferred transaction queue 300 since the batch stamping operation performed by dequeue process 304. These new deferred transaction records have been assigned the default queue batch value. As long as the new deferred transaction records were added after the batch stamping operation, the new deferred transaction records will not have been dequeued by dequeue process 304 regardless of whether they were inserted before or after the dequeue operation because dequeue process 304 only dequeued those deferred transaction records with queue batch values greater than 59.
Assume that at the time illustrated in FIG. 3C, dequeue process 302 performs a batch stamping operation. Dequeue process 302 increments the queue batch counter to 62, and stamps all of the deferred transaction records that have the default queue batch value with the new queue batch value of 62. FIG. 3D illustrates the state of deferred transaction queue 300 after dequeue process 302 has performed such a batch stamping operation. Dequeue process 302 may then perform a dequeue operation in which dequeue process 302 dequeues all deferred transaction records with queue batch values greater than 60. During the dequeue operation, dequeue process 302 would dequeue all of the deferred transaction records previously dequeued by dequeue process 304, as well as all of the new deferred transaction records. After the dequeue operation, dequeue process 302 would update its LAST-- BATCH value to 62.
Assume that no new records arrive after the time illustrated in FIG. 3D and the next batch stamping operation is performed by dequeue process 302. Under these conditions, the queue batch counter 306 would be incremented to 63, but none of the deferred transaction records within deferred transaction queue 300 would be updated. Dequeue process 304 would only dequeue those deferred transaction records with queue batch values greater than the LAST_BATCH value of dequeue process 304. In the illustrated example, the LAST_BATCH value of dequeue process 304 is 61. Therefore, dequeue process 304 would only dequeue those deferred transaction records that it did not dequeue in its previous dequeue operation.
By comparing LAST_BATCH numbers with queue batch numbers, dequeue processes can quickly distinguish between deferred transaction records they have already dequeued, and deferred transaction records they have not yet dequeued. Using this technique, many deferred transaction records can be concurrently written into the deferred transaction queue 300 out of prepared time order without adversely affecting dequeue operations. Therefore, the bottleneck associated with the sequence stamp locking technique described above is avoided.
Further, each deferred transaction record is only updated once, not once for every destination site. Specifically, each deferred transaction record is only updated during the first batch stamping operation performed after the deferred transaction record has been inserted into the deferred transaction queue 300, at which point it is stamped with a non-default queue batch number. Therefore, this technique avoids the significant overhead associated with the record flagging techniques described above.
SEQUENTIAL PROCESSING
According to one embodiment, dequeued transactions are processed sequentially, not as atomic "batches" of transactions. The order in which a transaction is processed is based on both the batch number of the transaction and the prepared time of the transaction. Specifically, transactions are dequeued in <batch number, prepared time> order. Thus, for each dequeue process, transactions with older batch numbers are processed before transactions with newer batch numbers. Within a batch, transactions with older prepared times are processed before transactions with newer prepared times.
Because batches are not processed as atomic units, the LAST_BATCH value alone is not enough to indicate which transactions have and have not been processed by a particular dequeuing process. According to one embodiment, a <LAST_BATCH, transaction identifier> value pair is maintained by each dequeue process to indicate the last transaction to be processed by the dequeuing process. After a failure, the <LAST_BATCH, transaction identifier> value pair for a dequeue process may be used to determine which transactions must still be processed by the dequeue process.
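A short sketch of this bookkeeping, with hypothetical structures; in a real system the bookmark would be stored durably as part of processing each transaction.

bookmark = (59, None)    # <LAST_BATCH, transaction identifier> of the last processed transaction

def apply_at_destination(rec):
    print("propagating", rec["txn"])     # placeholder for the real propagation step

def process_sequentially(records):
    # Transactions are handled one at a time in <batch number, prepared time>
    # order; the bookmark advances after each one, so after a failure processing
    # can resume just past the last transaction recorded in the bookmark.
    global bookmark
    for rec in sorted(records, key=lambda r: (r["batch"], r["prepared_time"])):
        apply_at_destination(rec)
        bookmark = (rec["batch"], rec["txn"])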
SCHEDULING BATCH STAMPING OPERATIONS
In the embodiment described above, the dequeue processes perform batch stamping operations before every dequeue query they perform. However, a batch stamping operation does not need to be performed before a dequeuing query for a given site as long as a batch stamping operation has been performed subsequent to the last dequeuing query for the given site. Further, as long as at least one batch stamping operation is performed between consecutive dequeue queries for a given site, the actual number of batch stamping operations performed between consecutive dequeue operations for a site will not affect the dequeue query.
For example, at the time shown in FIG. 3C, dequeue process 302 can perform a dequeue query without first performing a batch stamping operation. This is possible because dequeue process 304 performed a batch stamping operation since the last dequeue query performed by dequeue process 302. Under these circumstances, the newly arrived deferred transaction records would not be dequeued by dequeue process 302 until a subsequent dequeue query is performed by dequeue process 302. The present invention is not limited to any particular mechanism for scheduling batch stamping operations relative to dequeue operations.
In the embodiment described above, each destination site has a dequeue process and the dequeue processes perform the batch stamping operations. In alternative embodiments, each destination site may have more than one dequeue process, and each dequeue process may service more than one destination site. Further, batch stamping operations may be performed by one or more processes executing independent of the dequeue processes, or by recursive transactions initiated by the dequeue processes.
PURGING THE DEFERRED TRANSACTION QUEUE
Once a deferred transaction record has been processed for all destination sites to which it must be propagated, the deferred transaction record can be deleted from the deferred transaction queue. According to one embodiment, a process responsible for purging the deferred transaction queue reads the <LAST_BATCH, transaction-id> value pair for each of the destination sites. The <LAST_BATCH, transaction-id> value pair maintained by each dequeue process indicates the last transaction encountered by that dequeue process.
Each dequeue process will maintain its own <LAST_BATCH, transaction-id> value. Of all the transactions thus identified, the transaction with the lowest <batch number, prepared time> value represents the most recent transaction that has been encountered by every dequeue process (the "global bookmark"). The purging process deletes all deferred transaction records in the deferred transaction queue for transactions that have lower <batch number, prepared time> values than the global bookmark (except for transactions currently marked with the default batch value), since these deferred transaction records have been dequeued for all destination sites for which they need to be dequeued.
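The purge decision can be sketched as follows. The site names, bookmark values and record layout are assumptions: each per-site entry here already holds the <batch number, prepared time> position of the transaction identified by that site's <LAST_BATCH, transaction-id> bookmark, and a real purge would be a DELETE against the queue table.

DEFAULT_BATCH = -5000

# One entry per destination site: the <batch number, prepared time> position of
# the last deferred transaction record that site's dequeue process has seen.
site_bookmarks = {
    "site_102": (61, 17),
    "site_104": (60, 12),
}

def purge(queue):
    # The global bookmark is the least advanced of the per-site bookmarks.  Every
    # record below it has been dequeued for all sites that need it and can be
    # deleted; records still carrying the default batch value are always kept.
    global_bookmark = min(site_bookmarks.values())
    return [rec for rec in queue
            if rec["batch"] == DEFAULT_BATCH
            or (rec["batch"], rec["prepared_time"]) >= global_bookmark]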
A dequeue process may not dequeue some deferred transaction records it encounters because the deferred transaction records do not have to be propagated to the destination site associated with the dequeue process. According to one embodiment, the <LAST_BATCH, transaction-id> value for each site is updated based on all deferred transaction records encountered (but not necessarily dequeued) during the dequeue operations. Specifically, each dequeue process updates its <LAST_BATCH, transaction-id> value based on all deferred transaction records it sees during a dequeue operation, including those deferred transaction records that it does not actually dequeue.
For example, assume that the <LAST_BATCH, transaction-id> value for a dequeue process associated with a destination site A is <20, 5>. During a dequeue operation, the dequeue process encounters two deferred transaction records with batch numbers higher than 20. The first deferred transaction record is for a transaction TXA, has a queue batch number of 23 and must be dequeued for site A. The second deferred transaction record is for a transaction TXB, has a queue batch number of 25 and does not have to be dequeued for site A. Under these circumstances, the dequeue process updates its <LAST_BATCH, transaction-id> value to <25, TXB> after performing the dequeue operation.
Consequently, the <LAST_BATCH, transaction-id> value for each site will be updated according to the frequency (F1) at which dequeue operations are performed for that site, not the frequency (F2) at which changes are actually propagated to that site. For sites to which changes must rarely be propagated, F1 may be significantly greater than F2. As a result, the delay between the time at which a deferred transaction record has been dequeued for all necessary sites and the time at which the deferred transaction record is deleted from the deferred transaction queue can be significantly shorter than it would be if the <LAST_BATCH, transaction-id> values were only updated based on the deferred transaction records that a dequeue process actually dequeues.
TRANSACTION PROPAGATION
In replication, propagating a transaction to a destination site means causing the destination site to execute operations that apply, at the destination site, the changes that the transaction made at the source site. According to one embodiment, the source site transmits a stream of information to the destination site to cause such operations to be performed.
Specifically, the source site sends deferred transactions to a destination site as a sequence of remote procedure calls, essentially as described in U.S. patent application Ser. No. 08/126,586 entitled "Method and Apparatus for Data Replication", filed on Sep. 24, 1993 by Sandeep Jain and Dean Daniels. Deferred transaction boundaries are marked in the stream by special "begin-unit-of-work" and "end-unit-of-work" tokens that contain transaction identifiers.
The destination site receives messages on the stream. When it receives the "begin-unit-of-work" token, the destination site starts a local transaction for executing the procedure calls that will follow the "begin-unit-of-work" token. Such transactions are referred to herein as replication transactions. A replication transaction executes the procedure calls specified in the stream until it encounters the next "end-unit-of-work" token. When the "end-unit-of-work" token is encountered, the replication transaction is finished. The destination site continues reading and processing deferred transactions using replication transactions in this manner until the stream is exhausted.
When distributed transactions are used to perform replication, a replication transaction enters a "prepared" state when the "end-unit-of-work" token is encountered. The destination site informs the source site that the replication transaction is prepared and awaits a commit instruction from the source site. The two phase commit operation used by distributed transactions is described in greater detail below. Also described below is an alternative to the use of distributed transactions in which the replication transaction can be committed immediately after it is prepared, without further communication with the source site.
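A sketch of the destination-site loop described above, in the variant that commits a replication transaction as soon as its end token arrives. The message shapes, the applied-transaction bookkeeping and the function names are illustrative assumptions only.

def apply_stream(messages, applied_transactions):
    # Consume one propagation stream.  Each deferred transaction arrives as a
    # begin-unit-of-work token, a run of remote procedure calls, and an
    # end-unit-of-work token, and is executed as a local replication transaction.
    current_txn = None
    for msg in messages:
        if msg["kind"] == "begin-unit-of-work":
            current_txn = msg["txn_id"]            # start a replication transaction
        elif msg["kind"] == "call":
            execute_procedure(msg)                 # apply the call inside that transaction
        elif msg["kind"] == "end-unit-of-work":
            applied_transactions.add(current_txn)  # record the commit at the destination
            current_txn = None                     # commit immediately, no round trip to the source

def execute_procedure(call):
    # Placeholder: check the transmitted old values, then install the new values.
    print("applying", call["proc"])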
DEPENDENCIES BETWEEN TRANSACTIONS
After a deferred transaction record has been dequeued for a destination site, the changes identified in the deferred transaction record are propagated to the destination site. However, the order in which the changes were made at the source site places some restrictions with respect to the order in which the changes must be made at the destination site.
Specifically, if a first transaction has written to a data item that is subsequently written to or read by a second transaction, then all changes made by the first transaction must be made at a destination site before any of the changes made by the second transaction are made at the destination site. In these circumstances, the second transaction is said to "depend on" the first transaction. When the second transaction merely reads the data item, the dependency is referred to as a write-read dependency. When the second transaction writes to the data item, the dependency is referred to as a write-write dependency.
During replication, it is critical that the order of write-write dependencies be observed so that the copy of the data item at the destination site will reflect the correct value after the two writes have been applied at the destination site. It is desirable that the order of write-read dependencies be observed during replication to reduce the likelihood that the database at the destination site will transition through invalid intervening states during the application of the changes at the destination site.
Another type of dependency, referred to as a read-write dependency, exists if a first transaction reads a data item that is subsequently written to by a second transaction. However, read-write dependencies are not relevant in the context of replication since only updates, not reads, are propagated to the destination sites.
There is a correlation between the prepared times of transactions and whether it is possible for a dependency to exist between the transactions. Specifically, transactions are not able to read or update any changes made by any other transactions until the other transactions are prepared and committed. Therefore, a transaction TXA cannot depend on a transaction TXB if the prepared time of the transaction TXA is earlier than the prepared time of transaction TXB.
There is also a correlation between the times that the deferred transaction records for transactions are written into the deferred transaction queue and whether it is possible for a dependency to exist between the transactions. Specifically, if every transaction acquires its prepared time as its last action before entering the committed state, then the deferred transaction record for any given transaction will never be written into the deferred transaction queue before the deferred transaction records of any transactions on which the given transaction depends. For example, if TXA depends on TXB, then it is guaranteed that the deferred transaction record for TXB will be written to the deferred transaction queue before the deferred transaction record for TXA. This is true because the changes made by TXB are not made visible to any transactions (including TXA) until the deferred transaction record for TXB is written to the deferred transaction queue.
SINGLE-STREAM PROPAGATION
One way to ensure that changes made by a transaction are always applied after the changes made by the transactions on which the transaction depends is to propagate the changes in a sequence based on the batch numbers and the prepared times of the transactions.
Specifically, a single stream can be opened to each destination site. Each process in charge of propagating changes to a destination site introduces the changes into the stream in batch order. The changes within each batch are sorted in prepared time order so that deferred transaction records with earlier prepared times are introduced into the stream prior to deferred transaction records with later prepared times. Since changes are applied at the destination site in the order in which they arrive in the stream, the changes made by each transaction will be made at the destination site after the changes made by any transactions upon which the transaction depends.
The prepared-time ordering of the deferred transaction records may be incorporated into the dequeue process. Specifically, the dequeue query:
select * from queue_table
where (queue_batch_number > last_batch)
order by queue_batch_number, prepared_time;
will retrieve new batches of deferred transaction records from the deferred transaction queue and order the deferred transaction records based on batch number and prepared time. Based on this ordering, if any transactions in a given batch depend on each other, their changes will be transmitted in the appropriate order. Further, as explained above, the deferred transaction record for a transaction is always written into the deferred transaction queue after the deferred transaction records for the transactions on which it depends. Therefore, it is guaranteed that subsequent batches will not contain transactions on which any of the transactions in the current batch depend.
MULTIPLE-STREAM PROPAGATION
The single-stream propagation technique described above ensures that changes will be applied at the destination sites in the correct order. However, performance is reduced by the fact that only one stream is used to propagate changes to each destination site. According to one embodiment of the invention, multiple streams are used to propagate updates to a single destination site. Because changes sent over one stream may be applied in any order relative to changes sent over another stream, a scheduling mechanism is provided to ensure that changes made by a given transaction will never be applied prior to the changes made by transactions on which the given transaction depends.
Referring to FIG. 4, it illustrates propagation mechanisms 400 and 402 that use multiple streams to propagate transactions to destination sites according to an embodiment of the invention. Propagation mechanisms 400 and 402 propagate transactions to destination sites 404 and 434, respectively. Propagation mechanism 400 includes a scheduler process 412, a scheduler heap 410 and three stream control processes 414, 416 and 418. Each of stream control processes 414, 416 and 418 manages an instance of the streaming protocol used to propagate transactions to destination site 404. Similarly, propagation mechanism 402 includes a scheduler process 422, a scheduler heap 420 and three stream control processes 424, 426 and 428. Each of stream control processes 424, 426 and 428 manages an instance of the streaming protocol used to propagate transactions to destination site 434.
As explained above, dequeue processes 302 and 304 dequeue deferred transaction records from deferred transaction queue 300. In the embodiment illustrated in FIG. 4, dequeue processes 302 and 304 insert the dequeued deferred transaction records into scheduler heaps 410 and 420, respectively, and propagation mechanisms 400 and 402 transmit the transactions specified in the deferred transaction records over multiple streams to destination sites 404 and 434, respectively. Dequeue processes 302 and 304 insert the deferred transaction records of each batch into the scheduler heap in an order based on the prepared times of the corresponding transactions, thus ensuring that the deferred transaction record for any given transaction will never be inserted into the scheduler heap before a deferred transaction record of a transaction on which the transaction depends.
When dequeue process 302 places a deferred transaction record in scheduler heap 410, the deferred transaction record is initially marked as "unsent". Scheduler process 412 is responsible for passing the transactions associated with the deferred transaction records in scheduler heap 410 to stream control processes 414, 416 and 418 in a safe manner. To ensure safety, scheduler process 412 cannot pass a transaction to a stream control process if it is possible that the transaction depends on a transaction that (1) has been propagated to destination site 404 using a different stream control process, and (2) is not known to have been committed at the destination site 404. In addition, the scheduler process 412 cannot pass a transaction to a stream control process if it is possible that the transaction depends on a transaction that has not yet been propagated to destination site 404. According to one embodiment of the invention, scheduler process 412 ensures the safe scheduling of transaction propagation to destination site 404 using the scheduling techniques illustrated in FIG. 5.
Referring to FIG. 5, it is a flow chart illustrating steps for scheduling the propagation of transactions according to one embodiment of the invention. At step 500, the scheduler process 412 inspects the deferred transaction records in the scheduler heap 410 to identify an unsent deferred transaction record. When the scheduler process 412 encounters an unsent deferred transaction record, scheduler process 412 determines whether the transaction for that deferred transaction record could possibly depend on any transaction associated with any other deferred transaction record in the scheduler heap 410 (step 502). The determination performed by scheduler process 412 during step 502 shall be described in greater detail below.
If the transaction associated with the deferred transaction record could depend on any transaction associated with any other deferred transaction record in the scheduler heap 410, then the transaction associated with the deferred transaction record is not passed to any stream control process and control passes back to step 500. If the transaction associated with the deferred transaction record could not possibly depend on any transaction associated with any other deferred transaction record in the scheduler heap 410, then the transaction associated with the deferred transaction record is passed to a stream control process at step 504. The stream control process propagates the transaction to the destination site 404. At step 506, the deferred transaction record is marked as "sent". Control then passes back to step 500.
Periodically, the propagation mechanism 400 receives from the destination site 404 messages that indicate which transactions have been committed at the destination site. In response to such messages, the deferred transaction records associated with the transactions are removed from the scheduler heap 410. The removed deferred transaction records no longer prevent the propagation of transactions that depended on the transactions associated with the removed deferred transaction records.
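The scheduling loop of FIG. 5 and the handling of commit acknowledgements might look like the following sketch. The heap entries, the round-robin stream choice and the could_depend test are illustrative assumptions; one safe form of the test is sketched under DEPENDENCY DETERMINATION below.

import itertools

def schedule(scheduler_heap, streams, could_depend):
    # One pass over the scheduler heap.  Streams are modelled as plain lists, and
    # could_depend(entry, heap) is any safe test for step 502.
    round_robin = itertools.cycle(range(len(streams)))
    for entry in scheduler_heap:
        if entry["sent"]:
            continue                                      # step 500: find an unsent record
        if could_depend(entry, scheduler_heap):
            continue                                      # step 502: not yet safe to send
        streams[next(round_robin)].append(entry["txn"])   # step 504: hand to a stream
        entry["sent"] = True                              # step 506: mark as sent

def on_commit_acknowledged(scheduler_heap, committed_txns):
    # Entries for transactions the destination reports as committed are removed,
    # so they no longer block transactions that might depend on them.
    scheduler_heap[:] = [e for e in scheduler_heap if e["txn"] not in committed_txns]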
The components of propagation mechanism 402 operate in the same manner as the corresponding components of propagation mechanism 400. Specifically, scheduler process 422 passes transactions associated with unsent deferred transaction records in scheduler heap 420 to stream control processes 424, 426 and 428 when the transaction could not possibly depend on any transaction associated with any other deferred transaction record in the scheduler heap 420.
For the purposes of explanation, the scheduler processes 412 and 422 and the dequeue processes 302 and 304 have been described as separate processes. However, the actual division of functionality between processes may vary from implementation to implementation. For example, a single process may be used to perform both the dequeuing and scheduling operations for a given site. Similarly, a single process may be used to perform the dequeuing and scheduling operations for all destination sites. The present invention is not limited to any particular division of functionality between processes.
The embodiment illustrated in FIG. 4 includes three stream control processes per destination site. However, the actual number of stream control processes may vary from implementation to implementation. For example, ten streams may be maintained between each source site and each destination site. Alternatively, ten streams may be maintained between the source site and a destination site, while only two streams are maintained between the source site and a different destination site. Further, the number of streams maintained between the source site and destination sites may be dynamically adjusted based on factors such as the currently available communication bandwidth.
In the embodiments described above, a transaction is not propagated as long as the transaction may depend on one or more transactions that are not known to have been committed at the destination site. However, under certain conditions a transaction can be safely propagated even when transactions that it may depend on are not known to have been committed at the destination site. Specifically, assume that the scheduler process determines that a transaction TXA cannot possibly depend on any propagated transactions that are not known to have committed except for two transactions TXB and TXC. If it is known that transactions TXB and TXC were propagated in the same stream, then TXA can be safely propagated in that same stream.
According to one embodiment of the invention, a record is maintained to indicate which stream was used to propagate each "sent" transaction. In this embodiment, transactions may be propagated to a destination site when (1) all transactions on which they may depend are known to have committed at the destination site, or (2) all transactions on which they may depend which are not known to have committed at the destination site were propagated over the same stream. In the latter case, transactions must be propagated using the same stream as was used to propagate the transactions on which they may depend.
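A sketch of this relaxed test is given below. The sent_stream mapping, which records the stream used for each "sent" transaction as described above, and the helper name can_propagate are hypothetical; the sketch is illustrative only.

```python
def can_propagate(unresolved_deps, sent_stream):
    """Return (True, stream) if the transaction may be propagated now.

    unresolved_deps: ids of transactions this transaction may depend on that
                     are not yet known to have committed at the destination
    sent_stream:     maps a "sent" transaction id to the stream it was sent on
    """
    if not unresolved_deps:
        return True, None                    # no constraint on the stream
    streams = {sent_stream.get(dep) for dep in unresolved_deps}
    if len(streams) == 1 and None not in streams:
        # Every unresolved dependency was propagated on the same stream, so the
        # transaction may safely be propagated on that stream behind them.
        return True, streams.pop()
    return False, None
```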
DEPENDENCY DETERMINATION
As described above, scheduler processes 412 and 422 must determine whether the transactions associated with unsent deferred transaction records could possibly depend on any transactions associated with the other deferred transaction records stored in the scheduler heap. However, due to time and space limitations, it is not practical to store a precise representation of the true dependency relation between all transactions.
Rather than attempt to maintain a precise representation of actual dependencies, a database system is provided in which a mechanism for approximating dependencies is maintained. The approximation must be "safe" with respect to the true dependency relation. That is, the approximation must always indicate that a transaction TXA depends on another transaction TXB if TXA actually depends on TXB. However, the approximation does not have to be entirely accurate with respect to two transactions where there is no actual dependency. Thus, it is acceptable for there to exist some pair of transactions TXA and TXB such that the approximation indicates that TXA depends on TXB when TXA does not actually depend on TXB.
A technique for such an approximation is described in U.S. patent application Ser. No. 08/740,544, filed Oct. 29, 1996, by Swart et al. entitled "Tracking Dependencies Between Transactions in a Database" (attorney docket no. 3018-010), the contents of which are incorporated herein by reference. In that technique, a "dependent time value" is computed for each transaction. The dependent time value for a given transaction is the maximum commit time of any transaction that previously wrote a data item that was either read or written by the given transaction. Using this approximation mechanism, the determination at step 502 may be performed by comparing the dependent time value of the transaction associated with the unsent deferred transaction record with the prepare times of the transactions associated with all other deferred transaction records in the scheduler heap. If the dependent time value is less than all prepare time values, then the transaction cannot depend on any of the other transactions associated with the deferred transaction records that are currently in the scheduler heap. Otherwise, it is possible that the transaction depends on one of the other transactions in the scheduler heap.
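Under this approximation, the dependent time value and the comparison of step 502 might be computed as follows. This is a sketch only; the data structures are hypothetical stand-ins for the records described above.

```python
def dependent_time_value(items_touched, last_writer_commit_time):
    """Maximum commit time of any transaction that previously wrote a data item
    read or written by the given transaction (0 if there is no such transaction)."""
    return max((last_writer_commit_time[item]
                for item in items_touched
                if item in last_writer_commit_time), default=0)

def cannot_depend_on_heap(record, heap):
    """Step 502: the transaction cannot depend on any transaction still
    represented in the scheduler heap if its dependent time value is less than
    every other entry's prepare time."""
    return all(record.dependent_time < other.prepare_time
               for other in heap if other is not record)
```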
Using this technique, it is possible for a transaction TXA to be propagated before another transaction TXB even when the approximation indicates that TXA depends on TXB. However, this will only occur if the deferred transaction record for TXA is inserted into the scheduler heap before the deferred transaction record for TXB. Because the deferred transaction records within each dequeue batch are sorted by prepare time before being inserted into the scheduler heap, TXA could not actually depend on TXB if the deferred transaction record for TXA is inserted into the scheduler heap before the deferred transaction record for TXB.
DISTRIBUTED TRANSACTIONS
To ensure the integrity of a database, the database must show all of the changes made by a transaction, or none of the changes made by the transaction. Consequently, none of the changes made by a transaction are made permanent within a database until the transaction has been fully executed. A transaction is said to "commit" when the changes made by the transaction are made permanent to the database.
According to one replication approach, the original transaction at the source site and the propagated transactions at the destination sites are all treated as "child transactions" that form parts of a single "distributed" transaction. To ensure consistency, all changes made by a distributed transaction must be made permanent at all sites if any of the changes are made permanent at any site. The technique typically employed to ensure this occurs is referred to as two-phase commit.
During the first phase of two-phase commit, the process that is coordinating the distributed transaction (the "coordinator process") sends the child transactions to the sites to which they correspond. In the context of replication, the coordinator process will typically be a process executing at the source site. The child transactions are then executed at their respective sites. When a child transaction is fully executed at a given site, the child transaction is said to be "prepared". When a child transaction is prepared at a site, a message is sent from the site back to the coordinating process.
When all of the sites have reported that their respective child transactions are prepared, the second phase of the two-phase commit begins. During the second phase of two-phase commit, the coordinator process sends messages to all sites to instruct the sites to commit the child transactions. After committing the child transactions, the sites send messages back to the coordinating process to indicate that the child transactions are committed. When the coordinating process has been informed that all of the child transactions have committed, the distributed transaction is considered to be committed. If any child transaction fails to be prepared or committed at any site, the coordinator process sends messages to all of the sites to cause all child transactions to be "rolled back", thus removing all changes made by all child transactions of the distributed transaction.
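For comparison, the message flow described above may be sketched as follows, assuming hypothetical site objects that expose prepare, commit and rollback operations; the sketch omits the acknowledgement messages and failure-handling details of a real implementation.

```python
def two_phase_commit(sites_and_children):
    """Sketch of two-phase commit over a list of (site, child_transaction) pairs."""
    prepared = []
    # Phase one: ship each child transaction to its site and collect "prepared".
    for site, child in sites_and_children:
        if site.prepare(child):
            prepared.append((site, child))
        else:
            # A failure anywhere rolls back every prepared child transaction.
            for p_site, p_child in prepared:
                p_site.rollback(p_child)
            return False
    # Phase two: instruct every site to commit its child transaction.
    for site, child in prepared:
        site.commit(child)
    return True
```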
The advantage of implementing replication through the use of distributed transactions is that the distributed transactions can be successfully rolled back and reapplied as an atomic unit if a failure occurs during execution. However, performing a two-phase commit imposes a significant delay between the completion of transactions and when the transactions are actually committed. Specifically, two round trips (prepare, prepared, commit, committed) are made between the source site and each destination site for every distributed transaction before the distributed transaction is committed. The latency imposed by these round trip messages may be unacceptably high.
REPLICATION WITHOUT DISTRIBUTED TRANSACTIONS
According to an embodiment of the invention, streams of deferred transactions are propagated from a source site to one or more destination sites without the overhead of distributed transactions. Specifically, transactions at the source site are committed unilaterally without waiting for any confirmations from the destination sites. Likewise, the destination sites execute and commit replication transactions without reporting back to the source site. To ensure database integrity after a failure, the source and destination sites store information that allows the status of the replication transactions to be determined after a failure.
According to one embodiment of the invention, each destination site maintains an applied transactions table and the source site maintains a durable record of which transactions it knows to have committed at the destination site (a "low water mark"). When a replication transaction commits at a destination site, an entry for the replication transaction is committed to the applied transaction table. After a failure, the low water mark at the source site and the information contained in the applied transactions table at the destination site may be inspected to determine the status of all transactions that have been propagated to the destination site.
Specifically, if either the low water mark at the source site or the applied transaction table at the destination site indicates that a transaction has been committed at the destination site, then the changes made by that transaction do not have to be propagated again as part of the failure recovery process. On the other hand, if neither the low water mark at the source site nor the applied transaction table at the destination site indicates that a transaction that must be propagated to a destination site has been committed at the destination site, then the transaction will have to be propagated again as part of failure recovery.
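A sketch of this recovery-time decision follows; low_water_mark and applied_table are placeholders for the durably stored low water mark and the applied transactions table described above, both expressed here in terms of dequeue sequence numbers.

```python
def must_repropagate(dequeue_seq, low_water_mark, applied_table):
    """Decide after a failure whether a previously propagated transaction must
    be sent to the destination site again.

    dequeue_seq:    dequeue sequence number of the transaction
    low_water_mark: sequence number such that every transaction dequeued before
                    it is known at the source site to have committed
    applied_table:  set of sequence numbers recorded as committed at the destination
    """
    if dequeue_seq < low_water_mark:
        return False        # the source site's record shows it committed
    if dequeue_seq in applied_table:
        return False        # the destination site's record shows it committed
    return True             # neither record shows a commit: propagate again
```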
PURGING THE SCHEDULER HEAP
The scheduler heap does not grow indefinitely. According to one embodiment, entries are periodically deleted from the scheduler heap in response to messages received at the source site from the destination site. The messages contain "committed transactions data" that indicates that one or more transactions that were propagated from the source site have been successfully executed and committed at the destination site. In response to receiving committed transactions data from a destination site, the entries in the scheduler heap for the transactions specified in the committed transactions data are deleted from the scheduler heap.
When entries are deleted from the scheduler heap, it is not necessary to immediately update the low water mark maintained by the source site to indicate that the transactions specified in the committed transactions data were committed at the destination site because the applied transaction table at the destination site already indicates that the transactions specified in the committed transactions data were committed at the destination site. Consequently, those transactions will not be retransmitted to the destination site after a failure.
The committed transactions data may be, for example, the transaction sequence number of the last transaction committed at the destination site that arrived at the destination site on a particular stream. The scheduler keeps track of which transactions were sent on which streams. Since the transactions that are propagated on any given stream are processed in order at the destination site, the source site knows that all transactions on that particular stream that preceded the transaction identified in the committed transactions data have also been committed at the destination site. The entries for those transactions are deleted from the scheduler heap along with the entry for the transaction specifically identified in the committed transactions data.
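As a sketch (assuming the scheduler keeps, for each stream, a list of transaction ids in the order they were sent, as described above), the purge might proceed as follows.

```python
def purge_scheduler_heap(heap, stream_order, committed_txn_id):
    """Delete from the scheduler heap the identified transaction and every
    transaction that preceded it on the same stream.

    heap:             dict mapping txn_id -> deferred transaction record
    stream_order:     txn_ids in the order they were propagated on one stream
    committed_txn_id: txn_id named in the committed transactions data
    """
    if committed_txn_id not in stream_order:
        return
    cutoff = stream_order.index(committed_txn_id)
    for txn_id in stream_order[:cutoff + 1]:
        # Earlier transactions on the stream were applied in order, so they
        # have also committed at the destination site.
        heap.pop(txn_id, None)
    del stream_order[:cutoff + 1]
```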
FLUSH TOKENS
Various events may cause a destination site to transmit messages containing committed transactions data. For example, such messages may be sent when a buffer is filled at the destination site, or in response to "flush tokens". A flush token is a token sent on a stream from the source site to the destination site to flush the stream.
The destination site responds to the flush token by executing and committing all of the transactions that preceded the flush token on that particular stream, and by sending to the source site committed transaction information that indicates which of the transactions propagated from the source site on that stream have been committed at the destination site. As mentioned above, this committed transaction information may simply identify the most recently committed transaction from the stream on which the flush token was sent. The source site knows that all transactions that preceded the identified transaction on the stream have also been committed at the destination site.
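The destination site's handling of a flush token might be sketched as follows; apply_and_commit and send_to_source are hypothetical stand-ins for the destination site's apply logic and its reply channel.

```python
def handle_flush_token(pending_txns, applied_table, apply_and_commit,
                       send_to_source, stream_id):
    """Destination-site handling of a flush token on one stream, sketched.

    pending_txns: transactions received on this stream ahead of the token,
                  in propagation order; each has a txn_id and its changes
    """
    last_committed = None
    for txn in pending_txns:
        apply_and_commit(txn)                # execute and commit the transaction
        applied_table.add(txn.txn_id)        # record it in the applied txn table
        last_committed = txn.txn_id
    pending_txns.clear()
    if last_committed is not None:
        # Only the most recent commit is reported; the source site infers that
        # every earlier transaction on this stream has committed as well.
        send_to_source({"stream": stream_id, "committed_through": last_committed})
```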
UPDATING THE LOW WATER MARK
The source site periodically updates the low water mark associated with a destination site based on the committed transaction information received from the destination site. Various mechanisms may be used to determine a low water mark for a destination site based on committed transaction information received from the destination site.
For example, according to one embodiment of the invention, an ordered list of transactions is maintained for each stream that is being used to send deferred transactions from a source site to a destination site. Each element in an ordered list represents a transaction that was propagated on the stream associated with the ordered list. The order of the elements in the ordered list indicates the order in which the corresponding transactions were propagated on the stream associated with the ordered list.
As mentioned above, committed transaction information may identify a transaction that is known to have been committed at the destination site. In response to the committed transaction information, a process at the source site removes from the ordered list of the appropriate stream the element that corresponds to the identified transaction, as well as all preceding elements.
By truncating the ordered lists for each stream in this manner, the low water mark may be determined by inspecting the ordered lists for all streams to a given destination site and identifying the oldest transaction represented on the lists. All transactions older than that transaction have necessarily been committed at the destination site, so data identifying that transaction may be stored as the low water mark for that destination site.
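A sketch of this truncate-and-inspect approach follows, with one Python list per stream standing in for the ordered lists described above and dequeue_seq mapping transaction ids to dequeue sequence numbers; the names are illustrative only.

```python
def truncate_stream_list(stream_list, committed_txn_id):
    """Remove the identified transaction and all elements that precede it."""
    if committed_txn_id in stream_list:
        del stream_list[:stream_list.index(committed_txn_id) + 1]

def compute_low_water_mark(stream_lists, dequeue_seq):
    """Oldest transaction still represented on any of the stream lists.

    Every transaction older than the returned transaction has been removed from
    the lists and is therefore known to have committed at the destination site.
    Returns None if the lists are empty (nothing outstanding).
    """
    remaining = [dequeue_seq[txn_id] for lst in stream_lists for txn_id in lst]
    return min(remaining) if remaining else None
```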
PURGING THE APPLIED TRANSACTION TABLES
The maintenance of an applied transaction table at every destination site allows for accurate recovery after a failure in a replicated environment. Further, if each applied transaction table is allowed to grow indefinitely, then maintenance of a low water mark at the source site is unnecessary because the applied transaction table will reflect all of the propagated transactions that have ever committed at the destination site. However, an infinitely growing data structure is generally not practical. Therefore, a mechanism is provided for periodically purging entries from the applied transaction tables according to one embodiment of the invention.
To purge records from an applied transaction table, the source site sends a "purge" message to the destination site. The purge message indicates the low water mark that is durably stored at the source site. Upon receiving this message from the source site, the destination site may then delete from the applied transaction table the entries for all transactions that are older than the transaction associated with the low water mark.
Significantly, a purge message is not sent to the destination site unless the low water mark specified in the purge message has been stored on non-volatile memory at the source site. Consequently, the applied transactions table will always identify all transactions that (1) are above the durably stored low water mark and (2) have been propagated to and committed at the destination site.
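The purge exchange might be sketched as follows, with the applied transactions table represented as a set of dequeue sequence numbers and send_to_destination a hypothetical message channel.

```python
def send_purge_message(durably_stored_mark, send_to_destination):
    """Source side: a purge message carries only a low water mark that has
    already been stored on non-volatile memory at the source site."""
    send_to_destination({"purge_below": durably_stored_mark})

def apply_purge_message(applied_table, purge_below):
    """Destination side: delete the entries for all transactions older than the
    transaction associated with the low water mark in the purge message."""
    applied_table.difference_update(
        {seq for seq in applied_table if seq < purge_below})
```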
THE LOW WATER MARK TABLE
According to one embodiment, the source site maintains a "low water mark table" that contains a low water mark for each destination site. As explained above, the low water mark for a destination site identifies a transaction T such that every transaction that was dequeued before T is known to have been applied and committed at that destination site. Under certain conditions, some transactions that are later than T in the dequeue sequence may also have committed at the destination site, but this fact may not yet be known at the source site.
Maintaining a low water mark table at the source site has the benefit that after a failure, the source site only needs to be informed about the status of transactions that are above the low water mark.
DEQUEUE SEQUENCE NUMBERS
According to one embodiment of the invention, each deferred transaction record is given a dequeue sequence number upon being dequeued. For each destination site, dequeue sequence numbers are assigned consecutively as transactions are dequeued. The fact that the sequence numbers are consecutive means that a skip in the sequence indicates the absence of a transaction, rather than just a delay between when transactions were assigned sequence numbers. The dequeue sequence number associated with a transaction is propagated to the destination site with the transaction. The destination site stores the dequeue sequence number of a transaction in the applied transaction table entry for the transaction.
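A sketch of per-destination, consecutive dequeue sequence number assignment follows; the class and method names are illustrative only.

```python
from collections import defaultdict
from itertools import count

class DequeueSequencer:
    """Assigns consecutive dequeue sequence numbers, one sequence per destination."""

    def __init__(self, start=1):
        self._counters = defaultdict(lambda: count(start))

    def next_for(self, destination_site):
        # Consecutive numbering means that a gap observed later in the sequence
        # indicates a missing transaction, not merely a delay in assignment.
        return next(self._counters[destination_site])

sequencer = DequeueSequencer()
sequencer.next_for("site_B")   # 1
sequencer.next_for("site_B")   # 2
sequencer.next_for("site_C")   # 1 -- each destination site has its own sequence
```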
For example, FIG. 6 illustrates a replication system in which deferred transactions are propagated from a source site 602 to a destination site 620 over a plurality of propagation streams 622. The scheduler heap 604 at the source site 602 contains entries for the transactions that have been assigned dequeue numbers 33 through 70. In the illustrated example, all of these transactions have been propagated to the destination site 620 over one of the propagation streams 622. Therefore, all of the entries are marked as "sent". It should be noted that the scheduler heap 604 may additionally include any number of unsent entries for transactions that have not yet been propagated.
The low water mark stored at the source site 602 for destination site 620 is 33. In response to a purge message containing the low water mark 33, all entries with dequeue sequence numbers below 33 have been removed from applied transaction table 650. Applied transaction table 650 currently indicates that the transactions associated with dequeue numbers 33, 34, 35, 40 and 53, which are equal to or above the low water mark of 33, have been committed at the destination site 620.
RANGE-BASED COMMIT TRANSACTIONS DATA
During recovery, a source site must determine the status of transactions that have been propagated to each destination site. As explained above, the status of the transactions may be determined based on the low water marks and information in the applied transaction tables of the destination sites. If the applied transaction table at a destination site contains an entry for a transaction or if the transaction falls below the low water mark for that destination site, then the transaction was committed at the destination site prior to the failure. Otherwise, the transaction had not committed at the destination site prior to the failure.
Because low water marks are maintained at the source site, the source site only needs to be informed of the transactions that were committed at a destination site that are above the low water mark for the destination site (the "above-the-mark committed transactions"). Therefore, one step in the recovery process is communicating to the source site information that identifies the set of above-the-mark committed transactions.
Even when low water marks are maintained at a source site, the set of above-the-mark committed transactions may still be huge if the low water marks were not updated recently before the failure. Therefore, according to one embodiment of the invention, the set of above-the-mark committed transactions is sent from the destination site to the source site as a series of dequeue sequence number ranges.
According to one embodiment, the set of above-the-mark committed transactions is sent from the destination site to the source site in the form of tuples, where each tuple identifies a range of dequeue sequence numbers. For example, assume that destination site 620 had, prior to a failure, committed the transactions propagated from source site 602 with dequeue sequence numbers up to 55, with dequeue sequence numbers from 90 to 200, and with dequeue sequence numbers from 250 to 483.
After the failure, source site 602 sends the low water mark 33 to destination site 620 to request the set of above-the-mark committed transactions that were propagated to destination site 620 from source site 602. In response, destination site 620 sends back to source site 602 the tuples (55, 90), (200, 250) and (483, -). With this information, the recovery process knows that all transactions that fall within the indicated ranges will have to be re-propagated to destination site 620.
Significantly, the number of tuples that must be sent as committed transaction information is limited to the number of gaps between the dequeue sequence numbers of committed transactions, and the number of gaps is bounded by the size of the scheduler heap. Therefore, if the original scheduler heap was small enough to be stored in dynamic memory prior to the failure, then the committed transaction information should fit in dynamic memory during recovery.
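The set of ranges returned to the source site can be derived by scanning the applied transaction table for gaps, as in the sketch below; a trailing tuple whose second element is None plays the role of the open-ended range written as (483,-) in the example above.

```python
def uncommitted_ranges(applied_seqs, low_water_mark):
    """Gaps in the committed dequeue sequence numbers above the low water mark.

    applied_seqs:   sequence numbers recorded in the applied transaction table
    low_water_mark: transactions dequeued before this number are already known
                    at the source site to have committed

    Each (a, b) tuple means: transactions numbered strictly between a and b did
    not commit; a final (a, None) means nothing after a is known to have committed.
    """
    seqs = sorted(s for s in applied_seqs if s >= low_water_mark)
    ranges, prev = [], low_water_mark - 1
    for s in seqs:
        if s > prev + 1:
            ranges.append((prev, s))
        prev = s
    ranges.append((prev, None))
    return ranges
```

Applied to the example above, with a low water mark of 33 and the commits described, this sketch yields the tuples (55, 90), (200, 250) and (483, None).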
In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (18)

What is claimed is:
1. A method of recovering after a failure in a replication environment, the method comprising the steps of:
at a source site, executing a transaction that makes changes that must be replicated at a destination site;
making the changes permanent at the source site;
sending the changes to the destination site;
applying the changes at the destination site;
if the changes are successfully applied before the failure, then
making the changes permanent at the destination site; and
adding a record to a set of records at the destination site, wherein the record indicates that the changes were made permanent at the destination site;
after a failure, using the set of records at the destination site to determine which changes must be sent from the source site to the destination site after the failure; and
wherein the step of making the changes permanent at the source site is performed without the source site being informed as to whether the changes were successfully applied at the destination site.
2. The method of claim 1 wherein the step of making the changes permanent at the destination site is performed without automatically informing the source site that the changes were made permanent at the destination site.
3. The method of claim 1 further comprising the step of, after the failure, transmitting committed transaction information from the destination site to the source site, wherein the committed transaction information indicates transactions represented in said set of records at said destination site, wherein the step of using the set of records to determine which changes must be sent from the source site to the destination site after the failure is performed at the source site.
4. The method of claim 3 further comprising the steps of:
assigning sequence numbers to changes that are to be sent from the source site to the destination site;
wherein the step of transmitting committed transaction information includes transmitting data that indicates gaps in a sequence formed by said sequence numbers.
5. The method of claim 1 further comprising the steps of:
receiving messages from the destination site that indicate which changes have been made permanent at the destination site;
generating low water mark data for said destination site based on said messages, wherein all transactions that must be propagated to said destination site that are older than a time indicated by said low water mark data are known to have been made permanent at said destination site; and
durably storing said low water mark data at said source site.
6. The method of claim 5 further comprising the steps of:
sending a message to the destination site that indicates the low water mark data stored at the source site; and
in response to the message at the destination site, deleting from said set of records the records that correspond to transactions that are older than the time indicated by said low water mark.
7. The method of claim 1 further comprising the steps of:
sending a flush message from the source site to the destination site over a stream;
at the destination site, performing the following steps in response to the flush message
identifying a transaction that was propagated from the source site to the destination site over said stream and that has been committed at the destination site; and
sending data that identifies said transaction from the destination site to the source site.
8. A method for recording the status of propagated transactions in a computer system in which changes made at a source site are replicated at a plurality of destination sites, the method comprising the steps of: at the source site, performing the steps of:
maintaining a sequence counter for each destination site of said plurality of destination sites;
prior to propagating a transaction to a destination site, performing the steps of
incrementing the sequence counter for the destination site; and
assigning the transaction a sequence number based on the sequence counter for the destination site;
at each destination site, recording the sequence numbers of transactions that have been propagated from the source site and that have committed at the destination site.
9. The method of claim 8 further comprising the steps of:
durably storing at the source site a low water mark for a destination site,
wherein the low water mark identifies a time such that all transactions that are older than the low water mark are known to have been committed at the destination site;
sending a purge message to the destination site that indicates the low water mark; and
in response to the purge message from the source site, deleting records at the destination site of transactions that are older than the low water mark.
10. The method of claim 9 further comprising the steps of:
after a failure, performing the steps of
sending a message indicating the low water mark for a destination site from the source site to the destination site; and
sending committed transactions information from the destination site to the source site, wherein the committed transactions information indicates, for all the transactions propagated from the source site that are newer than the low water mark for the destination site, whether the transactions committed at the destination site before the failure.
11. The method of claim 10 wherein the step of sending committed transactions information includes sending information that indicates gaps in the sequence numbers of transactions propagated from the source site.
12. The computer-readable medium of claim 11 further comprising sequences of instructions for performing the steps of:
sending a flush message from the source site to the destination site over a stream;
at the destination site, performing the following steps in response to the flush message
identifying a transaction that was propagated from the source site to the destination site over said stream and that has been committed at the destination site; and
sending data that identifies said transaction from the destination site to the source site.
13. A computer-readable medium having stored thereon sequences of instructions for recovering after a failure in a replication environment, the sequences of instructions including instructions for performing the steps of:
at a source site, executing a transaction that makes changes that must be replicated at a destination site;
making the changes permanent at the source site;
sending the changes to the destination site;
applying the changes at the destination site;
if the changes are successfully applied before the failure, then
making the changes permanent at the destination site; and
adding a record to a set of records at the destination site, wherein the record indicates that the changes were made permanent at the destination site;
after a failure, performing the steps of
inspecting the set of records at the destination site to determine a set of changes that were made permanent at the destination site prior to the failure; and
using the set of records at the destination site to determine which changes must be sent from the source site to the destination site after the failure;
wherein the step of making the changes permanent at the source site is performed without the source site being informed as to whether the changes were successfully applied at the destination site.
14. The computer-readable medium of claim 13 wherein the step of making the changes permanent at the destination site is performed without automatically informing the source site that the changes were made permanent at the destination site.
15. The computer-readable medium of claim 13 further comprising sequences of instructions for performing the step of, after the failure, transmitting committed transaction information from the destination site to the source site, wherein the committed transaction information indicates transactions from said set of records at said destination site, wherein the step of using the set of records to determine which changes must be sent from the source site to the destination site after the failure is performed at the source site.
16. The computer-readable medium of claim 15 further comprising sequences of instructions for performing the steps of:
assigning sequence numbers to changes that are to be sent from the source site to the destination site;
wherein the step of transmitting committed transaction information includes transmitting data that indicates gaps in a sequence formed by said sequence numbers.
17. The computer-readable medium of claim 13 further comprising sequences of instructions for performing the steps of:
receiving messages from the destination site that indicate which changes have been made permanent at the destination site;
determining low water mark data for said destination site based on said messages, wherein all transactions that must be propagated to said destination site that are older than a time indicated by said low water mark data are known to have been made permanent at said destination site; and
durably storing said low water mark data at said source site.
18. The computer-readable medium of claim 17 further comprising sequences of instructions for performing the steps of:
sending a message to the destination site that indicates the low water mark data that has been stored at the source site; and
in response to the message at the destination site, deleting from said set of records the records that correspond to said transactions that fall below the low water mark.
US08/772,003 1996-12-19 1996-12-19 Recoverable data replication between source site and destination site without distributed transactions Expired - Lifetime US5781912A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/772,003 US5781912A (en) 1996-12-19 1996-12-19 Recoverable data replication between source site and destination site without distributed transactions

Publications (1)

Publication Number Publication Date
US5781912A true US5781912A (en) 1998-07-14

Family

ID=25093596

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/772,003 Expired - Lifetime US5781912A (en) 1996-12-19 1996-12-19 Recoverable data replication between source site and destination site without distributed transactions

Country Status (1)

Country Link
US (1) US5781912A (en)

US8938418B2 (en) 2010-02-09 2015-01-20 Google Inc. Method and system for efficiently replicating data in non-relational databases
US20110196822A1 (en) * 2010-02-09 2011-08-11 Yonatan Zunger Method and System For Uploading Data Into A Distributed Storage System
US20110196873A1 (en) * 2010-02-09 2011-08-11 Alexander Kesselman System and Method for Replicating Objects In A Distributed Storage System
US20150142743A1 (en) * 2010-02-09 2015-05-21 Google Inc. Location Assignment Daemon (LAD) For A Distributed Storage System
US20110196832A1 (en) * 2010-02-09 2011-08-11 Yonatan Zunger Location Assignment Daemon (LAD) For A Distributed Storage System
US20110196900A1 (en) * 2010-02-09 2011-08-11 Alexandre Drobychev Storage of Data In A Distributed Storage System
US9298736B2 (en) 2010-02-09 2016-03-29 Google Inc. Pruning of blob replicas
US9747322B2 (en) 2010-02-09 2017-08-29 Google Inc. Storage of data in a distributed storage system
US9317524B2 (en) * 2010-02-09 2016-04-19 Google Inc. Location assignment daemon (LAD) for a distributed storage system
US20110196829A1 (en) * 2010-02-09 2011-08-11 Vickrey Rebekah C Method and System for Providing Efficient Access to a Tape Storage System
US9659031B2 (en) 2010-02-09 2017-05-23 Google Inc. Systems and methods of simulating the state of a distributed storage system
US8805810B2 (en) 2011-08-01 2014-08-12 Tagged, Inc. Generalized reconciliation in a distributed database
US20130036105A1 (en) * 2011-08-01 2013-02-07 Tagged, Inc. Reconciling a distributed database from hierarchical viewpoints
US20140108348A1 (en) * 2012-10-11 2014-04-17 Matthew Allen Ahrens Retrieving point-in-time copies of a source database for creating virtual databases
US10346369B2 (en) * 2012-10-11 2019-07-09 Delphix Corp. Retrieving point-in-time copies of a source database for creating virtual databases
US10067952B2 (en) 2012-10-11 2018-09-04 Delphix Corporation Retrieving point-in-time copies of a source database for creating virtual databases
US9723045B2 (en) 2012-10-18 2017-08-01 Hewlett Packard Enterprise Development Lp Communicating tuples in a message
US9405816B2 (en) 2013-03-05 2016-08-02 Microsoft Technology Licensing, Llc Reconciliation of geo-replicated database clusters
US10896417B2 (en) * 2016-04-06 2021-01-19 Ford Global Technologies, Llc Wireless payment transactions in a vehicle environment
US20170293910A1 (en) * 2016-04-06 2017-10-12 Ford Global Technologies, Llc Wireless payment transactions in a vehicle environment
US10354236B1 (en) * 2018-02-11 2019-07-16 Loopring Project Ltd Methods for preventing front running in digital asset transactions

Similar Documents

Publication Publication Date Title
US5781912A (en) Recoverable data replication between source site and destination site without distributed transactions
US5870761A (en) Parallel queue propagation
US5870760A (en) Dequeuing using queue batch numbers
US6253212B1 (en) Method and system for maintaining checkpoint values
US6647510B1 (en) Method and apparatus for making available data that was locked by a dead transaction before rolling back the entire dead transaction
US9652519B2 (en) Replicating data across multiple copies of a table in a database system
US6295610B1 (en) Recovering resources in parallel
US6185577B1 (en) Method and apparatus for incremental undo
US7158999B2 (en) Reorganization and repair of an ICF catalog while open and in-use in a digital data storage system
US7991745B2 (en) Database log capture that publishes transactions to multiple targets to handle unavailable targets by separating the publishing of subscriptions and subsequently recombining the publishing
US7003531B2 (en) Synchronization of plural databases in a database replication system
US6098078A (en) Maintaining consistency of database replicas
US7966298B2 (en) Record-level locking and page-level recovery in a database management system
US6678704B1 (en) Method and system for controlling recovery downtime by maintaining a checkpoint value
US6026406A (en) Batch processing of updates to indexes
EP0280773A2 (en) Method for recovery enhancement in a transaction-oriented data processing system
US9672244B2 (en) Efficient undo-processing during data redistribution
US6957236B1 (en) Providing a useable version of a data item
US20120191680A1 (en) Asynchronous Deletion of a Range of Messages Processed by a Parallel Database Replication Apply Process
JPH06168169A (en) Distributed transaction processing using two-phase commit protocol provided with assumption commit without log force
US6970872B1 (en) Techniques for reducing latency in a multi-node system when obtaining a resource that does not reside in cache
US20070288529A1 (en) Framework to optimize delete all rows operations on database objects
US7437525B2 (en) Guaranteed undo retention
US7051051B1 (en) Recovering from failed operations in a database system
JP4280306B2 (en) Log-based data architecture for transaction message queuing systems

Legal Events

Date Code Title Description
AS Assignment
    Owner name: ORACLE CORPORATION, CALIFORNIA
    Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DEMERS, ALAN;JAIN, SANDEEP;REEL/FRAME:008467/0573
    Effective date: 19970401

STCF Information on status: patent grant
    Free format text: PATENTED CASE

FPAY Fee payment
    Year of fee payment: 4

AS Assignment
    Owner name: ORACLE INTERNATIONAL CORPORATION, CALIFORNIA
    Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ORACLE CORPORATION;REEL/FRAME:014662/0001
    Effective date: 20031028

FPAY Fee payment
    Year of fee payment: 8

FPAY Fee payment
    Year of fee payment: 12