US20050262377A1 - Method and system for automated, no downtime, real-time, continuous data protection - Google Patents

Method and system for automated, no downtime, real-time, continuous data protection

Info

Publication number
US20050262377A1
Authority
US
United States
Prior art keywords
state
data
finite
event
given
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/841,398
Other versions
US7096392B2
Inventor
Siew Sim-Tang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quest Software Inc
Aventail LLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US10/841,398 (US7096392B2)
Priority to EP05742226A (EP1745059A4)
Priority to PCT/US2005/015651 (WO2005111051A1)
Publication of US20050262377A1
Assigned to ASEMPRA TECHNOLOGIES, INC. reassignment ASEMPRA TECHNOLOGIES, INC. NUNC PRO TUNC ASSIGNMENT (SEE DOCUMENT FOR DETAILS). Assignors: SIM-TANG, SIEW YONG
Priority to US11/507,257 (US7363549B2)
Application granted
Publication of US7096392B2
Assigned to ASEMPRA (ASSIGNMENT FOR THE BENEFIT OF CREDITORS,) LLC reassignment ASEMPRA (ASSIGNMENT FOR THE BENEFIT OF CREDITORS,) LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ASEMPRA TECHNOLOGIES, INC.
Assigned to BAKBONE SOFTWARE, INC. reassignment BAKBONE SOFTWARE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ASEMPRA (ASSIGNMENT FOR THE BENEFIT OF CREDITORS), LLC
Assigned to QUEST SOFTWARE, INC. reassignment QUEST SOFTWARE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BAKBONE SOFTWARE INCORPORATED
Assigned to DELL SOFTWARE INC. reassignment DELL SOFTWARE INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: QUEST SOFTWARE, INC.
Assigned to BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS FIRST LIEN COLLATERAL AGENT reassignment BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS FIRST LIEN COLLATERAL AGENT PATENT SECURITY AGREEMENT (NOTES) Assignors: APPASSURE SOFTWARE, INC., ASAP SOFTWARE EXPRESS, INC., BOOMI, INC., COMPELLENT TECHNOLOGIES, INC., CREDANT TECHNOLOGIES, INC., DELL INC., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL USA L.P., FORCE10 NETWORKS, INC., GALE TECHNOLOGIES, INC., PEROT SYSTEMS CORPORATION, SECUREWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT reassignment BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT PATENT SECURITY AGREEMENT (ABL) Assignors: APPASSURE SOFTWARE, INC., ASAP SOFTWARE EXPRESS, INC., BOOMI, INC., COMPELLENT TECHNOLOGIES, INC., CREDANT TECHNOLOGIES, INC., DELL INC., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL USA L.P., FORCE10 NETWORKS, INC., GALE TECHNOLOGIES, INC., PEROT SYSTEMS CORPORATION, SECUREWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT reassignment BANK OF AMERICA, N.A., AS COLLATERAL AGENT PATENT SECURITY AGREEMENT (TERM LOAN) Assignors: APPASSURE SOFTWARE, INC., ASAP SOFTWARE EXPRESS, INC., BOOMI, INC., COMPELLENT TECHNOLOGIES, INC., CREDANT TECHNOLOGIES, INC., DELL INC., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL USA L.P., FORCE10 NETWORKS, INC., GALE TECHNOLOGIES, INC., PEROT SYSTEMS CORPORATION, SECUREWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to PEROT SYSTEMS CORPORATION, DELL INC., ASAP SOFTWARE EXPRESS, INC., DELL PRODUCTS L.P., WYSE TECHNOLOGY L.L.C., DELL SOFTWARE INC., CREDANT TECHNOLOGIES, INC., APPASSURE SOFTWARE, INC., COMPELLANT TECHNOLOGIES, INC., SECUREWORKS, INC., DELL USA L.P., DELL MARKETING L.P., FORCE10 NETWORKS, INC. reassignment PEROT SYSTEMS CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT SECURITY AGREEMENT Assignors: AVENTAIL LLC, DELL PRODUCTS, L.P., DELL SOFTWARE INC.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT SECURITY AGREEMENT Assignors: AVENTAIL LLC, DELL PRODUCTS L.P., DELL SOFTWARE INC.
Assigned to SECUREWORKS, INC., DELL PRODUCTS L.P., DELL USA L.P., DELL INC., DELL MARKETING L.P., ASAP SOFTWARE EXPRESS, INC., APPASSURE SOFTWARE, INC., COMPELLENT TECHNOLOGIES, INC., DELL SOFTWARE INC., FORCE10 NETWORKS, INC., PEROT SYSTEMS CORPORATION, WYSE TECHNOLOGY L.L.C., CREDANT TECHNOLOGIES, INC. reassignment SECUREWORKS, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT
Assigned to PEROT SYSTEMS CORPORATION, FORCE10 NETWORKS, INC., DELL SOFTWARE INC., SECUREWORKS, INC., COMPELLENT TECHNOLOGIES, INC., CREDANT TECHNOLOGIES, INC., DELL INC., ASAP SOFTWARE EXPRESS, INC., WYSE TECHNOLOGY L.L.C., DELL PRODUCTS L.P., DELL USA L.P., APPASSURE SOFTWARE, INC., DELL MARKETING L.P. reassignment PEROT SYSTEMS CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT
Assigned to DELL SOFTWARE INC., AVENTAIL LLC, DELL PRODUCTS, L.P. reassignment DELL SOFTWARE INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH
Assigned to AVENTAIL LLC, DELL SOFTWARE INC., DELL PRODUCTS L.P. reassignment AVENTAIL LLC RELEASE OF SECURITY INTEREST IN CERTAIN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040039/0642) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT FIRST LIEN PATENT SECURITY AGREEMENT Assignors: DELL SOFTWARE INC.
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT SECOND LIEN PATENT SECURITY AGREEMENT Assignors: DELL SOFTWARE INC.
Assigned to AVENTAIL LLC, QUEST SOFTWARE INC. (F/K/A DELL SOFTWARE INC.) reassignment AVENTAIL LLC CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE PREVIOUSLY RECORDED AT REEL: 040587 FRAME: 0624. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT. Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH
Assigned to QUEST SOFTWARE INC. reassignment QUEST SOFTWARE INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: DELL SOFTWARE INC.
Assigned to AVENTAIL LLC, QUEST SOFTWARE INC. (F/K/A DELL SOFTWARE INC.) reassignment AVENTAIL LLC RELEASE OF FIRST LIEN SECURITY INTEREST IN PATENTS RECORDED AT R/F 040581/0850 Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT FIRST LIEN PATENT SECURITY AGREEMENT Assignors: QUEST SOFTWARE INC.
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT SECOND LIEN PATENT SECURITY AGREEMENT Assignors: QUEST SOFTWARE INC.
Assigned to GOLDMAN SACHS BANK USA reassignment GOLDMAN SACHS BANK USA FIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: ANALYTIX DATA SERVICES INC., BINARYTREE.COM LLC, erwin, Inc., One Identity LLC, ONE IDENTITY SOFTWARE INTERNATIONAL DESIGNATED ACTIVITY COMPANY, OneLogin, Inc., QUEST SOFTWARE INC.
Assigned to MORGAN STANLEY SENIOR FUNDING, INC. reassignment MORGAN STANLEY SENIOR FUNDING, INC. SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: ANALYTIX DATA SERVICES INC., BINARYTREE.COM LLC, erwin, Inc., One Identity LLC, ONE IDENTITY SOFTWARE INTERNATIONAL DESIGNATED ACTIVITY COMPANY, OneLogin, Inc., QUEST SOFTWARE INC.
Assigned to QUEST SOFTWARE INC. reassignment QUEST SOFTWARE INC. RELEASE OF SECOND LIEN SECURITY INTEREST IN PATENTS Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT
Assigned to QUEST SOFTWARE INC. reassignment QUEST SOFTWARE INC. RELEASE OF FIRST LIEN SECURITY INTEREST IN PATENTS Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT
Legal status: Active
Expiration: Adjusted

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 - Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F 11/2069 - Management of state, configuration or failover
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00 - Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/82 - Solving problems relating to consistency

Definitions

  • the present invention relates generally to enterprise data protection.
  • the backup device may acquire a “snapshot” of the contents of an entire hard disk at a particular time and then store this for later use, e.g., reintroduction onto the disk (or onto a new disk) should the computer fail.
  • the problems with the snapshot approaches are well known and appreciated.
  • critical data can change as the snapshot is taken, which results in incomplete updates (e.g., half a transaction) being captured so that, when reintroduced, the data is not fully consistent.
  • changes in data occurring after a snapshot is taken are always at risk.
  • Data recovery on a conventional data protection system is a tedious and time consuming operation. It involves first shutting down a host server, and then selecting a version of the data history. That selected version of the data history must then be copied back to the host server, and then the host server must be re-started. All of these steps are manually driven. After a period of time, the conventional data protection system must then perform a backup on the changed data. As these separate and distinct processes and systems are carried out, there are significant periods of application downtime. Stated another way, with the current state of the art, the processes of initial data upload, continuous backup, data resynchronization, and data recovery, are separate and distinct, include many manual steps, and involve different and uncoordinated systems, processes and operations.
  • a data management system or “DMS” provides an automated, continuous, real-time, substantially no downtime data protection service to one or more data sources associated with a set of application host servers.
  • the data management system typically comprises one or more regions, with each region having one or more clusters.
  • a given cluster has one or more nodes that share storage.
  • a host driver embedded in an application server captures real-time data transactions, preferably in the form of an event journal that is provided to a DMS cluster.
  • the driver functions to translate traditional file/database/block I/O and the like into a continuous, application-aware, output data stream.
  • the host driver includes an event processor that provides the data protection service, preferably by implementing a finite state machine (FSM).
  • the data protection is provided to a given data source in the host server by taking advantage of the continuous, real-time data that the host driver is capturing and providing to other DMS components.
  • the state of the most current data in DMS matches the state of the data in the host server; as a consequence, the data protection is provided under the control of the finite state machine as a set of interconnected phases or “states.”
  • the otherwise separate processes are simply phases of the overall data protection cycle.
  • this data protection cycle preferably loops around indefinitely until, for example, a user terminates the service.
  • a given data protection phase (a given state) changes only as the state of the data and the environment change (a given incident).
  • FIG. 1 is an illustrative enterprise network in which the present invention may be deployed
  • FIG. 2 is an illustration of a general data management system (DMS) of the present invention
  • FIG. 3 is an illustration of a representative DMS network according to one embodiment of the present invention.
  • FIG. 4 illustrates how a data management system may be used to provide one or more data services according to the present invention
  • FIG. 5 is a representative host driver according to a preferred embodiment of the present invention having an I/O filter and one or more data agents;
  • FIG. 6 illustrates the host driver architecture in a more general fashion
  • FIG. 7 illustrates a preferred implementation of an event processor finite state machine (FSM) that provides automated, real-time, continuous, zero downtime data protection service according to the present invention.
  • FIG. 1 illustrates a representative enterprise 100 in which the present invention may be implemented.
  • the enterprise 100 comprises a primary data tier 102 and a secondary data tier 104 distributed over IP-based wide area networks 106 and 108 .
  • Wide area network 106 interconnects two primary data centers 110 and 112
  • wide area network 108 interconnects a regional or satellite office 114 to the rest of the enterprise.
  • the primary data tier 102 comprises application servers 116 running various applications such as databases, email servers, file servers, and the like, together with associated primary storage 118 (e.g., direct attached storage (DAS), network attached storage (NAS), storage area network (SAN)).
  • the secondary data tier 104 typically comprises one or more data management server nodes, and secondary storage 120 , which may be DAS, NAS, and SAN.
  • the secondary storage may be serial ATA (SATA) interconnected through SCSI, Fibre Channel (FC or the like), or iSCSI.
  • the data management server nodes create a logical layer that offers object virtualization and protected data storage.
  • the secondary data tier is interconnected to the primary data tier, preferably through one or more host drivers (as described below) to provide real-time data services. Preferably, and as described below, the real-time data services are provided through a given I/O protocol for data transfer.
  • Data management policies 126 are implemented across the secondary storage in a well-known manner.
  • a similar architecture is provided in data center 112 . In this example, the regional office 114 does not have its own secondary storage, but relies instead on the facilities in the primary data centers.
  • a “host driver” 128 is associated with one or more of the application(s) running in the application servers 116 to transparently and efficiently capture the real-time, continuous history of all (or substantially all) transactions and changes to data associated with such application(s) across the enterprise network.
  • the present invention facilitates real-time, so-called “application aware” protection, with substantially no data loss, to provide continuous data protection and other data services including, without limitation, data distribution, data replication, data copy, data access, and the like.
  • a given host driver 128 intercepts data events between an application and its primary data storage, and it may also receive data and application events directly from the application and database.
  • each of the primary data centers includes a set of one or more data management servers 130 a - n that cooperate with the host drivers 128 to facilitate the data services.
  • the data center 110 supports a first core region 130
  • the data center 112 supports a second core region 132 .
  • a given data management server 130 is implemented using commodity hardware and software (e.g., an Intel processor-based blade server running Linux operating system, or the like) and having associated disk storage and memory.
  • the host drivers 128 and data management servers 130 comprise a data management system (DMS) that provides potentially global data services across the enterprise.
  • FIG. 2 illustrates a preferred hierarchical structure of a data management system 200 .
  • the data management system 200 comprises one or more regions 202 a - n , with each region 202 comprising one or more clusters 204 a - n .
  • a given cluster 204 includes one or more nodes 206 a - n and a shared storage 208 shared by the nodes 206 within the cluster 204 .
  • a given node 206 is a data management server as described above with respect to FIG. 1 .
  • Within a DMS cluster 204 preferably all the nodes 206 perform parallel access to the data in the shared storage 208 .
  • the nodes 206 are hot swappable to enable new nodes to be added and existing nodes to be removed without causing cluster downtime.
  • a cluster is a tightly-coupled, share everything grouping of nodes.
  • the DMS is a loosely-coupled share nothing grouping of DMS clusters.
  • all DMS clusters have shared knowledge of the entire network, and all clusters preferably share partial or summary information about the data that they possess. Network connections (e.g., sessions) to one DMS node in a DMS cluster may be re-directed to another DMS node in another cluster when data is not present in the first DMS cluster but may be present in the second DMS cluster.
  • new DMS clusters may be added to the DMS cloud without interfering with the operation of the existing DMS clusters.
  • if a DMS cluster fails, its data may be accessed in another cluster transparently, and its data service responsibility may be passed on to another DMS cluster.
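  • By way of a purely illustrative sketch (not part of the patent disclosure; the class and method names are assumptions), the region/cluster/node hierarchy and the cross-cluster lookup described above might be modeled as follows in Python:

      from dataclasses import dataclass, field
      from typing import Dict, List, Optional

      @dataclass
      class SharedStorage:
          # Storage shared by all nodes within one cluster; keyed by data source id.
          data_sources: Dict[str, object] = field(default_factory=dict)

      @dataclass
      class Node:
          # A DMS node is a data management server (e.g., a commodity blade server).
          name: str

      @dataclass
      class Cluster:
          # A tightly-coupled, share-everything grouping of nodes over shared storage.
          name: str
          nodes: List[Node] = field(default_factory=list)
          storage: SharedStorage = field(default_factory=SharedStorage)

          def has_data_source(self, source_id: str) -> bool:
              return source_id in self.storage.data_sources

      @dataclass
      class Region:
          name: str
          clusters: List[Cluster] = field(default_factory=list)

      @dataclass
      class DMS:
          # A loosely-coupled, share-nothing grouping of DMS clusters.
          regions: List[Region] = field(default_factory=list)

          def locate(self, source_id: str) -> Optional[Cluster]:
              # Find some cluster that holds the data source; any node in that
              # cluster can then serve it from the cluster's shared storage.
              for region in self.regions:
                  for cluster in region.clusters:
                      if cluster.has_data_source(source_id):
                          return cluster
              return None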
  • FIG. 3 illustrates the data management system (DMS) as a network (in effect, a wide area network “cloud”) of peer-to-peer DMS service nodes.
  • the DMS cloud 300 typically comprises one or more DMS regions, with each region comprising one or more DMS “clusters.”
  • typically there are two different types of DMS regions, in this example an “edge” region 306 and a “core” region 308 . This nomenclature is not to be taken to limit the invention, of course.
  • an edge region 306 typically is a smaller office or data center where the amount of data hosted is limited and/or where a single node DMS cluster is sufficient to provide necessary data services.
  • core regions 308 are medium or large size data centers where one or more multi-node clusters are required or desired to provide the necessary data services.
  • the DMS preferably also includes one or more management gateways 310 for controlling the system. As seen in FIG. 3 , conceptually the DMS can be visualized as a set of data sources 312 .
  • a data source is a representation of a related group of fine grain data.
  • a data source may be a directory of files and subdirectories, or it may be a database, or a combination of both.
  • a data source 312 inside a DMS cluster captures a range of history and continuous changes of, for example, an external data source in a host server.
  • a data source may reside in one cluster, and it may replicate to other clusters or regions based on subscription rules. If a data source exists in the storage of a DMS cluster, preferably it can be accessed through any one of the DMS nodes in that cluster. If a data source does not exist in a DMS cluster, then the requesting session may be redirected to another DMS cluster that has the data; alternatively, the current DMS cluster may perform an on-demand replication to bring in the data.
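  • Continuing the illustrative sketch above (again, an assumption rather than the patent's own implementation), the redirection and on-demand replication policy for a requesting session might look like this:

      def handle_session(dms: "DMS", local_cluster: "Cluster", source_id: str,
                         on_demand_replicate: bool = False) -> "Cluster":
          # If the data source is present locally, any node in this cluster can serve it.
          if local_cluster.has_data_source(source_id):
              return local_cluster
          remote = dms.locate(source_id)
          if remote is None:
              raise LookupError(f"unknown data source: {source_id}")
          if on_demand_replicate:
              # Pull a replica of the data source into this cluster's shared storage.
              local_cluster.storage.data_sources[source_id] = remote.storage.data_sources[source_id]
              return local_cluster
          # Otherwise, redirect the session to the cluster that already has the data.
          return remote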
  • an illustrative DMS network 400 provides a wide range of data services to data sources associated with a set of application host servers.
  • the DMS host driver 402 embedded in an application server 404 connects the application and its data to the DMS cluster.
  • the DMS host drivers can be considered as an extension of the DMS cloud reaching to the data of the application servers.
  • the DMS network offers a wide range of data services that include, by way of example only: data protection (and recovery), disaster recovery (data distribution and data replication), data copy, and data query and access.
  • the data services and, in particular, data protection and disaster recovery preferably are stream based data services where meaningful application and data events are forwarded from one end point to another end point continuously as a stream.
  • a stream-based data service is a service that involves two end points sending a stream of real-time application and data events.
  • Data distribution refers to streaming a data source from one DMS cluster into another DMS cluster
  • data replication refers to streaming a data source from a DMS cluster to another external host server.
  • both data distribution and data replication are real-time continuous movement of a data source from one location to another to prepare for disaster recovery.
  • Data replication differs from data distribution in that, in the latter case, the data source is replicated within the DMS network where the history of the data source is maintained.
  • Data replication typically is host based replication, where the continuous events and changes are applied to the host data such that the data is overwritten by the latest events; therefore, the history is lost.
  • Data copy is a data access service where a consistent data source (or part of a data source) at any point-in-time can be constructed and retrieved. This data service allows data of the most current point-in-time, or a specific point-in-time in the past, to be retrieved when the data is in a consistent state. These data services are merely representative.
  • the DMS provides these and other data services in real-time with data and application awareness to ensure continuous application data consistency and to allow for fine grain data access and recovery.
  • the DMS has the capability to capture fine grain and consistent data.
  • a given DMS host driver uses an I/O filter to intercept data events between an application and its primary data storage. The host driver also receives data and application events directly from the application and database.
  • the host driver 500 may be embedded in the host server where the application resides, or in the network on the application data path. By capturing data through the application, fine grain data is captured along with application events, thereby enabling the DMS cluster to provide application aware data services in a manner that has not been possible in the prior art.
  • a host driver embedded in a host server is used for illustrating the driver behavior.
  • the host driver 500 in a host server connects to one of the DMS nodes in a DMS cluster (in a DMS region) to perform or facilitate a data service.
  • the host driver preferably includes two logical subsystems, namely, an I/O filter 502 , and at least one data agent 504 .
  • An illustrative data agent 504 preferably includes one or more modules, namely, an application module 506 , a database module 508 , an I/O module 510 , and an event processor or event processing engine 512 .
  • the application module 506 is configured with an application 514 , one or more network devices and/or the host system itself to receive application level events 516 .
  • Events include, without limitation, entry or deletion of some critical data, installation or upgrade of application software or the operating system, a system alert, detecting of a virus, an administrator generated checkpoint, and so on.
  • One or more application events are queued for processing into an event queue 518 inside or otherwise associated with the data agent.
  • the event processor 512 over time may instruct the application module 506 to re-configure with its event source to capture different application level events.
  • a database module 508 is available for use.
  • the database module 508 preferably registers with a database 520 to obtain notifications from a database.
  • the module 508 also may integrate with the database 520 through one or more database triggers, or it may also instruct the database 520 to generate a checkpoint 522 .
  • the database module 508 also may lock the database 520 to force a database manager (not shown) to flush out its data from memory to disk, thereby generating a consistent disk image (a binary table checkpoint). This process is also known as “quiescing” the database. After a consistent image is generated, the database module 508 then lifts a lock to release the database from its quiescent state.
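  • As a hedged illustration of the quiescing sequence described above (sqlite3 is used only as a stand-in for a generic database manager, and the helper name is hypothetical), the steps are roughly: flush buffered data to disk, hold a write lock while a checkpoint is recorded, then release the lock:

      import sqlite3
      import time

      def quiesce_and_checkpoint(db_path: str) -> dict:
          conn = sqlite3.connect(db_path, isolation_level=None)  # autocommit; lock managed explicitly
          try:
              conn.execute("PRAGMA wal_checkpoint(FULL)")  # force journaled data into the main database file
              conn.execute("BEGIN IMMEDIATE")              # take a write lock: the database is quiesced
              checkpoint_event = {"db": db_path, "time": time.time(), "kind": "database-checkpoint"}
              conn.execute("COMMIT")                       # lift the lock, releasing the quiescent state
          finally:
              conn.close()
          return checkpoint_event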
  • the database events preferably are also queued for processing into the event queue 518 .
  • database events include, without limitation, a database checkpoint, specific database requests (such as schema changes or other requests), access failure, and so on.
  • the event processor 512 may be used to re-configure the events that will be captured by the database module.
  • the I/O module 510 instructs the I/O filter 502 to capture a set of one or more I/O events that are of interest to the data agent. For example, a given I/O module 510 may control the filter to capture I/O events synchronously, or the module 510 may control the filter to only capture several successful post I/O events. When the I/O module 510 receives I/O events 524 , it forwards the I/O events to the event queue 518 for processing. The event processor 512 may also be used to re-configure the I/O module 510 and, thus, the I/O filter 502 .
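  • The module-to-event-queue flow described above can be sketched as follows (a simplified assumption, not the patent's code; the event kinds and field names are illustrative):

      import queue
      from dataclasses import dataclass
      from typing import Any

      @dataclass
      class RawEvent:
          source: str        # "application", "database", "io", or "protocol"
          kind: str          # e.g. "write", "close", "checkpoint", "upload-failure"
          payload: Any = None

      class DataAgent:
          # Modules (application, database, I/O) enqueue raw events; the event
          # processor drains the queue to build the outbound event journal.
          def __init__(self) -> None:
              self.event_queue: "queue.Queue[RawEvent]" = queue.Queue()

          def post(self, event: RawEvent) -> None:
              self.event_queue.put(event)

          def drain(self) -> list:
              journal = []
              while not self.event_queue.empty():
                  ev = self.event_queue.get()
                  journal.append({"source": ev.source, "kind": ev.kind, "payload": ev.payload})
              return journal

      # Hypothetical usage: the I/O module forwards a captured post-write event,
      # and the application module posts an application-level checkpoint.
      agent = DataAgent()
      agent.post(RawEvent(source="io", kind="write", payload={"path": "/data/foo.html", "bytes": 4096}))
      agent.post(RawEvent(source="application", kind="checkpoint"))
      print(agent.drain())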
  • the event processor 512 functions to generate an application aware, real-time event journal (in effect, a continuous stream) for use by one or more DMS nodes to provide one or more data services.
  • Application aware event journaling is a technique to create real-time data capture so that, among other things, consistent data checkpoints of an application can be identified and metadata can be extracted. For example, application awareness is the ability to distinguish a file from a directory, a journal file from a control or binary raw data file, or to know how a file or a directory object is modified by a given application.
  • an application aware solution, when protecting a general purpose file server, is capable of distinguishing a file from a directory, of identifying a consistent file checkpoint (e.g., zero-buffered write, flush or close events), and of interpreting and capturing file system object attributes such as an access control list.
  • an application aware data protection solution may ignore activities applied to a temporary file.
  • Another example of application awareness is the ability to identify a group of related files, directories or raw volumes that belong to a given application.
  • when protecting a database, an application aware solution is capable of identifying the group of volumes or directories and files that make up a given database, of extracting the name of the database, and of distinguishing journal files from binary table files and control files.
  • the state of the database journal may be more current than the state of the binary tables of the database in primary storage during runtime.
  • application aware event journaling tracks granular application consistent checkpoints; thus, when used in conjunction with data protection, the event journal is useful in reconstructing an application data state to a consistent point-in-time in the past, and it is also capable of retrieving a granular object in the past without having to recover an entire data volume.
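  • For illustration only (the event names follow the file-server example above; everything else is an assumption), an application-aware journal entry that tags consistent file checkpoints and ignores temporary files might be built like this:

      from dataclasses import dataclass
      from typing import Optional

      CHECKPOINT_EVENTS = {"zero-buffered-write", "flush", "close"}

      @dataclass
      class JournalEntry:
          path: str
          object_type: str                  # "file" or "directory"
          event: str
          is_consistent_checkpoint: bool

      def journal_file_event(path: str, object_type: str, event: str,
                             is_temporary: bool = False) -> Optional[JournalEntry]:
          # Application-aware filtering: activity on temporary files is ignored,
          # and checkpoint events are tagged so that consistent points-in-time
          # can be identified later for fine grain recovery.
          if is_temporary:
              return None
          return JournalEntry(
              path=path,
              object_type=object_type,
              event=event,
              is_consistent_checkpoint=(object_type == "file" and event in CHECKPOINT_EVENTS),
          )

      # Example: a flush on a regular file is a consistent file checkpoint.
      print(journal_file_event("/exports/report.doc", "file", "flush"))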
  • the host driver 600 comprises an I/O filter 602 , a control agent 604 , and one or more data agents 606 .
  • the control agent 604 receives commands from a DMS core 608 , which may include a host object 610 and one or more data source objects 612 a - n , and it controls the behavior of the one or more data agents 606 .
  • each data agent 606 manages one data source for one data service.
  • data agent 1 may be protecting directory “dir 1 ”
  • data agent 2 may be copying file “foo.html” into the host
  • data agent 3 may be protecting a database on the host.
  • Each data agent typically will have the modules and architecture described above and illustrated in FIG. 5 . Given data agents, of course, may share one or more modules depending on the actual implementation.
  • the data agents register as needed with the I/O filter 602 , the database 614 and/or the application 616 to receive (as the case may be): I/O events from the I/O filter, database events from the database, and/or application events from the application, the operating system and other (e.g., network) devices. Additional internal events or other protocol-specific information may also be inserted into the event queue 618 and dispatched to a given data agent for processing.
  • the output of the event processor in each data agent comprises a part of the event journal.
  • FIG. 7 illustrates a preferred embodiment of the invention, wherein a given event processor in a given host driver provides a data protection service by implementing a finite state machine 700 .
  • the behavior of the event processor depends on the state it is in, and this behavior preferably is described in an event processor data protection state table.
  • the “state” of the event processor preferably is driven by a given “incident” as described in an event processor data protection incident table.
  • the state of the event processor may change. The change from one state to another is sometimes referred to as a transition.
  • FIG. 7 illustrates a data protection state transition diagram of the given event processor.
  • an incident may or may not drive the event processor into another state.
  • the tail of an incident arrow connects to a prior state (i.e., branches out of a prior state), and the head of an incident arrow connects to a next state. If an incident listed in the incident table does not branch out from a state, then it is invalid for (i.e., it cannot occur in) that state. For example, it is not possible for a “Done-Upload” incident to occur in the “UBlackout” state.
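  • The state/incident structure of FIG. 7 could be expressed, purely as a non-normative sketch, with a transition table in Python; the entries below are a partial reconstruction of the transitions described in this text (self-transitions, i.e. incidents that leave the state unchanged, and any termination incident are omitted), and an incident absent from the table is treated as invalid for that state:

      from enum import Enum

      class State(Enum):
          INITIAL_UPLOAD = "Initial-Upload"      # 702
          UBLACKOUT = "UBlackout"                # 704
          REGULAR_BACKUP = "Regular-backup"      # 706
          PBLACKOUT = "PBlackout"                # 708
          UPWARD_RESYNC = "Upward-Resync"        # 710
          RECOVERING_FRAME = "Recovering-frame"  # 712
          RECOVERING = "Recovering"              # 714

      class Incident(Enum):
          BLACKOUT = "Blackout"
          RECONNECTED = "Reconnected"
          DONE_UPLOAD = "Done-upload"
          REBOOT = "Reboot"
          DONE_RESYNC = "Done-resync"
          BEGIN_RECOVERY = "Begin-recovery"
          DONE_RECOVERING_FRAME = "Done-recovering-frame"
          DONE_RECOVERED = "Done-recovered"

      TRANSITIONS = {
          (State.INITIAL_UPLOAD, Incident.BLACKOUT): State.UBLACKOUT,
          (State.INITIAL_UPLOAD, Incident.REBOOT): State.INITIAL_UPLOAD,
          (State.INITIAL_UPLOAD, Incident.DONE_UPLOAD): State.REGULAR_BACKUP,
          (State.UBLACKOUT, Incident.RECONNECTED): State.INITIAL_UPLOAD,
          (State.REGULAR_BACKUP, Incident.BLACKOUT): State.PBLACKOUT,
          (State.REGULAR_BACKUP, Incident.REBOOT): State.UPWARD_RESYNC,
          (State.REGULAR_BACKUP, Incident.BEGIN_RECOVERY): State.RECOVERING_FRAME,
          (State.PBLACKOUT, Incident.RECONNECTED): State.UPWARD_RESYNC,
          (State.UPWARD_RESYNC, Incident.DONE_RESYNC): State.REGULAR_BACKUP,
          (State.UPWARD_RESYNC, Incident.BEGIN_RECOVERY): State.RECOVERING_FRAME,
          (State.RECOVERING_FRAME, Incident.DONE_RECOVERING_FRAME): State.RECOVERING,
          (State.RECOVERING, Incident.DONE_RECOVERED): State.REGULAR_BACKUP,
      }

      def next_state(state: State, incident: Incident) -> State:
          key = (state, incident)
          if key not in TRANSITIONS:
              # e.g. a Done-upload incident cannot occur in the UBlackout state
              raise ValueError(f"incident {incident.value} is invalid in state {state.value}")
          return TRANSITIONS[key]

      # The data protection cycle loops indefinitely, e.g.:
      assert next_state(State.INITIAL_UPLOAD, Incident.DONE_UPLOAD) is State.REGULAR_BACKUP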
  • the inventive data protection service is initiated on a data source in a host server as follows. As illustrated in FIG. 6 , it is assumed that a control agent 604 has created a data agent 606 having an event processor that outputs the event journal data stream, as has been described. At this point, the event processor in the data agent 606 is transitioned to a first state, which is called “Initial-Upload” for illustrative purposes. During the “Initial-Upload” state 702 , the event processor self-generates upload events, and it also receives other raw events from its associated event queue. The event processor simultaneously uploads the initial baseline data source, and it backs up the on-going changes from the application.
  • the event processor also manages data that is dirty or out-of-sync, as indicated in a given data structure.
  • a representative data structure is a “sorted” source tree, which is a list (sorted using an appropriate sort technique) that includes, for example, an entry per data item.
  • the list preferably also includes an indicator or flag specifying whether a given data item is uploaded or not, as well as whether the item is in-(or out-of) sync with the data in the DMS.
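  • a minimal sketch of such a sorted source tree (an assumption for illustration only, not the patent's own data structure) might keep one entry per data item with an uploaded flag and an in-sync flag:

      import bisect
      from dataclasses import dataclass, field
      from typing import List

      @dataclass(order=True)
      class SourceTreeEntry:
          path: str                                           # sort key: one entry per data item
          uploaded: bool = field(default=False, compare=False)
          in_sync: bool = field(default=False, compare=False)

      class SortedSourceTree:
          def __init__(self) -> None:
              self._entries: List[SourceTreeEntry] = []

          def add(self, path: str) -> None:
              bisect.insort(self._entries, SourceTreeEntry(path))

          def find(self, path: str) -> SourceTreeEntry:
              i = bisect.bisect_left(self._entries, SourceTreeEntry(path))
              if i == len(self._entries) or self._entries[i].path != path:
                  raise KeyError(path)
              return self._entries[i]

          def mark_uploaded(self, path: str) -> None:
              entry = self.find(path)
              entry.uploaded, entry.in_sync = True, True

          def mark_dirty(self, path: str) -> None:
              self.find(path).in_sync = False                 # out-of-sync: needs resynchronization

          def out_of_sync(self) -> List[str]:
              return [e.path for e in self._entries if not e.in_sync]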
  • the event processor performs resynchronization on the items that are out-of-sync.
  • As indicated in FIG. 7 , a “Reboot” incident that occurs when the state machine is in state 702 does not change the state of the event processor; rather, the event processor simply continues processing from where it left off.
  • a “Blackout” incident transitions the event processor to a state 704 called (for illustration only) “UBlackout.” This is a blackout state that occurs as the event processor uploads the initial baseline data source, or as the event processor is backing up the on-going changes from the application.
  • the state 704 changes back to the “Initial-Upload” state 702 when a so-called “Reconnected” incident occurs.
  • When the upload is completed and all the data is synchronized with the data in the DMS, the event processor generates a “Done-upload” incident, which causes the event processor to move to a new state 706 .
  • This new state is called “Regular-backup” for illustrative purposes.
  • the event processor processes all the raw events from the event queue, and it generates a meaningful checkpoint real time event journal stream to the DMS for maintaining the data history. This operation has been described above.
  • the event processor exits its regular backup state 706 under one of three (3) conditions: a blackout incident, a reboot incident, or a begin recovery incident.
  • upon a “Blackout” incident, the state of the event processor transitions from state 706 to a new state 708 , which is called “PBlackout” for illustration purposes. This is a blackout state that occurs during regular backup.
  • if a “Reboot” incident occurs, the event processor transitions to a different state 710 , which is called “Upward-Resync” for illustrative purposes.
  • the upward resynchronization state 710 is also reached from state 708 upon a Reconnected incident during the latter state. Upward resynchronization is a state that is entered when there is a suspicion that the state of the data in the host is out-of-sync with the state of the most current data in the DMS.
  • a transition from state 706 to state 710 occurs because, after “Reboot,” the event processor does not know if the data state of the host is identical with the state of the data in DMS.
  • the event processor synchronizes the state of the DMS data to the state of the host data (in other words, it brings the DMS data to the same state as the host data).
  • during this state, update events to the already synchronized data items are processed, and data history is streamed into the DMS continuously, preferably as a real time event journal.
  • An authorized user can invoke a recovery at any of the states when the host server is connected to the DMS core, namely, during the “Regular-backup” and “Upward-resync” states 706 and 710 . If the authorized user does so, a “Begin-recovery” incident occurs, which drives the event processor state to the “Recovering-frame” state 712 .
  • the event processor reconstructs the sorted source tree, which (as noted above) contains structural information of the data to be recovered.
  • during state 712 , and depending on the underlying data, the application may or may not be able to access the data.
  • once the data structure has been recovered, a “Done-Recovering-Frame” incident is generated, which then transitions the event processor to a new state 714 , referred to as “Recovering” for illustration purposes.
  • during this state, incidents such as “Blackout,” “Reconnected,” and “Reboot” do not change the state of the event processor.
  • the event processor recovers the actual data from the DMS, preferably a data point at a time. It also recovers data as an application access request arrives, to enable the application to continue running.
  • application update events are streamed to the DMS so that the history continues to be maintained, even as the event processor is recovering the data in the host.
  • when data recovery is completed, once again the state of the data (at both ends of the stream) is synchronized, and the corruption at the host is fixed. Thus, a so-called “Done-recovered” incident is generated, and the event processor transitions back to the “Regular-backup” state 706 .
  • the event processor marks the updated data item as dirty or out-of-sync in its sorted source tree.
  • a “termination” incident may be introduced to terminate the data protection service at a given state.
  • a termination incident may apply to a given state, or more generally, to any given state, in which latter case the event processor is transitioned (from its then-current state) to a terminated state. This releases the data agent and its event processor from further provision of the data protection service.
  • Initial-Upload: When a data protection command is forwarded from a control agent to a data agent, Initial-Upload is the entrance state of the event processor. At this state, the event processor gathers the list of data items of the data source to be protected to create a data list, and then, one at a time, moves the data to create initial baseline data on a DMS region through a DMS core. The data list is called the sorted source tree.
  • the upload is a stream of granular application-aware data chunks that are attached to upload events. During this phase, the application does not have to be shut down.
  • the checkpoint granular data, metadata, and data events are, in real-time, continuously streamed into the DMS core.
  • the update events for the data that are not already uploaded are dropped; preferably, only the update events for data already uploaded are streamed to the DMS.
  • the DMS core receives the real time event journal stream that includes the baseline upload events and the change events. It processes these events and organizes the data to maintain their history in the DMS persistent storage. If DMS fails while processing an upload or an update data event, preferably a failure event is forwarded back to the data agent and entered into the queue as a protocol specific event.
  • the event processor marks the target item associated with the failure “dirty” (or out-of-sync) and then performs data synchronization with the DMS on that target item.
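  • Using the sorted source tree sketch above, the Initial-Upload policy just described (drop updates for not-yet-uploaded items, stream updates for uploaded items, and mark a failure's target item dirty) could be approximated as follows; the event shape and helper names are assumptions:

      def handle_event_during_initial_upload(tree, event, send_to_dms):
          kind, path = event["kind"], event["path"]
          if kind == "upload-failure":
              tree.mark_dirty(path)          # will be resynchronized with the DMS later
              return
          entry = tree.find(path)
          if kind == "update" and not entry.uploaded:
              return                         # dropped: the baseline upload will carry this data
          send_to_dms(event)                 # part of the real time event journal stream
          if kind == "upload":
              tree.mark_uploaded(path)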
  • UBlackout: This is a blackout state during Initial-Upload. A blackout occurs when the connection from a data agent to a DMS core fails. This failure may be caused by network failure, or by a DMS node failure. During this state, the application continues to run; it updates the data, and the updates are captured asynchronously by the I/O filter.
  • the event processor records (within the sorted source tree, for example) that given application-aware data items have changed (i.e., are dirty or out-of-sync).
  • An application-aware data item includes a file, a transaction, a record, an email or the like. Although these items are opaque to the event processor, they are meaningful as a unit to their application. If DMS fails while processing an upload or an update data event, preferably a failure event is forwarded back to the data agent and entered into the queue as a protocol specific event. The event processor marks the target item associated with the failure “dirty” (or out-of-sync) and then performs data synchronization with the DMS on that target item. Regular-backup: This is a regular backup state entered when uploads are completed. In this state, the latest data state in the DMS is identical with the state of the data in the host server (when there is no failure).
  • Upward-resync: This state is entered when there is a suspicion that the state of the data in the host is out-of-sync with the state of the most current data in the DMS, and it is also known that the data in the host server is not corrupted.
  • This state is entered after a blackout when data in the host is changed; or, the state is entered after a host server is rebooted and the state of the most current data at the DMS is unknown. During this state, it is assumed that the host server data is good and is more current than the latest data in the DMS.
  • if the event processor has been keeping track of the updated (dirty) data at the host server during a blackout, preferably it only compares that data with the corresponding copy in the DMS; it then sends the deltas to the DMS (e.g., as checkpoint delta events). If, in the case of a host server reboot, the dirty data are not known, preferably the event processor goes over the entire data source, re-creates a sorted source tree, and then compares each and every individual data item, sending delta events to the DMS when necessary. During this phase, the application does not have to be shut down. Upward resynchronization occurs simultaneously while the application is accessing and updating the data in the primary storage.
  • the update events for the data objects that are dirty and are not yet re-synchronized preferably are dropped; the other events are processed.
  • the event processor tracks both the resynchronization and update activities accordingly and outputs to the DMS core a real time event journal stream.
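  • A hedged sketch of the upward resynchronization pass (helper names are assumptions): compare each candidate item in the host against its copy in the DMS and forward only the deltas, e.g. as checkpoint delta events. When the dirty set was tracked during a blackout, the candidates are the out-of-sync items; after a reboot, when the dirty set is unknown, every item in the re-created sorted source tree is compared:

      def upward_resync(tree, candidate_paths, read_host, read_dms, send_delta):
          for path in candidate_paths:
              host_data = read_host(path)
              if host_data != read_dms(path):
                  # The host copy is assumed good and more current than the DMS copy.
                  send_delta({"path": path, "kind": "checkpoint-delta", "data": host_data})
              tree.find(path).in_sync = True   # host and DMS now share the same state for this item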
  • the DMS core receives the real time event journal stream, which includes requests for data checkpoints, resynchronization delta events, and the change events.
  • the DMS core processes these events and organizes the data in the DMS persistent storage to maintain their history. Recovering-frame: Recovery is initiated by an authorized user who identifies that the primary copy of the data in the host server has become incorrect or corrupted. A recovery can be applied to an entire data source, or to a subset of a data source.
  • the DMS core When a recovery initiative is handled in a DMS core, the DMS core immediately freezes and terminates the backup process of the target data to be recovered, e.g., by sending a recovery command either directly to the data agent or to the control agent.
  • Recovering-frame is an entrance state into data recovery at the host server.
  • the event processor first instructs the I/O filter to filter the READ requests synchronously so that it can participate in the handling of data access requests. It also preferably instructs the I/O filter to fail all the WRITE requests by returning an error to the caller.
  • the event processor may serve the data or fail the request.
  • the event processor gets from the DMS core the list of the data items at the specific point-in-time to be recovered and constructs a recovery list, e.g., a new sorted source tree. Once the list is in place, the event processor first uses the list to recover the data structure in the primary storage, and then transitions into Recovering state.
  • Recovering: This is the next state of the recovery process. After Recovering-frame is completed, the event processor must have already recovered the data structure in the primary storage. During the Recovering state, the event processor re-configures the I/O filter to filter all the READ and WRITE events synchronously so that it can participate in handling data access. The event processor also begins recovering the actual data, e.g., by going down the new sorted source tree one item at a time to request the data or the delta to apply to its corrupted data. When an access request for data that has not been recovered (which can be detected using the sorted source tree) arrives, the event processor immediately recovers the requested data. When update events arrive, the event processor processes the data and sends the real-time event journal to the DMS for backup.
  • the update events also pass down to the primary storage.
  • the event processor also must mark the item recovered so that the most recent data does not get overwritten by data from the DMS.
  • This type of recovery is called Virtual-On-Demand recovery; it allows recovery to happen simultaneously while an application accesses and updates the recovering data. If the state of the DMS data is adjusted prior to the host recovery, then only the stream of backup events needs to be applied to the data in the DMS. If the data state at the DMS is not adjusted prior to recovery, then as the recovering data overwrites the host data, the recovery events must be shipped back to the DMS along with the most current application data update events to adjust the data state of the DMS data.
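  • A simplified sketch of the Virtual-On-Demand recovery behavior described above (function and field names are assumptions): reads of not-yet-recovered items trigger recovery of just that item, while application writes are applied to primary storage, marked recovered so the DMS does not overwrite them, and streamed onward as backup events:

      def on_read(tree, path, recover_item, read_primary):
          entry = tree.find(path)
          if not entry.in_sync:                # not yet recovered (tracked in the new sorted source tree)
              recover_item(path)               # pull the point-in-time data from the DMS into primary storage
              entry.in_sync = True
          return read_primary(path)

      def on_write(tree, path, data, write_primary, send_to_dms):
          write_primary(path, data)            # the update also passes down to primary storage
          tree.find(path).in_sync = True       # mark recovered so DMS data does not overwrite the newer data
          send_to_dms({"path": path, "kind": "update", "data": data})  # history is maintained during recovery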
  • Incident Table:
    Blackout - Environment changed. The network connection is interrupted or the connected DMS core goes down.
    Reconnected - Environment changed. The network connection to a DMS core is resumed and the data service can be continued.
    Done-upload - Data state in the DMS changed. A baseline copy of the entire data source is fully uploaded to the DMS.
    Reboot - Environment changed. The host server or the data agent is restarted.
    Done-resync - Data state in the DMS changed. The state of the data in the host server is, from this incident onward, synchronized with the most current point-in-time data in the DMS.
    Begin-recovery - Data state changed, as initiated by a user. The state of the data (entire or partial) at the host server is incorrect or corrupted and has to be recovered using a former point-in-time of the data in the DMS.
    Done-recovering-frame - Data state in the host server changed. From this point on, the structure of the host data is recovered to the intended recovering point-in-time.
    Done-recovered - Data state in the host server changed. From this point on, the data state of the host server is fully recovered and fully synchronized with the most current point-in-time data state of the data source at the DMS.
  • the finite state switching mechanism as described above may be varied. It may be implemented by breaking up a given state (as described) into multiple smaller states, or by combining two or more states into a more complex state.
  • the “UBlackout” state may be combined with the “Initial-upload” state into one state that manages data uploads, data updates, and that is aware of the connection status.
  • the “Recovering-frame” state may be combined with the “Recovering” state into one state that performs data structure and data recovery as a process.
  • the “PBlackout” state may be combined with the “Regular-backup” state.
  • the “PBlackout” state may also be combined with the “Upward-resync” state. All three states “PBlackout,” “Regular-backup” and “Upward-resync” may be merged into one state that has a process to carry out the combined functions.
  • the “Initial-upload” state may be split into two states with the new state being the target state of the “Reconnected” incident after the “UBlackout” state.
  • This new upload state may include a process to compare the DMS and host data, and this state may be connected back to the “Initial-upload” state through a new incident, such as “Done-compare.” There may also be a new state that handles data comparison from the “Initial-upload” state after the “Reboot” incident, and that new state would be connected back to “Initial-upload” via a new incident, such as “Done-compare.” As another variant, each of the “Recovering-frame” and “Recovering” states may also be split into two states, with the new states being used to handle data comparison after the “Reconnected” or “Reboot” incidents, as the case may be.
  • the finite state machine illustrated in the embodiment of FIG. 7 should not be taken to limit the present invention, although it is a desirable implementation. More generally, the finite state machine may be implemented in any convenient manner in which the initial data upload, continuous backup, data resynchronization and data recovery can be seen to comprise an integrated data protection cycle provided to the data source without (at the same time) interrupting the application aware, real-time event data stream that is being generated by the data agent.
  • any finite state machine (FSM) or similar process or structure that protects the data source without interrupting the application aware, real-time data stream, e.g., by continuously transitioning among a set of connected operating states may be deemed to be within the scope of the present invention.
  • these operating states typically include several or all of the following: initial data upload, continuous backup, data resynchronization, and data recovery.
  • the finite state machine may be entered at states other than Initial-upload.
  • an IT administrator may use a new server to recover a data source, and then have the new server act as the master server where the application runs. The DMS continues to protect the data.
  • another entry point into the state diagram would then exist, and that entry point may be an incident labeled (for illustrative purposes only) “Recover and Begin Data Protection.”
  • the new server enters the FSM at “Recovering-Frame” and then transitions to “Recovering” and then “Regular-Backup,” as previously described.
  • the entry point to the FSM may be state 706 (Regular-Backup), or state 708 (Upward-Resync).
  • DMS is automated, real-time, and continuous, and it exhibits no or substantially no downtime. This is because DMS is keeping track of the real-time data history, and because preferably the state of the most current data in a DMS region, cluster or node (as the case may be) must match the state of the data in the original host server at all times.
  • data recovery on a conventional data protection system means shutting down a host server, selecting a version of the data history, copying the data history back to the host server, and then turning on the host server. All of these steps are manually driven. After a period of time, the conventional data protection system then performs a backup on the changed data.
  • the otherwise separate processes are simply phases of the overall data protection cycle. This is highly advantageous, and it is enabled because DMS keeps a continuous data history. Stated another way, there is no gap in the data.
  • the data protection cycle preferably loops around indefinitely until, for example, a user terminates the service.
  • a given data protection phase (the state) changes as the state of the data and the environment change (the incident).
  • all of the phases (states) are interconnected to form a finite state machine that provides the data protection service.
  • the data protection service provided by the DMS has no effective downtime because the data upload, data resynchronization, data recovery and data backup are simply integrated phases of a data protection cycle. There is no application downtime.
  • the present invention has numerous advantages over the prior art such as tape backup, disk backup, volume replication, storage snapshots, application replication, remote replication, and manual recovery. Indeed, existing fragmented approaches are complex, resource inefficient, expensive to operate, and often unreliable. From an architectural standpoint, they are not well suited to scaling to support heterogeneous, enterprise-wide data management.
  • the present invention overcomes these and other problems of the prior art by providing real-time data management services. As has been described, the invention transparently and efficiently captures the real-time continuous history of all or substantially all transactions and data changes in the enterprise.
  • the solution operates over local and wide area IP networks to form a coherent data management, protection and recovery infrastructure. It eliminates data loss, reduces downtime, and ensures application consistent recovery to any point in time.
  • the present invention addresses enterprise data protection and data management problems by continuously protecting all data changes and transactions in real time across local and wide area networks.
  • the method and system of the invention take advantage of inexpensive, commodity processors to efficiently parallel process and route application-aware data changes between applications and low cost near storage.
  • the present invention also relates to apparatus for performing the operations herein.
  • the apparatus is implemented as a processor and associated program code that implements a finite state machine having a plurality of states and that effects transitions between the states.
  • this apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

Abstract

A data management system or “DMS” provides an automated, continuous, real-time, substantially no downtime data protection service to one or more data sources associated with a set of application host servers. To facilitate the data protection service, a host driver embedded in an application server captures real-time data transactions, preferably in the form of an event journal that is provided to other DMS components. The driver functions to translate traditional file/database/block I/O and the like into a continuous, application-aware, output data stream. The host driver includes an event processor that provides the data protection service, preferably by implementing a finite state machine (FSM). In particular, the data protection is provided to a given data source in the host server by taking advantage of the continuous, real-time data that the host driver is capturing and providing to other DMS components. The state of the most current data in DMS matches the state of the data in the host server; as a consequence, the data protection is provided under the control of the finite state machine as a set of interconnected phases or “states.” The otherwise separate processes (initial data upload, continuous backup, blackout and data resynchronization, and recovery) are simply phases of the overall data protection cycle. As implemented by the finite state machine, this data protection cycle preferably loops around indefinitely until, for example, a user terminates the service. A given data protection phase (a given state) changes only as the state of the data and the environment change (a given incident).

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is related to commonly-owned application Ser. No., ______, filed May ______, 2004, and titled “METHOD AND SYSTEM FOR REAL-TIME EVENT JOURNALING TO PROVIDE ENTERPRISE DATA SERVICES.”
  • BACKGROUND OF THE INVENTION
  • 1. Technical Field
  • The present invention relates generally to enterprise data protection.
  • 2. Background of the Related Art
  • A critical information technology (IT) problem is how to cost-effectively deliver network-wide data protection and rapid data recovery. In 2002, for example, companies spent an estimated $50B worldwide managing data backup/restore and an estimated $30B in system downtime costs. The “code red” virus alone cost an estimated $2.8B in downtime, data loss, and recovery. The reason for these staggering costs is simple: traditional schedule-based tape and in-storage data protection and recovery approaches can no longer keep pace with rapid data growth, geographically distributed operations, and the real-time requirements of 24×7×365 enterprise data centers.
  • Traditionally, system managers have used tape backup devices to store system data on a periodic basis. For example, the backup device may acquire a “snapshot” of the contents of an entire hard disk at a particular time and then store this for later use, e.g., reintroduction onto the disk (or onto a new disk) should the computer fail. The problems with the snapshot approaches are well known and appreciated. First, critical data can change as the snapshot is taken, which results in incomplete updates (e.g., half a transaction) being captured so that, when reintroduced, the data is not fully consistent. Second, changes in data occurring after a snapshot is taken are always at risk. Third, as storage device size grows, the bandwidth required to repeatedly offload and store the complete snapshot can become impractical. Most importantly, a storage-based snapshot does not capture fine grain application data and, therefore, it cannot recover fine grain application data objects without reintroducing (i.e., recovering) the entire backup volume to a new application computer server to extract the fine grain data object.
  • Data recovery on a conventional data protection system is a tedious and time consuming operation. It involves first shutting down a host server, and then selecting a version of the data history. That selected version of the data history must then be copied back to the host server, and then the host server must be re-started. All of these steps are manually driven. After a period of time, the conventional data protection system must then perform a backup on the changed data. As these separate and distinct processes and systems are carried out, there are significant periods of application downtime. Stated another way, with the current state of the art, the processes of initial data upload, continuous backup, data resynchronization, and data recovery, are separate and distinct, include many manual steps, and involve different and uncoordinated systems, processes and operations.
  • BRIEF SUMMARY OF THE INVENTION
  • A data management system or “DMS” provides an automated, continuous, real-time, substantially no downtime data protection service to one or more data sources associated with a set of application host servers. The data management system typically comprises one or more regions, with each region having one or more clusters. A given cluster has one or more nodes that share storage. To facilitate the data protection service, a host driver embedded in an application server captures real-time data transactions, preferably in the form of an event journal that is provided to a DMS cluster. The driver functions to translate traditional file/database/block I/O and the like into a continuous, application-aware, output data stream. According to the invention, the host driver includes an event processor that provides the data protection service, preferably by implementing a finite state machine (FSM). In particular, the data protection is provided to a given data source in the host server by taking advantage of the continuous, real-time data that the host driver is capturing and providing to other DMS components. The state of the most current data in DMS matches the state of the data in the host server; as a consequence, the data protection is provided under the control of the finite state machine as a set of interconnected phases or “states.” The otherwise separate processes (initial data upload, continuous backup, blackout and data resynchronization, and recovery) are simply phases of the overall data protection cycle. As implemented by the finite state machine, this data protection cycle preferably loops around indefinitely until, for example, a user terminates the service. A given data protection phase (a given state) changes only as the state of the data and the environment change (a given incident).
  • The foregoing has outlined some of the more pertinent features of the invention. These features should be construed to be merely illustrative. Many other beneficial results can be attained by applying the disclosed invention in a different manner or by modifying the invention as will be described.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
  • FIG. 1 is an illustrative enterprise network in which the present invention may be deployed;
  • FIG. 2 is an illustration of a general data management system (DMS) of the present invention;
  • FIG. 3 is an illustration of a representative DMS network according to one embodiment of the present invention;
  • FIG. 4 illustrates how a data management system may be used to provide one or more data services according to the present invention;
  • FIG. 5 is a representative host driver according to a preferred embodiment of the present invention having an I/O filter and one or more data agents;
  • FIG. 6 illustrates the host driver architecture in a more general fashion; and
  • FIG. 7 illustrates a preferred implementation of an event processor finite state machine (FSM) that provides an automated, real-time, continuous, zero downtime data protection service according to the present invention.
  • DETAILED DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT
  • FIG. 1 illustrates a representative enterprise 100 in which the present invention may be implemented. This architecture is meant to be taken by way of illustration and not to limit the applicability of the present invention. In this illustrative example, the enterprise 100 comprises a primary data tier 102 and a secondary data tier 104 distributed over IP-based wide area networks 106 and 108. Wide area network 106 interconnects two primary data centers 110 and 112, and wide area network 108 interconnects a regional or satellite office 114 to the rest of the enterprise. The primary data tier 102 comprises application servers 116 running various applications such as databases, email servers, file servers, and the like, together with associated primary storage 118 (e.g., direct attached storage (DAS), network attached storage (NAS), storage area network (SAN)). The secondary data tier 104 typically comprises one or more data management server nodes, and secondary storage 120, which may be DAS, NAS, and SAN. The secondary storage may be serial ATA, interconnected through SCSI, Fibre Channel (FC or the like), or iSCSI. The data management server nodes create a logical layer that offers object virtualization and protected data storage. The secondary data tier is interconnected to the primary data tier, preferably through one or more host drivers (as described below) to provide real-time data services. Preferably, and as described below, the real-time data services are provided through a given I/O protocol for data transfer. Data management policies 126 are implemented across the secondary storage in a well-known manner. A similar architecture is provided in data center 112. In this example, the regional office 114 does not have its own secondary storage, but relies instead on the facilities in the primary data centers.
  • As illustrated, a “host driver” 128 is associated with one or more of the application(s) running in the application servers 116 to transparently and efficiently capture the real-time, continuous history of all (or substantially all) transactions and changes to data associated with such application(s) across the enterprise network. As will be described below, the present invention facilitates real-time, so-called “application aware” protection, with substantially no data loss, to provide continuous data protection and other data services including, without limitation, data distribution, data replication, data copy, data access, and the like. In operation, a given host driver 128 intercepts data events between an application and its primary data storage, and it may also receive data and application events directly from the application and database. In a representative embodiment, the host driver 128 is embedded in the host application server 116 where the application resides; alternatively, the host driver is embedded in the network on the application data path. By intercepting data through the application, fine grain (but opaque) data is captured to facilitate the data service(s). To this end, and as also illustrated in FIG. 1, each of the primary data centers includes a set of one or more data management servers 130 a-n that cooperate with the host drivers 128 to facilitate the data services. In this illustrative example, the data center 110 supports a first core region 130, and the data center 112 supports a second core region 132. A given data management server 130 is implemented using commodity hardware and software (e.g., an Intel processor-based blade server running Linux operating system, or the like) and having associated disk storage and memory. Generalizing, the host drivers 128 and data management servers 130 comprise a data management system (DMS) that provides potentially global data services across the enterprise.
  • FIG. 2 illustrates a preferred hierarchical structure of a data management system 200. As illustrated, the data management system 200 comprises one or more regions 202 a-n, with each region 202 comprising one or more clusters 204 a-n. A given cluster 204 includes one or more nodes 206 a-n and a shared storage 208 shared by the nodes 206 within the cluster 204. A given node 206 is a data management server as described above with respect to FIG. 1. Within a DMS cluster 204, preferably all the nodes 206 perform parallel access to the data in the shared storage 208. Preferably, the nodes 206 are hot swappable to enable new nodes to be added and existing nodes to be removed without causing cluster downtime. Preferably, a cluster is a tightly-coupled, share everything grouping of nodes. At a higher level, the DMS is a loosely-coupled share nothing grouping of DMS clusters. Preferably, all DMS clusters have shared knowledge of the entire network, and all clusters preferably share partial or summary information about the data that they possess. Network connections (e.g., sessions) to one DMS node in a DMS cluster may be re-directed to another DMS node in another cluster when data is not present in the first DMS cluster but may be present in the second DMS cluster. Also, new DMS clusters may be added to the DMS cloud without interfering with the operation of the existing DMS clusters. When a DMS cluster fails, its data may be accessed in another cluster transparently, and its data service responsibility may be passed on to another DMS cluster.
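  • By way of illustration only, the region/cluster/node hierarchy and the redirection of a session to a peer cluster holding the requested data may be modeled in a few lines of Python. The class and function names below (Cluster, connect) are hypothetical and do not appear in the drawings; the sketch simply mirrors the behavior described above.

    class Cluster:
        """A DMS cluster: a set of nodes sharing storage plus the data sources it holds."""
        def __init__(self, name, nodes, data_sources):
            self.name = name
            self.nodes = list(nodes)               # nodes share the cluster storage
            self.data_sources = set(data_sources)  # summary of the data this cluster possesses

        def has(self, data_source):
            return data_source in self.data_sources

    def connect(clusters, preferred, data_source):
        # Try the preferred cluster first; if the data is not present there,
        # redirect the session to a peer cluster that does have it.
        ordered = [preferred] + [c for c in clusters if c is not preferred]
        for cluster in ordered:
            if cluster.has(data_source):
                return cluster.nodes[0]            # any node in the cluster can serve the data
        return None                                # alternatively, trigger on-demand replication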
  • FIG. 3 illustrates the data management system (DMS) as a network (in effect, a wide area network “cloud”) of peer-to-peer DMS service nodes. As discussed above with respect to FIG. 2, the DMS cloud 300 typically comprises one or more DMS regions, with each region comprising one or more DMS “clusters.” In the illustrative embodiment of FIG. 3, typically there are two different types of DMS regions, in this example an “edge” region 306 and a “core” region 308. This nomenclature is not to be taken to limit the invention, of course. As illustrated in FIG. 1, an edge region 306 typically is a smaller office or data center where the amount of data hosted is limited and/or where a single node DMS cluster is sufficient to provide necessary data services. Typically, core regions 308 are medium or large size data centers where one or more multi-node clusters are required or desired to provide the necessary data services. The DMS preferably also includes one or more management gateways 310 for controlling the system. As seen in FIG. 3, conceptually the DMS can be visualized as a set of data sources 312. A data source is a representation of a related group of fine grain data. For example, a data source may be a directory of files and subdirectories, or it may be a database, or a combination of both. A data source 312 inside a DMS cluster captures a range of history and continuous changes of, for example, an external data source in a host server. A data source may reside in one cluster, and it may replicate to other clusters or regions based on subscription rules. If a data source exists in the storage of a DMS cluster, preferably it can be accessed through any one of the DMS nodes in that cluster. If a data source does not exist in a DMS cluster, then the requesting session may be redirected to another DMS cluster that has the data; alternatively, the current DMS cluster may perform an on-demand replication to bring in the data.
  • Referring now to FIG. 4, an illustrative DMS network 400 provides a wide range of data services to data sources associated with a set of application host servers. As noted above, and as will be described in more detail below, the DMS host driver 402 embedded in an application server 404 connects the application and its data to the DMS cluster. In this manner, the DMS host drivers can be considered as an extension of the DMS cloud reaching to the data of the application servers. As illustrated in FIG. 4, the DMS network offers a wide range of data services that include, by way of example only: data protection (and recovery), disaster recovery (data distribution and data replication), data copy, and data query and access. The data services and, in particular, data protection and disaster recovery, preferably are stream based data services where meaningful application and data events are forwarded from one end point to another end point continuously as a stream. More generally, a stream-based data service is a service that involves two end points sending a stream of real-time application and data events. For data protection, this means streaming data from a data source (e.g., an external host server) into a DMS cluster, where the data source and its entire history can be captured and protected. Data distribution refers to streaming a data source from one DMS cluster into another DMS cluster, while data replication refers to streaming a data source from a DMS cluster to another external host server. Preferably, both data distribution and data replication are real-time continuous movement of a data source from one location to another to prepare for disaster recovery. Data replication differs from data distribution in that, in the latter case, the data source is replicated within the DMS network where the history of the data source is maintained. Data replication typically is host based replication, where the continuous events and changes are applied to the host data such that the data is overwritten by the latest events; therefore, the history is lost. Data copy is a data access service where a consistent data source (or part of a data source) at any point-in-time can be constructed and retrieved. This data service allows data of the most current point-in-time, or a specific point-in-time in the past, to be retrieved when the data is in a consistent state. These data services are merely representative.
  • The DMS provides these and other data services in real-time with data and application awareness to ensure continuous application data consistency and to allow for fine grain data access and recovery. To offer such application and data aware services, the DMS has the capability to capture fine grain and consistent data. As will be illustrated and described, a given DMS host driver uses an I/O filter to intercept data events between an application and its primary data storage. The host driver also receives data and application events directly from the application and database.
  • Referring now to FIG. 5, an illustrative embodiment is shown of a DMS host driver 500. As noted above, the host driver 500 may be embedded in the host server where the application resides, or in the network on the application data path. By capturing data through the application, fine grain data is captured along with application events, thereby enabling the DMS cluster to provide application aware data services in a manner that has not been possible in the prior art.
  • In this embodiment, a host server embedded host driver is used for illustrating the driver behavior. In particular, the host driver 500 in a host server connects to one of the DMS nodes in a DMS cluster (in a DMS region) to perform or facilitate a data service. The host driver preferably includes two logical subsystems, namely, an I/O filter 502, and at least one data agent 504. An illustrative data agent 504 preferably includes one or more modules, namely, an application module 506, a database module 508, an I/O module 510, and an event processor or event processing engine 512. The application module 506 is configured to interface with an application 514, one or more network devices, and/or the host system itself to receive application level events 516. These events include, without limitation, entry or deletion of some critical data, installation or upgrade of application software or the operating system, a system alert, detection of a virus, an administrator generated checkpoint, and so on. One or more application events are queued for processing into an event queue 518 inside or otherwise associated with the data agent. The event processor 512 over time may instruct the application module 506 to re-configure its event source to capture different application level events.
  • If an application saves its data into a database, then a database module 508 is available for use. The database module 508 preferably registers with a database 520 to obtain notifications from the database. The module 508 also may integrate with the database 520 through one or more database triggers, or it may also instruct the database 520 to generate a checkpoint 522. The database module 508 also may lock the database 520 to force a database manager (not shown) to flush out its data from memory to disk, thereby generating a consistent disk image (a binary table checkpoint). This process is also known as “quiescing” the database. After a consistent image is generated, the database module 508 then lifts the lock to release the database from its quiescent state. The database events preferably are also queued for processing into the event queue 518. Generalizing, database events include, without limitation, a database checkpoint, specific database requests (such as schema changes or other requests), access failure, and so on. As with the application module, the event processor 512 may be used to re-configure the events that will be captured by the database module.
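  • By way of illustration only, the quiescing sequence just described might be expressed as follows. The db object and its lock/unlock operations, as well as the dictionary event format, are hypothetical stand-ins for the actual database integration.

    def generate_checkpoint(db, event_queue):
        # Lock the database so that the database manager flushes its data from
        # memory to disk, producing a consistent disk image (a binary table checkpoint).
        db.lock()
        try:
            event_queue.put({"class": "database", "name": "checkpoint"})
        finally:
            # Lift the lock to release the database from its quiescent state.
            db.unlock()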
  • The I/O module 510 instructs the I/O filter 502 to capture a set of one or more I/O events that are of interest to the data agent. For example, a given I/O module 510 may control the filter to capture I/O events synchronously, or the module 510 may control the filter to only capture several successful post I/O events. When the I/O module 510 receives I/O events 524, it forwards the I/O events to the event queue 518 for processing. The event processor 512 may also be used to re-configure the I/O module 510 and, thus, the I/O filter 502.
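  • By way of illustration only, the I/O module's control of the I/O filter might be modeled as a small configuration object. The names and capture modes shown are hypothetical and merely reflect the two behaviors mentioned above (synchronous capture versus capture of selected successful post-I/O events).

    class IOFilterConfig:
        """Hypothetical configuration handed by the I/O module to the I/O filter."""
        def __init__(self):
            self.synchronous = False   # intercept I/O in-band (synchronously)?
            self.operations = set()    # which I/O events to capture

        def capture_synchronously(self, operations):
            self.synchronous = True
            self.operations = set(operations)     # e.g. {"read", "write"}

        def capture_post_io(self, operations):
            self.synchronous = False              # only report completed (successful) I/O
            self.operations = set(operations)     # e.g. {"write", "flush", "close"}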
  • The event processor 512 functions to generate an application aware, real-time event journal (in effect, a continuous stream) for use by one or more DMS nodes to provide one or more data services. Application aware event journaling is a technique to create real-time data capture so that, among other things, consistent data checkpoints of an application can be identified and metadata can be extracted. For example, application awareness is the ability to distinguish a file from a directory, a journal file from a control or binary raw data file, or to know how a file or a directory object is modified by a given application. Thus, when protecting a general purpose file server, an application aware solution is capable of distinguishing a file from a directory, of identifying a consistent file checkpoint (e.g., zero-buffered write, flush or close events), and of interpreting and capturing file system object attributes such as an access control list. By interpreting file system attributes, an application aware data protection may ignore activities applied to a temporary file. Another example of application awareness is the ability to identify a group of related files, directories or raw volumes that belong to a given application. Thus, when protecting a database with an application aware solution, the solution is capable of identifying the group of volumes or directories and files that make up a given database, of extracting the name of the database, and of distinguishing journal files from binary table files and control files. It also knows, for example, that the state of the database journal may be more current than the state of the binary tables of the database in primary storage during runtime. These are just representative examples, of course. In general, application aware event journaling tracks granular application consistent checkpoints; thus, when used in conjunction with data protection, the event journal is useful in reconstructing an application data state to a consistent point-in-time in the past, and it is also capable of retrieving a granular object in the past without having to recover an entire data volume.
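  • By way of illustration only, one possible shape of a record in such an application aware event journal is sketched below. The field names are hypothetical; the point is simply that each record carries enough application context (object identity and type, operation, and a checkpoint flag) for the DMS to identify consistent checkpoints and extract metadata downstream.

    import time

    def journal_record(obj_path, obj_type, op, payload=None, checkpoint=False):
        # A single entry in the continuous, application-aware event journal stream.
        return {
            "timestamp": time.time(),
            "object": obj_path,        # e.g. a file, directory, or database journal
            "object_type": obj_type,   # distinguishes file vs. directory vs. journal, etc.
            "operation": op,           # e.g. "write", "flush", "close", "schema-change"
            "checkpoint": checkpoint,  # marks an application-consistent point
            "payload": payload,        # granular data chunk or delta, if any
        }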
  • Referring now to FIG. 6, the host driver architecture is shown in a more generalized fashion. In this drawing, the host driver 600 comprises an I/O filter 602, a control agent 604, and one or more data agents 606. The control agent 604 receives commands from a DMS core 608, which may include a host object 610 and one or more data source objects 612 a-n, and it controls the behavior of the one or more data agents 606. Preferably, each data agent 606 manages one data source for one data service. For example, data agent 1 may be protecting directory “dir1,” data agent 2 may be copying file “foo.html” into the host, and data agent 3 may be protecting a database on the host. These are merely representative data service examples, of course. Each data agent typically will have the modules and architecture described above and illustrated in FIG. 5. Given data agents, of course, may share one or more modules depending on the actual implementation. In operation, the data agents register as needed with the I/O filter 602, the database 614 and/or the application 616 to receive (as the case may be): I/O events from the I/O filter, database events from the database, and/or application events from the application, the operating system and other (e.g., network) devices. Additional internal events or other protocol-specific information may also be inserted into the event queue 618 and dispatched to a given data agent for processing. The output of the event processor in each data agent comprises a part of the event journal.
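  • By way of illustration only, the routing of I/O, database, application and internal events through a single event queue to the owning data agent might be sketched as follows. All class, field and method names are hypothetical and chosen only to mirror the architecture of FIG. 6.

    import queue

    class DataAgent:
        def __init__(self, data_source):
            self.data_source = data_source      # the one data source this agent manages

        def process(self, event):
            # In the actual driver this is the event processor producing journal output.
            print(f"agent[{self.data_source}] handling {event['type']}")

    class HostDriver:
        def __init__(self):
            self.events = queue.Queue()         # the shared event queue
            self.agents = {}                    # data source name -> DataAgent

        def register(self, data_source):
            self.agents[data_source] = DataAgent(data_source)

        def post(self, data_source, event_type, payload=None):
            # I/O, database, application and internal events all land here.
            self.events.put({"source": data_source, "type": event_type, "payload": payload})

        def dispatch_one(self):
            event = self.events.get()
            self.agents[event["source"]].process(event)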
  • FIG. 7 illustrates a preferred embodiment of the invention, wherein a given event processor in a given host driver provides a data protection service by implementing a finite state machine 700. As will be seen, the behavior of the event processor depends on the state it is in, and this behavior preferably is described in an event processor data protection state table. The “state” of the event processor preferably is driven by a given “incident” as described in an event processor data protection incident table. Generally, when a given incident occurs, the state of the event processor may change. The change from one state to another is sometimes referred to as a transition. One of ordinary skill in the art will appreciate that FIG. 7 illustrates a data protection state transition diagram of the given event processor. In particular, it shows an illustrative data protection cycle as the FSM 700. At each state, as represented by an oval, an incident, as represented by an arrow, may or may not drive the event processor into another state. The tail of an incident arrow connects to a prior state (i.e., branches out of a prior state), and the head of an incident arrow connects to a next state. If an incident listed in the incident table does not branch out from a state, then it is invalid for (i.e., it cannot occur in) that state. For example, it is not possible for a “Done-Upload” incident to occur in the “UBlackout” state.
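  • By way of illustration only, the transition diagram of FIG. 7 may be encoded as a simple transition table keyed by (state, incident) pairs. The following Python sketch is merely one such encoding; the names used (TRANSITIONS, EventProcessorFSM) are hypothetical and do not appear in the drawings, and an incident that does not branch out of the current state simply leaves the state unchanged.

    # Minimal encoding of the FIG. 7 transition diagram (names illustrative only).
    TRANSITIONS = {
        ("Initial-Upload", "Reboot"): "Initial-Upload",
        ("Initial-Upload", "Blackout"): "UBlackout",
        ("Initial-Upload", "Done-Upload"): "Regular-Backup",
        ("UBlackout", "Reconnected"): "Initial-Upload",
        ("Regular-Backup", "Blackout"): "PBlackout",
        ("Regular-Backup", "Reboot"): "Upward-Resync",
        ("Regular-Backup", "Begin-Recovery"): "Recovering-Frame",
        ("PBlackout", "Reconnected"): "Upward-Resync",
        ("Upward-Resync", "Blackout"): "PBlackout",
        ("Upward-Resync", "Done-Resync"): "Regular-Backup",
        ("Upward-Resync", "Begin-Recovery"): "Recovering-Frame",
        ("Recovering-Frame", "Done-Recovering-Frame"): "Recovering",
        ("Recovering", "Done-Recovered"): "Regular-Backup",
    }

    class EventProcessorFSM:
        def __init__(self, state="Initial-Upload"):
            self.state = state

        def on_incident(self, incident):
            # An incident that does not branch out of the current state is invalid
            # for that state; the state is simply left unchanged.
            self.state = TRANSITIONS.get((self.state, incident), self.state)
            return self.state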
  • With reference now to FIGS. 6-7, the inventive data protection service is initiated on a data source in a host server as follows. As illustrated in FIG. 6, it is assumed that a control agent 604 has created a data agent 606 having an event processor that outputs the event journal data stream, as has been described. At this point, the event processor in the data agent 606 is transitioned to a first state, which is called “Initial-Upload” for illustrative purposes. During the “Initial-Upload” state 702, the event processor self-generates upload events, and it also receives other raw events from its associated event queue. The event processor simultaneously uploads the initial baseline data source, and it backs up the on-going changes from the application. Preferably, only change events for data already uploaded are sent to the DMS. The event processor also manages data that is dirty or out-of-sync, as indicated in a given data structure. In particular, a representative data structure is a “sorted” source tree, which is a list (sorted using an appropriate sort technique) that includes, for example, an entry per data item. The list preferably also includes an indicator or flag specifying whether a given data item is uploaded or not, as well as whether the item is in-(or out-of) sync with the data in the DMS. As will be seen, the event processor performs resynchronization on the items that are out-of-sync. As indicated in FIG. 7, a “Reboot” incident that occurs when the state machine is in state 702 does not change the state of the event processor; rather, the event processor simply continues processing from where it left off. In contrast, a “Blackout” incident transitions the event processor to a state 704 called (for illustration only) “UBlackout.” This is a blackout state that occurs as the event processor uploads the initial baseline data source, or as the event processor is backing up the on-going changes from the application. The state 704 changes back to the “Initial-Upload” state 702 when a so-called “Reconnected” incident occurs.
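  • By way of illustration only, the sorted source tree bookkeeping described above might be sketched as follows. The names (SourceItem, SortedSourceTree, should_stream_update) are hypothetical; the sketch simply keeps one entry per data item with "uploaded" and "in sync" flags and filters the change events accordingly.

    from dataclasses import dataclass

    @dataclass
    class SourceItem:
        path: str
        uploaded: bool = False   # has the baseline copy of this item reached the DMS?
        in_sync: bool = True     # is the host copy in sync with the DMS copy?

    class SortedSourceTree:
        def __init__(self, paths):
            # One entry per data item, kept in sorted order.
            self.items = {p: SourceItem(p) for p in sorted(paths)}

        def mark_uploaded(self, path):
            self.items[path].uploaded = True

        def mark_dirty(self, path):
            self.items[path].in_sync = False

        def should_stream_update(self, path):
            # Only change events for data already uploaded are sent to the DMS;
            # updates to not-yet-uploaded items are dropped (they will be carried
            # along when the item itself is uploaded).
            item = self.items.get(path)
            return item is not None and item.uploaded

        def out_of_sync(self):
            return [i.path for i in self.items.values() if not i.in_sync]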
  • When upload is completed and all the data is synchronized with the data in the DMS, the event processor generates a “Done-upload” incident, which causes the event processor to move to a new state 706. This new state is called “Regular-backup” for illustrative purposes. During the regular backup state 706, the event processor processes all the raw events from the event queue, and it generates a meaningful checkpoint real time event journal stream to the DMS for maintaining the data history. This operation has been described above. As illustrated in the state transition diagram, the event processor exits its regular backup state 706 under one of three (3) conditions: a blackout incident, a reboot incident, or a begin recovery incident. Thus, if during regular backup a “Blackout” incident occurs, the state of the event processor transitions from state 706 to a new state 708, which is called “PBlackout” for illustration purposes. This is a blackout state that occurs during regular backup. If, however, during regular backup, a “Reboot” incident occurs, the event processor transitions to a different state 710, which is called “Upward-Resync” for illustrative purposes. The upward resynchronization state 710 is also reached from state 708 upon a Reconnected incident during the latter state. Upward resynchronization is a state that is entered when there is a suspicion that the state of the data in the host is out-of-sync with the state of the most current data in the DMS. For this transition, it should also be known that the data in the host server is not corrupted. Thus, a transition from state 706 to state 710 occurs because, after “Reboot,” the event processor does not know if the data state of the host is identical with the state of the data in DMS. During the “Upward-Resync” state 710, whether the state is reached from state 706 or state 708, the event processor synchronizes the state of the DMS data to the state of the host data (in other words, it brings the DMS data to the same state as the host data). During this time, update events (to the already synchronized data items) are continuously forwarded to the DMS as a real time event stream. When the resynchronization is completed, the data state at both the host and the DMS are identical, and thus a “Done-Resync” incident is generated. This incident transitions the event processor back to the “Regular-backup” state 706. Alternatively, with the event processor in the Upward-Resync state 710, a “Begin-Recovery” incident transitions the event processor to yet another new state 712, which is referred to as “Recovering-frame” for illustration purposes.
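  • By way of illustration only, the upward resynchronization behavior (compare only the dirty items against their DMS copies and forward the deltas) might be sketched as follows. The callables passed in are hypothetical stand-ins for reading the host data, reading the most current DMS copy, and emitting journal events.

    def upward_resync(dirty_items, read_host_item, read_dms_item, send_event):
        # Compare each out-of-sync item with its most current DMS copy and forward
        # only the differences, then signal the Done-Resync incident.
        for path in dirty_items:
            host_copy = read_host_item(path)
            if host_copy != read_dms_item(path):
                send_event({"event": "checkpoint-delta", "path": path, "data": host_copy})
        send_event({"event": "done-resync"})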
  • In particular, once the baseline data is uploaded to the DMS, data history is streamed into the DMS continuously, preferably as a real time event journal. An authorized user can invoke a recovery at any of the states when the host server is connected to the DMS core, namely, during the “Regular-backup” and “Upward-resync” states 706 and 710. If the authorized user does so, a “Begin-recovery” incident occurs, which drives the event processor state to the “Recovering-frame” state 712.
  • During the “Recovering-frame” state 712, the event processor reconstructs the sorted source tree, which (as noted above) contains structural information of the data to be recovered. During state 712, and depending on the underlying data, the application may or may not be able to access the data. Once the data structure is recovered, a “Done-Recovering-Frame” incident is generated, which then transitions the event processor to a new state 714, referred to as “Recovering” for illustration purposes. Before the data structure is recovered, incidents such as “Blackout,” “Reconnected,” and “Reboot” do not change the state of the event processor. During the “Recovering” state 714, the event processor recovers the actual data from the DMS, preferably a data point at a time. It also recovers data as an application access request arrives to enable the application to continue running. During state 714, application update events are streamed to the DMS so that history continues to be maintained, even as the event processor is recovering the data in the host. When data recovery is completed, once again the state of the data (at both ends of the stream) is synchronized, and the corruption at the host is fixed. Thus, a so-called “Done-recovered” incident is generated, and the event processor transitions back to the “Regular-backup” state 706.
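  • By way of illustration only, the on-demand behavior of the “Recovering” state might be sketched as follows. Here host_data, dms_data and recovered are hypothetical stand-ins for primary storage, the point-in-time copy held by the DMS, and the per-item "recovered" flag kept in the sorted source tree.

    def serve_read(path, host_data, dms_data, recovered):
        # If the requested item has not been recovered yet, pull it from the DMS
        # first so the application can keep running while recovery proceeds.
        if path not in recovered:
            host_data[path] = dms_data[path]
            recovered.add(path)
        return host_data[path]

    def apply_update(path, new_value, host_data, recovered, send_event):
        # Application updates are applied to primary storage, streamed to the DMS
        # as part of the event journal, and the item is marked recovered so the
        # fresh write is not later overwritten by older data from the DMS.
        host_data[path] = new_value
        recovered.add(path)
        send_event({"event": "update", "path": path, "data": new_value})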
  • During the “UBlackout” or the “PBlackout” states (704 or 708), the event processor marks the updated data item as dirty or out-of-sync in its sorted source tree.
  • Processing continues in a cycle (theoretically without end), with the event processor transitioning from state-to-state as given incidents (as described above) occur. The above described incidents, of course, are merely representative.
  • Although not indicated in the state transition diagram, a “termination” incident may be introduced to terminate the data protection service at a given state. In particular, a termination incident may apply to a given state, or more generally, to any given state, in which latter case the event processor is transitioned (from its then-current state) to a terminated state. This releases the data agent and its event processor from further provision of the data protection service.
  • The following tables provide additional details of a preferred implementation of the event processor data protection state table 702 and the event processor data protection incident table 704.
    State Table:

    Initial-upload: When a data protection command is forwarded from a control agent to a data agent, Initial-Upload is the entrance state of the event processor. At this state, the event processor gathers the list of data items of the data source to be protected to create a data list and then, one at a time, moves the data to create the initial baseline data on a DMS region through a DMS core. The data list is called the sorted source tree. The upload is a stream of granular application-aware data chunks that are attached to upload events.
    During this phase, the application does not have to be shut down. Simultaneously, while the baseline is uploading and as the application updates the data on the host, the checkpoint granular data, metadata, and data events are continuously streamed, in real time, into the DMS core. The update events for data that are not yet uploaded are dropped; preferably, only the update events for data already uploaded are streamed to the DMS.
    The DMS core receives the real-time event journal stream, which includes the baseline upload events and the change events. It processes these events and organizes the data to maintain their history in the DMS persistent storage.
    If the DMS fails while processing an upload or an update data event, preferably a failure event is forwarded back to the data agent and entered into the queue as a protocol-specific event. The event processor marks the target item associated with the failure “dirty” (or out-of-sync) and then performs data synchronization with the DMS on that target item.

    UBlackout: This is a blackout state during Initial-upload. A blackout occurs when the connection from a data agent to a DMS core fails. This failure may be caused by a network failure, or by a DMS node failure.
    During this state, the application continues to run; it updates the data, and the updates are captured asynchronously by the I/O filter. The event processor records (within the sorted source tree, for example) which application-aware data items have changed (i.e., are dirty or out-of-sync). An application-aware data item may be a file, a transaction, a record, an email, or the like. Although these items are opaque to the event processor, they are meaningful as a unit to their application.
    If the DMS fails while processing an upload or an update data event, preferably a failure event is forwarded back to the data agent and entered into the queue as a protocol-specific event. The event processor marks the target item associated with the failure “dirty” (or out-of-sync) and then performs data synchronization with the DMS on that target item.

    Regular-backup: This is the regular backup state entered when the upload is completed. In this state, the latest data state in the DMS is identical with the state of the data in the host server (when there is no failure).
    During this phase, as the application accesses its data, a real-time, continuous event journal is streamed into the DMS core. The DMS core receives the real-time event journal stream, processes these events, and organizes the data in the DMS persistent storage to maintain their history.

    PBlackout: This is a blackout state that occurs during Regular-backup or Upward-resync. A blackout occurs when the connection from a data agent to a DMS core fails. This failure may be caused by a network failure, or by a DMS node failure.
    As in UBlackout, during this state the application continues to update the data, the updates are captured asynchronously by the I/O filter, and the event processor simply records (e.g., in the sorted source tree) which application-aware data items have changed.

    Upward-resync: This state is entered when there is a suspicion that the state of the data in the host is out-of-sync with the state of the most current data in the DMS, and it is also known that the data in the host server is not corrupted.
    This state is entered after a blackout during which data in the host is changed, or after a host server is rebooted and the state of the most current data at the DMS is unknown.
    During this state, it is assumed that the host server data is good and is more current than the latest data in the DMS. If the event processor has kept track of the updated (dirty) data at the host server during a blackout, preferably it only compares that data with the corresponding copy in the DMS; it then sends to the DMS the deltas (e.g., as checkpoint delta events). If, in the case of a host server reboot, the dirty data are not known, preferably the event processor goes over the entire data source, re-creates a sorted source tree, and then compares each and every individual data item, sending delta events to the DMS when necessary.
    During this phase, the application does not have to be shut down. Upward resynchronization occurs simultaneously while the application is accessing and updating the data in the primary storage. The update events for the data objects that are dirty and are not yet re-synchronized preferably are dropped; the other events are processed. The event processor tracks both the resynchronization and update activities accordingly and outputs a real-time event journal stream to the DMS core.
    The DMS core receives the real-time event journal stream, which includes requests for data checkpoints, resynchronization delta events, and the change events. The DMS core processes these events and organizes the data in the DMS persistent storage to maintain their history.

    Recovering-frame: Recovery is initiated by an authorized user who identifies that the primary copy of the data in the host server has become incorrect or corrupted. A recovery can be applied to an entire data source, or to a subset of a data source.
    When a recovery initiative is handled in a DMS core, the DMS core immediately freezes and terminates the backup process of the target data to be recovered, e.g., by sending a recovery command either directly to the data agent or to the control agent. In an illustrative embodiment, the DMS core may also adjust its most current data state to bring the target history to be recovered forward as the most current state. For example, if a file has four versions (v4, v3, v2, v1), and an authorized user wants to recover to version 2, the DMS core creates a version 5 whose content is identical to version 2, i.e., (v5 = v2, v4, v3, v2, v1).
    Recovering-frame is an entrance state into data recovery at the host server. During this state, the event processor first instructs the I/O filter to filter the READ requests synchronously so that it can participate in the handling of data access requests. It also preferably instructs the I/O filter to fail all the WRITE requests by returning an error to the caller. When READ requests arrive, depending on the requesting target, the event processor may serve the data or fail the request.
    Simultaneously, the event processor gets from the DMS core the list of the data items at the specific point-in-time to be recovered and constructs a recovery list, e.g., a new sorted source tree. Once the list is in place, the event processor first uses the list to recover the data structure in the primary storage, and then transitions into the Recovering state.

    Recovering: This is the next state of the recovery process. After Recovering-frame is completed, the event processor will have already recovered the data structure in the primary storage.
    During the Recovering state, the event processor re-configures the I/O filter to filter all the READ and WRITE events synchronously so that it can participate in handling data access. The event processor also begins recovering the actual data, e.g., by going down the new sorted source tree one item at a time to request the data, or the delta to apply to its corrupted data.
    When an access request for data that has not been recovered (which can be detected using the sorted source tree) arrives, the event processor immediately recovers the requested data.
    When update events arrive, the event processor processes the data and sends the real-time event journal to the DMS for backup. The update events also pass down to the primary storage. The event processor must also mark the item as recovered so that the most recent data does not get overwritten by data from the DMS.
    This type of recovery is called Virtual-On-Demand recovery; it allows recovery to happen simultaneously while an application accesses and updates the recovering data.
    If the state of the DMS data is adjusted prior to the host recovery, then only the stream of backup events needs to be applied to the data in the DMS. If the data state at the DMS is not adjusted prior to recovery, then, as the recovering data overwrites the host data, the recovery events must be shipped back to the DMS along with the most current application data update events to adjust the data state of the DMS data.
    Incident Table:

    Blackout: Environment changed. The network connection is interrupted or the connected DMS core goes down.
    Reconnected: Environment changed. The network connection to a DMS core is resumed and the data service can be continued.
    Done-upload: Data state in the DMS changed. A baseline copy of the entire data source is fully uploaded to the DMS.
    Reboot: Environment changed. The host server or the data agent is restarted.
    Done-resync: Data state in the DMS changed. From this incident onward, the state of the data in the host server is synchronized with the most current point-in-time data in the DMS.
    Begin-recovery: Data state changed as initiated by a user. The state of the data (entire or partial) at the host server is incorrect or corrupted and has to be recovered using former point-in-time data in the DMS.
    Done-recovering-frame: Data state in the host server changed. From this point on, the structure of the host data is recovered to the intended recovery point-in-time.
    Done-recovered: Data state in the host server changed. From this point on, the data state of the host server is fully recovered and fully synchronized with the most current point-in-time data state of the data source at the DMS.

    Variants:
  • The finite state switching mechanism as described above may be varied. It may be implemented by breaking up a given state (as described) into multiple smaller states, or by combining two or more states into a more complex state. In addition, one of ordinary skill in the art will appreciate that some of the incidents and behaviors may be adjusted and/or re-ordered to achieve the same goal of providing the continuous, real-time, substantially no downtime data protection service. Thus, for example, the “UBlackout” state may be combined with the “Initial-upload” state into one state that manages data uploads, data updates, and that is aware of the connection status. The “Recovering-frame” state may be combined with the “Recovering” state into one state that performs data structure and data recovery as a process. The “PBlackout” state may be combined with the “Regular-backup” state. The “PBlackout” state may also be combined with the “Upward-resync” state. All three states “PBlackout,” “Regular-backup” and “Upward-resync” may be merged into one state that has a process to carry out the combined functions. Also, the “Initial-upload” state may be split into two states with the new state being the target state of the “Reconnected” incident after the “UBlackout” state. This new upload state may include a process to compare the DMS and host data, and this state may be connected back to the “Initial-upload” state through a new incident, such as “Done-compare.” There may also be a new state that handles data comparison from the “Initial-upload” state after the “Reboot” incident, and that new state would be connected back to “Initial-upload” via a new incident, such as “Done-compare.” As another variant, each of the “Recovering-frame” and “Recovering” states may also be split into two states, with the new states being used to handle data comparison after the “Reconnected” or “Reboot” incidents, as the case may be.
  • As can be seen, the finite state machine illustrated in the embodiment of FIG. 7 should not be taken to limit the present invention, although it is a desirable implementation. More generally, the finite state machine may be implemented in any convenient manner in which the initial data upload, continuous backup, data resynchronization and data recovery can be seen to comprise an integrated data protection cycle provided to the data source without (at the same time) interrupting the application aware, real-time event data stream that is being generated by the data agent. Thus, any finite state machine (FSM) or similar process or structure that protects the data source without interrupting the application aware, real-time data stream, e.g., by continuously transitioning among a set of connected operating states, may be deemed to be within the scope of the present invention. As noted above, these operating states typically include several or all of the following: initial data upload, continuous backup, data resynchronization, and data recovery.
  • One of ordinary skill will also appreciate that the finite state machine may be entered at states other than Initial-upload. Thus, for example, an IT administrator may use a new server to recover a data source, and then have the new server act as the master server where the application runs. The DMS continues to protect the data. In such a case, another entry point into the state diagram would then exist, and that entry point may be an incident labeled (for illustrative purposes only) “Recover and Begin Data Protection.” In this scenario, the new server enters the FSM at “Recovering-Frame” and then transitions to “Recovering” and then “Regular-Backup,” as previously described. As another example, assume an IT administrator makes a copy of the data on a new server and now wishes to provide (via DMS) data protection to that data source with respect to that new server. In this scenario, the entry point to the FSM may be state 706 (Regular-Backup), or state 708 (Upward-Resync). Thus, as these examples illustrate, more generally the finite state machine may be entered at any convenient state as determined by the user and the desired data protection goal.
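  • By way of illustration only, and reusing the hypothetical EventProcessorFSM sketch given earlier in connection with FIG. 7, an alternate entry point simply corresponds to starting the machine in a different initial state.

    # Hypothetical usage: a new server that recovers the data source and then
    # takes over regular protection enters the machine at Recovering-Frame.
    fsm = EventProcessorFSM(state="Recovering-Frame")
    fsm.on_incident("Done-Recovering-Frame")   # -> "Recovering"
    fsm.on_incident("Done-Recovered")          # -> "Regular-Backup"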
  • Unlike a conventional data protection system the data protection service provided by DMS is automated, real-time, and continuous, and it exhibits no or substantially no downtime. This is because DMS is keeping track of the real-time data history, and because preferably the state of the most current data in a DMS region, cluster or node (as the case may be) must match the state of the data in the original host server at all times. In contrast, data recovery on a conventional data protection system means shutting down a host server, selecting a version of the data history, copying the data history back to the host server, and then turning on the host server. All of these steps are manually driven. After a period of time, the conventional data protection system then performs a backup on the changed data. In the present invention, as has been described above, the otherwise separate processes (initial data upload, continuous backup, blackout and data resynchronization, and recovery) are simply phases of the overall data protection cycle. This is highly advantageous, and it is enabled because DMS keeps a continuous data history. Stated another way, there is no gap in the data. The data protection cycle preferably loops around indefinitely until, for example, a user terminates the service. A given data protection phase (the state) changes as the state of the data and the environment change (the incident). Preferably, all of the phases (states) are interconnected to form a finite state machine that provides the data protection service.
  • The data protection service provided by the DMS has no effective downtime because the data upload, data resynchronization, data recovery and data backup are simply integrated phases of a data protection cycle. There is no application downtime.
  • The present invention has numerous advantages over the prior art such as tape backup, disk backup, volume replication, storage snapshots, application replication, remote replication, and manual recovery. Indeed, existing fragmented approaches are complex, resource inefficient, expensive to operate, and often unreliable. From an architectural standpoint, they are not well suited to scaling to support heterogeneous, enterprise-wide data management. The present invention overcomes these and other problems of the prior art by providing real-time data management services. As has been described, the invention transparently and efficiently captures the real-time continuous history of all or substantially all transactions and data changes in the enterprise. The solution operates over local and wide area IP networks to form a coherent data management, protection and recovery infrastructure. It eliminates data loss, reduces downtime, and ensures application consistent recovery to any point in time. These and other advantages are provided through the use of an application aware I/O driver that captures and outputs a continuous data stream, in the form of an event journal, to other data management nodes in the system.
  • As one of ordinary skill in the art will appreciate, the present invention addresses enterprise data protection and data management problems by continuously protecting all data changes and transactions in real time across local and wide area networks. Preferably, and as illustrated in FIG. 1, the method and system of the invention take advantage of inexpensive, commodity processors to efficiently parallel process and route application-aware data changes between applications and low cost near storage.
  • While the present invention has been described in the context of a method or process, the present invention also relates to apparatus for performing the operations herein. In an illustrated embodiment, the apparatus is implemented as a processor and associated program code that implements a finite state machine with a plurality of states and that effects transitions between the states. As described above, this apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
  • While the above written description also describes a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary, as alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, or the like. References in the specification to a given embodiment indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic.

Claims (28)

1. Apparatus comprising a processor and program code to implement a finite state machine with a plurality of states and to effect transitions between the states, the finite state machine providing a data protection service to a data source associated with a host and comprising:
a first state for acting upon an upload event;
a second state for acting upon a backup event, the finite state machine transitioning from the first state to the second state upon the occurrence of a given incident;
a third state for acting upon a resynchronization event, the finite state machine transitioning from the second state to the third state upon the occurrence of a given incident;
a fourth state for acting upon a recovery initiation event, the finite state machine transitioning to the fourth state from either the second state or the third state upon the occurrence of a given incident; and
a fifth state for acting upon a recovery event, the finite state machine transitioning from the fourth state to the fifth state upon the occurrence of a given incident;
wherein, upon completion of the recovery event, the finite state machine transitions from the fifth state back to the second state upon the occurrence of a given incident.
2. The apparatus as described in claim 1 further including a sixth state for acting upon a blackout event that occurs while the finite state machine is in the first state, wherein, upon occurrence of a given incident, the finite state machine transitions from the first state to the sixth state.
3. The apparatus as described in claim 2 wherein the given incident that causes the finite state machine to transition from the first state to the sixth state is a blackout.
4. The apparatus as described in claim 2 wherein the finite state machine transitions from the sixth state back to the first state upon the occurrence of a given incident while the finite state machine is in the sixth state.
5. The apparatus as described in claim 4 wherein the given incident that causes the finite state machine to transition from the sixth state back to the first state is a reconnection.
6. The apparatus as described in claim 1 further including a seventh state for acting upon a blackout event that occurs while the finite state machine is in either the second state or the third state.
7. The apparatus as described in claim 6 wherein the finite state machine transitions from the second state to the seventh state upon occurrence of a given incident, wherein the given incident that causes this transition is a blackout that occurs while the finite state machine is in the second state.
8. The apparatus as described in claim 6 wherein the finite state machine transitions from the third state to the seventh state upon occurrence of a given incident, wherein the given incident that causes this transition is a blackout that occurs while the finite state machine is in the third state.
9. The apparatus as described in claim 8 wherein the finite state machine transitions from the seventh state back to the third state upon the occurrence of a given incident while the finite state machine is in the seventh state.
10. The apparatus as described in claim 9 wherein the given incident that causes the finite state machine to transition from the seventh state back to the third state is a reconnection.
11. The apparatus as described in claim 1 wherein the given incident that causes the finite state machine to transition from the first state to the second state while the finite state machine is in the first state is completion of an upload of a baseline copy of the data source.
12. The apparatus as described in claim 1 wherein the given incident that causes the finite state machine to transition from the second state to the third state while the finite state machine is in the second state is a reboot.
13. The apparatus as described in claim 1 wherein the given incident that causes the finite state machine to transition from the second state to the fourth state while the finite state machine is in the second state is a request to initiate a recovery.
14. The apparatus as described in claim 1 wherein the given incident that causes the finite state machine to transition from the third state to the fourth state while the finite state machine is in the third state is a request to initiate a recovery.
15. The apparatus as described in claim 1 wherein the finite state machine transitions from the third state back to the second state upon the occurrence of a given incident, wherein the given incident that causes this transition is an indication that given data is synchronized.
16. The apparatus as described in claim 1 wherein the given incident that causes the finite state machine to transition from the fourth state to the fifth state while the finite state machine is in the fourth state is an indication that given data recovery is initiated.
17. The apparatus as described in claim 1 wherein the given incident that causes the finite state machine to transition from the fifth state back to the second state while the finite state machine is in the fifth state is an indication that given data recovery is completed.
18. A method of protecting a data source associated with a host, comprising:
generating an application aware, real-time event data stream; and
protecting the data source without interrupting the application aware, real-time event data stream, wherein data protection is provided by continuously transitioning among a set of states.
19. The method as described in claim 18 wherein the set of states comprises:
a first state for acting upon an upload event;
a second state for acting upon a backup event, the first state transitioning to the second state upon the occurrence of a given incident;
a third state for acting upon a resynchronization event, the second state transitioning to the third state upon the occurrence of a given incident;
a fourth state for acting upon a recovery initiation event, the fourth state being reached by a transition from either the second state or the third state upon the occurrence of a given incident; and
a fifth state for acting upon a recovery event, the fourth state transitioning to the fifth state upon the occurrence of a given incident;
wherein, upon completion of the recovery event, the fifth state transitions back to the second state upon the occurrence of a given incident.
20. The method as described in claim 19 further including a sixth state for acting upon a blackout event that occurs during the first state, wherein, upon occurrence of a given incident, the first state transitions to the sixth state.
21. The method as described in claim 19 further including a seventh state for acting upon a blackout event that occurs during either the second state or the third state, wherein, upon occurrence of a given incident, the second state or the third state, as the case may be, transitions to the seventh state.
22. A method of protecting a data source associated with a host, comprising:
generating from the data source an application aware, real-time event data stream; and
providing the data source initial data upload, continuous backup, data resynchronization and data recovery as phases of an integrated data protection cycle without interrupting the application aware, real-time event data stream.
23. A system for providing a data protection service to a data source associated with a host, comprising:
program code executable in a processor that generates an application aware, real-time event data stream associated with the data source; and
program code executable in a processor to implement a finite state machine, the finite state machine protecting the data source without interrupting the application aware, real-time event data stream by continuously transitioning among a set of states.
24. The system as described in claim 23 wherein the set of states comprises initial data upload, continuous backup, data resynchronization and data recovery.
25. The system as described in claim 24 wherein the states are phases of an integrated data protection cycle for the data source.
26. The system as described in claim 24 wherein the finite state machine has a given entry state.
27. The system as described in claim 26 wherein the given entry state is an initial data upload state.
28. The system as described in claim 23 wherein the finite state machine has a given entry state, wherein the given entry state is selected from a set of states that include: upload, backup, resynchronization and recovery.
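For orientation only, the finite state machine recited in claims 1-17 can be read as a simple transition table. The following is a minimal, illustrative Python sketch and is not part of the claimed subject matter; the State, Event, and function names are hypothetical labels for the first through seventh states and for the incidents named in the claims (upload completion, reboot, synchronization, recovery request, recovery initiation, recovery completion, blackout, reconnection).

```python
from enum import Enum, auto

class State(Enum):
    UPLOAD = auto()           # first state: initial baseline data upload
    BACKUP = auto()           # second state: continuous backup
    RESYNC = auto()           # third state: data resynchronization
    RECOVERY_INIT = auto()    # fourth state: recovery initiation
    RECOVERY = auto()         # fifth state: recovery
    BLACKOUT_UPLOAD = auto()  # sixth state: blackout during upload (claim 2)
    BLACKOUT_BACKUP = auto()  # seventh state: blackout during backup/resync (claim 6)

class Event(Enum):
    UPLOAD_DONE = auto()      # baseline copy upload completed (claim 11)
    REBOOT = auto()           # reboot of the host (claim 12)
    SYNCHRONIZED = auto()     # given data is synchronized (claim 15)
    RECOVER_REQUEST = auto()  # request to initiate a recovery (claims 13, 14)
    RECOVERY_STARTED = auto() # data recovery initiated (claim 16)
    RECOVERY_DONE = auto()    # data recovery completed (claim 17)
    BLACKOUT = auto()         # blackout incident (claims 3, 7, 8)
    RECONNECT = auto()        # reconnection incident (claims 5, 10)

# Transition table: (current state, incident) -> next state.
TRANSITIONS = {
    (State.UPLOAD, Event.UPLOAD_DONE): State.BACKUP,
    (State.UPLOAD, Event.BLACKOUT): State.BLACKOUT_UPLOAD,
    (State.BLACKOUT_UPLOAD, Event.RECONNECT): State.UPLOAD,
    (State.BACKUP, Event.REBOOT): State.RESYNC,
    (State.BACKUP, Event.RECOVER_REQUEST): State.RECOVERY_INIT,
    (State.BACKUP, Event.BLACKOUT): State.BLACKOUT_BACKUP,
    (State.RESYNC, Event.SYNCHRONIZED): State.BACKUP,
    (State.RESYNC, Event.RECOVER_REQUEST): State.RECOVERY_INIT,
    (State.RESYNC, Event.BLACKOUT): State.BLACKOUT_BACKUP,
    (State.BLACKOUT_BACKUP, Event.RECONNECT): State.RESYNC,
    (State.RECOVERY_INIT, Event.RECOVERY_STARTED): State.RECOVERY,
    (State.RECOVERY, Event.RECOVERY_DONE): State.BACKUP,
}

def next_state(state: State, event: Event) -> State:
    """Return the next state; an incident with no defined transition leaves the machine in place."""
    return TRANSITIONS.get((state, event), state)
```

In this reading, a blackout during the initial upload parks the machine in the sixth state until reconnection returns it to the upload state, while a blackout during backup or resynchronization parks it in the seventh state and, per claims 9 and 10, reconnection resumes in the resynchronization state before normal backup continues.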
US10/841,398 2004-05-07 2004-05-07 Method and system for automated, no downtime, real-time, continuous data protection Active 2024-05-10 US7096392B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US10/841,398 US7096392B2 (en) 2004-05-07 2004-05-07 Method and system for automated, no downtime, real-time, continuous data protection
EP05742226A EP1745059A4 (en) 2004-05-07 2005-05-05 Method and system for automated, no downtime, real-time, continuous data protection
PCT/US2005/015651 WO2005111051A1 (en) 2004-05-07 2005-05-05 Method and system for automated, no downtime, real-time, continuous data protection
US11/507,257 US7363549B2 (en) 2004-05-07 2006-08-21 Method and system for automated, no downtime, real-time, continuous data protection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/841,398 US7096392B2 (en) 2004-05-07 2004-05-07 Method and system for automated, no downtime, real-time, continuous data protection

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/507,257 Continuation US7363549B2 (en) 2004-05-07 2006-08-21 Method and system for automated, no downtime, real-time, continuous data protection

Publications (2)

Publication Number Publication Date
US20050262377A1 true US20050262377A1 (en) 2005-11-24
US7096392B2 US7096392B2 (en) 2006-08-22

Family

ID=35376611

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/841,398 Active 2024-05-10 US7096392B2 (en) 2004-05-07 2004-05-07 Method and system for automated, no downtime, real-time, continuous data protection
US11/507,257 Active US7363549B2 (en) 2004-05-07 2006-08-21 Method and system for automated, no downtime, real-time, continuous data protection

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/507,257 Active US7363549B2 (en) 2004-05-07 2006-08-21 Method and system for automated, no downtime, real-time, continuous data protection

Country Status (3)

Country Link
US (2) US7096392B2 (en)
EP (1) EP1745059A4 (en)
WO (1) WO2005111051A1 (en)

Cited By (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050076262A1 (en) * 2003-09-23 2005-04-07 Revivio, Inc. Storage management device
US20060010227A1 (en) * 2004-06-01 2006-01-12 Rajeev Atluri Methods and apparatus for accessing data from a primary data storage system for secondary storage
US20060015584A1 (en) * 2004-07-13 2006-01-19 Teneros, Inc. Autonomous service appliance
US20060015764A1 (en) * 2004-07-13 2006-01-19 Teneros, Inc. Transparent service provider
US20060015645A1 (en) * 2004-07-13 2006-01-19 Teneros, Inc. Network traffic routing
US20060031468A1 (en) * 2004-06-01 2006-02-09 Rajeev Atluri Secondary data storage and recovery system
US20070244938A1 (en) * 2006-04-17 2007-10-18 Microsoft Corporation Creating host-level application-consistent backups of virtual machines
US20070271304A1 (en) * 2006-05-19 2007-11-22 Inmage Systems, Inc. Method and system of tiered quiescing
US20070271428A1 (en) * 2006-05-19 2007-11-22 Inmage Systems, Inc. Method and apparatus of continuous data backup and access using virtual machines
US20070276885A1 (en) * 2006-05-29 2007-11-29 Microsoft Corporation Creating frequent application-consistent backups efficiently
US20070282921A1 (en) * 2006-05-22 2007-12-06 Inmage Systems, Inc. Recovery point data view shift through a direction-agnostic roll algorithm
US7325161B1 (en) 2004-06-30 2008-01-29 Symantec Operating Corporation Classification of recovery targets to enable automated protection setup
US20080033922A1 (en) * 2006-08-04 2008-02-07 Pavel Cisler Searching a backup archive
US20080034018A1 (en) * 2006-08-04 2008-02-07 Pavel Cisler Managing backup of content
US20080059894A1 (en) * 2006-08-04 2008-03-06 Pavel Cisler Conflict resolution in recovery of electronic data
US20080059542A1 (en) * 2006-08-30 2008-03-06 Inmage Systems, Inc. Ensuring data persistence and consistency in enterprise storage backup systems
US7360123B1 (en) * 2004-06-30 2008-04-15 Symantec Operating Corporation Conveying causal relationships between at least three dimensions of recovery management
US7360110B1 (en) 2004-06-30 2008-04-15 Symantec Operating Corporation Parameterization of dimensions of protection systems and uses thereof
US7363365B2 (en) 2004-07-13 2008-04-22 Teneros Inc. Autonomous service backup and migration
US20080243956A1 (en) * 2007-03-27 2008-10-02 Hitachi, Ltd. Management device and method for storage device executing cdp-based recovery
US20080307000A1 (en) * 2007-06-08 2008-12-11 Toby Charles Wood Paterson Electronic Backup of Applications
US20080313238A1 (en) * 2004-10-27 2008-12-18 International Business Machines Corporation Read-Copy Update System And Method
US20090307169A1 (en) * 2007-01-05 2009-12-10 International Business Machines Corporation Distributable Serializable Finite State Machine
US20090313503A1 (en) * 2004-06-01 2009-12-17 Rajeev Atluri Systems and methods of event driven recovery management
US20100023797A1 (en) * 2008-07-25 2010-01-28 Rajeev Atluri Sequencing technique to account for a clock error in a backup system
US7664983B2 (en) 2004-08-30 2010-02-16 Symantec Corporation Systems and methods for event driven recovery management
US20100049717A1 (en) * 2008-08-20 2010-02-25 Ryan Michael F Method and systems for sychronization of process control servers
US7725760B2 (en) 2003-09-23 2010-05-25 Symantec Operating Corporation Data storage system
US7730222B2 (en) 2004-08-24 2010-06-01 Symantec Operating System Processing storage-related I/O requests using binary tree data structures
US20100169591A1 (en) * 2005-09-16 2010-07-01 Rajeev Atluri Time ordered view of backup data on behalf of a host
US20100169282A1 (en) * 2004-06-01 2010-07-01 Rajeev Atluri Acquisition and write validation of data of a networked host node to perform secondary storage
US20100169466A1 (en) * 2008-12-26 2010-07-01 Rajeev Atluri Configuring hosts of a secondary data storage and recovery system
US20100169281A1 (en) * 2006-05-22 2010-07-01 Rajeev Atluri Coalescing and capturing data between events prior to and after a temporal window
US20100169587A1 (en) * 2005-09-16 2010-07-01 Rajeev Atluri Causation of a data read against a first storage system to optionally store a data write to preserve the version to allow viewing and recovery
US20100169592A1 (en) * 2008-12-26 2010-07-01 Rajeev Atluri Generating a recovery snapshot and creating a virtual view of the recovery snapshot
US20100169452A1 (en) * 2004-06-01 2010-07-01 Rajeev Atluri Causation of a data read operation against a first storage system by a server associated with a second storage system according to a host generated instruction
US20100169283A1 (en) * 2006-05-22 2010-07-01 Rajeev Atluri Recovery point data view formation with generation of a recovery view and a coalesce policy
US20100257403A1 (en) * 2009-04-03 2010-10-07 Microsoft Corporation Restoration of a system from a set of full and partial delta system snapshots across a distributed system
US7827362B2 (en) 2004-08-24 2010-11-02 Symantec Corporation Systems, apparatus, and methods for processing I/O requests
US7853566B2 (en) 2006-08-04 2010-12-14 Apple Inc. Navigation of electronic backups
US7856424B2 (en) 2006-08-04 2010-12-21 Apple Inc. User interface for backup management
US7860839B2 (en) 2006-08-04 2010-12-28 Apple Inc. Application-based backup-restore of electronic information
US7904428B2 (en) 2003-09-23 2011-03-08 Symantec Corporation Methods and apparatus for recording write requests directed to a data store
US7979656B2 (en) 2004-06-01 2011-07-12 Inmage Systems, Inc. Minimizing configuration changes in a fabric-based data protection solution
US7991748B2 (en) 2003-09-23 2011-08-02 Symantec Corporation Virtual data store creation and use
US8005793B1 (en) * 2006-04-18 2011-08-23 Netapp, Inc. Retaining persistent point in time data during volume migration
US8010900B2 (en) 2007-06-08 2011-08-30 Apple Inc. User interface for electronic backup
US8055613B1 (en) * 2008-04-29 2011-11-08 Netapp, Inc. Method and apparatus for efficiently detecting and logging file system changes
US8166415B2 (en) 2006-08-04 2012-04-24 Apple Inc. User interface for backup management
US8307004B2 (en) 2007-06-08 2012-11-06 Apple Inc. Manipulating electronic backups
US8311988B2 (en) * 2006-08-04 2012-11-13 Apple Inc. Consistent back up of electronic information
US8370853B2 (en) 2006-08-04 2013-02-05 Apple Inc. Event notification management
US8429425B2 (en) 2007-06-08 2013-04-23 Apple Inc. Electronic backup and restoration of encrypted data
US20130132340A1 (en) * 2010-08-02 2013-05-23 Beijing Lenovo Software Ltd. File synchronization method, electronic device and synchronization system
US8468136B2 (en) 2007-06-08 2013-06-18 Apple Inc. Efficient data backup
US8521973B2 (en) 2004-08-24 2013-08-27 Symantec Operating Corporation Systems and methods for providing a modification history for a location within a data store
US8725965B2 (en) 2007-06-08 2014-05-13 Apple Inc. System setup for electronic backup
US8745523B2 (en) 2007-06-08 2014-06-03 Apple Inc. Deletion in electronic backups
US20140379921A1 (en) * 2013-06-21 2014-12-25 Amazon Technologies, Inc. Resource silos at network-accessible services
US8943026B2 (en) 2011-01-14 2015-01-27 Apple Inc. Visual representation of a local backup
US8984029B2 (en) 2011-01-14 2015-03-17 Apple Inc. File system management
US9009115B2 (en) 2006-08-04 2015-04-14 Apple Inc. Restoring electronic information
US20160077798A1 (en) * 2014-09-16 2016-03-17 Salesforce.Com, Inc. In-memory buffer service
US20160239388A1 (en) * 2015-02-13 2016-08-18 Netapp, Inc. Managing multi-level backups into the cloud
US9454587B2 (en) 2007-06-08 2016-09-27 Apple Inc. Searching and restoring of backups
US9558078B2 (en) 2014-10-28 2017-01-31 Microsoft Technology Licensing, Llc Point in time database restore from storage snapshots
US9881018B2 (en) * 2014-08-14 2018-01-30 International Business Machines Corporation File management in thin provisioning storage environments
US11537476B2 (en) * 2020-03-25 2022-12-27 Sap Se Database management system backup and recovery management

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060047855A1 (en) 2004-05-13 2006-03-02 Microsoft Corporation Efficient chunking algorithm
US8108429B2 (en) * 2004-05-07 2012-01-31 Quest Software, Inc. System for moving real-time data events across a plurality of devices in a network for simultaneous data protection, replication, and access services
US7565661B2 (en) 2004-05-10 2009-07-21 Siew Yong Sim-Tang Method and system for real-time event journaling to provide enterprise data services
US7680834B1 (en) * 2004-06-08 2010-03-16 Bakbone Software, Inc. Method and system for no downtime resychronization for real-time, continuous data protection
US7519870B1 (en) * 2004-06-08 2009-04-14 Asempra Technologies, Inc. Method and system for no downtime, initial data upload for real-time, continuous data protection
US7567974B2 (en) 2004-09-09 2009-07-28 Microsoft Corporation Method, system, and apparatus for configuring a data protection system
US7865470B2 (en) * 2004-09-09 2011-01-04 Microsoft Corporation Method, system, and apparatus for translating logical information representative of physical data in a data protection system
US7487395B2 (en) * 2004-09-09 2009-02-03 Microsoft Corporation Method, system, and apparatus for creating an architectural model for generating robust and easy to manage data protection applications in a data protection system
US8145601B2 (en) 2004-09-09 2012-03-27 Microsoft Corporation Method, system, and apparatus for providing resilient data transfer in a data protection system
US7769709B2 (en) * 2004-09-09 2010-08-03 Microsoft Corporation Method, system, and apparatus for creating an archive routine for protecting data in a data protection system
US7979404B2 (en) 2004-09-17 2011-07-12 Quest Software, Inc. Extracting data changes and storing data history to allow for instantaneous access to and reconstruction of any point-in-time data
US7613787B2 (en) 2004-09-24 2009-11-03 Microsoft Corporation Efficient algorithm for finding candidate objects for remote differential compression
US7904913B2 (en) 2004-11-02 2011-03-08 Bakbone Software, Inc. Management interface for a system that provides automated, real-time, continuous data protection
US20060218435A1 (en) * 2005-03-24 2006-09-28 Microsoft Corporation Method and system for a consumer oriented backup
US7478278B2 (en) * 2005-04-14 2009-01-13 International Business Machines Corporation Template based parallel checkpointing in a massively parallel computer system
US7644046B1 (en) * 2005-06-23 2010-01-05 Hewlett-Packard Development Company, L.P. Method of estimating storage system cost
US7788521B1 (en) * 2005-07-20 2010-08-31 Bakbone Software, Inc. Method and system for virtual on-demand recovery for real-time, continuous data protection
US7689602B1 (en) 2005-07-20 2010-03-30 Bakbone Software, Inc. Method of creating hierarchical indices for a distributed object system
US7734954B2 (en) * 2007-01-03 2010-06-08 International Business Machines Corporation Method, computer program product, and system for providing a multi-tiered snapshot of virtual disks
US8131723B2 (en) 2007-03-30 2012-03-06 Quest Software, Inc. Recovering a file system to any point-in-time in the past with guaranteed structure, content consistency and integrity
US8364648B1 (en) 2007-04-09 2013-01-29 Quest Software, Inc. Recovering a database to any point-in-time in the past with guaranteed data consistency
US7613888B2 (en) * 2007-04-11 2009-11-03 International Business Machines Corporation Maintain owning application information of data for a data storage system
US7610459B2 (en) * 2007-04-11 2009-10-27 International Business Machines Corporation Maintain owning application information of data for a data storage system
US8032497B2 (en) * 2007-09-26 2011-10-04 International Business Machines Corporation Method and system providing extended and end-to-end data integrity through database and other system layers
US7802068B2 (en) * 2007-09-28 2010-09-21 Oracle America, Inc. Self-organizing heterogeneous distributed storage system
US8095865B2 (en) * 2007-11-21 2012-01-10 Microsoft Corporation Layout manager
US8041678B2 (en) * 2008-06-20 2011-10-18 Microsoft Corporation Integrated data availability and historical data protection
US7908515B1 (en) * 2008-09-29 2011-03-15 Emc Corporation Methods and apparatus for action regulation for continuous data replication systems
US8055937B2 (en) * 2008-12-22 2011-11-08 QuorumLabs, Inc. High availability and disaster recovery using virtualization
US8332365B2 (en) 2009-03-31 2012-12-11 Amazon Technologies, Inc. Cloning and recovery of data volumes
US8370306B1 (en) * 2009-11-13 2013-02-05 Symantec Corporation Systems and methods for recovering from continuous-data-protection blackouts
US8307243B2 (en) * 2010-02-01 2012-11-06 International Business Machines Corporation Parallel debugging in a massively parallel computing system
US8453145B1 (en) 2010-05-06 2013-05-28 Quest Software, Inc. Systems and methods for instant provisioning of virtual machine files
US9547562B1 (en) 2010-08-11 2017-01-17 Dell Software Inc. Boot restore system for rapidly restoring virtual machine backups
US8392378B2 (en) * 2010-12-09 2013-03-05 International Business Machines Corporation Efficient backup and restore of virtual input/output server (VIOS) cluster
US8578460B2 (en) 2011-05-23 2013-11-05 Microsoft Corporation Automating cloud service reconnections
US8510200B2 (en) 2011-12-02 2013-08-13 Spireon, Inc. Geospatial data based assessment of driver behavior
US10169822B2 (en) 2011-12-02 2019-01-01 Spireon, Inc. Insurance rate optimization through driver behavior monitoring
US9372762B2 (en) * 2011-12-08 2016-06-21 Veritas Technologies Llc Systems and methods for restoring application data
US9779379B2 (en) 2012-11-05 2017-10-03 Spireon, Inc. Container verification through an electrical receptacle and plug associated with a container and a transport vehicle of an intermodal freight transport system
US8933802B2 (en) 2012-11-05 2015-01-13 Spireon, Inc. Switch and actuator coupling in a chassis of a container associated with an intermodal freight transport system
US9069805B2 (en) 2012-11-16 2015-06-30 Sap Se Migration of business object data in parallel with productive business application usage
US9563655B2 (en) 2013-03-08 2017-02-07 Oracle International Corporation Zero and near-zero data loss database backup and recovery
US9639448B2 (en) 2013-06-27 2017-05-02 Sap Se Multi-version systems for zero downtime upgrades
US9110930B2 (en) * 2013-08-22 2015-08-18 International Business Machines Corporation Parallel application checkpoint image compression
US9779449B2 (en) 2013-08-30 2017-10-03 Spireon, Inc. Veracity determination through comparison of a geospatial location of a vehicle with a provided data
US9767424B2 (en) 2013-10-16 2017-09-19 Sap Se Zero downtime maintenance with maximum business functionality
US9436724B2 (en) 2013-10-21 2016-09-06 Sap Se Migrating data in tables in a database
US20150186991A1 (en) 2013-12-31 2015-07-02 David M. Meyer Creditor alert when a vehicle enters an impound lot
US9785510B1 (en) 2014-05-09 2017-10-10 Amazon Technologies, Inc. Variable data replication for storage implementing data backup
US9734021B1 (en) 2014-08-18 2017-08-15 Amazon Technologies, Inc. Visualizing restoration operation granularity for a database
US9632713B2 (en) * 2014-12-03 2017-04-25 Commvault Systems, Inc. Secondary storage editor
US9551788B2 (en) 2015-03-24 2017-01-24 Jim Epler Fleet pan to provide measurement and location of a stored transport item while maximizing space in an interior cavity of a trailer
US10567500B1 (en) 2015-12-21 2020-02-18 Amazon Technologies, Inc. Continuous backup of data in a distributed data store
US10423493B1 (en) 2015-12-21 2019-09-24 Amazon Technologies, Inc. Scalable log-based continuous data protection for distributed databases
US10853182B1 (en) 2015-12-21 2020-12-01 Amazon Technologies, Inc. Scalable log-based secondary indexes for non-relational databases
US11151078B2 (en) 2016-02-24 2021-10-19 Micro Focus Llc Structured data archival with reduced downtime
US10754844B1 (en) 2017-09-27 2020-08-25 Amazon Technologies, Inc. Efficient database snapshot generation
US10990581B1 (en) 2017-09-27 2021-04-27 Amazon Technologies, Inc. Tracking a size of a database change log
US11182372B1 (en) 2017-11-08 2021-11-23 Amazon Technologies, Inc. Tracking database partition change log dependencies
US11269731B1 (en) 2017-11-22 2022-03-08 Amazon Technologies, Inc. Continuous data protection
US11042503B1 (en) 2017-11-22 2021-06-22 Amazon Technologies, Inc. Continuous data protection and restoration
US10621049B1 (en) 2018-03-12 2020-04-14 Amazon Technologies, Inc. Consistent backups based on local node clock
US11126505B1 (en) 2018-08-10 2021-09-21 Amazon Technologies, Inc. Past-state backup generator and interface for database systems
US11042454B1 (en) 2018-11-20 2021-06-22 Amazon Technologies, Inc. Restoration of a data source

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5729743A (en) * 1995-11-17 1998-03-17 Deltatech Research, Inc. Computer apparatus and method for merging system deltas
US6366988B1 (en) * 1997-07-18 2002-04-02 Storactive, Inc. Systems and methods for electronic data storage management
US6393582B1 (en) * 1998-12-10 2002-05-21 Compaq Computer Corporation Error self-checking and recovery using lock-step processor pair architecture
US6460055B1 (en) * 1999-12-16 2002-10-01 Livevault Corporation Systems and methods for backing up data files
US20020144177A1 (en) * 1998-12-10 2002-10-03 Kondo Thomas J. System recovery from errors for processor and associated components
US6463565B1 (en) * 1999-01-05 2002-10-08 Netspeak Corporation Method for designing object-oriented table driven state machines
US20020147807A1 (en) * 1998-01-23 2002-10-10 Domenico Raguseo Dynamic redirection
US6487581B1 (en) * 1999-05-24 2002-11-26 Hewlett-Packard Company Apparatus and method for a multi-client event server
US20020178397A1 (en) * 2001-05-23 2002-11-28 Hitoshi Ueno System for managing layered network
US6526418B1 (en) * 1999-12-16 2003-02-25 Livevault Corporation Systems and methods for backing up data files
US6625623B1 (en) * 1999-12-16 2003-09-23 Livevault Corporation Systems and methods for backing up data files
US6751753B2 (en) * 2001-02-27 2004-06-15 Sun Microsystems, Inc. Method, system, and program for monitoring system components
US6779003B1 (en) * 1999-12-16 2004-08-17 Livevault Corporation Systems and methods for backing up data files
US6816872B1 (en) * 1990-04-26 2004-11-09 Timespring Software Corporation Apparatus and method for reconstructing a file from a difference signature and an original file
US6826711B2 (en) * 2000-02-18 2004-11-30 Avamar Technologies, Inc. System and method for data protection with multidimensional parity
US6836756B1 (en) * 2000-11-13 2004-12-28 Nortel Networks Limited Time simulation techniques to determine network availability
US6847984B1 (en) * 1999-12-16 2005-01-25 Livevault Corporation Systems and methods for backing up data files
US6907551B2 (en) * 2000-10-02 2005-06-14 Ntt Docomo, Inc. Fault notification method and related provider facility
US6993706B2 (en) * 2002-01-15 2006-01-31 International Business Machines Corporation Method, apparatus, and program for a state machine framework

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5794252A (en) * 1995-01-24 1998-08-11 Tandem Computers, Inc. Remote duplicate database facility featuring safe master audit trail (safeMAT) checkpointing
US6487561B1 (en) 1998-12-31 2002-11-26 Emc Corporation Apparatus and methods for copying, backing up, and restoring data using a backup segment size larger than the storage block size
US7386610B1 (en) * 2000-09-18 2008-06-10 Hewlett-Packard Development Company, L.P. Internet protocol data mirroring

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6816872B1 (en) * 1990-04-26 2004-11-09 Timespring Software Corporation Apparatus and method for reconstructing a file from a difference signature and an original file
US5729743A (en) * 1995-11-17 1998-03-17 Deltatech Research, Inc. Computer apparatus and method for merging system deltas
US5893119A (en) * 1995-11-17 1999-04-06 Deltatech Research, Inc. Computer apparatus and method for merging system deltas
US6366988B1 (en) * 1997-07-18 2002-04-02 Storactive, Inc. Systems and methods for electronic data storage management
US20020147807A1 (en) * 1998-01-23 2002-10-10 Domenico Raguseo Dynamic redirection
US6393582B1 (en) * 1998-12-10 2002-05-21 Compaq Computer Corporation Error self-checking and recovery using lock-step processor pair architecture
US20020144177A1 (en) * 1998-12-10 2002-10-03 Kondo Thomas J. System recovery from errors for processor and associated components
US6463565B1 (en) * 1999-01-05 2002-10-08 Netspeak Corporation Method for designing object-oriented table driven state machines
US6487581B1 (en) * 1999-05-24 2002-11-26 Hewlett-Packard Company Apparatus and method for a multi-client event server
US6526418B1 (en) * 1999-12-16 2003-02-25 Livevault Corporation Systems and methods for backing up data files
US6625623B1 (en) * 1999-12-16 2003-09-23 Livevault Corporation Systems and methods for backing up data files
US6779003B1 (en) * 1999-12-16 2004-08-17 Livevault Corporation Systems and methods for backing up data files
US6460055B1 (en) * 1999-12-16 2002-10-01 Livevault Corporation Systems and methods for backing up data files
US6847984B1 (en) * 1999-12-16 2005-01-25 Livevault Corporation Systems and methods for backing up data files
US6826711B2 (en) * 2000-02-18 2004-11-30 Avamar Technologies, Inc. System and method for data protection with multidimensional parity
US6907551B2 (en) * 2000-10-02 2005-06-14 Ntt Docomo, Inc. Fault notification method and related provider facility
US6836756B1 (en) * 2000-11-13 2004-12-28 Nortel Networks Limited Time simulation techniques to determine network availability
US6751753B2 (en) * 2001-02-27 2004-06-15 Sun Microsystems, Inc. Method, system, and program for monitoring system components
US20020178397A1 (en) * 2001-05-23 2002-11-28 Hitoshi Ueno System for managing layered network
US6993706B2 (en) * 2002-01-15 2006-01-31 International Business Machines Corporation Method, apparatus, and program for a state machine framework

Cited By (126)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7725760B2 (en) 2003-09-23 2010-05-25 Symantec Operating Corporation Data storage system
US7991748B2 (en) 2003-09-23 2011-08-02 Symantec Corporation Virtual data store creation and use
US20050076262A1 (en) * 2003-09-23 2005-04-07 Revivio, Inc. Storage management device
US7904428B2 (en) 2003-09-23 2011-03-08 Symantec Corporation Methods and apparatus for recording write requests directed to a data store
US7725667B2 (en) 2003-09-23 2010-05-25 Symantec Operating Corporation Method for identifying the time at which data was written to a data store
US20060031468A1 (en) * 2004-06-01 2006-02-09 Rajeev Atluri Secondary data storage and recovery system
US7979656B2 (en) 2004-06-01 2011-07-12 Inmage Systems, Inc. Minimizing configuration changes in a fabric-based data protection solution
US20100169282A1 (en) * 2004-06-01 2010-07-01 Rajeev Atluri Acquisition and write validation of data of a networked host node to perform secondary storage
US7698401B2 (en) 2004-06-01 2010-04-13 Inmage Systems, Inc Secondary data storage and recovery system
US9209989B2 (en) 2004-06-01 2015-12-08 Inmage Systems, Inc. Causation of a data read operation against a first storage system by a server associated with a second storage system according to a host generated instruction
US20100169452A1 (en) * 2004-06-01 2010-07-01 Rajeev Atluri Causation of a data read operation against a first storage system by a server associated with a second storage system according to a host generated instruction
US8055745B2 (en) 2004-06-01 2011-11-08 Inmage Systems, Inc. Methods and apparatus for accessing data from a primary data storage system for secondary storage
US8949395B2 (en) 2004-06-01 2015-02-03 Inmage Systems, Inc. Systems and methods of event driven recovery management
US20060010227A1 (en) * 2004-06-01 2006-01-12 Rajeev Atluri Methods and apparatus for accessing data from a primary data storage system for secondary storage
US8224786B2 (en) 2004-06-01 2012-07-17 Inmage Systems, Inc. Acquisition and write validation of data of a networked host node to perform secondary storage
US9098455B2 (en) 2004-06-01 2015-08-04 Inmage Systems, Inc. Systems and methods of event driven recovery management
US20090313503A1 (en) * 2004-06-01 2009-12-17 Rajeev Atluri Systems and methods of event driven recovery management
US7360110B1 (en) 2004-06-30 2008-04-15 Symantec Operating Corporation Parameterization of dimensions of protection systems and uses thereof
US7360123B1 (en) * 2004-06-30 2008-04-15 Symantec Operating Corporation Conveying causal relationships between at least three dimensions of recovery management
US8307238B1 (en) 2004-06-30 2012-11-06 Symantec Operating Corporation Parameterization of dimensions of protection systems and uses thereof
US7325161B1 (en) 2004-06-30 2008-01-29 Symantec Operating Corporation Classification of recovery targets to enable automated protection setup
US9448898B2 (en) 2004-07-13 2016-09-20 Ongoing Operations LLC Network traffic routing
US20060015645A1 (en) * 2004-07-13 2006-01-19 Teneros, Inc. Network traffic routing
US20060015584A1 (en) * 2004-07-13 2006-01-19 Teneros, Inc. Autonomous service appliance
US20060015764A1 (en) * 2004-07-13 2006-01-19 Teneros, Inc. Transparent service provider
US7363365B2 (en) 2004-07-13 2008-04-22 Teneros Inc. Autonomous service backup and migration
US7363366B2 (en) 2004-07-13 2008-04-22 Teneros Inc. Network traffic routing
US8504676B2 (en) 2004-07-13 2013-08-06 Ongoing Operations LLC Network traffic routing
US7730222B2 (en) 2004-08-24 2010-06-01 Symantec Operating System Processing storage-related I/O requests using binary tree data structures
US7827362B2 (en) 2004-08-24 2010-11-02 Symantec Corporation Systems, apparatus, and methods for processing I/O requests
US8521973B2 (en) 2004-08-24 2013-08-27 Symantec Operating Corporation Systems and methods for providing a modification history for a location within a data store
US7664983B2 (en) 2004-08-30 2010-02-16 Symantec Corporation Systems and methods for event driven recovery management
US8990510B2 (en) * 2004-10-27 2015-03-24 International Business Machines Corporation Read-copy update system and method
US20080313238A1 (en) * 2004-10-27 2008-12-18 International Business Machines Corporation Read-Copy Update System And Method
US8601225B2 (en) 2005-09-16 2013-12-03 Inmage Systems, Inc. Time ordered view of backup data on behalf of a host
US20100169591A1 (en) * 2005-09-16 2010-07-01 Rajeev Atluri Time ordered view of backup data on behalf of a host
US8683144B2 (en) 2005-09-16 2014-03-25 Inmage Systems, Inc. Causation of a data read against a first storage system to optionally store a data write to preserve the version to allow viewing and recovery
US20100169587A1 (en) * 2005-09-16 2010-07-01 Rajeev Atluri Causation of a data read against a first storage system to optionally store a data write to preserve the version to allow viewing and recovery
US20070244938A1 (en) * 2006-04-17 2007-10-18 Microsoft Corporation Creating host-level application-consistent backups of virtual machines
US8321377B2 (en) 2006-04-17 2012-11-27 Microsoft Corporation Creating host-level application-consistent backups of virtual machines
US9529807B2 (en) 2006-04-17 2016-12-27 Microsoft Technology Licensing, Llc Creating host-level application-consistent backups of virtual machines
US8005793B1 (en) * 2006-04-18 2011-08-23 Netapp, Inc. Retaining persistent point in time data during volume migration
US8868858B2 (en) * 2006-05-19 2014-10-21 Inmage Systems, Inc. Method and apparatus of continuous data backup and access using virtual machines
US20070271304A1 (en) * 2006-05-19 2007-11-22 Inmage Systems, Inc. Method and system of tiered quiescing
US20070271428A1 (en) * 2006-05-19 2007-11-22 Inmage Systems, Inc. Method and apparatus of continuous data backup and access using virtual machines
US8554727B2 (en) 2006-05-19 2013-10-08 Inmage Systems, Inc. Method and system of tiered quiescing
US20100169281A1 (en) * 2006-05-22 2010-07-01 Rajeev Atluri Coalescing and capturing data between events prior to and after a temporal window
US20070282921A1 (en) * 2006-05-22 2007-12-06 Inmage Systems, Inc. Recovery point data view shift through a direction-agnostic roll algorithm
US20100169283A1 (en) * 2006-05-22 2010-07-01 Rajeev Atluri Recovery point data view formation with generation of a recovery view and a coalesce policy
US8838528B2 (en) 2006-05-22 2014-09-16 Inmage Systems, Inc. Coalescing and capturing data between events prior to and after a temporal window
US8527470B2 (en) * 2006-05-22 2013-09-03 Rajeev Atluri Recovery point data view formation with generation of a recovery view and a coalesce policy
US7676502B2 (en) 2006-05-22 2010-03-09 Inmage Systems, Inc. Recovery point data view shift through a direction-agnostic roll algorithm
US7613750B2 (en) 2006-05-29 2009-11-03 Microsoft Corporation Creating frequent application-consistent backups efficiently
US20070276885A1 (en) * 2006-05-29 2007-11-29 Microsoft Corporation Creating frequent application-consistent backups efficiently
US8370853B2 (en) 2006-08-04 2013-02-05 Apple Inc. Event notification management
US20080034018A1 (en) * 2006-08-04 2008-02-07 Pavel Cisler Managing backup of content
US8538927B2 (en) 2006-08-04 2013-09-17 Apple Inc. User interface for backup management
US7809687B2 (en) 2006-08-04 2010-10-05 Apple Inc. Searching a backup archive
US9715394B2 (en) 2006-08-04 2017-07-25 Apple Inc. User interface for backup management
US7809688B2 (en) 2006-08-04 2010-10-05 Apple Inc. Managing backup of content
US9009115B2 (en) 2006-08-04 2015-04-14 Apple Inc. Restoring electronic information
US7860839B2 (en) 2006-08-04 2010-12-28 Apple Inc. Application-based backup-restore of electronic information
US8504527B2 (en) 2006-08-04 2013-08-06 Apple Inc. Application-based backup-restore of electronic information
US8166415B2 (en) 2006-08-04 2012-04-24 Apple Inc. User interface for backup management
US20080059894A1 (en) * 2006-08-04 2008-03-06 Pavel Cisler Conflict resolution in recovery of electronic data
US8495024B2 (en) 2006-08-04 2013-07-23 Apple Inc. Navigation of electronic backups
US8775378B2 (en) 2006-08-04 2014-07-08 Apple Inc. Consistent backup of electronic information
US7853566B2 (en) 2006-08-04 2010-12-14 Apple Inc. Navigation of electronic backups
US8311988B2 (en) * 2006-08-04 2012-11-13 Apple Inc. Consistent back up of electronic information
US20080033922A1 (en) * 2006-08-04 2008-02-07 Pavel Cisler Searching a backup archive
US7856424B2 (en) 2006-08-04 2010-12-21 Apple Inc. User interface for backup management
US7853567B2 (en) 2006-08-04 2010-12-14 Apple Inc. Conflict resolution in recovery of electronic data
US7634507B2 (en) 2006-08-30 2009-12-15 Inmage Systems, Inc. Ensuring data persistence and consistency in enterprise storage backup systems
US20080059542A1 (en) * 2006-08-30 2008-03-06 Inmage Systems, Inc. Ensuring data persistence and consistency in enterprise storage backup systems
US8561007B2 (en) 2007-01-05 2013-10-15 International Business Machines Corporation Distributable serializable finite state machine
US8255852B2 (en) * 2007-01-05 2012-08-28 International Business Machines Corporation Distributable serializable finite state machine
US9600766B2 (en) 2007-01-05 2017-03-21 International Business Machines Corporation Distributable serializable finite state machine
US20090307169A1 (en) * 2007-01-05 2009-12-10 International Business Machines Corporation Distributable Serializable Finite State Machine
US7877361B2 (en) * 2007-03-27 2011-01-25 Hitachi, Ltd. Management device and method for storage device executing CDP-based recovery
US20080243956A1 (en) * 2007-03-27 2008-10-02 Hitachi, Ltd. Management device and method for storage device executing cdp-based recovery
US8099392B2 (en) 2007-06-08 2012-01-17 Apple Inc. Electronic backup of applications
US9354982B2 (en) 2007-06-08 2016-05-31 Apple Inc. Manipulating electronic backups
US8504516B2 (en) 2007-06-08 2013-08-06 Apple Inc. Manipulating electronic backups
US8468136B2 (en) 2007-06-08 2013-06-18 Apple Inc. Efficient data backup
US8566289B2 (en) 2007-06-08 2013-10-22 Apple Inc. Electronic backup of applications
US9454587B2 (en) 2007-06-08 2016-09-27 Apple Inc. Searching and restoring of backups
US8429425B2 (en) 2007-06-08 2013-04-23 Apple Inc. Electronic backup and restoration of encrypted data
US8725965B2 (en) 2007-06-08 2014-05-13 Apple Inc. System setup for electronic backup
US8745523B2 (en) 2007-06-08 2014-06-03 Apple Inc. Deletion in electronic backups
US8307004B2 (en) 2007-06-08 2012-11-06 Apple Inc. Manipulating electronic backups
US20080307000A1 (en) * 2007-06-08 2008-12-11 Toby Charles Wood Paterson Electronic Backup of Applications
US9360995B2 (en) 2007-06-08 2016-06-07 Apple Inc. User interface for electronic backup
US10891020B2 (en) 2007-06-08 2021-01-12 Apple Inc. User interface for electronic backup
US9286166B2 (en) * 2007-06-08 2016-03-15 Apple Inc. Electronic backup of applications
US20150112942A1 (en) * 2007-06-08 2015-04-23 Apple Inc. Electronic backup of applications
US8965851B2 (en) * 2007-06-08 2015-02-24 Apple Inc. Electronic backup of applications
US8965929B2 (en) 2007-06-08 2015-02-24 Apple Inc. Manipulating electronic backups
US8010900B2 (en) 2007-06-08 2011-08-30 Apple Inc. User interface for electronic backup
US8055613B1 (en) * 2008-04-29 2011-11-08 Netapp, Inc. Method and apparatus for efficiently detecting and logging file system changes
US20100023797A1 (en) * 2008-07-25 2010-01-28 Rajeev Atluri Sequencing technique to account for a clock error in a backup system
US8028194B2 (en) 2008-07-25 2011-09-27 Inmage Systems, Inc Sequencing technique to account for a clock error in a backup system
US20100049717A1 (en) * 2008-08-20 2010-02-25 Ryan Michael F Method and systems for sychronization of process control servers
WO2010022146A1 (en) * 2008-08-20 2010-02-25 Ge Fanuc Intelligent Platforms, Inc Method and systems for synchronization of process control servers
US8069227B2 (en) 2008-12-26 2011-11-29 Inmage Systems, Inc. Configuring hosts of a secondary data storage and recovery system
US8527721B2 (en) 2008-12-26 2013-09-03 Rajeev Atluri Generating a recovery snapshot and creating a virtual view of the recovery snapshot
US20100169466A1 (en) * 2008-12-26 2010-07-01 Rajeev Atluri Configuring hosts of a secondary data storage and recovery system
US20100169592A1 (en) * 2008-12-26 2010-07-01 Rajeev Atluri Generating a recovery snapshot and creating a virtual view of the recovery snapshot
US20100257403A1 (en) * 2009-04-03 2010-10-07 Microsoft Corporation Restoration of a system from a set of full and partial delta system snapshots across a distributed system
US9361309B2 (en) * 2010-08-02 2016-06-07 Beijing Lenovo Software Ltd. File synchronization method, electronic device and synchronization system
US20130132340A1 (en) * 2010-08-02 2013-05-23 Beijing Lenovo Software Ltd. File synchronization method, electronic device and synchronization system
US8943026B2 (en) 2011-01-14 2015-01-27 Apple Inc. Visual representation of a local backup
US9411812B2 (en) 2011-01-14 2016-08-09 Apple Inc. File system management
US10303652B2 (en) 2011-01-14 2019-05-28 Apple Inc. File system management
US8984029B2 (en) 2011-01-14 2015-03-17 Apple Inc. File system management
US20140379921A1 (en) * 2013-06-21 2014-12-25 Amazon Technologies, Inc. Resource silos at network-accessible services
US10158579B2 (en) * 2013-06-21 2018-12-18 Amazon Technologies, Inc. Resource silos at network-accessible services
US9881018B2 (en) * 2014-08-14 2018-01-30 International Business Machines Corporation File management in thin provisioning storage environments
US10528527B2 (en) 2014-08-14 2020-01-07 International Business Machines Corporation File management in thin provisioning storage environments
US11157457B2 (en) 2014-08-14 2021-10-26 International Business Machines Corporation File management in thin provisioning storage environments
US9767022B2 (en) 2014-09-16 2017-09-19 Salesforce.Com, Inc. In-memory buffer service
US9417840B2 (en) * 2014-09-16 2016-08-16 Salesforce.Com, Inc. In-memory buffer service
US20160077798A1 (en) * 2014-09-16 2016-03-17 Salesforce.Com, Inc. In-memory buffer service
US9558078B2 (en) 2014-10-28 2017-01-31 Microsoft Technology Licensing, Llc Point in time database restore from storage snapshots
US20160239388A1 (en) * 2015-02-13 2016-08-18 Netapp, Inc. Managing multi-level backups into the cloud
US9946609B2 (en) * 2015-02-13 2018-04-17 Netapp, Inc. Managing multi-level backups into the cloud
US11537476B2 (en) * 2020-03-25 2022-12-27 Sap Se Database management system backup and recovery management

Also Published As

Publication number Publication date
US7096392B2 (en) 2006-08-22
US20060282697A1 (en) 2006-12-14
US7363549B2 (en) 2008-04-22
EP1745059A1 (en) 2007-01-24
EP1745059A4 (en) 2009-10-21
WO2005111051A1 (en) 2005-11-24

Similar Documents

Publication Publication Date Title
US7096392B2 (en) Method and system for automated, no downtime, real-time, continuous data protection
US7680834B1 (en) Method and system for no downtime resychronization for real-time, continuous data protection
US7788521B1 (en) Method and system for virtual on-demand recovery for real-time, continuous data protection
US7519870B1 (en) Method and system for no downtime, initial data upload for real-time, continuous data protection
US8060889B2 (en) Method and system for real-time event journaling to provide enterprise data services
US8108429B2 (en) System for moving real-time data events across a plurality of devices in a network for simultaneous data protection, replication, and access services
US7979404B2 (en) Extracting data changes and storing data history to allow for instantaneous access to and reconstruction of any point-in-time data
US9804934B1 (en) Production recovery using a point in time snapshot
US8712970B1 (en) Recovering a database to any point-in-time in the past with guaranteed data consistency
US9904605B2 (en) System and method for enhancing availability of a distributed object storage system during a partial database outage
US7596713B2 (en) Fast backup storage and fast recovery of data (FBSRD)
US8352523B1 (en) Recovering a file system to any point-in-time in the past with guaranteed structure, content consistency and integrity
US8949395B2 (en) Systems and methods of event driven recovery management

Legal Events

Date Code Title Description
AS Assignment

Owner name: ASEMPRA TECHNOLOGIES, INC., CALIFORNIA

Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:SIM-TANG, SIEW YONG;REEL/FRAME:017724/0924

Effective date: 20050429

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: ASEMPRA (ASSIGNMENT FOR THE BENEFIT OF CREDITORS,)

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ASEMPRA TECHNOLOGIES, INC.;REEL/FRAME:023196/0389

Effective date: 20090430

AS Assignment

Owner name: BAKBONE SOFTWARE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ASEMPRA (ASSIGNMENT FOR THE BENEFIT OF CREDITORS), LLC;REEL/FRAME:023263/0180

Effective date: 20090501

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: QUEST SOFTWARE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BAKBONE SOFTWARE INCORPORATED;REEL/FRAME:026651/0373

Effective date: 20110410

AS Assignment

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:QUEST SOFTWARE, INC.;REEL/FRAME:031043/0281

Effective date: 20130701

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT (TERM LOAN);ASSIGNORS:DELL INC.;APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;AND OTHERS;REEL/FRAME:031899/0261

Effective date: 20131029

Owner name: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT, TEXAS

Free format text: PATENT SECURITY AGREEMENT (ABL);ASSIGNORS:DELL INC.;APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;AND OTHERS;REEL/FRAME:031898/0001

Effective date: 20131029

Owner name: BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS FIRST LIEN COLLATERAL AGENT, TEXAS

Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;BOOMI, INC.;AND OTHERS;REEL/FRAME:031897/0348

Effective date: 20131029

Owner name: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT, TE

Free format text: PATENT SECURITY AGREEMENT (ABL);ASSIGNORS:DELL INC.;APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;AND OTHERS;REEL/FRAME:031898/0001

Effective date: 20131029

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH

Free format text: PATENT SECURITY AGREEMENT (TERM LOAN);ASSIGNORS:DELL INC.;APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;AND OTHERS;REEL/FRAME:031899/0261

Effective date: 20131029

Owner name: BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS FI

Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;BOOMI, INC.;AND OTHERS;REEL/FRAME:031897/0348

Effective date: 20131029

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: PEROT SYSTEMS CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: SECUREWORKS, INC., GEORGIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: COMPELLANT TECHNOLOGIES, INC., MINNESOTA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: DELL INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: APPASSURE SOFTWARE, INC., VIRGINIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: CREDANT TECHNOLOGIES, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: FORCE10 NETWORKS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: DELL MARKETING L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:AVENTAIL LLC;DELL PRODUCTS L.P.;DELL SOFTWARE INC.;REEL/FRAME:040039/0642

Effective date: 20160907

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNORS:AVENTAIL LLC;DELL PRODUCTS, L.P.;DELL SOFTWARE INC.;REEL/FRAME:040030/0187

Effective date: 20160907

Owner name: SECUREWORKS, INC., GEORGIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: DELL INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: COMPELLENT TECHNOLOGIES, INC., MINNESOTA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: CREDANT TECHNOLOGIES, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: FORCE10 NETWORKS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: PEROT SYSTEMS CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: DELL MARKETING L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: APPASSURE SOFTWARE, INC., VIRGINIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLAT

Free format text: SECURITY AGREEMENT;ASSIGNORS:AVENTAIL LLC;DELL PRODUCTS, L.P.;DELL SOFTWARE INC.;REEL/FRAME:040030/0187

Effective date: 20160907

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., A

Free format text: SECURITY AGREEMENT;ASSIGNORS:AVENTAIL LLC;DELL PRODUCTS L.P.;DELL SOFTWARE INC.;REEL/FRAME:040039/0642

Effective date: 20160907

Owner name: SECUREWORKS, INC., GEORGIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: FORCE10 NETWORKS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: COMPELLENT TECHNOLOGIES, INC., MINNESOTA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: APPASSURE SOFTWARE, INC., VIRGINIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: CREDANT TECHNOLOGIES, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: PEROT SYSTEMS CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: DELL INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: DELL MARKETING L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

AS Assignment

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN CERTAIN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040039/0642);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.;REEL/FRAME:040521/0016

Effective date: 20161031

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:040521/0467

Effective date: 20161031

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST IN CERTAIN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040039/0642);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.;REEL/FRAME:040521/0016

Effective date: 20161031

Owner name: DELL PRODUCTS, L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:040521/0467

Effective date: 20161031

Owner name: AVENTAIL LLC, CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST IN CERTAIN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040039/0642);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.;REEL/FRAME:040521/0016

Effective date: 20161031

Owner name: AVENTAIL LLC, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:040521/0467

Effective date: 20161031

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NEW YORK

Free format text: FIRST LIEN PATENT SECURITY AGREEMENT;ASSIGNOR:DELL SOFTWARE INC.;REEL/FRAME:040581/0850

Effective date: 20161031

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NEW YORK

Free format text: SECOND LIEN PATENT SECURITY AGREEMENT;ASSIGNOR:DELL SOFTWARE INC.;REEL/FRAME:040587/0624

Effective date: 20161031

AS Assignment

Owner name: QUEST SOFTWARE INC. (F/K/A DELL SOFTWARE INC.), CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE PREVIOUSLY RECORDED AT REEL: 040587 FRAME: 0624. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:044811/0598

Effective date: 20171114

Owner name: AVENTAIL LLC, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE PREVIOUSLY RECORDED AT REEL: 040587 FRAME: 0624. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:044811/0598

Effective date: 20171114

AS Assignment

Owner name: QUEST SOFTWARE INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:DELL SOFTWARE INC.;REEL/FRAME:044800/0848

Effective date: 20161101

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553)

Year of fee payment: 12

AS Assignment

Owner name: QUEST SOFTWARE INC. (F/K/A DELL SOFTWARE INC.), CALIFORNIA

Free format text: RELEASE OF FIRST LIEN SECURITY INTEREST IN PATENTS RECORDED AT R/F 040581/0850;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:046211/0735

Effective date: 20180518

Owner name: AVENTAIL LLC, CALIFORNIA

Free format text: RELEASE OF FIRST LIEN SECURITY INTEREST IN PATENTS RECORDED AT R/F 040581/0850;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:046211/0735

Effective date: 20180518

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NEW YORK

Free format text: SECOND LIEN PATENT SECURITY AGREEMENT;ASSIGNOR:QUEST SOFTWARE INC.;REEL/FRAME:046327/0486

Effective date: 20180518

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NEW YORK

Free format text: FIRST LIEN PATENT SECURITY AGREEMENT;ASSIGNOR:QUEST SOFTWARE INC.;REEL/FRAME:046327/0347

Effective date: 20180518

AS Assignment

Owner name: QUEST SOFTWARE INC., CALIFORNIA

Free format text: RELEASE OF FIRST LIEN SECURITY INTEREST IN PATENTS;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:059105/0479

Effective date: 20220201

Owner name: QUEST SOFTWARE INC., CALIFORNIA

Free format text: RELEASE OF SECOND LIEN SECURITY INTEREST IN PATENTS;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:059096/0683

Effective date: 20220201

Owner name: GOLDMAN SACHS BANK USA, NEW YORK

Free format text: FIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNORS:QUEST SOFTWARE INC.;ANALYTIX DATA SERVICES INC.;BINARYTREE.COM LLC;AND OTHERS;REEL/FRAME:058945/0778

Effective date: 20220201

Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND

Free format text: SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNORS:QUEST SOFTWARE INC.;ANALYTIX DATA SERVICES INC.;BINARYTREE.COM LLC;AND OTHERS;REEL/FRAME:058952/0279

Effective date: 20220201