US20040210895A1 - Method and system for making an application highly available - Google Patents
Method and system for making an application highly available
- Publication number
- US20040210895A1 (application US10/418,459)
- Authority
- US
- United States
- Prior art keywords
- cluster
- application component
- recited
- service layer
- application
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1479—Generic software techniques for error detection or fault masking
- G06F11/1482—Generic software techniques for error detection or fault masking by means of middleware or OS functionality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
Description
- 1. Technical Field
- The present disclosure relates to computer systems. More specifically, the present disclosure relates to a method and system for making an application highly available.
- 2. Description of the Related Art
- As mission-critical eBusiness applications increase, there is a greater demand for solutions to minimize downtime of a computer system. To meet this demand, organizations may employ cluster solutions. A cluster is based on the principle of hardware redundancy and consists of two or more independent servers that are managed as a single system for increased availability, easier manageability and greater scalability.
- High availability is used to describe the service that clusters provide. A highly available application is an application that is managed or monitored by a cluster and remains continuously operational for a long period of time. A cluster may maintain applications, such as web servers, databases, etc., with nearly zero downtime. If a system within a cluster should fail, other systems in the cluster are capable of carrying on the operations of the failed system with minimal interruption. This backup operational mode is known as failover. Failover may be part of a mission-critical system so that the system is fault tolerant. For example, failover may involve automatically offloading tasks to a standby system component so that the procedure remains seamless to the end user. Failover can apply to all aspects of a system. For example, within a network, failover may be a mechanism to protect against a failed connection, storage device or web server.
- A failed system can be examined and repaired by an administrator. Once a repair is completed, the failed system can resume its original operations. The process of migrating a failed application back to its original system is known as failback.
- The collection of systems which makes up a cluster may be interconnected via shared storage and network interfaces and an identical operating environment may be maintained within each system. To an end user, a cluster appears as if it is one system, although multiple systems may be reacting and responding to requests. Each computer system that is connected to a cluster may be referred to as a node. A node can send a message or “heartbeat” to another node to notify that node of its existence.
- There exist a variety of approaches to clustering that vary from vendor to vendor and different clusters may run different applications. Presently, there is no single application program interface (“API”) which can be applied across multiple cluster solutions. Some vendors have provided vendor specific APIs, but none have developed a common interface to multiple cluster solutions. Accordingly, there is a need for a common set of APIs that is capable of interfacing with multiple cluster solutions from multiple vendors so that the process of making an application highly available and capable of interacting with the cluster can be simplified.
- The present disclosure relates to a method for making an application including at least one application component highly available within a clustered environment, wherein the method may utilize a cluster service layer capable of supporting at least one cluster platform. The method comprises detecting a cluster on an installation node, verifying whether the at least one application component can be installed on the detected cluster, installing the at least one application component on the detected cluster, and putting the at least one application component online.
- The present disclosure also relates to a computer recording medium including computer executable code for making an application including at least one application component highly available within a clustered environment, wherein the computer executable code may utilize a cluster service layer capable of supporting at least one cluster platform. The computer recording medium comprises code for detecting a cluster on an installation node, code for verifying whether the at least one application component can be installed on the detected cluster, code for installing the at least one application component on the detected cluster, and code for putting the at least one application component online.
- The present disclosure also relates to a programmed computer system for making an application including at least one application component highly available within a clustered environment, wherein the programmed computer system may utilize a cluster service layer capable of supporting at least one cluster platform. The programmed computer system resides on a computer-readable medium and includes instructions for causing a computer to detect a cluster on an installation node, verify whether the at least one application component can be installed on the detected cluster, install the at least one application component on the detected cluster, and put the at least one application component online.
- The present disclosure also relates to a programmed computer apparatus for making an application including at least one application component highly available within a clustered environment, wherein the programmed computer apparatus may utilize a cluster service layer capable of supporting at least one cluster platform. The programmed computer apparatus performs steps comprising detecting a cluster on an installation node, verifying whether the at least one application component can be installed on the detected cluster, installing the at least one application component on the detected cluster, and putting the at least one application component online.
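- For illustration only, the following sketch outlines the four recited steps in Python; the ClusterServiceLayer facade and its method names are assumptions made for this example, not part of the disclosure or of any vendor API.

```python
class ClusterServiceLayer:
    """Hypothetical facade over a vendor cluster API (e.g., MSCS or Sun Cluster)."""

    def detect_cluster(self, node: str) -> bool:
        raise NotImplementedError  # would ask the vendor cluster service

    def can_install(self, component: str) -> bool:
        raise NotImplementedError  # pre-setup suitability checks

    def install(self, component: str) -> None:
        raise NotImplementedError  # copy binaries, create resources, register

    def bring_online(self, component: str) -> None:
        raise NotImplementedError  # ask the cluster to start the resource group


def make_highly_available(csl: ClusterServiceLayer, node: str, component: str) -> bool:
    """Detect, verify, install, put online: the four steps recited above."""
    if not csl.detect_cluster(node):    # detect a cluster on the installation node
        return False
    if not csl.can_install(component):  # verify the component can be installed
        return False
    csl.install(component)              # install on the detected cluster
    csl.bring_online(component)         # put the application component online
    return True
```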
- A more complete appreciation of the present disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
- FIG. 1 shows an example of a computer system capable of implementing the method and system of the present disclosure;
- FIG. 2 shows a schematic representation of the method and system according to the present disclosure;
- FIG. 3 shows a diagram of the method of making an application highly available according to the method and system of the present disclosure;
- FIGS. 4A-4B show an example of communication between a cluster service, cluster service layer and a high availability service according to the method and system of the present disclosure;
- FIG. 5 shows an example of communication between a cluster service, cluster service layer and a high availability service according to the method and system of the present disclosure; and
- FIG. 6 shows a flow chart of installation of an application component onto a cluster according to the method and system of the present disclosure.
- In describing preferred embodiments of the present disclosure illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the present disclosure is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents which operate in a similar manner.
- FIG. 1 shows a sample configuration of a computer system to which the present disclosure may be applied. The system and method of the present disclosure may be implemented in the form of a software application running on a computer system, for example, a mainframe, personal computer (PC), handheld computer, server, etc. The software application may be stored on a recording medium locally accessible by the computer system, for example, a floppy disk, compact disk, hard disk, etc., or may be remote from the computer system and accessible via a hard-wired or wireless connection to a network, for example, a local area network or the Internet.
- An example of a computer system capable of implementing the present method and system is shown in FIG. 1. The computer system, referred to generally as system 100, may include a cluster server 101, for example, the Microsoft Cluster Service, Sun Cluster, etc., consisting of a primary server 102 and a secondary server 103. The primary and secondary servers 102, 103 may provide identical services, such as acting as structured query language (“SQL”), system and performance agents and performing high availability and agent technology base services. Any number of applications 104, for example an SQL server and MS Exchange, and workstations 105 may interact with the cluster server 101 via resource groups 106. As shown in FIG. 2, resources 110, for example, internet protocol (“IP”) addresses, application images, disk drives, etc., may be organized into resource groups 106. A resource group 106 may be a logical organization of resources 110 in a single package or container. A resource group 106 may facilitate administration of a cluster resource 110 by maintaining all dependent resources in a single unit.
- As shown in FIG. 2, the high availability service (“HAS”) 107 may be a service/daemon that runs on all nodes 1 to n within a cluster. Application components 108, for example, agent technology common services, SQL agents, exchange agents, event management, workload management, etc., may use the HAS 107 to make requests via a cluster service layer (“CSL”) 109. The CSL 109 may be a shared library or a dynamically loaded or dynamic-link library (“DLL”) that application components 108 load upon start to determine whether they are operating in a cluster environment 101. If an application component 108 determines that it is operating in a cluster environment 101, maintenance of this operating status may be ensured through a combination of enhancements to common services and to the cluster 101 itself. Such an enhancement may include high availability for components which monitor SQL databases. Each application component 108 may have a relationship with at least one resource group 106 that may organize resources 110 into a single package.
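- As a minimal sketch of the load-on-start check described above, an application component might probe for the CSL as follows; the library names and the exported CslInCluster symbol are assumptions for illustration, since the disclosure does not name them.

```python
import ctypes
import os

def load_csl():
    """Load the cluster service layer library if present; None means no cluster."""
    name = "csl.dll" if os.name == "nt" else "libcsl.so"  # assumed names
    try:
        return ctypes.CDLL(name)  # DLL / shared library loaded at component start
    except OSError:
        return None

csl = load_csl()
# CslInCluster is a hypothetical exported call; a real CSL would document its own.
running_in_cluster = csl is not None and bool(csl.CslInCluster())
```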
- As shown in FIG. 3, the process of making an application component 108 highly available may involve a pre-setup step 111 and a set-up step 112. During the pre-setup step, it can be verified whether the application component 108 can be installed on the cluster 101. For example, an application 104 or application component 108 may compromise data or resource integrity of the system. Additionally, an application 104 or application component 108 may not be suitable for a clustered environment 101. Applications 104 or application components 108 that may be suitable for a cluster 101 can be easily restarted by implementing, for example, automatic integrity recovery, can be easily monitored, have a start and stop procedure, are moveable from one node to another in the event of failure, and depend on resources which can be made highly available and can be configured to work with an IP address. Queries may be designed to test for such features. After it is determined that the application component 108 can be installed on the cluster 101, the set-up step 112 may follow. During the set-up step 112, the application component 108 may be installed in the cluster 101, thereby making the application component 108 highly available. The installation of the application component 108 may include modifying the setup or configuration of an application component 108 to add hooks into the CSL 109, installing data files on shared disks and installing the application component 108 on all nodes of the cluster 101. Installation may also include installing all sub-components, such as binaries, COM, ActiveX components, etc., that are required to run a particular application component 108. The application component 108 and sub-components may also be registered with the cluster 101, which may include the step of creating resources 110 required by the application component 108, such as an IP address resource, a network name resource, etc., and the step of creating a resource group 106 for the application component 108 so that all resources required to run the application 104 can be grouped together. The resources 110 and resource groups 106 may also be registered with the cluster 101. The application component 108 may be configured with the cluster 101, and such configuration may include providing a generalized set of function calls. The function calls may use the API calls of the cluster service 101, which can be provided by the cluster vendor. Once an application component 108 has been successfully installed, registered and configured, the application component 108 may be brought online in the cluster environment 101.
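- The pre-setup and set-up steps might be modeled as below; the suitability flags mirror the characteristics listed above, while the field names, resource names and dictionary registry are illustrative assumptions rather than the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class ComponentProfile:
    restartable: bool      # supports, e.g., automatic integrity recovery
    monitorable: bool      # status can be queried
    has_start_stop: bool   # clean start and stop procedure
    node_movable: bool     # can move to another node on failure
    ip_configurable: bool  # can work behind an IP address resource

def pre_setup_ok(profile: ComponentProfile) -> bool:
    """Pre-setup step 111: verify the component is suitable for a cluster."""
    return all(vars(profile).values())

def set_up(component: str, nodes: list[str], registry: dict) -> None:
    """Set-up step 112: create resources, group them, register with the cluster."""
    group = {
        "name": component + "-group",
        "resources": [component + "-ip", component + "-netname", component],
    }
    registry[component] = {"group": group, "installed_on": list(nodes)}

registry: dict = {}
profile = ComponentProfile(True, True, True, True, True)
if pre_setup_ok(profile):
    set_up("sql-agent", ["node1", "node2"], registry)
```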
- Application components 108 may use the CSL 109 to interact with the HAS 107 and the cluster server 101. As shown in FIGS. 4A-4B, communication may occur between the CSL 109 and the HAS 107. A communication may include a structured query about the status of an application component 108 in the cluster environment 101. As shown in FIG. 4A, the communication may be executed by an SQL Agent 114 and sent from the CSL 109 to the HAS 107 via a named pipe 115. As shown in FIG. 2, the HAS 107 can process the query, obtain an answer from the cluster service 101 and send a reply to the query via the same path in reverse. The query may request, for example, the status of a specified resource group 106, application component or components 108, etc., and information can be obtained as to whether a resource 110 is running on a particular node. The query may be a non-blocking call, meaning that a reply will return immediately. The query may also be a blocking call, meaning that a reply will return only when there is a change in status of the resource on a particular node. FIG. 4B illustrates a blocking call configuration, wherein the CSL 109 will block until the HAS 107 creates a new pipe 116. The HAS 107 will not create the new pipe 116 until a change in status of a specified resource group 106, application component or components 108, etc., occurs.
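- A rough stand-in for the named-pipe exchange of FIGS. 4A-4B is sketched below using a POSIX FIFO in place of a Windows named pipe; the pipe paths and JSON message shape are assumptions, and the blocking call is only approximated by the reader blocking until the HAS opens the reply pipe.

```python
import json
import os
import tempfile

QUERY_PIPE = os.path.join(tempfile.gettempdir(), "has-query")        # assumed path
REPLY_PIPE = os.path.join(tempfile.gettempdir(), "has-query-reply")  # assumed path

def query_status(group: str, blocking: bool) -> dict:
    """CSL side: send a status query to the HAS and wait for its reply."""
    for path in (QUERY_PIPE, REPLY_PIPE):
        if not os.path.exists(path):
            os.mkfifo(path)  # POSIX only; Windows would use CreateNamedPipe
    with open(QUERY_PIPE, "w") as out:
        out.write(json.dumps({"group": group, "blocking": blocking}) + "\n")
    # For a non-blocking call the HAS replies immediately; for a blocking call
    # it does not open the reply pipe until the group's status changes, so the
    # read below blocks until that happens.
    with open(REPLY_PIPE) as inp:
        return json.loads(inp.readline())
```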
- An alternative model for communication between a CSL 109, HAS 107 and the cluster service 101 is shown in FIG. 5. In this model, a resource 117 may be created in any resource group 106 that needs to be monitored. If the resource group 106 is brought offline, the resource 117 can notify the HAS 107 of the change and appropriate actions may be taken. During periodic time intervals, such as every 1-60 seconds, the HAS 107 can poll information from the cluster service 101 and determine if a change in status has occurred in any resource group 106. An application component 108 may communicate with the HAS 107 for registration, notification, etc. via the CSL 109 and the use of shared memory. When the HAS 107 is started, a shared memory region 118 may be created and used for communication between the CSL 109 and the HAS 107. A separate shared memory region 119 may be created by the HAS 107 when an application component 108 requires blocking and non-blocking calls for large amounts of data.
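- The FIG. 5 model might look roughly as follows, with the HAS publishing group status in a shared memory region that the CSL attaches to; the region name, one-byte status encoding and poll stub are assumptions for the sketch.

```python
import time
from multiprocessing import shared_memory

def poll_cluster_service() -> bool:
    """Stub: a real HAS would ask the vendor cluster API for group status."""
    return True

def has_poll_loop(interval_s: float = 5.0, cycles: int = 3) -> None:
    """HAS side: create the region 118 analog at start, refresh it on each poll."""
    region = shared_memory.SharedMemory(name="has_status", create=True, size=1)
    try:
        for _ in range(cycles):  # bounded here; a daemon would loop forever
            region.buf[0] = 1 if poll_cluster_service() else 0
            time.sleep(interval_s)  # somewhere in the 1-60 second range
    finally:
        region.close()
        region.unlink()

def csl_read_status() -> bool:
    """CSL side: attach to the existing region and read the latest status."""
    region = shared_memory.SharedMemory(name="has_status")
    try:
        return region.buf[0] == 1
    finally:
        region.close()
```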
- The CSL 109 is an API that may be used by a software developer to make application components 108 highly available. The CSL 109 may support a variety of cluster platforms so that a developer can use the CSL API for each cluster platform instead of learning different APIs. The CSL 109 can have calls to check if an application 104 or application component 108 is running in a clustered environment 101, start the application 104 or application component 108 if it is not running, start the cluster service 101 if the cluster service 101 is not running, create or modify resources 110, such as IP addresses, cluster names, applications, etc., create or modify resource groups 106, add or modify resource dependencies (e.g., if one resource 110 needs another resource 110 to be running before it can be started), get cluster information, such as cluster name, shared disks, node names, number of nodes, etc., put a resource 110 online or offline, and fail a resource 110. The use of the CSL 109 allows a highly available application 104 to interact with the cluster environment 101, thus making that application 104 cluster aware. The CSL 109 may be used by each application component 108 to check if the application component 108 is running in the cluster 101, check if the HAS 107 is running, register the application component 108 with the HAS 107, determine the state and location of the application component 108, create or modify resources 110 and dependencies, and retrieve information about the cluster 101, such as groups 106, resources 110, shared disks, etc.
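- One way to picture the CSL as a single API over several cluster platforms is an abstract interface with one concrete backend per vendor; the method names below paraphrase the calls listed above and are not actual vendor bindings.

```python
from abc import ABC, abstractmethod

class ClusterBackend(ABC):
    """One developer-facing interface; a concrete subclass wraps each vendor API."""

    @abstractmethod
    def in_cluster(self) -> bool: ...                             # clustered environment?

    @abstractmethod
    def cluster_info(self) -> dict: ...                           # name, disks, nodes

    @abstractmethod
    def create_resource(self, kind: str, name: str) -> None: ...

    @abstractmethod
    def add_dependency(self, resource: str, needs: str) -> None: ...

    @abstractmethod
    def set_online(self, resource: str, online: bool) -> None: ...

    @abstractmethod
    def fail_resource(self, name: str) -> None: ...

def bring_up(backend: ClusterBackend, component: str) -> None:
    """The same calls work regardless of which platform the backend wraps."""
    backend.create_resource("ip", component + "-ip")
    backend.create_resource("application", component)
    backend.add_dependency(component, needs=component + "-ip")
    backend.set_online(component, online=True)
```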
- As stated above, the CSL 109 may have a call to fail a resource 110. This occurs when a primary system component, such as a processor, server, network, database, etc., becomes unavailable through either failure or scheduled down time. Tasks, such as application components 108, resources 110, etc., can automatically be offloaded to a secondary system component having the same function as the primary system component so that there is little or no interruption of a procedure. A failed application 104 may be migrated back to its original node upon repair or completion of down time of a system component.
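- A toy model of the failover and failback behaviour follows; the node names and state handling are illustrative only.

```python
class ResourceHost:
    """Tracks which node a resource runs on across failover and failback."""

    def __init__(self, primary: str, secondary: str) -> None:
        self.home, self.standby = primary, secondary
        self.active = primary

    def fail(self) -> None:
        """Failover: offload tasks to a component with the same function."""
        self.active = self.standby

    def fail_back(self) -> None:
        """Failback: migrate to the original node once it is repaired."""
        self.active = self.home

host = ResourceHost("node-a", "node-b")
host.fail()       # primary unavailable: resource now runs on node-b
host.fail_back()  # repair complete: resource returns to node-a
```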
- FIG. 6 shows a method of installation of an application component 108 onto a cluster 101. In Step 201, it is determined whether a cluster 101 has been detected, for example, by a query. If a cluster 101 is detected, Step 202 verifies that it is possible to install the application component 108 onto the cluster 101. Verification may include determining whether an application 104 or application component 108 has certain characteristics, for example, that the application 104 or application component 108 does not compromise data or resource integrity of the system, can be easily restarted by implementing, for example, automatic integrity recovery, can be easily monitored, has a start and stop procedure, is moveable from one node to another in the event of failure, and depends on resources which can be made highly available, and/or can be configured to work with an IP address, etc. Once it has been verified that the application component 108 can be installed, the application component 108 is installed (Step 203). Installation of the application component 108 may involve performing one or more steps including creating resources 110 for the application component 108 (Step 204), assigning the application component 108 to a group 106 (Step 205), modifying the application component 108 to, for example, add hooks into the cluster service layer 109 (Step 206), installing data files required by the application component 108 (Step 207), installing the application component 108 on all nodes of the cluster 101 (Step 208), using function calls to configure the application component 108 to the cluster 101 (Step 209), installing sub-components required to run the application component 108 (Step 210), and registering the sub-components with the cluster 101 (Step 211). Upon successful installation of the application component 108, the application component 108 can be put online (Step 212).
- The present system and method thus provides an efficient and convenient way for making an application highly available within a clustered environment, the method utilizing a cluster service layer capable of supporting more than one cluster platform. Numerous additional modifications and variations of the present disclosure are possible in view of the above teachings. It is therefore to be understood that within the scope of the appended claims, the present disclosure may be practiced other than as specifically described herein.
Claims (88)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/418,459 US7234072B2 (en) | 2003-04-17 | 2003-04-17 | Method and system for making an application highly available |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/418,459 US7234072B2 (en) | 2003-04-17 | 2003-04-17 | Method and system for making an application highly available |
Publications (2)
Publication Number | Publication Date |
---|---|
US20040210895A1 | 2004-10-21 |
US7234072B2 US7234072B2 (en) | 2007-06-19 |
Family
ID=33159107
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/418,459 Active 2024-09-18 US7234072B2 (en) | 2003-04-17 | 2003-04-17 | Method and system for making an application highly available |
Country Status (1)
Country | Link |
---|---|
US (1) | US7234072B2 (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005096736A2 (en) * | 2004-03-31 | 2005-10-20 | Unisys Corporation | Clusterization with automated deployment of a cluster-unaware application |
US20060090097A1 (en) * | 2004-08-26 | 2006-04-27 | Ngan Ching-Yuk P | Method and system for providing high availability to computer applications |
US20090037902A1 (en) * | 2007-08-02 | 2009-02-05 | Alexander Gebhart | Transitioning From Static To Dynamic Cluster Management |
US8078910B1 (en) | 2008-12-15 | 2011-12-13 | Open Invention Network, Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US8082468B1 (en) | 2008-12-15 | 2011-12-20 | Open Invention Networks, Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US8195722B1 (en) | 2008-12-15 | 2012-06-05 | Open Invention Network, Llc | Method and system for providing storage checkpointing to a group of independent computer applications |
US8281317B1 (en) | 2008-12-15 | 2012-10-02 | Open Invention Network Llc | Method and computer readable medium for providing checkpointing to windows application groups |
US20130185712A1 (en) * | 2012-01-16 | 2013-07-18 | Canon Kabushiki Kaisha | Apparatus, control method, and storage medium |
US8745442B1 (en) | 2011-04-28 | 2014-06-03 | Open Invention Network, Llc | System and method for hybrid kernel- and user-space checkpointing |
US8752049B1 (en) | 2008-12-15 | 2014-06-10 | Open Invention Network, Llc | Method and computer readable medium for providing checkpointing to windows application groups |
US8752048B1 (en) | 2008-12-15 | 2014-06-10 | Open Invention Network, Llc | Method and system for providing checkpointing to windows application groups |
US8826070B1 (en) | 2008-12-15 | 2014-09-02 | Open Invention Network, Llc | Method and system for providing storage checkpointing to a group of independent computer applications |
US8880473B1 (en) | 2008-12-15 | 2014-11-04 | Open Invention Network, Llc | Method and system for providing storage checkpointing to a group of independent computer applications |
US8943500B1 (en) | 2008-12-15 | 2015-01-27 | Open Invention Network, Llc | System and method for application isolation |
WO2015116490A1 (en) * | 2014-01-31 | 2015-08-06 | Google Inc. | Efficient resource utilization in data centers |
US20150326448A1 (en) * | 2014-05-07 | 2015-11-12 | Verizon Patent And Licensing Inc. | Network-as-a-service product director |
US20150326451A1 (en) * | 2014-05-07 | 2015-11-12 | Verizon Patent And Licensing Inc. | Network-as-a-service architecture |
US9256496B1 (en) * | 2008-12-15 | 2016-02-09 | Open Invention Network, Llc | System and method for hybrid kernel—and user-space incremental and full checkpointing |
US9354977B1 (en) * | 2008-12-15 | 2016-05-31 | Open Invention Network Llc | System and method for hybrid kernel- and user-space incremental and full checkpointing |
US10191796B1 (en) * | 2011-01-31 | 2019-01-29 | Open Invention Network, Llc | System and method for statistical application-agnostic fault detection in environments with data trend |
US10592942B1 (en) | 2009-04-10 | 2020-03-17 | Open Invention Network Llc | System and method for usage billing of hosted applications |
US10628272B1 (en) | 2008-12-15 | 2020-04-21 | Open Invention Network Llc | System and method for hybrid kernel- and user-space incremental and full checkpointing |
CN113342475A (en) * | 2021-07-05 | 2021-09-03 | 统信软件技术有限公司 | Server cluster construction method, computing device and storage medium |
US11307941B1 (en) | 2011-04-28 | 2022-04-19 | Open Invention Network Llc | System and method for hybrid kernel- and user-space incremental and full checkpointing |
US11538078B1 (en) | 2009-04-10 | 2022-12-27 | International Business Machines Corporation | System and method for usage billing of hosted applications |
US11625307B1 (en) | 2011-04-28 | 2023-04-11 | International Business Machines Corporation | System and method for hybrid kernel- and user-space incremental and full checkpointing |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7644141B2 (en) * | 2006-03-23 | 2010-01-05 | International Business Machines Corporation | High-availability identification and application installation |
US8688773B2 (en) * | 2008-08-18 | 2014-04-01 | Emc Corporation | System and method for dynamically enabling an application for business continuity |
US9081888B2 (en) | 2010-03-31 | 2015-07-14 | Cloudera, Inc. | Collecting and aggregating log data with fault tolerance |
US8874526B2 (en) | 2010-03-31 | 2014-10-28 | Cloudera, Inc. | Dynamically processing an event using an extensible data model |
US9317572B2 (en) | 2010-03-31 | 2016-04-19 | Cloudera, Inc. | Configuring a system to collect and aggregate datasets |
US9172608B2 (en) * | 2012-02-07 | 2015-10-27 | Cloudera, Inc. | Centralized configuration and monitoring of a distributed computing cluster |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6101508A (en) * | 1997-08-01 | 2000-08-08 | Hewlett-Packard Company | Clustered file management for network resources |
US6178529B1 (en) * | 1997-11-03 | 2001-01-23 | Microsoft Corporation | Method and system for resource monitoring of disparate resources in a server cluster |
US6192401B1 (en) * | 1997-10-21 | 2001-02-20 | Sun Microsystems, Inc. | System and method for determining cluster membership in a heterogeneous distributed system |
US20020091814A1 (en) * | 1998-07-10 | 2002-07-11 | International Business Machines Corp. | Highly scalable and highly available cluster system management scheme |
US20020178396A1 (en) * | 2001-04-23 | 2002-11-28 | Wong Joseph D. | Systems and methods for providing automated diagnostic services for a cluster computer system |
US20020184555A1 (en) * | 2001-04-23 | 2002-12-05 | Wong Joseph D. | Systems and methods for providing automated diagnostic services for a cluster computer system |
US6681390B2 (en) * | 1999-07-28 | 2004-01-20 | Emc Corporation | Upgrade of a program |
US6725261B1 (en) * | 2000-05-31 | 2004-04-20 | International Business Machines Corporation | Method, system and program products for automatically configuring clusters of a computing environment |
US6839752B1 (en) * | 2000-10-27 | 2005-01-04 | International Business Machines Corporation | Group data sharing during membership change in clustered computer system |
- 2003-04-17: US application US10/418,459 filed; granted as US7234072B2 (status: active)
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6101508A (en) * | 1997-08-01 | 2000-08-08 | Hewlett-Packard Company | Clustered file management for network resources |
US6192401B1 (en) * | 1997-10-21 | 2001-02-20 | Sun Microsystems, Inc. | System and method for determining cluster membership in a heterogeneous distributed system |
US6178529B1 (en) * | 1997-11-03 | 2001-01-23 | Microsoft Corporation | Method and system for resource monitoring of disparate resources in a server cluster |
US20020091814A1 (en) * | 1998-07-10 | 2002-07-11 | International Business Machines Corp. | Highly scalable and highly available cluster system management scheme |
US6681390B2 (en) * | 1999-07-28 | 2004-01-20 | Emc Corporation | Upgrade of a program |
US6725261B1 (en) * | 2000-05-31 | 2004-04-20 | International Business Machines Corporation | Method, system and program products for automatically configuring clusters of a computing environment |
US6839752B1 (en) * | 2000-10-27 | 2005-01-04 | International Business Machines Corporation | Group data sharing during membership change in clustered computer system |
US20020178396A1 (en) * | 2001-04-23 | 2002-11-28 | Wong Joseph D. | Systems and methods for providing automated diagnostic services for a cluster computer system |
US20020184555A1 (en) * | 2001-04-23 | 2002-12-05 | Wong Joseph D. | Systems and methods for providing automated diagnostic services for a cluster computer system |
Cited By (71)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005096736A3 (en) * | 2004-03-31 | 2007-03-15 | Unisys Corp | Clusterization with automated deployment of a cluster-unaware application |
WO2005096736A2 (en) * | 2004-03-31 | 2005-10-20 | Unisys Corporation | Clusterization with automated deployment of a cluster-unaware application |
US8122280B2 (en) * | 2004-08-26 | 2012-02-21 | Open Invention Network, Llc | Method and system for providing high availability to computer applications |
US8176364B1 (en) * | 2004-08-26 | 2012-05-08 | Open Invention Network, Llc | Method and system for providing high availability to computer applications |
US7783914B1 (en) * | 2004-08-26 | 2010-08-24 | Open Invention Network Llc | Method and system for providing high availability to computer applications |
US8037367B1 (en) * | 2004-08-26 | 2011-10-11 | Open Invention Network, Llc | Method and system for providing high availability to computer applications |
US20060090097A1 (en) * | 2004-08-26 | 2006-04-27 | Ngan Ching-Yuk P | Method and system for providing high availability to computer applications |
US8959395B2 (en) | 2004-08-26 | 2015-02-17 | Red Hat, Inc. | Method and system for providing high availability to computer applications |
US20140359354A1 (en) * | 2004-08-26 | 2014-12-04 | Red Hat, Inc. | Method and system for providing high availability to computer applications |
US9141502B2 (en) | 2004-08-26 | 2015-09-22 | Redhat, Inc. | Method and system for providing high availability to computer applications |
US9311200B1 (en) * | 2004-08-26 | 2016-04-12 | Red Hat, Inc. | Method and system for providing high availability to computer applications |
US8458534B1 (en) | 2004-08-26 | 2013-06-04 | Red Hat, Inc. | Method and system for providing high availability to computer applications |
US9223671B2 (en) * | 2004-08-26 | 2015-12-29 | Red Hat, Inc. | Method and system for providing high availability to computer applications |
US8402305B1 (en) | 2004-08-26 | 2013-03-19 | Red Hat, Inc. | Method and system for providing high availability to computer applications |
US9286109B1 (en) | 2005-08-26 | 2016-03-15 | Open Invention Network, Llc | Method and system for providing checkpointing to windows application groups |
US9389959B1 (en) * | 2005-08-26 | 2016-07-12 | Open Invention Network Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US8458693B2 (en) * | 2007-08-02 | 2013-06-04 | Sap Ag | Transitioning from static to dynamic cluster management |
US20090037902A1 (en) * | 2007-08-02 | 2009-02-05 | Alexander Gebhart | Transitioning From Static To Dynamic Cluster Management |
US8826070B1 (en) | 2008-12-15 | 2014-09-02 | Open Invention Network, Llc | Method and system for providing storage checkpointing to a group of independent computer applications |
US9183089B1 (en) * | 2008-12-15 | 2015-11-10 | Open Invention Network, Llc | System and method for hybrid kernel and user-space checkpointing using a chacter device |
US8752049B1 (en) | 2008-12-15 | 2014-06-10 | Open Invention Network, Llc | Method and computer readable medium for providing checkpointing to windows application groups |
US8752048B1 (en) | 2008-12-15 | 2014-06-10 | Open Invention Network, Llc | Method and system for providing checkpointing to windows application groups |
US10310754B1 (en) | 2008-12-15 | 2019-06-04 | Open Invention Network Llc | Method and system for providing storage checkpointing to a group of independent computer applications |
US8880473B1 (en) | 2008-12-15 | 2014-11-04 | Open Invention Network, Llc | Method and system for providing storage checkpointing to a group of independent computer applications |
US8881171B1 (en) | 2008-12-15 | 2014-11-04 | Open Invention Network, Llc | Method and computer readable medium for providing checkpointing to windows application groups |
US8645754B1 (en) | 2008-12-15 | 2014-02-04 | Open Invention Network, Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US8943500B1 (en) | 2008-12-15 | 2015-01-27 | Open Invention Network, Llc | System and method for application isolation |
US8527809B1 (en) | 2008-12-15 | 2013-09-03 | Open Invention Network, Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US11249855B1 (en) * | 2008-12-15 | 2022-02-15 | Open Invention Network Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US11645163B1 (en) * | 2008-12-15 | 2023-05-09 | International Business Machines Corporation | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US9164843B1 (en) * | 2008-12-15 | 2015-10-20 | Open Invention Network, Llc | System and method for hybrid kernel and user-space checkpointing using a character device |
US11263086B1 (en) | 2008-12-15 | 2022-03-01 | Open Invention Network Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US10990487B1 (en) | 2008-12-15 | 2021-04-27 | Open Invention Network Llc | System and method for hybrid kernel- and user-space incremental and full checkpointing |
US10901856B1 (en) | 2008-12-15 | 2021-01-26 | Open Invention Network Llc | Method and system for providing checkpointing to windows application groups |
US10635549B1 (en) | 2008-12-15 | 2020-04-28 | Open Invention Network Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US8347140B1 (en) | 2008-12-15 | 2013-01-01 | Open Invention Network Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US9256496B1 (en) * | 2008-12-15 | 2016-02-09 | Open Invention Network, Llc | System and method for hybrid kernel—and user-space incremental and full checkpointing |
US10628272B1 (en) | 2008-12-15 | 2020-04-21 | Open Invention Network Llc | System and method for hybrid kernel- and user-space incremental and full checkpointing |
US8281317B1 (en) | 2008-12-15 | 2012-10-02 | Open Invention Network Llc | Method and computer readable medium for providing checkpointing to windows application groups |
US8195722B1 (en) | 2008-12-15 | 2012-06-05 | Open Invention Network, Llc | Method and system for providing storage checkpointing to a group of independent computer applications |
US9354977B1 (en) * | 2008-12-15 | 2016-05-31 | Open Invention Network Llc | System and method for hybrid kernel- and user-space incremental and full checkpointing |
US8082468B1 (en) | 2008-12-15 | 2011-12-20 | Open Invention Networks, Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US8078910B1 (en) | 2008-12-15 | 2011-12-13 | Open Invention Network, Llc | Method and system for providing coordinated checkpointing to a group of independent computer applications |
US10467108B1 (en) * | 2008-12-15 | 2019-11-05 | Open Invention Network Llc | System and method for hybrid kernel- and user-space incremental and full checkpointing |
US10592942B1 (en) | 2009-04-10 | 2020-03-17 | Open Invention Network Llc | System and method for usage billing of hosted applications |
US10606634B1 (en) | 2009-04-10 | 2020-03-31 | Open Invention Network Llc | System and method for application isolation |
US11538078B1 (en) | 2009-04-10 | 2022-12-27 | International Business Machines Corporation | System and method for usage billing of hosted applications |
US10896082B1 (en) | 2011-01-31 | 2021-01-19 | Open Invention Network Llc | System and method for statistical application-agnostic fault detection in environments with data trend |
US10191796B1 (en) * | 2011-01-31 | 2019-01-29 | Open Invention Network, Llc | System and method for statistical application-agnostic fault detection in environments with data trend |
US10514987B1 (en) | 2011-04-28 | 2019-12-24 | Open Invention Network Llc | System and method for hybrid kernel and user-space checkpointing using a character device |
US11226874B1 (en) | 2011-04-28 | 2022-01-18 | Open Invention Network Llc | System and method for hybrid kernel and user-space checkpointing using a character device |
US11656954B1 (en) | 2011-04-28 | 2023-05-23 | Philips North America Llc | System and method for hybrid kernel and user-space checkpointing using a character device |
US11625307B1 (en) | 2011-04-28 | 2023-04-11 | International Business Machines Corporation | System and method for hybrid kernel- and user-space incremental and full checkpointing |
US11307941B1 (en) | 2011-04-28 | 2022-04-19 | Open Invention Network Llc | System and method for hybrid kernel- and user-space incremental and full checkpointing |
US10621052B1 (en) | 2011-04-28 | 2020-04-14 | Open Invention Network Llc | System and method for hybrid kernel and user-space checkpointing using a character device |
US11301339B1 (en) | 2011-04-28 | 2022-04-12 | Open Invention Network Llc | System and method for hybrid kernel and user-space checkpointing using a character device |
US8745442B1 (en) | 2011-04-28 | 2014-06-03 | Open Invention Network, Llc | System and method for hybrid kernel- and user-space checkpointing |
US9274775B2 (en) * | 2012-01-16 | 2016-03-01 | Canon Kabushiki Kaisha | Apparatus, control method, and storage medium to instruct a framework to stop a target application based on a usage amount of a resource and a type of the target application |
US20130185712A1 (en) * | 2012-01-16 | 2013-07-18 | Canon Kabushiki Kaisha | Apparatus, control method, and storage medium |
US9823948B2 (en) | 2014-01-31 | 2017-11-21 | Google Inc. | Efficient resource utilization in data centers |
GB2538198B (en) * | 2014-01-31 | 2021-07-07 | Google Llc | Efficient resource utilization in data centers |
WO2015116490A1 (en) * | 2014-01-31 | 2015-08-06 | Google Inc. | Efficient resource utilization in data centers |
US9213576B2 (en) | 2014-01-31 | 2015-12-15 | Google Inc. | Efficient resource utilization in data centers |
GB2538198A (en) * | 2014-01-31 | 2016-11-09 | Google Inc | Efficient resource utilization in data centers |
KR101719116B1 (en) | 2014-01-31 | 2017-03-22 | 구글 인코포레이티드 | Efficient resource utilization in data centers |
KR20160097372A (en) * | 2014-01-31 | 2016-08-17 | 구글 인코포레이티드 | Efficient resource utilization in data centers |
US9870580B2 (en) * | 2014-05-07 | 2018-01-16 | Verizon Patent And Licensing Inc. | Network-as-a-service architecture |
US20150326451A1 (en) * | 2014-05-07 | 2015-11-12 | Verizon Patent And Licensing Inc. | Network-as-a-service architecture |
US20150326448A1 (en) * | 2014-05-07 | 2015-11-12 | Verizon Patent And Licensing Inc. | Network-as-a-service product director |
US9672502B2 (en) * | 2014-05-07 | 2017-06-06 | Verizon Patent And Licensing Inc. | Network-as-a-service product director |
CN113342475A (en) * | 2021-07-05 | 2021-09-03 | 统信软件技术有限公司 | Server cluster construction method, computing device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US7234072B2 (en) | 2007-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7234072B2 (en) | Method and system for making an application highly available | |
US7610582B2 (en) | Managing a computer system with blades | |
US6671704B1 (en) | Method and apparatus for handling failures of resource managers in a clustered environment | |
US8769132B2 (en) | Flexible failover policies in high availability computing systems | |
US6996502B2 (en) | Remote enterprise management of high availability systems | |
US8185776B1 (en) | System and method for monitoring an application or service group within a cluster as a resource of another cluster | |
US6134673A (en) | Method for clustering software applications | |
US8560889B2 (en) | Adding scalability and fault tolerance to generic finite state machine frameworks for use in automated incident management of cloud computing infrastructures | |
US7246256B2 (en) | Managing failover of J2EE compliant middleware in a high availability system | |
US7543174B1 (en) | Providing high availability for an application by rapidly provisioning a node and failing over to the node | |
US7475127B2 (en) | Real composite objects for providing high availability of resources on networked systems | |
US20010056554A1 (en) | System for clustering software applications | |
US7590683B2 (en) | Restarting processes in distributed applications on blade servers | |
US20080172679A1 (en) | Managing Client-Server Requests/Responses for Failover Memory Managment in High-Availability Systems | |
TWI511046B (en) | Dynamic cli mapping for clustered software entities | |
US7165097B1 (en) | System for distributed error reporting and user interaction | |
US20030196148A1 (en) | System and method for peer-to-peer monitoring within a network | |
US7228344B2 (en) | High availability enhancement for servers using structured query language (SQL) | |
US20040210888A1 (en) | Upgrading software on blade servers | |
US7120821B1 (en) | Method to revive and reconstitute majority node set clusters | |
US20040210887A1 (en) | Testing software on blade servers | |
Corsava et al. | Intelligent architecture for automatic resource allocation in computer clusters | |
JP2004086879A (en) | Method to monitor remotely accessible resources and to provide persistent and consistent resource states | |
US8595349B1 (en) | Method and apparatus for passive process monitoring | |
CN1728697A (en) | Fault-tolerance method in application of request proxy structure of public object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: COMPUTER ASSOCIATES THINK, INC., NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ESFAHANY, KOUROS H.;REEL/FRAME:014925/0100 Effective date: 20040120 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |