WO2005096736A2 - Clusterization with automated deployment of a cluster-unaware application - Google Patents

Clusterization with automated deployment of a cluster-unaware application

Info

Publication number: WO2005096736A2
Authority: WO (WIPO, PCT)
Prior art keywords: cluster, group, processor, node, participating nodes
Application number: PCT/US2005/010661
Other languages: French (fr)
Other versions: WO2005096736A3 (en)
Inventors: Donna Lynn Goodman, Suresh Chunduru, Ernest Daza
Original assignee: Unisys Corporation
Application filed by Unisys Corporation
Priority to EP05738713A (EP1761867A2)
Publication of WO2005096736A2
Publication of WO2005096736A3

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/60: Software deployment
    • G06F 8/65: Updates


Abstract

An embodiment of the present invention is a technique for clusterizing a cluster-unaware application. Based on an analysis of the cluster-unaware application, the binaries of the cluster-unaware application are differentiated from the data files of the cluster-unaware application. DLL files corresponding to a custom resource type for the cluster-unaware application are created based on the behavior of the cluster-unaware application. A cluster and participating nodes in the cluster are identified. A cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster is created. The binaries and the data files are deployed automatically in the cluster. The DLL files corresponding to the custom resource type are deployed automatically on the participating nodes in the cluster.

Description

CLUSTERIZATION WITH AUTOMATED DEPLOYMENT OF A CLUSTER-UNAWARE APPLICATION
BACKGROUND
FIELD OF THE INVENTION
Embodiments of the invention are in the field of clusterization of software applications and, more specifically, relate to a method of clusterizing a cluster-unaware application with automated deployment, without modifying the cluster-unaware application.
DESCRIPTION OF RELATED ART
A cluster is a group of computers that work together to run a common set of applications and appear as a single system to the client and applications. The computers are physically connected by cables and programmatically connected by cluster software. These connections allow the computers to use failover and load balancing, which is not possible with a stand-alone computer.
Clustering, provided by cluster software such as Microsoft Cluster Server (MSCS) of Microsoft Corporation, provides high availability for mission-critical applications such as databases, messaging systems, and file and print services. High availability means that the cluster is designed so as to avoid a single point of failure. Applications can be distributed over more than one computer (also called a node), achieving a degree of parallelism and failure recovery, and providing more availability. Multiple nodes in a cluster remain in constant communication. If one of the nodes in a cluster becomes unavailable as a result of failure or maintenance, another node takes over the failing node's workload and begins providing service. This process is known as failover. With very high availability, users who were accessing the service are able to continue accessing it, unaware that the service was briefly interrupted and is now provided by a different node.
The advantages of clustering make it highly desirable to run applications in a cluster. However, only applications that have built-in cluster-awareness can readily be run in a cluster. Therefore, it is desirable to have a technique for clusterizing applications that do not have built-in cluster-awareness.
Manual installation of a clusterized application on each of the nodes in a cluster requires very specific knowledge of the cluster (such as group policies, groups, resources, and resource types), and thus a high level of expertise from the installer. Manual installation is also very time-consuming (on the order of hours) and error-prone for large and complex clusters. Errors in installation may not be evident during the installation process. Testing the functionality (such as failover) in a systematic and comprehensive manner is required in order to check for installation errors. Manually testing the functionality is extremely tedious, time-consuming, and also requires a relatively high level of expertise from the person performing the testing. Thus, it is desirable to have a technique for automated installation of a clusterized application and automated testing of the functionality.
SUMMARY OF THE INVENTION
An embodiment of the present invention is a technique for clusterizing a cluster-unaware application. Based on an analysis of the cluster-unaware application, the binaries of the cluster-unaware application are differentiated from the data files of the cluster-unaware application. DLL files corresponding to a custom resource type for the cluster-unaware application are created based on the behavior of the cluster-unaware application. A cluster and participating nodes in the cluster are identified. A cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster is created. The binaries and the data files are deployed automatically in the cluster. The DLL files corresponding to the custom resource type are deployed automatically on the participating nodes in the cluster.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention may be best understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:
Figure 1 is a diagram illustrating a system in which one embodiment of the invention can be practiced.
Figure 2 shows the information that forms the clusterized application X residing on a node in one embodiment of the present invention.
Figure 3 is a flowchart illustrating the method of the present invention.
Figure 4 illustrates the process of creating a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster (block 308 of Figure 3).
Figure 5 illustrates an embodiment of the optional process of verifying that the group of basic cluster resources is capable of being hosted by each of the participating nodes (optional block 310 of Figure 3).
Figure 6 illustrates an embodiment of the process of deploying the binaries and the data files in the cluster (block 312 of Figure 3).
Figure 7 illustrates an embodiment of the process of deploying the DLL files corresponding to the custom resource type on the participating nodes in the cluster (block 314 of Figure 3).
Figure 8 illustrates an embodiment of the optional process of performing a diagnostics test procedure to verify that the cluster-unaware application has been properly clusterized (optional block 316 of Figure 3).
DESCRIPTION
An embodiment of the present invention is a technique for clusterizing a cluster-unaware application. Based on an analysis of the cluster-unaware application, the binaries of the cluster-unaware application are differentiated from the data files of the cluster-unaware application. Dynamic Link Library (DLL) files corresponding to a custom resource type for the cluster-unaware application are created based on the behavior of the cluster-unaware application. A cluster and participating nodes in the cluster are identified. A cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster is created. The binaries and the data files are deployed in the cluster. The DLL files corresponding to the custom resource type are deployed on the participating nodes in the cluster.
In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in order not to obscure the understanding of this description.
A clusterized application is an application capable of running in a cluster environment. A cluster-unaware application can be clusterized if it has the following characteristics. First, the application uses TCP/IP as its network protocol. Second, the application maintains its data in a configurable location. Third, the application supports transaction processing.
Most database applications, transaction processing applications, file and print server applications, and other groupware applications have the above characteristics and thus can be clusterized.
Figure 1 is a diagram illustrating an exemplary system 100 in which one embodiment of the invention can be practiced. The system 100 includes a server system 104 interfacing with a client 180 and with a processor platform 190. The client 180 communicates with the server system via a communication network. The client can access an application running on the server system using the virtual Internet Protocol (IP) address of the application.
The server system 104 includes a cluster 106. The cluster 106 includes a node 110, a node 140, and a common storage 170. Each of the nodes 110, 140 is a computer system. Node 110 comprises a memory 120, a processor unit 130, and an input/output unit 132. Similarly, node 140 comprises a memory 150, a processor unit 160, and an input/output unit 162. Each processor unit may include several elements such as a data queue, an arithmetic logical unit, a memory read register, a memory write register, etc.
Cluster software such as the Microsoft Cluster Service (MSCS) provides clustering services for a cluster. In order for the cluster 106 to operate as a cluster, identical copies of the cluster software must be running on each of the nodes 110, 140. Copy 122 of the cluster software resides in the memory 120 of node 110. Copy 152 of the cluster software resides in the memory 150 of node 140.
A cluster folder containing cluster-level information is included in the memory of each of the nodes of the cluster. Cluster-level information includes DLL files of the applications that are running in the cluster. Cluster folder 128 is included in the memory 120 of node 110. Cluster folder 158 is included in the memory 150 of node 140.
A group of cluster-aware applications 126 is stored in the memory 120 of node 110. Identical copies 156 of these applications are stored in the memory 150 of node 140. Application X is a cluster-unaware application. The present invention clusterizes application X so that it can be run in a cluster, and stores the clusterized application X in the memory 120 of node 110. An identical copy of the clusterized application X is stored in the memory 150 of node 140.
Computer nodes 110 and 140 access a common storage 170. The common storage 170 contains information that is shared by the nodes in the cluster. This information includes data of the applications running in the cluster. Typically, only one computer node can access the common storage at a time. It is noted that, in other cluster configurations, using different types of cluster software and different types of operating system for the computer nodes in the cluster, a common storage may not be needed. In such a cluster with no common storage, data for the clustered applications is stored with the clustered applications, and is copied and updated on each of the nodes in the cluster.
The processor platform 190 is a computer system that interfaces with the cluster 106. It includes a processor 192, a memory 194, and a mass storage device 196.
The processor 192 represents a central processing unit of any type of architecture, such as embedded processors, mobile processors, microcontrollers, digital signal processors, superscalar computers, vector processors, single instruction multiple data (SIMD) computers, complex instruction set computers (CISC), reduced instruction set computers (RISC), very long instruction word (VLIW) computers, or hybrid architectures. The memory 194 stores system code and data. The memory 194 is typically implemented with dynamic random access memory (DRAM) or static random access memory (SRAM). The system memory may include program code or code segments implementing one embodiment of the invention. The memory 194 includes a clusterizer of the present invention when loaded from mass storage 196. The clusterizer may also simulate the clusterizing functions described herein. The clusterizer contains instructions that, when executed by the processor 192, cause the processor to perform the tasks or operations as described in the following.
The mass storage device 196 stores archive information such as code, programs, files, data, databases, applications, and operating systems. The mass storage device 196 may include a compact disk (CD) ROM, a digital video/versatile disc (DVD), a floppy drive, and a hard drive, and any other magnetic or optical storage devices such as tape drives, tape libraries, redundant arrays of inexpensive disks (RAIDs), etc. The mass storage device 196 provides a mechanism to read machine-accessible media. The machine-accessible media may contain computer readable program code to perform tasks as described in the following.
Figure 2 shows the information that forms the clusterized application X 126 (respectively 156) residing on node 110 (respectively, node 140) in one embodiment of the present invention. Before being clusterized by the method of the present invention, the cluster-unaware application X comprises the binaries and the data files. After clusterization, the binaries of application X are stored in the clusterized application X on each of the participating nodes of the cluster, while the data files of application X are stored in the common storage 170 (Figure 1). It is noted that, in effect, clusterization using the method of the present invention makes no modification to the binaries and the data files per se of the cluster-unaware application X.
Clusterized application X 126 comprises the binaries 202 of the cluster-unaware application X. When the clusterized application X is run on node 110, the clusterized application X 126 also comprises basic cluster resources 204 for the application X, and an instance of the custom resource type represented by DLL files 206. The basic cluster resources 204 and the instance of the custom resource type 206 are logical objects created by the cluster at cluster level. The basic cluster resources 204 include a common storage resource identifying the common storage 170, an application IP address resource identifying the IP address of the clusterized application X, and a network name resource identifying the network name of the clusterized application X. The DLL files 206 for the custom resource type include the cluster resource dynamic link library (DLL) file and the cluster administrator extension DLL file. These DLL files are stored in the cluster folder 128 in node 110 (Figure 1).
Note that when the clusterized application X is run on node 110 (i.e., application X is owned by node 110 at that time), an instance of the custom resource type as defined by these DLL files is created at cluster level and stored in the clusterized application X 126.
Figure 3 is a flowchart illustrating the method of the present invention. In one embodiment of the invention, blocks 302 and 304 of the flowchart are performed manually. The cluster-unaware application X is analyzed and, based on this analysis, the binaries of the cluster-unaware application X are differentiated from its data files (block 302). Binaries are a file or files that contain the executable form of the program code whose execution causes the application to run. Binaries do not include data files required by the application, since data generally needs to be updated and data size is usually very large.
The behavior of the cluster-unaware application X is also analyzed in order to determine a custom resource type for the cluster-unaware application. A custom resource type means that the implemented resource type is different from the standard or out-of-the-box Microsoft cluster resources such as the IP Address resource or the WINS service resource. The behavior analysis includes determination of how the cluster-unaware application behaves when it starts, restarts, or stops, and how its health can be checked on a periodic basis. Based on this behavior analysis, DLL files corresponding to and defining the custom resource type for the cluster-unaware application are created (block 304). These DLL files are used to send command requests to the cluster-unaware application X to control its behavior.
In one embodiment of the invention, these custom resource DLL files are created using the Microsoft Visual C++® development system. Microsoft Corporation has published a number of Technical Articles for Writing Microsoft Cluster Server (MSCS) Resource DLLs. These articles describe in detail how to use the Microsoft Visual C++® development system to develop resource DLLs. Resource DLLs are created by running the "Resource Type AppWizard" of Microsoft Corporation within the developer studio. This builds a skeletal resource DLL and/or Cluster Administrator extension DLL with all the entry points defined, declared, and exported. The skeletal resource DLL provides only the most basic failover and failback capability. Based on the behavior and needs of the application being clusterized, the skeletal resource DLL is customized to produce the cluster resource DLL. The behavior of the cluster resource DLL depends on the needs of the application being clusterized, and may involve some or all of the following entry points: Startup, Open, Online, LooksAlive, IsAlive, Offline, Close, Terminate, ResourceControl, and ResourceTypeControl (see the sketch below).
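To make the entry-point list concrete, the following is a minimal C++ sketch of such a cluster resource DLL, using the entry-point signatures of Microsoft's publicly documented Resource API (resapi.h). It is an illustration under stated assumptions, not the patent's actual code: the function bodies are placeholders, and the application-specific behavior would be filled in from the behavior analysis of block 304.

```cpp
// Minimal sketch of an MSCS cluster resource DLL. Entry-point signatures
// follow the documented Resource API (resapi.h); the bodies are placeholders
// where a real DLL would start, stop, and probe the clusterized application.
#include <windows.h>
#include <clusapi.h>
#include <resapi.h>

// Open: called when the cluster service creates an instance of the resource.
RESID WINAPI Open(LPCWSTR ResourceName, HKEY ResourceKey, RESOURCE_HANDLE ResourceHandle)
{
    // A real implementation allocates per-resource state and reads the
    // resource's private properties here.
    return (RESID)ResourceHandle;
}

// Online: bring the application on-line by starting its components in order.
DWORD WINAPI Online(RESID ResourceId, PHANDLE EventHandle)
{
    return ERROR_SUCCESS; // e.g., start the application's services
}

// Offline: take the application off-line in an orderly way, persisting any
// state needed after a failover to another node.
DWORD WINAPI Offline(RESID ResourceId)
{
    return ERROR_SUCCESS; // e.g., flush state, then stop the services
}

// LooksAlive: cheap health check, polled frequently by the resource monitor.
BOOL WINAPI LooksAlive(RESID ResourceId)
{
    return TRUE; // e.g., verify the application process still exists
}

// IsAlive: thorough health check, polled less frequently.
BOOL WINAPI IsAlive(RESID ResourceId)
{
    return TRUE; // e.g., issue a test request to the application
}

VOID WINAPI Close(RESID ResourceId) { /* release per-resource state */ }
VOID WINAPI Terminate(RESID ResourceId) { /* forcibly stop the application */ }

// The exported Startup routine returns a CLRES_FUNCTION_TABLE pointing at
// the functions above; it is omitted here for brevity.
```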
The cluster resource DLL and the cluster administrator extension DLL allow application X to function properly in a cluster environment. The capabilities of the DLLs are directly related to the capabilities of the cluster-unaware application X. The following are some of the capabilities of these DLL files. These DLL files can monitor the application health as defined by MSCS and report the status to the cluster resource monitor and the cluster administrator. The cluster resource DLL file includes program code that can bring the application X on-line in an orderly way by starting the underlying programs of application X. It also includes program code for taking the application X off-line in an orderly way by performing the required actions to save the application state information before stopping the services of application X. Specifically, the following functionality is implemented in the cluster resource DLL:
• Ability to bring the application resource on-line by starting the application components in the required order.
• Ability to take the application resource off-line by stopping the application components in the required order. For some applications, this might involve sending commands to the application to persist application data and/or other data that is needed when the application is failed over to another node in the cluster.
• Ability to check the health of the application components on a periodic basis and respond to "IsAlive" and "LooksAlive" calls from the cluster resource monitor.
The cluster administrator extension DLL file provides the capability to display, through the cluster administrator, the configuration of the clusterized application X. This enables the user to interpret the configuration details of the application as hosted in the cluster. Note that this information in most cases is read-only, and the user should re-run the clusterization process to make any changes to the application configuration in the cluster. This ensures that the application is properly deployed and configured.
Process 300 identifies a cluster and participating nodes in the cluster (block 306). The cluster and the participating nodes may be identified via user input to a dialog box, or via default settings. Process 300 allows a user to select a subset of the nodes, or all the nodes in the cluster, to participate in the clusterization of the application X. Process 300 creates a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster (block 308). After creating the group of basic cluster resources, process 300 can perform the optional step of verifying that the group of basic cluster resources is capable of being hosted by each of the participating nodes (block 310). Process 300 deploys the binaries and the data files in the cluster (block 312) and deploys the DLL files corresponding to the custom resource type on the participating nodes in the cluster (block 314). After these deployments, process 300 can perform the optional step of performing a diagnostics test procedure to verify that the cluster-unaware application has been properly clusterized (block 316). Process 300 then terminates.
Figure 4 illustrates the process 308 of creating a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster (block 308 of Figure 3). Upon Start, process 308 associates a specified cluster group name with the cluster group (block 402). Process 308 verifies whether a cluster group by the specified name already exists in the cluster. If it does not already exist, process 308 creates the cluster group with the specified name.
Process 308 includes a specified common storage resource in the cluster group, the specified common storage resource identifying a common storage where the data files of the application X reside (block 404). Process 308 verifies whether the specified common storage resource already exists in the specified cluster group. If it does not already exist in this cluster group but exists in another cluster group, process 308 moves the specified common storage resource from this other cluster group to the specified cluster group. If there are any dependent cluster resources in this other cluster group, the move fails automatically and the user is notified of what went wrong. Note that this is the common storage where all data specific to the clusterized application X will be stored for access by all cluster nodes that are participating in the clusterization of application X.
Process 308 includes a specified IP address resource in the cluster group, the specified IP address resource identifying an IP address to be used to support the clusterized application X name (block 406). Process 308 verifies whether the specified IP address already exists in the cluster as an IP address cluster resource. If it does not, process 308 creates the IP address resource in the specified cluster group per user specification. If the IP address already exists as a cluster resource in this cluster, process 308 verifies whether it already exists in the specified cluster group. If not, process 308 moves the IP address resource to the specified cluster group from its current cluster group. Note that this is the IP address that is used to host the virtual application X name by which all program clients of application X access the clusterized application X.
Process 308 includes a specified network name resource in the cluster group, the specified network name resource identifying a network name to be used by client programs to connect to the clusterized application X (block 408). Process 308 verifies whether the specified network name already exists in the cluster as a network name cluster resource. If not, process 308 creates the network name resource in the cluster group per user specification. If the network name exists as a cluster resource in this cluster, process 308 verifies whether it already exists in the specified cluster group. If not, process 308 moves the network name resource to the specified cluster group from its current cluster group. Note that this is the network name by which all program clients of application X will access the clusterized application X. A sketch of these group-creation steps follows.
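Blocks 402 through 408 map naturally onto the documented Win32 Cluster API (clusapi.h). The sketch below shows those calls under stated assumptions: the group name, resource names, and resource types are hypothetical examples, error handling is minimal, and the exists/move checks described above are reduced to comments.

```cpp
// Sketch: create the cluster group and its basic resources with the Win32
// Cluster API (clusapi.h). Names are hypothetical placeholders.
#include <windows.h>
#include <clusapi.h>
#pragma comment(lib, "ClusAPI.lib")

int wmain()
{
    HCLUSTER hCluster = OpenCluster(NULL); // NULL opens the local cluster
    if (hCluster == NULL) return 1;

    // Block 402: create the group (a real tool first checks whether a group
    // by this name already exists in the cluster).
    HGROUP hGroup = CreateClusterGroup(hCluster, L"APPLX Group");

    // Block 404: the common storage resource. A real tool would locate the
    // existing Physical Disk resource and move it into this group instead.
    HRESOURCE hDisk = CreateClusterResource(hGroup, L"APPLX Disk",
                                            L"Physical Disk", 0);

    // Block 406: the IP address resource that hosts the virtual name.
    HRESOURCE hIp = CreateClusterResource(hGroup, L"APPLX IP Address",
                                          L"IP Address", 0);

    // Block 408: the network name clients use to reach the application.
    HRESOURCE hName = CreateClusterResource(hGroup, L"APPLX Name",
                                            L"Network Name", 0);

    // Private properties (address, subnet, name, drive letter) would be set
    // next via ClusterResourceControl with
    // CLUSCTL_RESOURCE_SET_PRIVATE_PROPERTIES.

    CloseClusterResource(hName);
    CloseClusterResource(hIp);
    CloseClusterResource(hDisk);
    CloseClusterGroup(hGroup);
    CloseCluster(hCluster);
    return 0;
}
```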
Figure 5 illustrates an embodiment of the optional process 310 of verifying that the group of basic cluster resources is capable of being hosted by each of the participating nodes (optional block 310 of Figure 3). Process 310 performs confidence tests on the group of basic cluster resources to verify that the target cluster is operational and is currently in a healthy state. Before further clusterization is attempted, it may be desirable to verify that the basic cluster resources in the cluster group are capable of being hosted by each of the participating nodes. If, for example, one of the participating nodes cannot access the common storage specified in the cluster group, this indicates a problem that will prevent successful clusterization. This also indicates, at a higher level, the health of the overall cluster and its nodes.
Upon Start, process 310 brings the group of basic cluster resources on-line on a current node of the participating nodes (block 502). Process 310 fails over the group of basic cluster resources to another node in the group of the participating nodes (block 504). Process 310 then takes the group of basic cluster resources off-line (block 506). Process 310 repeats blocks 502 through 506 for each remaining node of the participating nodes (block 508).
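A minimal sketch of this confidence test follows, assuming the documented clusapi group-control calls; the node list is a hypothetical input, and production code would check every return value and report failures to the user.

```cpp
// Sketch of the block 502-508 confidence test: bring the group online on
// each participating node, fail it over, then take it offline again.
#include <windows.h>
#include <clusapi.h>
#pragma comment(lib, "ClusAPI.lib")

void VerifyGroupHosting(HCLUSTER hCluster, HGROUP hGroup,
                        const wchar_t* const* nodes, int nodeCount)
{
    for (int i = 0; i < nodeCount; ++i)
    {
        HNODE hNode = OpenClusterNode(hCluster, nodes[i]);

        // Block 502: bring the basic resources on-line on the current node.
        OnlineClusterGroup(hGroup, hNode);

        // Block 504: fail the group over to the next participating node.
        HNODE hNext = OpenClusterNode(hCluster, nodes[(i + 1) % nodeCount]);
        MoveClusterGroup(hGroup, hNext);

        // Block 506: take the group off-line before the next iteration.
        OfflineClusterGroup(hGroup);

        CloseClusterNode(hNext);
        CloseClusterNode(hNode);
    }
}
```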
Deployment of the binaries and data files on each participating node is a complex and highly error-prone process if performed manually. Any error will adversely affect the clusterization. The method of the present invention relies on the Ease of Deployment (EOD) methodology for the automated deployment of the application X in the cluster. EOD is an automated mechanism that can execute an install program on a target machine, using the silent install option provided by most Windows-based install programs. In the embodiment illustrated in Figure 6, there are three install components that need to be executed on each participating node.
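As an illustration of the silent-install mechanism EOD relies on, the sketch below launches one install component with no user interface and waits for its exit code. The use of msiexec, the package path, and the INSTALLDIR property are hypothetical examples, not part of the patent; each real install component would supply its own silent-install command line.

```cpp
// Sketch: run one install component silently and return its exit code.
#include <windows.h>

DWORD RunSilentInstall(wchar_t* commandLine) // CreateProcessW needs a writable buffer
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    if (!CreateProcessW(NULL, commandLine, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return GetLastError();
    WaitForSingleObject(pi.hProcess, INFINITE);
    DWORD exitCode = 0;
    GetExitCodeProcess(pi.hProcess, &exitCode);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return exitCode; // 0 conventionally means the silent install succeeded
}

int wmain()
{
    // Hypothetical example: install the application binaries with no UI (/qn),
    // at the same local path on every participating node.
    wchar_t cmd[] = L"msiexec /i \"C:\\Deploy\\APPLX.msi\" /qn "
                    L"INSTALLDIR=\"C:\\Program Files\\APPLX\\APPLX9.1\"";
    return (int)RunSilentInstall(cmd);
}
```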
Figure 6 illustrates an embodiment of the process 312 of deploying the binaries and the data files in the cluster (block 312 of Figure 3).
Upon Start, process 312 verifies that a participating node meets the software requirements of the cluster-unaware application by executing a first install program on the participating node (block 602). This block 602 is optional: process 312 executes it only if the cluster-unaware application to be clusterized has software requirements.
Process 312 installs the binaries and the data files by executing a second install program on the participating node (block 604). Specifically, process 312 installs the binaries on the participating node and installs the data files on the common storage identified by the specified common storage resource in the cluster group (common storage 170 of Figure 1). The binaries are installed locally on each participating node, but at the same location across all the participating nodes (for example, at location C:\Program Files\APPLX\APPLX9.1 on each node). Installing the data on the common storage allows the data to be accessible to all the nodes participating in the clusterization of application X.
Process 312 installs at least one configuration name for the cluster-unaware application and the network name for the cluster-unaware application by executing a third install program (block 606). The configuration name determines the service name. There may be more than one configuration name, each corresponding to a service. The network name, previously specified by the network name resource in the cluster group, is the name to be used by the client programs to connect to the clusterized application X. Process 312 then terminates.
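For illustration, if the third install program registers each configuration as a Windows service (an assumption; the mechanism is not specified here), the mapping from configuration name to service name might be performed with the Service Control Manager as sketched below. The service binary path and the demand-start setting are hypothetical.

    /* Illustrative sketch, assuming each configuration name becomes the
     * name of a Windows service for application X. */
    #include <windows.h>

    static BOOL RegisterConfigurationService(LPCWSTR configName)
    {
        SC_HANDLE hScm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CREATE_SERVICE);
        if (hScm == NULL) return FALSE;

        /* The configuration name determines the service name. */
        SC_HANDLE hSvc = CreateServiceW(
            hScm, configName, configName,
            SERVICE_ALL_ACCESS,
            SERVICE_WIN32_OWN_PROCESS,
            SERVICE_DEMAND_START,      /* started and stopped by the cluster */
            SERVICE_ERROR_NORMAL,
            L"\"C:\\Program Files\\APPLX\\APPLX9.1\\applx.exe\"",
            NULL, NULL, NULL, NULL, NULL);

        if (hSvc != NULL) CloseServiceHandle(hSvc);
        CloseServiceHandle(hScm);
        return hSvc != NULL;
    }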
Figure 7 illustrates an embodiment of the process 314 of deploying the DLL files corresponding to the custom resource type on the participating nodes in the cluster (block 314 of Figure 3). The DLL files are the cluster resource DLL and the cluster administrator extension DLL. Upon Start, process 314 installs the cluster resource DLL and the cluster administrator extension DLL on each of the participating nodes (block 702). Installing these DLL files includes storing them in the cluster folder of each of the participating nodes. Process 314 registers the custom resource type, defined by the cluster resource DLL and the cluster administrator extension DLL, with the cluster so that the cluster is aware of this custom resource type (block 704). Process 314 then terminates.

Figure 8 illustrates an embodiment of the optional process 316 of performing a diagnostics test procedure to verify that the cluster-unaware application has been properly clusterized (optional block 316 of Figure 3). Process 316 performs diagnostics tests on the cluster group to verify that the clusterized application X can operate in the target cluster. In another embodiment, the diagnostics tests also include an application-specific test to verify that the client programs of the clusterized application X can connect to the clusterized application X. The application-specific test acts as an application client and verifies the client connectivity. Process 316 performs a number of tests, including: cluster group fail-over to all possible nodes, checking of on-line/off-line functionality, and testing of the ability to shut down and start each of the nodes. The automated testing of integrity ensures that each node and each functional element is systematically and completely tested.

Upon Start, process 316 brings the cluster group, which now includes the group of basic cluster resources and the custom resource type, on-line on a current node of the participating nodes (block 802). Process 316 fails over the cluster group to another node of the participating nodes (block 804). Process 316 then takes the cluster group off-line (block 806). Process 316 repeats blocks 802 through 806 for each remaining node of the participating nodes (block 808). Process 316 then shuts down the current node (block 810) and verifies that the cluster group fails over properly to another node of the participating nodes (block 812). Process 316 repeats blocks 810 and 812 for each remaining node of the participating nodes (block 814). At the end of the test, the cluster group comes back on-line on the node where it resided at the start of the test. If any part of this testing process fails, the clusterization process is considered unsuccessful.
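Returning to process 314, blocks 702 and 704 can be sketched with the Cluster API as follows; this is illustrative only. The DLL file names, the source share, the resource type name, and the poll intervals are hypothetical, the destination path assumes the default cluster folder, and the administrator extension DLL registration (ordinarily done through its DllRegisterServer entry point, e.g., via regsvr32) is indicated only in a comment.

    /* Illustrative sketch of deploying the custom resource type DLLs on
     * one participating node and registering the type with the cluster. */
    #include <windows.h>
    #include <clusapi.h>

    static DWORD DeployCustomResourceType(void)
    {
        /* Block 702: store the DLLs in the node's cluster folder. */
        CopyFileW(L"\\\\deploy\\APPLX\\ApplXRes.dll",
                  L"C:\\Windows\\Cluster\\ApplXRes.dll", FALSE);
        CopyFileW(L"\\\\deploy\\APPLX\\ApplXEx.dll",
                  L"C:\\Windows\\Cluster\\ApplXEx.dll", FALSE);

        /* Block 704: register the custom resource type with the cluster so
         * that the cluster is aware of it. The last two arguments are the
         * LooksAlive and IsAlive poll intervals in milliseconds. */
        HCLUSTER hCluster = OpenCluster(NULL);
        if (hCluster == NULL) return GetLastError();

        DWORD status = CreateClusterResourceType(
            hCluster,
            L"APPLX Resource",    /* resource type name   */
            L"APPLX Resource",    /* display name         */
            L"ApplXRes.dll",      /* cluster resource DLL */
            5000,                 /* LooksAlive interval  */
            60000);               /* IsAlive interval     */

        CloseCluster(hCluster);

        /* Register the cluster administrator extension DLL here, e.g.:
         *   regsvr32 /s C:\Windows\Cluster\ApplXEx.dll */
        return status;
    }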
Elements of one embodiment of the invention may be implemented by hardware, firmware, software, or any combination thereof. The term hardware generally refers to an element having a physical structure such as electronic, electromagnetic, optical, electro-optical, mechanical, electro-mechanical parts, etc. The term software generally refers to a logical structure, a method, a procedure, a program, a routine, a process, an algorithm, a formula, a function, an expression, etc. The term firmware generally refers to a logical structure, a method, a procedure, a program, a routine, a process, an algorithm, a formula, a function, an expression, etc., that is implemented or embodied in a hardware structure (e.g., flash memory, ROM, EROM). Examples of firmware include microcode, a writable control store, and a micro-programmed structure. When implemented in software or firmware, the elements of an embodiment of the present invention are essentially the code segments to perform the necessary tasks. The software/firmware may include the actual code to carry out the operations described in one embodiment of the invention, or code that emulates or simulates the operations. The program or code segments can be stored in a processor or machine accessible medium, or transmitted by a computer data signal embodied in a carrier wave, or a signal modulated by a carrier, over a transmission medium. The "processor readable or accessible medium" or "machine readable or accessible medium" may include any medium that can store, transmit, or transfer information. Examples of the processor readable or machine accessible medium include an electronic circuit, a semiconductor memory device, a read only memory (ROM), a flash memory, an erasable ROM (EROM), a floppy diskette, a compact disk (CD) ROM, an optical disk, a hard disk, a fiber optic medium, a radio frequency (RF) link, etc. The computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic media, RF links, etc. The code segments may be downloaded via computer networks such as the Internet, an intranet, etc. The machine accessible medium may be embodied in an article of manufacture. The machine accessible medium may include data that, when accessed by a machine, cause the machine to perform the operations described herein. The machine accessible medium may also include program code embedded therein. The program code may include machine-readable code to perform the operations described herein. The term "data" here refers to any type of information that is encoded for machine-readable purposes. Therefore, it may include a program, code, data, a file, etc.
All or part of an embodiment of the invention may be implemented by hardware, software, or firmware, or any combination thereof. The hardware, software, or firmware element may have several modules coupled to one another. A hardware module is coupled to another module by mechanical, electrical, optical, electromagnetic, or other physical connections. A software module is coupled to another module by a function, procedure, method, subprogram, or subroutine call, a jump, a link, parameter, variable, and argument passing, a function return, etc. A software module is coupled to another module to receive variables, parameters, arguments, pointers, etc., and/or to generate or pass results, updated variables, pointers, etc. A firmware module is coupled to another module by any combination of the hardware and software coupling methods above. A hardware, software, or firmware module may be coupled to any one of another hardware, software, or firmware module. A module may also be a software driver or interface to interact with the operating system running on the platform. A module may also be a hardware driver to configure, set up, initialize, and send and receive data to and from a hardware device. An apparatus may include any combination of hardware, software, and firmware modules.
One embodiment of the invention may be described as a process which is usually depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a program, a procedure, etc.
While the invention has been described in terms of several embodiments, those of ordinary skill in the art will recognize that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims.
The description is thus to be regarded as illustrative instead of limiting.


CLAIMS

What is claimed is:

1. A method for clusterizing a cluster-unaware application, the method comprising the operations of: (a) differentiating binaries of the cluster-unaware application from data files of the cluster-unaware application based on an analysis of the cluster-unaware application; (b) creating DLL files corresponding to a custom resource type for the cluster-unaware application based on the behavior of the cluster-unaware application; (c) identifying a cluster and participating nodes in the cluster; (d) creating a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster; (e) deploying automatically the binaries and the data files in the cluster; and (f) deploying automatically the DLL files corresponding to the custom resource type on the participating nodes in the cluster.
2. The method of claim 1 further comprising: verifying that the group of basic cluster resources is capable of being hosted by each of the participating nodes.
3. The method of claim 2 wherein verifying that the group of basic cluster resources is capable of being hosted by each of the participating nodes comprises: (1) bringing the group of basic cluster resources on-line on a current node of the participating nodes; (2) failing over the group of basic cluster resources to another node of the participating nodes; (3) taking the group of basic cluster resources off-line; and (4) repeating operations (1) through (3) for each remaining node of the participating nodes.
4. The method of claim 1 further comprising: performing a diagnostics test procedure to verify that the cluster-unaware application has been properly clusterized.
5. The method of claim 4 wherein performing the diagnostics test procedure comprises: (1) bringing the group of basic cluster resources and the custom resource type on-line on a current node of the participating nodes; (2) failing over the group of basic cluster resources and the custom resource type to another node of the participating nodes; (3) taking the group of basic cluster resources and the custom resource type off-line; and (4) repeating operations (1) through (3) for each remaining node of the participating nodes.
6. The method of claim 5 wherein performing the diagnostics test procedure further comprises: (5) shutting down the current node; (6) verifying that the group of basic cluster resources and the custom resource type fail over properly to another node of the participating nodes; and (7) repeating operations (5) and (6) for each remaining node of the participating nodes.
7. The method of claim 1 wherein creating a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster comprises: associating a specified cluster group name with the cluster group; including a specified common storage resource in the cluster group, the specified common storage resource identifying a common storage; including a specified IP address resource in the cluster group, the specified IP address resource identifying an IP address; and including a specified network name resource in the cluster group, the specified network name resource identifying a network name.
8. The method of claim 7 wherein deploying automatically the binaries and the data files in the cluster comprises: executing a first install program on the participating node to install the binaries and the data files; and executing a second install program to install at least one configuration name for the cluster-unaware application and the network name for the cluster-unaware application.
9. The method of claim 8 further comprising: executing a third install program on a participating node to verify that the participating node meets software requirements of the cluster-unaware application.
10. The method of claim 8 wherein executing a first install program on the participating node to install the binaries and the data files comprises: installing the binaries on the participating node; and installing the data files on a common storage corresponding to the specified common storage resource.
11. The method of claim 1 wherein creating DLL files corresponding to a custom resource type comprises: creating a cluster resource DLL and a cluster administrator extension DLL.
12. The method of claim 11 wherein deploying automatically the DLL files corresponding to the custom resource type on the participating nodes in the cluster comprises: installing the cluster resource DLL and the cluster administrator extension DLL on each of the participating nodes; and registering the cluster resource DLL and the cluster administrator extension DLL with the cluster.
13. An article of manufacture comprising: a machine-accessible medium including data that, when accessed by a machine, causes the machine to perform operations comprising: receiving binaries and data files of a cluster-unaware application, the binaries having been differentiated from the data files; receiving DLL files corresponding to a custom resource type for the cluster-unaware application, the DLL files having been created based on the behavior of the cluster-unaware application; identifying a cluster and participating nodes in the cluster; creating a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster; deploying the binaries and the data files in the cluster; and deploying the DLL files corresponding to the custom resource type on the participating nodes in the cluster.
14. The article of manufacture of claim 13 wherein the data further comprise data that, when accessed by the machine, cause the machine to perform operations comprising: verifying that the group of basic cluster resources is capable of being hosted by each of the participating nodes.
15. The article of manufacture of claim 14 wherein the data causing the machine to perform the operation of verifying that the group of basic cluster resources is capable of being hosted by each of the participating nodes comprise data that, when accessed by the machine, cause the machine to perform operations comprising: (1) bringing the group of basic cluster resources on-line on a current node of the participating nodes; (2) failing over the group of basic cluster resources to another node of the participating nodes; (3) taking the group of basic cluster resources off-line; and (4) repeating operations (1) through (3) for each remaining node of the participating nodes.
16. The article of manufacture of claim 13 wherein the data further comprise data that, when accessed by the machine, cause the machine to perform operations comprising: performing a diagnostics test procedure to verify that the cluster-unaware application has been properly clusterized.
17. The article of manufacture of claim 16 wherein the data causing the machine to perform the operation of performing the diagnostics test procedure comprise data that, when accessed by the machine, cause the machine to perform operations comprising: (1) bringing the group of basic cluster resources and the custom resource type on-line on a current node of the participating nodes; (2) failing over the group of basic cluster resources and the custom resource type to another node of the participating nodes; (3) taking the group of basic cluster resources and the custom resource type off-line; and (4) repeating operations (1) through (3) for each remaining node of the participating nodes.
18. The article of manufacture of claim 17 wherein the data causing the machine to perform the operation of performing the diagnostics test procedure further comprise data that, when accessed by the machine, cause the machine to perform operations comprising: (5) shutting down the current node; (6) verifying that the group of basic cluster resources and the custom resource type fail over properly to another node of the participating nodes; and (7) repeating operations (5) and (6) for each remaining node of the participating nodes.
19. The article of manufacture of claim 13 wherein the data causing the machine to perform the operation of creating a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster comprise data that, when accessed by the machine, cause the machine to perform operations comprising: associating a specified cluster group name with the cluster group; including a specified common storage resource in the cluster group, the specified common storage resource identifying a common storage; including a specified IP address resource in the cluster group, the specified IP address resource identifying an IP address; and including a specified network name resource in the cluster group, the specified network name resource identifying a network name.
20. The article of manufacture of claim 19 wherein the data causing the machine to perform the operation of deploying the binaries and the data files in the cluster comprise data that, when accessed by the machine, cause the machine to perform operations comprising: executing a first install program on the participating node to install the binaries and the data files; and executing a second install program to install at least one configuration name for the cluster-unaware application and the network name for the cluster-unaware application.
21. The article of manufacture of claim 20 wherein the data causing the machine to perform the operation of deploying the binaries and the data files in the cluster further comprise data that, when accessed by the machine, cause the machine to perform operations comprising: executing a third install program on a participating node to verify that the participating node meets software requirements of the cluster-unaware application.
22. The article of manufacture of claim 20 wherein the data causing the machine to perform the operation of executing a first install program on the participating node to install the binaries and the data files comprise data that, when accessed by the machine, cause the machine to perform operations comprising: installing the binaries on the participating node; and installing the data files on a common storage corresponding to the specified common storage resource.
23. The article of manufacture of claim 13 wherein the data causing the machine to perform the operation of receiving DLL files corresponding to a custom resource type comprise data that, when accessed by the machine, cause the machine to perform operations comprising: receiving a cluster resource DLL and a cluster administrator extension DLL.
24. The article of manufacture of claim 23 wherein the data causing the machine to perform the operation of deploying the DLL files corresponding to the custom resource type on the participating nodes in the cluster comprise data that, when accessed by the machine, cause the machine to perform operations comprising: installing the cluster resource DLL and the cluster administrator extension DLL on each of the participating nodes; and registering the cluster resource DLL and the cluster administrator extension DLL with the cluster.
25. A system comprising: a processor; and a memory coupled to the processor, the memory containing instructions that, when executed by the processor, cause the processor to: receive binaries and data files of a cluster-unaware application, the binaries having been differentiated from the data files; receive DLL files corresponding to a custom resource type for the cluster-unaware application, the DLL files having been created based on the behavior of the cluster-unaware application; identify a cluster and participating nodes in the cluster; create a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster; deploy the binaries and the data files in the cluster; and deploy the DLL files corresponding to the custom resource type on the participating nodes in the cluster.
26. The system of claim 25 wherein the instructions further comprise instructions that, when executed by the processor, cause the processor to: verify that the group of basic cluster resources is capable of being hosted by each of the participating nodes.
27. The system of claim 26 wherein the instructions causing the processor to verify that the group of basic cluster resources is capable of being hosted by each of the participating nodes comprise instructions that, when executed by the processor, cause the processor to: (1) bring the group of basic cluster resources on-line on a current node of the participating nodes; (2) fail over the group of basic cluster resources to another node of the participating nodes; (3) take the group of basic cluster resources off-line; and (4) repeat operations (1) through (3) for each remaining node of the participating nodes.
28. The system of claim 25 wherein the instructions further comprise instructions that, when executed by the processor, cause the processor to: perform a diagnostics test procedure to verify that the cluster-unaware application has been properly clusterized.
29. The system of claim 28 wherein the instructions causing the processor to perform the diagnostics test procedure comprise instructions that, when executed by the processor, cause the processor to: (1) bring the group of basic cluster resources and the custom resource type on-line on a current node of the participating nodes; (2) fail over the group of basic cluster resources and the custom resource type to another node of the participating nodes; (3) take the group of basic cluster resources and the custom resource type off-line; and (4) repeat operations (1) through (3) for each remaining node of the participating nodes.
30. The system of claim 29 wherein the instructions causing the processor to perform the diagnostics test procedure further comprise instructions that, when executed by the processor, cause the processor to: (5) shut down the current node; (6) verify that the group of basic cluster resources and the custom resource type fail over properly to another node of the participating nodes; and (7) repeat operations (5) and (6) for each remaining node of the participating nodes.
31. The system of claim 25 wherein the instructions causing the processor to create a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster comprise instructions that, when executed by the processor, cause the processor to: associate a specified cluster group name with the cluster group; include a specified common storage resource in the cluster group, the specified common storage resource identifying a common storage; include a specified IP address resource in the cluster group, the specified IP address resource identifying an IP address; and include a specified network name resource in the cluster group, the specified network name resource identifying a network name.
32. The system of claim 31 wherein the instructions causing the processor to deploy the binaries and the data files in the cluster comprise instructions that, when executed by the processor, cause the processor to: execute a first install program on the participating node to install the binaries and the data files; and execute a second install program to install at least one configuration name for the cluster-unaware application and the network name for the cluster-unaware application.
33. The system of claim 32 wherein the instructions causing the processor to deploy the binaries and the data files in the cluster further comprise instructions that, when executed by the processor, cause the processor to: execute a third install program on a participating node to verify that the participating node meets software requirements of the cluster-unaware application.
34. The system of claim 32 wherein the instructions causing the processor to execute a first install program on the participating node to install the binaries and the data files comprise instructions that, when executed by the processor, cause the processor to: install the binaries on the participating node; and install the data files on a common storage corresponding to the specified common storage resource.
35. The system of claim 25 wherein the instructions causing the processor to receive DLL files corresponding to a custom resource type comprise instructions that, when executed by the processor, cause the processor to: receive a cluster resource DLL and a cluster administrator extension DLL.
36. The system of claim 35 wherein the instructions causing the processor to deploy the DLL files corresponding to the custom resource type on the participating nodes in the cluster comprise instructions that, when executed by the processor, cause the processor to: install the cluster resource DLL and the cluster administrator extension DLL on each of the participating nodes; and register the cluster resource DLL and the cluster administrator extension DLL with the cluster.