US20090282151A1 - Semi-hierarchical system and method for administration of clusters of computer resources - Google Patents

Semi-hierarchical system and method for administration of clusters of computer resources Download PDF

Info

Publication number
US20090282151A1
US20090282151A1 (application US12/509,368)
Authority
US
United States
Prior art keywords
controller
resources
controlled
proxy
tier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/509,368
Inventor
Myung Mun Bae
Jose E. Moreira
Ramendra Kumar Sahoo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US12/509,368 priority Critical patent/US20090282151A1/en
Publication of US20090282151A1 publication Critical patent/US20090282151A1/en
Abandoned legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 - Partitioning or combining of resources

Abstract

A method for managing clustered computer resources, and particularly very large scale clusters of computer resources, by a semi-hierarchical n level, n+1 tier approach. Controller resources and controlled resources exist at different hardware levels. The top level consists of controller nodes and a first tier is defined for at least part of the top level. At a last level, at which controlled nodes are found, a last tier is defined. Additional levels of controlled and controller resources may exist between the top and last levels. At least one logical intermediate tier is introduced between adjacent levels and comprises at least one proxy or set of proxy processes.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This is a Divisional Application of U.S. Ser. No. 10/307,103 filed Nov. 27, 2002.
  • FIELD OF THE INVENTION
  • The invention relates to the administration of a plurality of computer resources and more particularly to resource management and monitoring by distributed proxies.
  • BACKGROUND OF THE INVENTION
  • Very large scale computer resource cluster systems, particularly large scale cellular machines, introduce significant system management challenges. The ability to track and analyze every possible fault condition, whether transient (soft) or permanent (hard), in large cellular machines is a major issue from the points of view of systems software, hardware, and architecture. The difficulty is primarily due to the fact that the number of entities to be monitored is so large that interaction between the management system and the managed entities is overwhelmingly complex and expensive.
  • There are a number of available system management tools for clusters of computer resources. However, the existing technologies typically target small to medium size clusters. Typically a cluster resource management system consists of one or a plurality of centralized control workstations (CWSs) with all of the nodes reporting to the CWS being termed Client nodes (C-nodes). Small and medium size cluster management approaches cannot be directly applied to a system which is at least two orders of magnitude larger than the existing systems for the following reasons:
  • 1. There is no clear road map or scalability feature addressed in the current systems to scale up to a very large cluster (e.g. 65536 nodes).
  • 2. Most available tools are based on the popular operating systems (e.g., Linux, AIX, or Solaris) and applying them to specialized operating systems is an overwhelming task.
  • 3. Many existing tools rely on a centralized control point, called a centralized control workstation (CWS), which both limits the size of the cluster and becomes a single point of failure for the cluster operation.
  • FIGS. 1 and 2 depict representative prior art hierarchical approaches to cluster management. A three-level cascading model is shown in FIG. 1 with two different levels of CWSs, specifically server node 101 over midlevel server nodes 110, 120 and 130, wherein midlevel server 110 manages client nodes 115, 117, and 119, midlevel server 120 manages client nodes 125, 127, and 129, and midlevel server 130 manages client nodes 135, 137, and 139. Alternatively, a very powerful centralized CWS can be provided to handle several thousands of C-nodes simultaneously. As illustrated in FIG. 2, centralized management server 201 directly manages the client nodes 210, 220, 230, 240, 250, and 260 in a standard two-level hierarchical system.
  • However, each of the foregoing approaches not only introduces more complexity and more resources, but also reduces the reliability and performance of the system significantly because of the load on the central server and the presence of many single points of failure.
  • Therefore, it is apparent that the current technologies may not be directly applied to very large clusters since they cannot be easily scaled up to manage large numbers of computers (e.g., 65536 nodes). Even with multiple CWSs, it would be necessary to introduce another level of management, which again introduces more complexity and at least one other single point of failure at the top management level.
  • It is therefore an objective of the present invention to provide a management system and method for clustered computer resources which is scalable to manage very large clusters.
  • It is another objective of the present invention to provide a management system and method for clustered computer resources which is flexible to react to fail-over conditions.
  • SUMMARY OF THE INVENTION
  • The foregoing and other objectives are realized by the present invention which proposes a new system and method for managing clustered computer resources, and more particularly very large scale clusters of computer resources. The system and method provide a semi-hierarchical n level, n+1 tier approach. The top level consists of only controller nodes. A first tier is defined at the top level. At a bottom or last level, at which the cluster of controlled nodes is found, a last tier is defined. Additional levels of controller or controlled nodes may exist between the top and bottom levels. At least one intermediate tier is introduced between two of the levels and comprises at least one proxy or a plurality of proxies. A proxy is a process or set of processes representing processes of the clustered computer resources. Proxies can run either on controller nodes or on the controlled nodes or controlled node clusters and represent interfaces between controller and controlled nodes, facilitating the administration of a plurality of controlled nodes.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention will now be described in greater detail with specific reference to the attached figures wherein:
  • FIG. 1 illustrates a prior art multilevel hierarchical approach to the management of computer resources;
  • FIG. 2 illustrates a prior art two-level hierarchical approach to the management of computer resources;
  • FIGS. 3A and 3B provide illustration of a two level system in which the present invention is implemented;
  • FIG. 4 provides a logical diagram of a two-level, three-tier semi-hierarchical management system in accordance with the present invention; and
  • FIG. 5 provides a logical diagram of an n level, n+1 tier semi-hierarchical management system in accordance with the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present system and method provide a semi-hierarchical n level, n+1 tier approach to the management of clusters of computer resources. The first or top level consists of the controller nodes, which are similar to CWSs in terms of functionality and purpose. A first tier is defined at the top level. At a last level, at which the cluster of controlled nodes is found, a last tier is defined. Additional levels of controller or controlled nodes may exist between the top and bottom levels. At least one intermediate tier is introduced between two of the levels and comprises at least one proxy or a plurality of proxies.
  • For purposes of the ensuing description, the following terms are defined:
  • “Controller” nodes or controller resources represent those entities which have control or management functionality within a cluster system. The term “controller” encompasses the more restrictive terms “service node” and “server”.
  • “Controlled” nodes or controlled resources represent those entities which are required to be managed within a cluster system. The term “controlled” encompasses the more restrictive terms of “compute node” and “client”.
  • A “proxy” is a process or a set of processes, running either on one or more controlled nodes or controller nodes (e.g., CWSs), which represents the processes of clustered controlled nodes/computer resources collectively and which acts as a conduit between the controlled processes at the last or bottom level and controller computer resources at a first or top level. The flexibility of a proxy running either on controller or controlled nodes makes a semi-hierarchical system customizable and provides greater management availability with improved fail-over recovery.
  • A “proxy resource manager” (PxRM) is a management entity which can run on one or more controller nodes. True resource information can be gathered and propagated through the proxy-running nodes, with filtering provided by the PxRMs so that only critical information is conveyed to the controller nodes which provide services. Proxy resource managers can, if necessary, be transparently moved from one node to another node without reporting false information about the resources.
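
For illustration only, the following minimal Python sketch (all class and field names, such as ProxyResourceManager and Event, are hypothetical and not taken from the disclosure) shows the kind of severity-based filtering a proxy resource manager could apply so that only critical information reaches the serving controller nodes:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    node: str       # controlled node that raised the event
    severity: int   # 0 = informational ... 3 = critical / permanent fault
    message: str

class ProxyResourceManager:
    """Hypothetical PxRM: gathers events from proxy-running nodes and
    forwards only critical information to the serving controller node."""

    def __init__(self, forward: Callable[[Event], None], min_severity: int = 2):
        self.forward = forward            # callback reaching a controller node
        self.min_severity = min_severity  # policy: what counts as "critical"
        self.local_log: List[Event] = []  # filtered events kept at the proxy

    def report(self, event: Event) -> None:
        if event.severity >= self.min_severity:
            self.forward(event)           # only critical events reach the controller
        else:
            self.local_log.append(event)  # stored locally for later analysis

if __name__ == "__main__":
    delivered: List[Event] = []
    pxrm = ProxyResourceManager(forward=delivered.append)
    pxrm.report(Event("compute-0017", 1, "transient ECC error"))
    pxrm.report(Event("compute-0017", 3, "node unreachable"))
    print(f"{len(delivered)} event(s) forwarded to the controller")  # -> 1
```
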
  • The term “level” is used to classify various components which are arranged to differentiate between entities and to manage entities within a very large cluster using the inventive semi-hierarchical approach. The term “level” accommodates more or less similar types of computers or processing units which have similar functionality. For example, compute nodes which just concentrate on intensive computations, irrespective of the type of computing, would fall into the same level of controlled nodes or resources. Service nodes whose main purpose is to provide support services can be branded under a totally separate level of controller nodes or resources. Based on the functionality, if the computers are of the same or similar types, with some of them having additional features, they would still belong to the same level, but would be classified into different tiers as further discussed below. Another way of differentiating computer resources into different levels is from a networking and/or a communication point of view. A list of computers would fall into the same level provided that they have the same type of networking features and provided that there is no difference in communication bandwidth between two different computers of the same level. For example, in the case of BG/L service nodes, there are two different types of service nodes based on the administrative role and the control role. However, because all of the service nodes have the same type of networking or communication features, all of the service nodes would fall under the same “level” type. Further, the I/O nodes and the compute nodes, in the case of BG/L, would also come under a common “level” category.
  • A “tier” is a logical grouping of nodes within or across levels. Computers in the same tier must be of the same type; and, software and hardware contents of one node in a tier should have the same functionality as contents of another node in the same tier. From a system management perspective, computer nodes in the same tier should look the same and perform the same work in case there is a fail-over.
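
As a rough sketch of the level and tier vocabulary (hypothetical Python data model; the role and network labels are invented for the example and are not the patent's terminology), nodes can carry both a functional/networking classification that determines their level and an orthogonal tier label:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str
    role: str      # e.g. "compute", "io", "service-admin", "service-control"
    network: str   # networking / communication class of the node
    tier: int      # logical grouping, within or across levels

def level_of(node: Node) -> str:
    # Nodes with the same broad functionality and networking characteristics
    # fall into the same level (cf. the BG/L example above): both kinds of
    # service node are "controller"; I/O and compute nodes are "controlled".
    return "controller" if node.role.startswith("service") else "controlled"

nodes = [
    Node("svc-admin-1", "service-admin",   "service-net", tier=1),
    Node("svc-ctrl-1",  "service-control", "service-net", tier=2),
    Node("io-1",        "io",              "tree-net",    tier=2),
    Node("cn-0001",     "compute",         "tree-net",    tier=3),
    Node("cn-0002",     "compute",         "tree-net",    tier=3),
]

by_level = defaultdict(list)
for n in nodes:
    by_level[level_of(n)].append(n.name)
print(dict(by_level))   # two levels, even though three tiers are defined
```
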
  • The preferred way to address a very large number of resource components is to redefine the component level nodes, as detailed above, so that each of the resource management subsystems can have a number of hardware and/or software entities, treated as either attached devices (in the case of hardware) or as proxies (for software or system based tools). The foregoing enables any “bring up” or “bring down” of the compute nodes to be recorded through device-level support, as the bring-up or bring-down of an attached device. Hence there is no need for a different hardware-based “heartbeat” monitoring mechanism to monitor each and every compute node.
  • A set of controlled nodes in a cellular cluster system is managed through a proxy process that runs either on another node belonging to the same level (e.g., an I/O node or another node of the same type), on a controller node which is on a different level, or on both. The controlled nodes are considered as controllable external entities (e.g., devices) attached to the proxy process running either on another same level node or a different level node.
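
A minimal sketch of this attached-device view, assuming Python and invented names (ControlledDeviceProxy, bring_up, bring_down), in which controlled nodes are registered with a proxy process rather than monitored individually:

```python
import time

class ControlledDeviceProxy:
    """Hypothetical proxy process that treats a set of controlled nodes as
    attached devices: bring-up and bring-down are recorded as attach/detach
    instead of being discovered through per-node heartbeat monitoring."""

    def __init__(self, proxy_host: str):
        self.proxy_host = proxy_host
        self.attached = {}            # controlled node name -> attach timestamp

    def bring_up(self, node: str) -> None:
        self.attached[node] = time.time()   # recorded via device-level support

    def bring_down(self, node: str) -> None:
        self.attached.pop(node, None)

    def status(self) -> list:
        # A controller queries the proxy, not every controlled node individually.
        return sorted(self.attached)

proxy = ControlledDeviceProxy(proxy_host="io-node-07")
for name in ("cn-0001", "cn-0002", "cn-0003"):
    proxy.bring_up(name)
proxy.bring_down("cn-0002")
print(proxy.status())   # ['cn-0001', 'cn-0003']
```
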
  • The proxy present either for the controlled node level or the controller node level basically handles, and effectively cuts down, the midlevel server node functionalities which were illustrated at 110, 120 and 130 of FIG. 1. Depending upon the requirements of the cluster, the proxies can be customized to be present in either or both levels. Important results of using proxies include the fact that, to the outside world, a very large cluster is represented by means of the proxies for the controlled nodes. Thus, there is no requirement for individual controlled node control, hence a 100,000 node cluster can be easily viewed as a 1000 node proxy cluster. Further, failovers can be easily addressed through the proxies. The controller nodes will be coordinated, so that controlled nodes can be failed over to another controller node in case of a failure to provide the stipulated services to the corresponding set of controlled nodes.
  • In this way, a set of controlled nodes are simply represented through a process and, thus, the number of nodes to manage is significantly reduced. The inventive system and method, therefore, does not require one or multiple CWSs. The management is non-intrusive to the applications running on the controlled nodes; and, from a management perspective, only the controller nodes and the nodes running proxies are visible, resulting in a system which can be effectively seen as much smaller than it is. The controller nodes can be self-managed, along with the controlled node automatic failover feature, without a centralized control workstation.
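
The coordinated fail-over of controlled nodes to a surviving controller, described above, might be sketched as follows (hypothetical Python; ControllerGroup and the naive replacement choice are illustrative assumptions, not the patented mechanism):

```python
class ControllerGroup:
    """Hypothetical coordination among controller nodes: each cluster of
    controlled nodes is served by one controller and is reassigned to a
    surviving peer if its controller stops providing the stipulated services."""

    def __init__(self, controllers):
        self.controllers = set(controllers)
        self.assignment = {}   # controlled-node cluster -> serving controller

    def assign(self, cluster: str, controller: str) -> None:
        self.assignment[cluster] = controller

    def fail(self, controller: str) -> None:
        self.controllers.discard(controller)
        for cluster, serving in list(self.assignment.items()):
            if serving == controller and self.controllers:
                # Pick any surviving controller; a real system would also
                # consider load and locality when choosing the replacement.
                self.assignment[cluster] = min(self.controllers)

group = ControllerGroup(["ctrl-A", "ctrl-B"])
group.assign("cluster-1", "ctrl-A")
group.assign("cluster-2", "ctrl-B")
group.fail("ctrl-A")
print(group.assignment)   # both clusters are now served by ctrl-B
```
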
  • In the case of the controller nodes, a set of nodes and/or the nodes from the controlled node level with proxies, which may be I/O nodes, talk to the core controlled nodes in a large scale cluster. In this view, the size of the top level cluster will be effectively smaller than the total number of bottom level nodes. The nodes with proxies are distributed peers rather than hierarchical controlling nodes. The system is referred to as “semi-hierarchical” because n+1 tier functionality is provided through an n level system, as further detailed below with reference to FIGS. 3A, 3B, 4 and 5.
  • In a first detailed embodiment of the invention, as illustrated in FIGS. 3A and 3B, there are two levels, with each level having a plurality of computers performing similar functionality. The two levels are the controlled level and the controller level. In the system illustrated in FIG. 3A, four controller nodes, 313-316, reside in level 1, 310, and four controlled nodes, 323-326, reside in level 2, 320. By the present invention, as illustrated in FIG. 3B, the controller nodes are partitioned into Tier 1, 350, comprising nodes with administrative authority, namely controller nodes 313 and 314, and Tier 2, 360, comprising controller nodes 315 and 316 which are responsible for cluster management and have the inventive proxy functionality. As illustrated, Tier 2, 360, also encompasses controlled node 323 having the inventive proxy functionality and being associated with the plurality of controlled nodes 324-326. Controlled nodes 324-326 are logically partitioned into Tier 3, 370.
  • Therefore, the FIG. 3A system is logically partitioned into three tiers of computer nodes, each tier being a portion of at least one of the levels, with each tier performing at least one of the controlled or controller functions. Tier 1 consists of purely management nodes and is a controller level. Tier 3 consists of purely managed nodes and is a controlled level. Tier 2, the intermediate tier, consists of nodes which have the flexibility to overlap the activities from both controlled and controller levels. While an even partitioning of controller nodes is illustrated, it may be preferable to partition a smaller set of controller resources to the intermediate tier. The semi-hierarchical approach makes it possible to avoid the introduction of more than two levels of hierarchy (used conventionally to manage systems), because of the flexible intermediate level.
  • In operation, events from the Tier 3 controlled nodes, 324-326, are monitored by the Tier 2 controlled node with proxy, 323, and can be filtered prior to being communicated to the Tier 2 controller nodes with proxies, 315 and 316. As noted above, proxies can run on one or both of controller and controlled nodes, with filtering provided at one or both proxies. With each proxy running on Tier 2, treating Tier 3 computers or nodes as attached devices or controllable units, the system management and control process as a whole is simplified. Should so-called “heartbeat” information still be required, an additional supervisory component (not shown) can monitor events at the Tier 3 controlled nodes for statistical purposes and then forward the events to the Tier 2 controller node for filtering. In either case, events will flow through the proxy nodes prior to being selectively provided to the Tier 1 controller nodes. Control or management actions in the illustrated system can be sent directly from the Tier 1 controller nodes to the Tier 3 controlled nodes or can, preferably, be sent from the Tier 2 controller nodes to the Tier 3 controlled nodes. This minimizes the apparent size of the managed cluster from the viewpoint of the Tier 1 controller nodes.
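
A compact sketch of this two-stage event flow, assuming Python and invented severity thresholds, with one filter at the Tier 2 controlled-node proxy and a second at the Tier 2 controller-node proxy:

```python
def controlled_node_proxy(raw_events, min_severity=1):
    # First filtering stage, running on the Tier 2 controlled node with proxy
    # (node 323): drop purely informational events from Tier 3 nodes 324-326.
    return [e for e in raw_events if e["severity"] >= min_severity]

def controller_node_proxy(events, min_severity=2):
    # Second filtering stage, on the Tier 2 controller nodes with proxies
    # (315/316): pass only events serious enough for the Tier 1 controllers.
    return [e for e in events if e["severity"] >= min_severity]

raw = [
    {"node": "324", "severity": 0, "message": "periodic status"},
    {"node": "325", "severity": 1, "message": "transient link retry"},
    {"node": "326", "severity": 3, "message": "memory module failed"},
]

to_tier1 = controller_node_proxy(controlled_node_proxy(raw))
print(to_tier1)   # only the severe event from node 326 reaches Tier 1
```
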
  • FIG. 4 provides an alternative illustration of a two level, three tier embodiment of the present invention. The two level system includes controller level 410 and controlled level 420. The controller nodes, 412-417, are logically partitioned between two tiers, with controller nodes 412, 413, and 414 being partitioned into Tier 1, 450, to perform purely administrative functions. Controller nodes 415, 416, and 417 are partitioned into Tier 2, 460, and are provided with controller side proxies to perform direct management and event filtering for the controlled nodes of the system. The controlled nodes are logically partitioned into controlled nodes having proxies, nodes 425, 426 and 427, which are found in intermediate Tier 2, 460, and controlled nodes 475-477, 485-487, and 495-497 which are found in Tier 3, 470. Each of the Tier 2 controlled nodes with proxies is associated with a cluster of Tier 3 controlled nodes. Controlled nodes 475, 485 and 495 are associated with, and provide event information to, a Tier 2 controlled node with proxy, node 425. Similarly, controlled nodes 476, 486 and 496 are associated with, and provide event information to, a Tier 2 controlled node with proxy, node 426. Further, controlled nodes 477, 487 and 497 are associated with, and provide event information to, a Tier 2 controlled node with proxy, node 427. As detailed above, the Tier 2 proxies act as interfaces between the Tier 3 controlled nodes and the Tier 1 controller nodes, gathering event information, filtering the gathered event information, and providing filtered event information to the controller nodes. Management actions from the controller nodes are preferably directed through the Tier 2 nodes with proxies with the result being minimization of the apparent size of the cluster of managed resources.
  • By virtue of the illustrated system, a very large cluster system (˜100s of thousands of computers) can be viewed as a manageable unit of clusters (e.g., roughly on the order of 1000 multicomputers) through the introduction of proxies. FIG. 5 illustrates another implementation of the invention in an n level with n+1 tier arrangement. As illustrated in FIG. 5, four tiers, 550, 560, 570, and 580, are provided for three levels, 510, 520 and 530. Level 1, 510, as well as Tier 1, 550, is comprised of cluster controller nodes which perform purely administrative management functions. Level 2, 520, includes controller nodes with proxies, 515, 516, 517, which are logically partitioned into Tier 2, 560. Level 2, 520, controlled nodes with proxies, 525, 526, and 527, are logically partitioned into Tier 3, 570, and are associated with the Level 3, 530, Tier 4, 580, controlled nodes. More specifically, Tier 4 controlled nodes 575, 585, and 595 are associated with and provide event information to controlled node with proxy 525. Tier 4 controlled nodes 576, 586, and 596 are associated with and provide event information to controlled node with proxy 526. Tier 4 controlled nodes 577, 587, and 597 are associated with and provide event information to controlled node with proxy 527. For statistical analysis of data, isolation of failures, and prediction of future problems, the controller nodes with proxies will filter the unnecessary events and collect only the essential or severe events to be provided to the central data repository on one of the Tier 1 controller nodes. Predictive analysis will then be performed at the Tier 1 controller node or nodes and actions taken as necessary.
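
As an illustrative sketch (Python; CentralRepository, the event names, and the repetition threshold are assumptions), severe events deposited by the proxies could feed a very simple stand-in for predictive failure analysis on a Tier 1 controller node:

```python
from collections import Counter

class CentralRepository:
    """Hypothetical central data repository on a Tier 1 controller node;
    the Tier 2/3 proxies deposit only essential or severe events here."""

    def __init__(self):
        self.events = []          # list of (node, event kind) tuples

    def deposit(self, node: str, kind: str) -> None:
        self.events.append((node, kind))

    def predict_failures(self, threshold: int = 3):
        # Trivial stand-in for predictive failure analysis: flag any node that
        # has reported the same severe condition at least `threshold` times.
        counts = Counter(self.events)
        return [node for (node, kind), seen in counts.items() if seen >= threshold]

repo = CentralRepository()
for _ in range(3):
    repo.deposit("cn-0042", "uncorrectable-memory-error")
repo.deposit("cn-0007", "fan-warning")
print(repo.predict_failures())   # ['cn-0042']
```
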
  • A similar more complex n level, n+1 tier organization of the nodes for cluster system management can be formulated depending upon the proxy arrangement, policy decisions (e.g., what information is deemed critical information that must be forwarded to the controller nodes and what information can be filtered, stored, and/or discarded), and other node hardware and software requirements, as will be apparent to those having skill in the art.
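
Such policy decisions could be captured in a small declarative table, sketched here in Python with invented event classes and dispositions:

```python
# Hypothetical policy table: which event classes must be forwarded to the
# controller nodes, which may be filtered and stored at the proxy, and
# which can be discarded outright.
POLICY = {
    "node-unreachable":      "forward",
    "uncorrectable-error":   "forward",
    "correctable-ecc-error": "store",
    "periodic-heartbeat":    "discard",
}

def disposition(event_class: str) -> str:
    # Unknown event classes default to "store" so nothing is silently lost.
    return POLICY.get(event_class, "store")

for cls in ("node-unreachable", "correctable-ecc-error", "fan-speed-report"):
    print(cls, "->", disposition(cls))
```
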
  • The inventive system can comprise the following components, along with the hardware tools already present in a cluster environment and along with the above-described proxy software, for a large scale system management and control process: an optional GUI interface running at the top level (e.g., WebSM) controller node with administrative authority; cluster system management coordination between management nodes (in this case controller nodes with administrative authority) and other controller nodes (e.g., CSM); predictive failure analysis running on one or more controller nodes, including fault isolation tools; and, as a specific case, rack-based supervisors running hardware-software interfaces (e.g., translating the transient or permanent hardware errors to text based event logs in expanded and short form).
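
As a hedged illustration of the last item, a rack-based supervisor's hardware-software interface might translate raw error codes into short and expanded text log entries roughly as follows (Python sketch; the codes, tags, and messages are invented):

```python
# Hypothetical error-code table for a rack-based supervisor that translates
# transient or permanent hardware errors into text-based event log entries,
# in both a short form and an expanded form.
ERROR_TABLE = {
    0x01: ("ECC", "transient", "Correctable single-bit memory error detected and repaired"),
    0x7F: ("PWR", "permanent", "Power module failure; rack segment lost redundancy"),
}

def translate(rack: int, code: int):
    tag, kind, expanded = ERROR_TABLE.get(
        code, ("UNK", "unknown", "Unrecognized hardware error code"))
    short_form = f"R{rack:02d} {tag} {kind}"
    long_form = f"Rack {rack:02d}: [{tag}/{kind}] {expanded} (code 0x{code:02X})"
    return short_form, long_form

short_form, long_form = translate(rack=3, code=0x7F)
print(short_form)   # R03 PWR permanent
print(long_form)    # expanded entry with the full description and error code
```
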
  • The invention has been detailed with reference to several preferred embodiments. It will be apparent to one skilled in the art that modifications can be made without departing from the spirit and scope of the invention as set forth in the appended claims.

Claims (15)

1. A method for providing management of a system of scalable clusters of computer resources, wherein the computer resources comprise at least controlled and controller resources, comprising the steps of:
providing at least one first level of controller resources;
providing at least one second level of controlled resources, wherein said at least one first and said at least one second levels are different hardware levels; and
providing an intermediate tier comprising a logical grouping of nodes from more than one of said hardware levels and comprising at least one proxy set of processes representing the processes of clustered controller and controlled computer resources at the different hardware levels and acting as an interface between said controller and said controlled computer resources.
2. The method of claim 1 wherein said at least one proxy is provided in at least one of controller and controlled computer resources.
3. The method of claim 1 further comprising providing at least one proxy resource manager running on at least one controller resource.
4. The method of claim 3 wherein said at least one proxy manager is adapted to move from one controller resource to another controller resource.
5. The method of claim 3 further comprising at least one of said at least one proxy set of processes gathering event information from said controlled resources and providing event information to said at least one proxy manager.
6. The method of claim 3 further comprising at least one of said at least one proxy set of processes filtering said event information prior to providing filtered event information to said at least one proxy manager.
7. A method for managing computer resources in a system comprising a first tier of computer resources comprising a plurality of controller resources, a last tier of computer resources comprising a plurality of controlled nodes, wherein said first tier and said last tier are different hardware levels from each other, and at least one intermediate tier comprising a logical grouping of nodes from said first and last tier and having at least one proxy set of processes representing the processes of clustered controller and controlled computer resources and acting as an interface between said controller and said controlled computer resources, said method comprising the steps of:
at least one of said at least one proxy set of processes gathering event information from said plurality of controlled resources in said last tier;
at least one of said at least one proxy set of processes filtering said event information; and
at least one of said at least one proxy set of processes providing filtered event information to at least one of said plurality of controller resources in said first tier.
8. The method of claim 7 further comprising said at least one of said plurality of controller resources managing said plurality of controlled resources in response to said filtered event information.
9. The method of claim 8 wherein said managing comprises performing predictive analysis.
10. A method for providing management of scalable clusters of computer resources, including controlled and controller resources, comprising the steps of:
providing n different hardware levels of computer resources wherein a first level comprises controller resources and a last level comprises at least one cluster of controlled resources;
providing n+1 logical tiers wherein a first tier comprises said first level, a last tier comprises said last level, and an intermediate tier comprises resources from more than one hardware level and comprises at least one proxy comprising at least one process for representing cluster resources; and
providing at least one proxy manager.
11. The method of claim 10 wherein said proxy runs on at least one of a controller resource and a controlled resource.
12. The method of claim 10 wherein at least one proxy resource manager runs on at least one controller resource.
13. The method of claim 12 wherein said at least one proxy manager can move from one controller resource to another controller resource.
14. The method of claim 12 further comprising said at least one proxy gathering event information from said controlled resources at said last level and providing event information to said at least one proxy manager.
15. The method of claim 14 further comprising said at least one proxy filtering event information.
US12/509,368 2002-11-27 2009-07-24 Semi-hierarchical system and method for administration of clusters of computer resources Abandoned US20090282151A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/509,368 US20090282151A1 (en) 2002-11-27 2009-07-24 Semi-hierarchical system and method for administration of clusters of computer resources

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/307,103 US7577730B2 (en) 2002-11-27 2002-11-27 Semi-hierarchical system and method for administration of clusters of computer resources
US12/509,368 US20090282151A1 (en) 2002-11-27 2009-07-24 Semi-hierarchical system and method for administration of clusters of computer resources

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/307,103 Division US7577730B2 (en) 2002-11-27 2002-11-27 Semi-hierarchical system and method for administration of clusters of computer resources

Publications (1)

Publication Number Publication Date
US20090282151A1 true US20090282151A1 (en) 2009-11-12

Family

ID=32325826

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/307,103 Expired - Fee Related US7577730B2 (en) 2002-11-27 2002-11-27 Semi-hierarchical system and method for administration of clusters of computer resources
US12/509,368 Abandoned US20090282151A1 (en) 2002-11-27 2009-07-24 Semi-hierarchical system and method for administration of clusters of computer resources

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/307,103 Expired - Fee Related US7577730B2 (en) 2002-11-27 2002-11-27 Semi-hierarchical system and method for administration of clusters of computer resources

Country Status (1)

Country Link
US (2) US7577730B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012068867A1 (en) * 2010-11-22 2012-05-31 刘建 Virtual machine management system and using method thereof
CN102594881A (en) * 2012-02-08 2012-07-18 中兴通讯股份有限公司 Virtual machine load balancing method, management modules and virtual machine cluster system

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060015505A1 (en) * 2004-07-16 2006-01-19 Henseler David A Role-based node specialization within a distributed processing system
US7526534B2 (en) * 2004-07-16 2009-04-28 Cassatt Corporation Unified system services layer for a distributed processing system
US8185776B1 (en) 2004-09-30 2012-05-22 Symantec Operating Corporation System and method for monitoring an application or service group within a cluster as a resource of another cluster
US8151245B2 (en) 2004-12-17 2012-04-03 Computer Associates Think, Inc. Application-based specialization for computing nodes within a distributed processing system
US7757116B2 (en) * 2007-04-04 2010-07-13 Vision Solutions, Inc. Method and system for coordinated multiple cluster failover
WO2009125373A2 (en) * 2008-04-09 2009-10-15 Nxp B.V. Aggressive resource management
EP2467779A2 (en) * 2009-08-19 2012-06-27 Nxp B.V. Lazy resource management
US9317572B2 (en) * 2010-03-31 2016-04-19 Cloudera, Inc. Configuring a system to collect and aggregate datasets
US8874526B2 (en) 2010-03-31 2014-10-28 Cloudera, Inc. Dynamically processing an event using an extensible data model
US9081888B2 (en) 2010-03-31 2015-07-14 Cloudera, Inc. Collecting and aggregating log data with fault tolerance
US9082127B2 (en) 2010-03-31 2015-07-14 Cloudera, Inc. Collecting and aggregating datasets for analysis
US8880592B2 (en) 2011-03-31 2014-11-04 Cloudera, Inc. User interface implementation for partial display update
US20130097322A1 (en) * 2011-10-17 2013-04-18 Alcatel-Lucent Usa, Inc. Scalable distributed multicluster device management server architecture and method of operation thereof
US9128949B2 (en) 2012-01-18 2015-09-08 Cloudera, Inc. Memory allocation buffer for reduction of heap fragmentation
US9172608B2 (en) 2012-02-07 2015-10-27 Cloudera, Inc. Centralized configuration and monitoring of a distributed computing cluster
US9405692B2 (en) 2012-03-21 2016-08-02 Cloudera, Inc. Data processing performance enhancement in a distributed file system
US9338008B1 (en) 2012-04-02 2016-05-10 Cloudera, Inc. System and method for secure release of secret information over a network
US9842126B2 (en) 2012-04-20 2017-12-12 Cloudera, Inc. Automatic repair of corrupt HBases
US9753954B2 (en) 2012-09-14 2017-09-05 Cloudera, Inc. Data node fencing in a distributed file system
US9342557B2 (en) 2013-03-13 2016-05-17 Cloudera, Inc. Low latency query engine for Apache Hadoop
US9477731B2 (en) 2013-10-01 2016-10-25 Cloudera, Inc. Background format optimization for enhanced SQL-like queries in Hadoop
US9934382B2 (en) 2013-10-28 2018-04-03 Cloudera, Inc. Virtual machine image encryption
US9690671B2 (en) 2013-11-01 2017-06-27 Cloudera, Inc. Manifest-based snapshots in distributed computing environments
US9747333B2 (en) 2014-10-08 2017-08-29 Cloudera, Inc. Querying operating system state on multiple machines declaratively
CN107409436B (en) * 2015-03-27 2020-02-21 华为技术有限公司 Cloud platform, application running method and access network unit

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5781703A (en) * 1996-09-06 1998-07-14 Candle Distributed Solutions, Inc. Intelligent remote agent for computer performance monitoring
US6425008B1 (en) * 1999-02-16 2002-07-23 Electronic Data Systems Corporation System and method for remote management of private networks having duplicate network addresses
US20020184065A1 (en) * 2001-03-30 2002-12-05 Cody Menard System and method for correlating and diagnosing system component performance data
US7606898B1 (en) * 2000-10-24 2009-10-20 Microsoft Corporation System and method for distributed management of shared computers

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7155515B1 (en) * 2001-02-06 2006-12-26 Microsoft Corporation Distributed load balancing for single entry-point systems

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5781703A (en) * 1996-09-06 1998-07-14 Candle Distributed Solutions, Inc. Intelligent remote agent for computer performance monitoring
US6425008B1 (en) * 1999-02-16 2002-07-23 Electronic Data Systems Corporation System and method for remote management of private networks having duplicate network addresses
US7606898B1 (en) * 2000-10-24 2009-10-20 Microsoft Corporation System and method for distributed management of shared computers
US20020184065A1 (en) * 2001-03-30 2002-12-05 Cody Menard System and method for correlating and diagnosing system component performance data

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012068867A1 (en) * 2010-11-22 2012-05-31 刘建 Virtual machine management system and using method thereof
CN102594881A (en) * 2012-02-08 2012-07-18 中兴通讯股份有限公司 Virtual machine load balancing method, management modules and virtual machine cluster system
WO2013117079A1 (en) * 2012-02-08 2013-08-15 中兴通讯股份有限公司 Virtual machine load balancing method, management modules and virtual machine cluster system

Also Published As

Publication number Publication date
US20040103166A1 (en) 2004-05-27
US7577730B2 (en) 2009-08-18

Similar Documents

Publication Publication Date Title
US7577730B2 (en) Semi-hierarchical system and method for administration of clusters of computer resources
CN104734878B (en) The method and system of software definition networking disaster recovery
US11061942B2 (en) Unstructured data fusion by content-aware concurrent data processing pipeline
Legrand et al. MonALISA: An agent based, dynamic service system to monitor, control and optimize distributed systems
US7822841B2 (en) Method and system for hosting multiple, customized computing clusters
JP6329899B2 (en) System and method for cloud computing
JP5102901B2 (en) Method and system for maintaining data integrity between multiple data servers across a data center
US6892316B2 (en) Switchable resource management in clustered computer system
US9059933B2 (en) Provisioning virtual private data centers
EP1405187B1 (en) Method and system for correlating and determining root causes of system and enterprise events
DE102018214774A1 (en) Technologies for dynamically managing the reliability of disaggregated resources in a managed node
US7693970B2 (en) Secured shared storage architecture
US20070016822A1 (en) Policy-based, cluster-application-defined quorum with generic support interface for cluster managers in a shared storage environment
US8634330B2 (en) Inter-cluster communications technique for event and health status communications
KR20070085283A (en) Apparatus, system, and method for facilitating storage management
US11303538B2 (en) Methods and systems for analysis of process performance
US10911329B2 (en) Path and cadence optimization for efficient data collection from devices
DE602005001550T2 (en) METHOD AND DEVICE FOR SUPPORTING TRANSACTIONS
JP4864210B2 (en) Work group server implementation method and apparatus
Thein et al. Availability modeling and analysis on virtualized clustering with rejuvenation
US7433351B1 (en) Isolation of data, control, and management traffic in a storage area network
Sahoo et al. Providing persistent and consistent resources through event log analysis and predictions for large-scale computing systems
US8171106B2 (en) Per file system usage of networks
Legrand Monitoring and control of large-scale distributed systems
Muller LANs to WANs: the complete management guide

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION