US20050207414A1 - Apparatus and method for automatic cluster network device address assignment - Google Patents

Apparatus and method for automatic cluster network device address assignment

Info

Publication number
US20050207414A1
Authority
US
United States
Prior art keywords
address
cluster
value
byte
network
Prior art date
Legal status
Abandoned
Application number
US11/137,937
Inventor
Murali Duvvury
Current Assignee
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date
Filing date
Publication date
Application filed by Cisco Technology Inc
Priority to US11/137,937
Assigned to Cisco Technology, Inc. (Assignor: Duvvury, Murali)
Publication of US20050207414A1
Legal status: Abandoned

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00Network arrangements, protocols or services for addressing or naming
    • H04L61/50Address allocation
    • H04L61/5069Address allocation for group communication, multicast communication or broadcast communication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2101/00Indexing scheme associated with group H04L61/00
    • H04L2101/60Types of network addresses
    • H04L2101/604Address structures or formats

Definitions

  • the present invention relates to the field of data communications networks. More particularly, the present invention relates to an apparatus and method for automatic address assignment for network devices in a cluster.
  • a network is a communication system that links two or more computers and peripheral devices, and allows users to access resources on other computers and exchange messages with other users.
  • a network allows users to share resources on their own systems with other network users and to access information on centrally located systems or systems that are located at remote offices. It may provide connections to the Internet or to the networks of other organizations.
  • the network typically includes a cable that attaches to network interface cards (“NICs”) in each of the devices within the network. Users may interact with network-enabled software applications to make a network request, such as to get a file or print on a network printer.
  • the application may also communicate with the network software, which may then interact with the network hardware to transmit information to other devices attached to the network.
  • a local area network is a network that is located in a relatively small physical area, such as a building, in which computers and other network devices are linked, usually via a wiring-based cabling scheme.
  • a LAN typically includes a shared medium to which workstations attach and through which they communicate.
  • LANs often use broadcasting methods for data communication, whereby any device on the LAN can transmit a message that all other devices on the LAN then “listen” to. However, only the device or devices to which the message is addressed actually receive the message. Data is typically packaged into frames for transmission on the LAN.
  • Currently, the most common LAN medium is Ethernet, which traditionally has a maximum bandwidth of 10 Mbps.
  • Traditional Ethernet is a half-duplex technology, in which each Ethernet network device checks the network to determine whether data is being transmitted before it transmits, and defers transmission if the network is in use. In spite of transmission deferral, two or more Ethernet network devices can transmit at the same time, which results in a collision. When a collision occurs, the network devices enter a back-off phase and retransmit later.
  • FIG. 1 is a block diagram illustrating a network connection between a user 10 and a server 20 .
  • FIG. 1 is an example which may be consistent with any type of network, including a LAN, a wide area network (“WAN”), or a combination of networks, such as the Internet.
  • Routers are internetworking devices. They are typically used to connect similar and heterogeneous network segments into Internetworks. For example, two LANs may be connected across a dial-up line, across the Integrated Services Digital Network (“ISDN”), or across a leased line via routers. Routers may also be found throughout the Internet. End users may connect to a local Internet Service Provider (“ISP”) (not shown).
  • LANs commonly experience a steady increase in traffic even if the number of users remains constant, due to increased network usage of software applications using the LAN. Eventually, performance drops below an acceptable level and it becomes necessary to separate the LAN into smaller, more lightly loaded segments.
  • LANs are becoming increasingly congested and overburdened.
  • several factors have combined to stress the capabilities of traditional LANs, including faster computers, faster operating systems, and more network-intensive software applications.
  • Switching is a technology that alleviates congestion in Ethernet, Token Ring, and Fiber Distributed Data Interface (FDDI) and other similar LANs by reducing traffic and increasing bandwidth.
  • LAN switches are designed to work with existing media infrastructures so that they can be installed with minimal disruption of existing networks.
  • a Media Access Control (“MAC”) address is the unique hexadecimal serial number assigned to each Ethernet network device to identify it on the network. With Ethernet devices, this address is permanently set at the time of manufacture. Each network device has a unique MAC address, so that it will be able to receive only the frames that were sent to it. If MAC addresses were not unique, there would be no way to distinguish between two stations. Devices on a network monitor network traffic and search for their own MAC address in each frame to determine whether they should decode it or not. Special circumstances exist for broadcasting to every device on the network.
  • Ethernet uses variable-length frames of data to transmit information from a source to one or more destinations. Every Ethernet frame has two fields defined as the source and destination addresses, which indicate the MAC addresses of the network devices where a frame originated and where it is ultimately destined, respectively.
  • FIG. 2 -A illustrates the structure of an Ethernet frame, as defined by the IEEE. As shown in FIG. 2 -A, the Ethernet frame 22 includes a Preamble 24 , a Start of Frame Delimiter 26 , a Destination Address 28 , a Source Address 30 , a Length of data field 32 , a variable-length Data field 34 , a Pad 36 , and a Checksum 38 .
  • the Preamble 24 is a seven-byte field, with each byte containing the bit pattern 10101010 to allow for clock synchronization between sending and receiving stations (not shown).
  • the Start of Frame Delimiter 26 is a one-byte field containing the bit pattern 10101011 to denote the start of the frame itself.
  • the Destination Address 28 and the Source Address 30 are typically six-byte fields which specify the unique MAC addresses of the receiving and sending stations. Special addresses allow for multicasting to a group of stations and for broadcasting to all stations on the network.
  • the Length of Data field 32 specifies the number of bytes present in the Data field 34 , from a minimum of 0 to a maximum of 1500.
  • the Pad field 36 is used to fill out the length of the entire frame 22 to a minimum of 64 bytes when the Data field 34 contains a small number of bytes.
  • the Checksum field 38 is a 32-bit hash code of the Data field 34 , which can be used by the receiving station to detect data transmission errors.
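  • A minimal sketch of the frame layout described above, in Python (illustrative only; the function name is hypothetical and the checksum is left as a placeholder rather than a computed CRC-32):

```python
import struct


def build_frame(dst_mac: bytes, src_mac: bytes, data: bytes) -> bytes:
    """Illustrative layout of the Ethernet frame fields described above."""
    preamble = bytes([0b10101010]) * 7        # Preamble: seven bytes of 10101010
    sfd = bytes([0b10101011])                 # Start of Frame Delimiter: 10101011
    length = struct.pack("!H", len(data))     # Length of Data field (0 to 1500 bytes)
    body = dst_mac + src_mac + length + data  # Destination Address, Source Address, Length, Data
    # Pad so the frame from Destination Address through Checksum is at least 64 bytes.
    pad = b"\x00" * max(0, 64 - (len(body) + 4))
    checksum = struct.pack("!I", 0)           # placeholder; a real frame carries a 32-bit CRC
    return preamble + sfd + body + pad + checksum
```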
  • switching refers to a technology in which a network device (known as a switch) connects two or more LAN segments.
  • a switch transmits frames of data from one segment to their destinations on the same or other segments.
  • When a switch begins to operate, it examines the MAC addresses of the frames that flow through it to build a table of known sources. If the switch determines that the destination of a frame is on the same segment as the source of the frame, it drops, or filters, the frame because there is no need to transmit it. If the switch determines that the destination is on another segment, it transmits the frame onto the destination segment only. Finally, using a technique known as flooding, if the destination segment is unknown, the switch transmits the frame on all segments except the source segment.
  • a LAN switch behaves similarly to a bridge, which is a different kind of network device.
  • the primary difference is that switches have higher data throughput than bridges, because their frame forwarding algorithms are typically performed by application-specific integrated circuits (“ASICs”) especially designed for that purpose, as opposed to the more general purpose (and relatively slower) microprocessors typically used in bridges.
  • switches are designed to divide a large, unwieldy local network into smaller segments, insulating each segment from local traffic on other segments, thus increasing aggregate bandwidth while still retaining full connectivity. Switches typically have higher port counts than bridges, allowing several independent data paths through the device. This higher port count also increases the data throughput capabilities of a switch.
  • Because a switch maintains a table of the source MAC addresses received on every port, it “learns” to which port a station is attached every time the station transmits. Then, each packet that arrives for that station is forwarded only to the correct port, eliminating the waste of bandwidth on the other ports. Since station addresses are relearned every time a station transmits, if stations are relocated the switch will reconfigure its forwarding table immediately upon receiving a transmission from the stations.
  • Ethernet switch 200 includes a Layer 1 Physical Interface (“PHY”) 202 , 204 , and a Layer 2 Media Access Control Interface (“MAC”) 206 , 208 , for each port on the Ethernet switch 200 .
  • a network interface card (“NIC”) consists of a MAC and a PHY.
  • An Ethernet switch also contains a MAC and PHY on every port. Thus, an Ethernet switch may appear to a network as multiple NICs coupled together.
  • Each switch PHY 202 , 204 receives the incoming data bit stream and passes it to its corresponding MAC 206 , 208 , which reassembles the original Ethernet frames.
  • Ethernet switch 200 also includes a frame buffer memory 210 , 212 , for each port, a source address table memory 220 , discovery protocol logic 230 , learning logic 240 , forwarding logic 250 , packet redirection logic 260 , and a configuration and management interface 270 .
  • the learning logic 240 will look at the source address (“SA”) within a received Ethernet frame and populate the Source Address Table (“SAT”) memory 220 with three columns: MAC address 280 , port number 282 , and age 284 .
  • the MAC address is the same as the source address that a sender has embedded into the frame.
  • the age item will be a date stamp to indicate when the last frame was received from a particular MAC SA.
  • the port number may be 1 or 2.
  • the SAT is also known as the Switch Forwarding Table (“SFT”).
  • Forwarding logic 250 examines the destination address (“DA”) of a received Ethernet frame. This destination address is then compared with the entries in the SAT.
  • If the destination address is a specific address known as a “broadcast” address, the frame is destined for all ports on the network. In this case, the Ethernet switch will forward the frame to all ports, except the one on which the frame was received.
  • a broadcast address is six bytes with all ones, or “FF.FF.FF.FF.FF.FF” in hexadecimal notation. If the MAC address is found in the SAT and the corresponding port number is different from the received port, the frame is forwarded to that particular port number only.
  • If the MAC address is found in the SAT and the corresponding port number is the same as the port on which the frame was received, the frame is not forwarded; instead, it is discarded. This is known as “filtering.”
  • the frame is discarded because the transmitting station and the receiving station are connected on the same shared LAN segment on that particular port and the receiver has already tuned into the frame.
  • If the MAC address is not found in the table, the frame is forwarded to all ports. The reason a particular destination address is not present in the SAT is that the receiving device could be new on the network, or the recipient has been very quiet (has not recently sent a frame). In both cases, the SAT will not have a current entry. Flooding the frame on all ports is the brute-force way of ensuring that the frame reaches its intended recipient.
  • Ethernet switch 200 uses the “age” entry in the SAT to determine whether that MAC address is still in use on the LAN. If the age has exceeded a certain preset value, the entry is removed. This conserves memory space and makes the bridge faster because fewer entries need to be scanned for address matching. Finally, the frame buffer memories 210 , 212 will store frames on each port in case there is a backlog of frames to be forwarded.
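  • The learning, forwarding, filtering, flooding, and aging behavior described above can be sketched as follows (a simplified illustration, not the switch ASIC logic; the class and method names are hypothetical):

```python
import time

BROADCAST = bytes.fromhex("ffffffffffff")  # FF.FF.FF.FF.FF.FF broadcast address


class LearningSwitch:
    """Minimal model of the SAT-based learn/forward/filter/flood/age behavior."""

    def __init__(self, ports, max_age_seconds=300):
        self.ports = ports          # e.g. [1, 2]
        self.sat = {}               # MAC address -> (port number, age timestamp)
        self.max_age = max_age_seconds

    def receive(self, frame_sa: bytes, frame_da: bytes, in_port: int):
        # Learning: record the source MAC, the port it arrived on, and a date stamp.
        now = time.time()
        self.sat[frame_sa] = (in_port, now)
        # Aging: remove entries whose age exceeds the preset value.
        self.sat = {m: (p, t) for m, (p, t) in self.sat.items() if now - t <= self.max_age}
        # Forwarding decision on the destination address.
        if frame_da == BROADCAST or frame_da not in self.sat:
            return [p for p in self.ports if p != in_port]   # flood on all other ports
        out_port, _ = self.sat[frame_da]
        if out_port == in_port:
            return []                                        # filter (same segment)
        return [out_port]                                    # forward to the known port only
```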
  • discovery protocol logic 230 receives, processes, and sends Cisco Discovery Protocol (“CDP”) or other discovery protocol packets to neighboring network devices on the network.
  • Packet redirection logic 260 examines the source and destination addresses of Ethernet packets under control of the configuration and management interface 270 and forwards them to other network devices in a cluster configuration.
  • the program code corresponding to discovery protocol logic 230 , learning logic 240 , forwarding logic 250 , packet redirection logic 260 , configuration and management interface 270 , and other necessary functions may all be stored on a computer-readable medium.
  • computer-readable media suitable for this purpose may include, without limitation, floppy diskettes, hard drives, RAM, ROM, EEPROM, nonvolatile RAM, or flash memory.
  • FIG. 3 illustrates the topology of a typical Ethernet network 40 in which a LAN switch 42 has been installed.
  • exemplary Ethernet network 40 includes a LAN switch 42 .
  • LAN switch 42 has five ports: 44 , 46 , 48 , 50 , and 52 .
  • the first port 44 is connected to LAN segment 54 .
  • the second port 46 is connected to LAN segment 56 .
  • the third port 48 is connected to LAN segment 58 .
  • the fourth port 50 is connected to LAN segment 60 .
  • the fifth port 52 is connected to LAN segment 62 .
  • the Ethernet network 40 also includes a plurality of servers 64 -A- 64 -C and a plurality of clients 66 -A- 66 -K, each of which is attached to one of the LAN segments 54 , 56 , 58 , 60 , or 62 . If server 64 -A on port 44 needs to transmit to client 66 -D on port 46 , the LAN switch 42 forwards Ethernet frames from port 44 to port 46 , thus sparing ports 48 , 50 , and 52 from frames destined for client 66 -D.
  • If server 64-C needs to send data to client 66-J at the same time that server 64-A sends data to client 66-D, it can do so because the LAN switch can forward frames from port 48 to port 50 at the same time it is forwarding frames from port 44 to port 46. If server 64-A on port 44 needs to send data to client 66-C, which is also connected to port 44, the LAN switch 42 does not need to forward any frames.
  • Performance improves in LANs in which LAN switches are installed because the LAN switch creates isolated collision domains. Thus, by spreading users over several collision domains, collisions are avoided and performance improves. In addition, many LAN switch installations dedicate certain ports to a single user, giving that user an effective bandwidth of 10 Mbps when using traditional Ethernet.
  • each of the LAN switches 70 -A and 70 -B contains eight ports, 72 -A- 72 -H and 74 -A- 74 -H.
  • On each of the LAN switches 70 -A and 70 -B four ports 72 -A- 72 -D and 74 -A- 74 -D are connected to computer workstations 76 -A- 76 -D and 76 -E- 76 -H, respectively.
  • The other four ports on each LAN switch (i.e., ports 72-E-72-H on LAN switch 70-A, and ports 74-E-74-H on LAN switch 70-B) are dedicated to interswitch communication.
  • the aggregate interswitch communication rate of the switches connected as shown in FIG. 4 is 400 Mbps.
  • the total number of ports available for connecting to workstations or other network devices on each LAN switch is diminished due to the dedicated interswitch connections that are necessary to implement the cascaded configuration.
  • FIG. 5 illustrates an exemplary group of network devices in a LAN 78 , and the interconnections between the network devices in the LAN 78 .
  • the LAN 78 includes seven network devices: six LAN switches 80 -A- 80 -F and a router 82 .
  • Each network device is connected to one or more of the other network devices in the LAN 78 .
  • Computer workstations, network printers and other network devices are also connected to the LAN 78 , but not shown. It is to be understood that the LAN configuration shown in FIG. 5 is exemplary only, and not in any way limiting.
  • network devices such as LAN switches need to be configured and managed, because they typically include a number of programmable features that can be changed by a network administrator for optimal performance in a particular network. Without limitation, such features typically include whether each port on the network device is enabled or disabled, the data transmission speed setting on each port, and the duplex setting on each port.
  • Many commercially-available network devices contain embedded HTML Web servers, which allow the network device to be configured and managed remotely via a Web browser.
  • IP Internet Protocol
  • the IP address of a network device is a unique address that specifies the logical location of a host or client on the Internet.
  • a network administrator can enter the device's IP address or URL into a Web browser such as Netscape Navigator™, available from Netscape Communications Corp. of Mountain View, Calif., or Internet Explorer™, available from Microsoft Corporation of Redmond, Wash., to access the network device and configure it from anywhere in the Internet.
  • Another object of the present invention is to facilitate communication between the commander device and other cluster network devices without having to explicitly assign IP addresses to network devices in the cluster.
  • a group of network devices such as Ethernet switches, are logically configured as a single cluster, with one commander device and one or more member devices.
  • Each network device in the cluster contains an embedded HTML server that facilitates configuration and management of the network device via a management station running a Web browser.
  • Each device in the cluster is identified by a unique Universal Resource Locator (“URL”).
  • the cluster commander automatically assigns private IP addresses to the other devices in the cluster.
  • Network devices in the cluster constantly monitor network traffic on all their ports to detect conflicts between the automatically assigned IP addresses and the IP addresses of network devices outside of the cluster. When a conflict is detected, the cluster commander assigns a different private IP address to the cluster network device that caused the conflict. The process of detecting and correcting IP address conflicts runs continuously, enabling the cluster network devices to react automatically to network configuration changes.
  • FIG. 1 is a block diagram of an exemplary network connection between a user and a server.
  • FIG. 2 -A is a diagram illustrating the structure of an Ethernet data frame.
  • FIG. 2 -B is a block diagram of an Ethernet switch in accordance with one aspect of the present invention.
  • FIG. 3 is a block diagram illustrating the topology of an exemplary LAN incorporating a LAN switch.
  • FIG. 4 is a block diagram illustrating an exemplary LAN with two LAN switches interconnected in a cascaded configuration.
  • FIG. 5 is a block diagram illustrating the topology of an exemplary LAN incorporating six LAN switches and a router.
  • FIG. 6 is a block diagram illustrating an exemplary SNMP network.
  • FIG. 7 is a block diagram illustrating a cluster of network devices according to one aspect of the present invention.
  • FIG. 8 is a block diagram illustrating a cluster of network devices in a star configuration according to one aspect of the present invention.
  • FIG. 9 is a block diagram illustrating a cluster of network devices in a daisy chain configuration according to one aspect of the present invention.
  • FIG. 10 is a block diagram illustrating a cluster of network devices in a hybrid configuration according to one aspect of the present invention.
  • FIG. 11 is a sample configuration screen for a switch cluster according to one aspect of the present invention.
  • FIG. 12 is a block diagram of configuration data packet processing by a commander device according to one aspect of the present invention.
  • FIG. 13 is a block diagram illustrating the CMP/RARP packet format according to aspects of the present invention.
  • FIG. 14 is a block diagram illustrating a cluster ADD message format according to aspects of the present invention.
  • FIG. 15A is a block diagram illustrating the format of the CMP/RARP portion of a cluster ADD message according to aspects of the present invention.
  • FIG. 15B is a block diagram illustrating the format of the cluster parameter extension portion of a cluster ADD message according to aspects of the present invention.
  • FIG. 16 is a block diagram illustrating the format of an address conflict detection message according to aspects of the present invention.
  • FIG. 17 is a flow chart illustrating an automatic IP address generation algorithm according to one embodiment of the present invention.
  • FIG. 18 is a flow chart illustrating an automatic IP address conflict correction algorithm according to one embodiment of the present invention.
  • Network devices such as LAN switches, may be configured and managed using either out-of-band or in-band techniques. Out-of-band configuration and management are typically performed by connecting to the console port on the network device and using the management console locally from a terminal or remotely through a modem.
  • network devices may be configured and managed “in-band,” either by connecting via Telnet to the network device and using a management console, or by communicating with the network device's in-band management interface using the industry standard Simple Network Management Protocol (“SNMP”). This can be accomplished by using an SNMP-compatible network management application and the network device's Management Information Base (“MIB”) files.
  • Embodiments of the present invention use a subset of the Transmission Control Protocol/Internet Protocol (“TCP/IP”) suite as the underlying mechanism to transport the SNMP configuration and management data.
  • the protocols implemented in embodiments of the present invention include the Internet Protocol (“IP”), the Internet Control Message Protocol (“ICMP”), the User Datagram Protocol (“UDP”), the Trivial File Transfer Protocol (“TFTP”), the Bootstrap Protocol (“BOOTP”), the Address Resolution Protocol (“ARP”), and the Reverse Address Resolution Protocol (“RARP”).
  • the Management Information Base (“MIB”) variables of network devices are accessible through SNMP.
  • SNMP is an application-layer protocol designed to facilitate the exchange of management information between network devices.
  • SNMP is used to monitor IP gateways and their networks, and defines a set of variables that the gateway must keep and specifies that all operations on the gateway are a side effect of fetching or storing to data variables.
  • SNMP consists of three parts: a Structure of Management Information (“SMI”), a Management Information Base (“MIB”) and the protocol itself.
  • SMI and MIB define and store the set of managed entities, while SNMP itself conveys information to and from the SMI and the MIB.
  • SNMP places all operations in a get-request, get-next-request, and set-request format.
  • an SNMP manager can get a value from an SNMP agent or store a value into that SNMP agent.
  • the SNMP manager can be part of a network management system (“NMS”), and the SNMP agent can reside on a networking device such as a LAN switch.
  • the switch MIB files may be compiled with network management software, which then permits the SNMP agent to respond to MIB-related queries being sent by the NMS.
  • CiscoWorks™ uses the switch MIB variables to set device variables and to poll devices on the network for specific information.
  • the CiscoWorks™ software permits the results of a poll to be displayed as a graph and analyzed in order to troubleshoot internetworking problems, increase network performance, verify the configuration of devices, and monitor traffic loads.
  • the SNMP agent 86 in network device 88 gathers data from the MIB 90 , also in network device 88 .
  • the MIB 90 is the repository for information about device parameters and network data.
  • the SNMP agent 86 can send traps, or notification of certain events, to the SNMP manager 92 , which is part of the Network Management Software (“NMS”) 94 running on the management console 96 .
  • the SNMP manager 92 uses information in the MIB 90 to perform the operations described in Table 1.
  • TABLE 1. SNMP Manager Operations:
    Get-request: Retrieve a value from a specific variable.
    Get-next-request: Retrieve a value from a variable within a table; with this operation, an SNMP manager does not need to know the exact variable name, because a sequential search is performed to find the needed variable within the table.
    Get-response: The reply to a get-request, get-next-request, or set-request sent by an NMS.
    Set-request: Store a value in a specific variable.
    Trap: An unsolicited message sent by an SNMP agent to an SNMP manager indicating that some event has occurred.
  • Embodiments of the present invention support the following configuration and management interfaces: HTML (web-based) interfaces, SNMP, and a proprietary Internet Operating System (“IOS”) command line interpreter (“CLI”).
  • Each of these management interfaces can be used to monitor and configure a LAN switch or a group of switches, known as a cluster.
  • the cluster management tools are web-based, and may be accessed via an ordinary browser, such as Netscape Navigator™ or Microsoft Internet Explorer™.
  • Embedded HTML-based management tools display images of switches and graphical user interfaces.
  • FIG. 7 an exemplary switch cluster 98 is shown which includes a commander switch 100 and one or more member switches 102 -A- 102 -N.
  • Management station 104 is connected to the commander switch 100 , which redirects configuration requests to the member switches 102 -A- 102 -N.
  • a single IP address for the entire cluster 98 is assigned to the commander switch 100 , which distributes configuration information to the other switches in the cluster.
  • a cluster with up to 15 member switches may be configured and managed via the IP address of the commander switch 100 .
  • the member switches 102 -A- 102 -N in the cluster do not need individual IP addresses, and may be managed through the IP address of the commander switch. However, if so desired (e.g., if IP addresses are available), any of member switches 102 -A- 102 -N may be assigned its own IP address as well. In such a case, a member switch may be configured and managed either through the IP address of the commander switch or through its own IP address.
  • the web-based management features are based on an embedded HTML web site within the Flash memory of each network device in the cluster.
  • Web-based management uses the Hypertext Transfer Protocol (“HTTP”), an in-band form of communication, which means that the Web-based management features of the network device are accessed through one of the Ethernet ports that are also used to receive and transmit normal data in each network device.
  • HTTP is an application-level protocol for distributed, collaborative, hypermedia information systems.
  • HTTP allows an open-ended set of methods that indicate the purpose of a request. It builds on the discipline of reference provided by the Uniform Resource Identifier (“URI”), as a location (“URL”) or name (“URN”), for indicating the resource to which a method is to be applied. Messages are passed in a format similar to that used by Internet mail as defined by the Multipurpose Internet Mail Extensions (“MIME”).
  • a cluster is a group of connected network devices such as LAN switches that are managed as a single entity.
  • the switches can be in the same location, or they can be distributed across a network.
  • all communication with cluster switches is through a single IP address assigned to the commander switch.
  • Clusters may be configured in a variety of topologies. As an example, FIG. 8 illustrates a switch cluster 106 configured in a “star,” or “radial stack,” topology. In this configuration, each of the eight member switches 102 -A- 102 -H in cluster 106 is directly connected to one of the ports 108 A- 108 -H of commander switch 100 .
  • A second example of a cluster configuration, known as a “daisy chain” configuration, is shown in FIG. 9.
  • In cluster 110, only member switch 102-A is directly connected to the commander switch 100.
  • Member switches 102 -B- 102 -G are each connected to an “upstream” switch (one that is fewer “hops” away from commander switch 100 ) and to a “downstream” switch (one that is more “hops” away from commander switch 100 ).
  • the last switch in the chain (member switch 102 -H) is only connected to its upstream “neighbor” 102 -G.
  • FIG. 10 illustrates a “hybrid” cluster configuration with one commander switch 100 and seven member switches 102 -A- 102 -G.
  • member switches 102 -A and 102 -E are in a star configuration with respect to commander switch 100 .
  • Member switch 102 -B is in a daisy chain configuration with respect to member switch 102 -A
  • member switches 102 -C and 102 -D are in a star configuration with respect to member switch 102 -B.
  • member switches 102 -F and 102 -G are in a star configuration with respect to member switch 102 -E.
  • hybrid cluster 112 as shown in FIG. 10 consists of a combination of star and daisy chain configurations.
  • the commander switch is the single point of access used to configure and monitor all the switches in a cluster.
  • member switches are managed through a commander switch.
  • the commander switch is used to manage the cluster, and is managed directly by the network management station.
  • Member switches operate under the control of the commander. While it is a part of a cluster, a member switch is not managed directly, unless it has been assigned its own IP address, as mentioned earlier. Rather, requests intended for a member switch are first sent to the commander, then forwarded to the appropriate member switch in the cluster.
  • switches When switches are first installed, they are cabled together according to the network configuration desired for a particular application, and an IP address is assigned to the commander switch.
  • the commander switch must be enabled as the commander switch of the cluster. Once the commander switch has been enabled, it can use information known about the network topology to identify other network devices in the network that may be added to the cluster. According to one embodiment of the present invention, the commander switch uses the Cisco™ Discovery Protocol (“CDP”) to automatically identify candidate network devices.
  • discovery of candidate network devices may be performed manually by inspecting the network topology and the network devices attached to the network.
  • CDP is a media-independent device discovery protocol which can be used by a network administrator to view information about other network devices directly attached to a particular network device.
  • network management applications can retrieve the device type and SNMP-agent address of neighboring network devices. This enables applications to send SNMP queries to neighboring devices. CDP thus allows network management applications to discover devices that are neighbors of already known devices, such as neighbors running lower-layer, transparent protocols.
  • CDP runs on all media that support the Subnetwork Access Protocol (“SNAP”), including LAN and Frame Relay.
  • CDP runs over the data link layer only.
  • Each network device sends periodic messages to a multicast address and listens to the periodic messages sent by others in order to learn about neighboring devices and determine when their interfaces to the media go up or down.
  • Each device also advertises at least one address at which it can receive SNMP messages.
  • the advertisements contain holdtime information, which indicates the period of time a receiving device should hold CDP information from a neighbor before discarding it.
  • With CDP network management applications can learn the device type and the SNMP agent address of neighboring devices. This process enables applications to send SNMP queries to neighboring devices.
  • any of the switches in the cluster may be accessed by entering the IP address of the commander switch into a Web browser.
  • the single password that is entered to log in to the commander switch also grants access to all the member switches in the cluster.
  • the method of creating a cluster of Ethernet switches depends on each particular network configuration. If the switches are arranged in a star topology, as in FIG. 8 , with the commander switch at the center, all of the member switches may be added to the cluster at once. On the other hand, if the switches are connected in a daisy-chain topology, as in FIG. 9 , the candidate switch that is connected to the commander switch is added first, and then each subsequent switch in the chain is added as it is discovered by CDP. If switches are daisy-chained off a star topology, as in the exemplary hybrid configuration shown in FIG. 10 , all the switches that are directly connected to the commander switch may be added first, and then the daisy-chained switches may be added one at a time.
  • There can be a maximum of sixteen switches in a cluster: fifteen member switches and one commander switch. If passwords are defined for the candidate member switches, the network administrator must know them all before the switches can be added to the cluster. In addition, a candidate switch according to embodiments of the present invention must not already be a member switch or a commander switch of another active cluster.
  • If the commander switch fails, member switches continue forwarding traffic but cannot be managed through the commander switch.
  • Member switches retain the ability to be managed through normal standalone means, such as the console-port CLI, and they can be managed through SNMP, HTML, and Telnet after they have been assigned an IP address.
  • Recovery from a failed command switch can be accomplished by replacing the failed unit with a cluster member or another switch.
  • To have a cluster member ready to replace the commander switch, the network administrator must assign an IP address to another cluster member and know the command-switch enable password for that switch.
  • When a cluster is formed, the commander switch automatically changes three parameters on all the member switches in the cluster: the IOS host name, the enable password, and the SNMP community string. If a switch has not been assigned an IOS host name, the commander switch appends a number to its own name and assigns the result sequentially to the member switches. For example, a commander switch named eng-cluster could name a cluster member switch eng-cluster-5. If an IOS host name has already been assigned to a switch, the switch retains its IOS host name.
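  • A minimal sketch of the host-name rule just described (assuming the hyphen-plus-member-number format implied by the eng-cluster-5 example; the function name is hypothetical):

```python
from typing import Optional


def member_host_name(commander_name: str, member_number: int,
                     existing_name: Optional[str] = None) -> str:
    """A member that already has an IOS host name keeps it; otherwise the
    commander's name with the member number appended is used, e.g.
    member_host_name("eng-cluster", 5) -> "eng-cluster-5"."""
    return existing_name if existing_name else f"{commander_name}-{member_number}"
```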
  • FIG. 11 shows a switch cluster with one commander switch 100 and four member switches 102-A-102-D as it is displayed on a sample Cluster Manager™ page.
  • One advantage of the present invention is that a network administrator need set only one IP address, one password, and one system SNMP configuration in order to manage an entire cluster of switches.
  • a cluster can be formed from switches located in several different buildings on a campus, and may be linked by fiber optic, Fast Ethernet, or Gigabit Ethernet connections.
  • Clusters may be managed from a management station through ASCII terminal consoles, telnet sessions, SNMP management stations and Web Consoles. All configuration and management requests are first directed to the cluster commander. Any required authentication is done by the commander. If necessary, the commander acts as a redirector and forwards requests to the appropriate member switch and forwards the reply to the management station.
  • a member switch can be in only one cluster at a time and can have only one commander.
  • a cluster can be formed for a fully interconnected group of CDP neighbors.
  • a network device can join a cluster when the network device is a CDP neighbor of the cluster.
  • switches in a cluster may be interconnected using 10 Mbps Ethernet, 100 Mbps Fast Ethernet, or 1000 Mbps Gigabit Ethernet.
  • the primary external configuration and management interface to the cluster is a TCP/IP connection to the commander switch.
  • HTTP, SNMP, and telnet protocols run on top of the IP stack in the operating system.
  • the cluster may also be managed via the console port of the commander.
  • a Web browser on the management station 104 communicates with the switch cluster 98 by establishing an HTTP connection to the commander switch 100 .
  • Special CLI commands help present output from the commander switch 100 to the browser in a format that is easily processed on the browser.
  • Communication between the commander switch 100 and member switches 102 -A- 102 -N is accomplished by the commander switch 100 translating the desired actions into commands the member switches 102 -A- 102 -N would be able to interpret if they were acting as stand-alone switches, i.e., if they were not part of a cluster.
  • the commander switch 100 manages SNMP communication for all switches in the cluster 98 .
  • the commander switch 100 forwards the set and get requests from SNMP applications to member switches 102 -A- 102 -N, and it forwards traps and other responses from the member switches 102 -A- 102 -N back to the management station 104 .
  • read-write and read-only community strings are set up for an entire cluster.
  • Community strings provide authentication in the exchange of SNMP messages.
  • the commander switch appends numbers to the community strings of member switches so that these modified community strings can provide authentication for the member switches.
  • When a member switch joins the cluster, a community string is created for it from the community string for the cluster. Only the first read-only and read-write community strings are propagated to the cluster.
  • Configuration and management data packets are sent between the commander 100 and member switches 102 -A- 102 -N via the network connection.
  • the commander 100 identifies each member switch 102 -A- 102 -N by the MAC address of the port on the member switch that is connected to the commander 100 .
  • FIG. 12 illustrates in block diagram form how a packet intended for a member switch is processed by the commander.
  • a command from the management station 104 is received by the Ethernet module 122 of the commander switch 100 .
  • the command is processed at the IP layer 124 , UDP or TCP layer 126 , and Management Application layer 128 of the commander switch 100 .
  • the Management Application layer 128 determines that the command is intended for member switch 102 , and performs redirection by translating the port number in the received command to the appropriate port for member switch 102 .
  • the redirected command flows down through the UDP or TCP layer 126 , the IP layer 124 , and the Ethernet layer 122 of the commander switch 100 , and is passed on via Ethernet to the member switch 102 .
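  • The redirection step described above can be sketched as follows (a hypothetical illustration; the command representation, table, and names are assumptions, not the patent's implementation):

```python
from typing import Dict, NamedTuple


class Member(NamedTuple):
    mac: bytes       # MAC address of the member port connected to the commander
    port: int        # port number used to reach the member's management service


# Hypothetical redirection table kept by the commander's Management Application layer.
redirect_table: Dict[str, Member] = {}


def redirect_command(command: dict, member_name: str) -> dict:
    """Rewrite a management command received from the management station so it can
    be passed back down the UDP/TCP, IP, and Ethernet layers toward the member
    switch: translate the port number and address the frame to the member's MAC."""
    member = redirect_table[member_name]
    redirected = dict(command)
    redirected["port"] = member.port
    redirected["destination_mac"] = member.mac
    return redirected
```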
  • CMP Cluster Management Protocol
  • When a member switch is added to a cluster, the commander generates a unique cluster IP address and assigns it to the member switch.
  • the commander's cluster IP address is also passed to the member switch.
  • These cluster IP addresses are dynamically assigned.
  • If the commander finds a conflict with one of the assigned cluster IP addresses (such as when some other IP station, not part of the cluster, is using the same IP address as one of the cluster IP addresses), then the commander resolves the conflict by selecting another cluster IP address and assigning it to the corresponding member switch.
  • both the commander switch and the member switches use CMP addresses to send and receive management data within the cluster.
  • a CMP address is a private IP address in “10.x.y.z” format, where x, y, and z are integers between 0 and 255.
  • the commander switch automatically generates a CMP address and assigns it to the member switch when the switch first joins the cluster.
  • Because CMP addresses are automatically generated, there can be conflicts between the IP address used by a cluster network device and the IP address of a network device outside the cluster. For example, some other IP station could be using the same address as an automatically assigned CMP address. Thus, both the commander switch and the member switches constantly check for conflicts, and in case of a conflict a new CMP address is generated.
  • CMP/RARP is a variation of the normal RARP (Reverse ARP) protocol. As described below, CMP/RARP uses a different SNAP encapsulation, and it has provisions to carry a variable-length list of cluster parameters as Type Length Value (“TLV”) fields.
  • FIG. 13 is a block diagram illustrating the CMP/RARP packet format according to aspects of the present invention.
  • a CMP/RARP packet 1300 comprises an Ethernet header 1310 , an LLC/SNAP header 1320 , and a RARP portion 1330 .
  • Ethernet header 1310 comprises a 6-byte destination MAC address 1340 , a 6-byte source MAC address 1345 , and a 2-byte Length field 1350 .
  • LLC/SNAP header 1320 comprises a 3-byte header field 1355 (set to equal 0xAA-AA-03 in one embodiment), a 3-byte OUI field 1360 (set to equal 0x00-00-0C in one embodiment), and a 2-byte CMP/RARP identifier field 1365 (set to equal 0x0114 in one embodiment).
  • RARP portion 1330 of the CMP/RARP packet 1300 comprises a 28-byte RARP packet 1370 , described below, and a variable length CMP/RARP extension field 1375 .
  • CMP/RARP packets 1300 use a separate SNAP encapsulation 1320 to distinguish them from normal RARP packets. Also, it should be noted that at the end of the CMP/RARP packet, there is a variable length extension field 1375 to pass cluster parameters according to aspects of the present invention.
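  • A minimal sketch of the encapsulation described above, using the field values given for one embodiment (the function name is hypothetical, and treating the 802.3 length field as covering the LLC/SNAP payload is an assumption of this sketch):

```python
import struct

LLC_SNAP_HEADER = bytes.fromhex("AAAA03")   # 3-byte LLC/SNAP header (0xAA-AA-03)
OUI = bytes.fromhex("00000C")               # 3-byte OUI (0x00-00-0C)
CMP_RARP_ID = struct.pack("!H", 0x0114)     # 2-byte CMP/RARP identifier


def build_cmp_rarp_packet(dst_mac: bytes, src_mac: bytes,
                          rarp_part: bytes, extension: bytes = b"") -> bytes:
    """Assemble Ethernet header + LLC/SNAP header + 28-byte RARP portion
    + variable-length CMP/RARP extension, as described above."""
    assert len(rarp_part) == 28
    payload = LLC_SNAP_HEADER + OUI + CMP_RARP_ID + rarp_part + extension
    length = struct.pack("!H", len(payload))   # 2-byte 802.3 Length field
    return dst_mac + src_mac + length + payload
```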
  • FIG. 14 is a block diagram illustrating a cluster ADD message format according to aspects of the present invention.
  • a cluster ADD message 1400 is one specific example of a type of cluster message that may be transmitted in the RARP portion 1330 of the CMP/RARP packet 1300 shown in FIG. 13 .
  • cluster ADD message 1400 comprises a 28-byte CMP/RARP part 1370 and a variable length cluster parameter extension part 1375 .
  • CMP/RARP part 1370 is used for assigning a CMP address to a cluster member switch, while the cluster parameter extension part 1375 is used to transmit cluster parameters to a member switch.
  • Cluster ADD message 1400 is sent to a member switch when the member switch first joins a cluster.
  • FIG. 15A is a block diagram illustrating the format of the CMP/RARP portion 1370 of a cluster ADD message 1400 according to aspects of the present invention.
  • the CMP/RARP portion 1370 has the same format as a regular RARP packet, and comprises a 2-byte hardware type field 1510 (set to equal 0x0001, i.e., “ethernet type,” in one embodiment), a 2-byte protocol field 1515 (set to equal 0x0800, i.e., “IP type,” in one embodiment), a 1-byte hardware length field 1520 (set to equal “6,” i.e., “ethernet type,” in one embodiment), a 1-byte protocol length field 1525 (set to equal “4,” i.e., “IP type,” in one embodiment), a 2-byte opcode field 1530 (set to equal 0x04, i.e., “RARP reply,” in one embodiment), a 6-byte source hardware address field 1535 (which equals the MAC address of the cluster commander switch), a 4-byte source protocol address field 1540 (which equals the CMP address of the commander switch), a 6-byte target hardware address field 1545 (which equals the MAC address of the member switch), and a 4-byte target protocol address field 1550 (which equals the CMP address of the member switch).
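  • Packing the 28-byte CMP/RARP portion described above might look like the following sketch (illustrative only; socket.inet_aton is used here simply to turn the dotted CMP addresses into 4-byte values, and the function name is hypothetical):

```python
import socket
import struct


def build_add_rarp_part(commander_mac: bytes, commander_cmp_ip: str,
                        member_mac: bytes, member_cmp_ip: str) -> bytes:
    """28-byte RARP portion of a cluster ADD message: hardware type 0x0001,
    protocol 0x0800, hardware length 6, protocol length 4, opcode 0x04 (RARP
    reply), then the commander's MAC and CMP address followed by the member's
    MAC and CMP address."""
    header = struct.pack("!HHBBH", 0x0001, 0x0800, 6, 4, 0x0004)
    return (header
            + commander_mac + socket.inet_aton(commander_cmp_ip)
            + member_mac + socket.inet_aton(member_cmp_ip))
```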
  • FIG. 15B is a block diagram illustrating the format of the cluster parameter extension portion 1375 of a cluster ADD message 1400 according to aspects of the present invention.
  • the cluster parameter extension portion 1375 of a cluster ADD message 1400 is used to set cluster parameters on a member switch.
  • cluster parameter extension portion 1375 comprises a fixed length portion 1552 and a variable length portion 1554 .
  • the fixed length portion 1552 comprises a 2-byte cluster member number field 1555 , a 2-byte password length field 1560 , a 4-byte command switch management IP address field 1565 , and a 4-byte total parameter length field 1570 .
  • the variable length portion 1554 comprises a variable length password string field 1575 for authentication, and a variable length list of cluster parameter Type Value Fields (“TLVs”) 1580 .
  • Each cluster parameter TLV 1580 further comprises a 1-byte cluster parameter type field 1582, a 1-byte cluster parameter length field, and a variable length (up to 255 bytes) cluster parameter value field 1586.
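  • A sketch of the cluster parameter extension encoding described above (illustrative only; whether the total parameter length field counts just the TLVs or the whole variable-length portion is an assumption of this sketch):

```python
import socket
import struct
from typing import List, Tuple


def build_cluster_extension(member_number: int, password: bytes,
                            mgmt_ip: str, tlvs: List[Tuple[int, bytes]]) -> bytes:
    """Fixed portion (member number, password length, command switch management
    IP address, total parameter length) followed by the password string and a
    list of cluster parameter TLVs (1-byte type, 1-byte length, value up to 255
    bytes)."""
    tlv_bytes = b"".join(struct.pack("!BB", tlv_type, len(value)) + value
                         for tlv_type, value in tlvs if len(value) <= 255)
    fixed = (struct.pack("!HH", member_number, len(password))
             + socket.inet_aton(mgmt_ip)
             + struct.pack("!I", len(tlv_bytes)))   # total parameter length (assumed: TLVs only)
    return fixed + password + tlv_bytes
```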
  • FIG. 16 is a block diagram illustrating the format of an address conflict detection message 1600 according to aspects of the present invention. This message format is used when a member switch detects a conflict with one of the CMP addresses (either its own address or the commander switch's address).
  • As shown in FIG. 16, the address conflict detection message 1600 comprises a 2-byte hardware type field 1610 (set to equal 0x0001, i.e., “ethernet type,” in one embodiment), a 2-byte protocol field 1620 (set to equal 0x0800, i.e., “IP type,” in one embodiment), a 1-byte hardware length field 1630 (set to equal “6,” i.e., “ethernet type,” in one embodiment), a 1-byte protocol length field 1640 (set to equal “4,” i.e., “IP type,” in one embodiment), a 2-byte opcode field 1650 (set to equal 0x03, i.e., “RARP request,” in one embodiment), a 6-byte source hardware address field 1660 (which equals the MAC address of the cluster commander switch), a 4-byte source protocol address field 1670 (which equals 255.255.255.255 if the member switch found a conflict with its own CMP address), a 6-byte target hardware address field 1680 (which equals the MAC address of the member switch), and a 4-byte target protocol address field (which equals 255.255.255.255 if the conflict is with the commander switch's CMP address).
  • FIG. 17 is a flow chart illustrating an automatic IP address generation algorithm according to one embodiment of the present invention.
  • When a member switch first joins a cluster, the commander switch generates a CMP address for the member switch by adding the last three bytes of the member switch's MAC address to the number “10.0.0.0.”
  • the commander switch reads the MAC address of a member switch from an Ethernet frame received from the member switch.
  • the commander switch adds the last three bytes of the member switch's MAC address to the number “10.0.0.0.”
  • the commander switch assigns the resulting number to be the CMP IP address of the member switch.
  • the commander switch communicates its own CMP address to the member switch.
  • the commander switch and the member switch use CMP addresses to communicate with each other.
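  • The generation step above amounts to using the last three bytes of the member's MAC address as the x, y, and z octets of a 10.x.y.z address; a minimal sketch (the function name is hypothetical):

```python
def generate_cmp_address(member_mac: bytes) -> str:
    """Add the last three bytes of the member switch's MAC address to 10.0.0.0,
    yielding a private CMP address of the form 10.x.y.z."""
    x, y, z = member_mac[-3:]
    return f"10.{x}.{y}.{z}"


# Example: a MAC address ending in a9:c2:45 yields "10.169.194.69".
print(generate_cmp_address(bytes.fromhex("00107ba9c245")))
```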
  • Because CMP addresses are dynamically and automatically generated, they are subject to conflicts. To avoid potential conflicts and to correct any conflicts promptly if they occur, once part of a cluster, both the commander switch and member switches constantly monitor for address conflicts. This is done by monitoring all input IP packets destined to each switch and checking whether the source IP address of the input packet matches any of the CMP addresses. If there is a match, then a conflict is declared.
  • the member switch informs the command switch about the conflict using the CMP/RARP protocol.
  • the conflict is reported by setting the protocol address field to all ‘1s’ (i.e., “255.255.255.255”).
  • the conflict could be either with a member switch's CMP address or with the commander switch's CMP address. If the conflict is with the commander switch's CMP address, the target protocol address field of the CMP/RARP packet is set to “255.255.255.255.” Similarly if the conflict is with the member switch's CMP address, the source protocol address field of the CMP/RARP packet is set to “255.255.255.255.”
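  • A sketch of the conflict check and of how the all-ones value is placed in the report, following the field usage described above (the packet and report representations are hypothetical, and what the non-flagged protocol address field carries is an assumption of this sketch):

```python
ALL_ONES = "255.255.255.255"


def detect_conflict(packet_source_ip: str, own_cmp_ip: str, commander_cmp_ip: str):
    """Compare the source IP address of an incoming packet against the CMP
    addresses known to this member switch; return which one conflicts, if any."""
    if packet_source_ip == own_cmp_ip:
        return "member"
    if packet_source_ip == commander_cmp_ip:
        return "commander"
    return None


def conflict_report_protocol_fields(conflict: str, own_cmp_ip: str, commander_cmp_ip: str):
    """Per the format above: a conflict with the member's own CMP address puts
    all ones in the source protocol address field, and a conflict with the
    commander's CMP address puts all ones in the target protocol address field.
    (The content of the other field here is an assumption.)"""
    if conflict == "member":
        return {"source_protocol_address": ALL_ONES,
                "target_protocol_address": own_cmp_ip}
    return {"source_protocol_address": commander_cmp_ip,
            "target_protocol_address": ALL_ONES}
```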
  • FIG. 18 is a flow chart illustrating an automatic IP address conflict correction algorithm according to one embodiment of the present invention.
  • After detecting the conflict, the commander switch generates a new CMP address according to the algorithm shown in FIG. 18.
  • three counters are initialized to zero, each representing the number of address correction attempts for the second byte, third byte, and fourth byte of the IP address, respectively.
  • At step 1805, the value of the second byte counter is compared to the highest possible value (255).
  • If the second byte counter is less than 255, then at step 1810 the second byte of the IP address is incremented by one, “modulo 256,” such that the value wraps back to zero if it is currently 255.
  • At step 1820, a new CMP address corresponding to the result is assigned to the switch that caused the conflict.
  • At step 1830, if a conflict is still detected, the algorithm loops back to step 1805. Otherwise, the algorithm terminates at step 1899.
  • If the second byte counter has reached 255 and a conflict remains, the third byte counter is compared to the highest possible value (255) at step 1840. If the value is less than 255, then at step 1850 the third byte of the IP address is incremented by one, “modulo 256,” such that the value wraps back to zero if it is currently 255.
  • a new CMP address corresponding to the result is assigned to the switch that caused the conflict.
  • If a conflict is still detected, the algorithm loops back to step 1840. Otherwise, the algorithm terminates at step 1899.
  • If the third byte counter has reached 255 and a conflict remains, the fourth byte counter is compared to the highest possible value (255) at step 1880. If the value is less than 255, then at step 1885 the fourth byte of the IP address is incremented by one, “modulo 256,” such that the value wraps back to zero if it is currently 255.
  • a new CMP address corresponding to the result is assigned to the switch that caused the conflict.
  • If a conflict is still detected, the algorithm loops back to step 1880. Otherwise, the algorithm terminates at step 1899.
  • If at step 1880 the value of the fourth byte counter is determined to be greater than or equal to 255 and there is still a conflict, then the algorithm proceeds to step 1900, where an error condition is declared, meaning that the conflict could not be resolved. However, the probability of such an error condition occurring is extremely low, as discussed below.
  • a total of (256*3), i.e., 768, different IP address combinations are attempted, including the originally-assigned IP address that caused the conflict.
  • For example, if the originally generated CMP address is “10.x.y.z,” the next CMP addresses attempted are “10.(x+1).y.z,” “10.(x+2).y.z,” . . . , “10.((x+255)mod 256).y.z,” “10.x.(y+1).z,” “10.x.(y+2).z,” . . . , “10.x.((y+255)mod 256).z,” “10.x.y.(z+1),” “10.x.y.(z+2),” . . .
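  • The correction loop of FIG. 18 can be sketched as follows (an illustrative reading of the flow chart: has_conflict stands in for the conflict monitoring described earlier, and restoring each byte before moving on to the next one follows the address sequence enumerated above; both are assumptions of this sketch):

```python
from typing import Callable


def correct_cmp_address(addr: str, has_conflict: Callable[[str], bool]) -> str:
    """Increment the second, then third, then fourth byte of a conflicting
    10.x.y.z address by one, modulo 256, up to 255 times each, returning the
    first candidate that no longer conflicts; declare an error if none is found."""
    original = [int(octet) for octet in addr.split(".")]
    octets = list(original)
    for index in (1, 2, 3):                      # second, third, fourth byte
        for _ in range(255):
            octets[index] = (octets[index] + 1) % 256
            candidate = ".".join(str(octet) for octet in octets)
            if not has_conflict(candidate):
                return candidate
        octets[index] = original[index]          # restore before trying the next byte
    raise RuntimeError("CMP address conflict could not be resolved")  # error condition


# Example: correct_cmp_address("10.169.194.69", lambda a: a == "10.169.194.69")
# returns "10.170.194.69" on the first attempt.
```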
  • the commander switch After generating the new CMP address, the commander switch uses the CMP/RARP protocol to assign the new address to the switch whose CMP address caused a conflict.

Abstract

A group of network devices, such as Ethernet switches, are logically configured as a single cluster, with one commander device and one or more member devices. Each network device in the cluster contains an embedded HTML server that facilitates configuration and management of the network device via a management station running a Web browser. Each device in the cluster is identified by a unique Universal Resource Locator (“URL”). However, only the cluster commander is required to have a public IP address. The cluster commander automatically assigns private IP addresses to the other devices in the cluster. Network devices in the cluster constantly monitor network traffic on all their ports to detect conflicts between the automatically assigned IP addresses and the IP addresses of network devices outside of the cluster. When a conflict is detected, the cluster commander assigns a different private IP address to the cluster network device that caused the conflict. The process of detecting and correcting IP address conflicts runs continuously, enabling the cluster network devices to react automatically to network configuration changes.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to the field of data communications networks. More particularly, the present invention relates to an apparatus and method for automatic address assignment for network devices in a cluster.
  • 2. Background
  • A network is a communication system that links two or more computers and peripheral devices, and allows users to access resources on other computers and exchange messages with other users. A network allows users to share resources on their own systems with other network users and to access information on centrally located systems or systems that are located at remote offices. It may provide connections to the Internet or to the networks of other organizations. The network typically includes a cable that attaches to network interface cards (“NICs”) in each of the devices within the network. Users may interact with network-enabled software applications to make a network request, such as to get a file or print on a network printer. The application may also communicate with the network software, which may then interact with the network hardware to transmit information to other devices attached to the network.
  • A local area network (“LAN”) is a network that is located in a relatively small physical area, such as a building, in which computers and other network devices are linked, usually via a wiring-based cabling scheme. A LAN typically includes a shared medium to which workstations attach and through which they communicate. LANs often use broadcasting methods for data communication, whereby any device on the LAN can transmit a message that all other devices on the LAN then “listen” to. However, only the device or devices to which the message is addressed actually receive the message. Data is typically packaged into frames for transmission on the LAN.
  • Currently, the most common LAN media is Ethernet, which traditionally has a maximum bandwidth of 10 Mbps. Traditional Ethernet is a half-duplex technology, in which each Ethernet network device checks the network to determine whether data is being transmitted before it transmits, and defers transmission if the network is in use. In spite of transmission deferral, two or more Ethernet network devices can transmit at the same time, which results in a collision. When a collision occurs, the network devices enter a back-off phase and retransmit later.
  • As more network devices are added to a LAN, they must wait more often before they can begin transmitting, and collisions are more likely to occur because more network devices are trying to transmit. Today, throughput on traditional Ethernet LANs suffers even more due to increased use of network intensive programs, such as client-server applications, which cause hosts to transmit more often and for longer periods of time.
  • FIG. 1 is a block diagram illustrating a network connection between a user 10 and a server 20. FIG. 1 is an example which may be consistent with any type of network, including a LAN, a wide area network (“WAN”), or a combination of networks, such as the Internet.
  • When a user 10 connects to a particular destination, such as a requested web page on a server 20, the connection from the user 10 to the server 20 is typically routed through several routers 12A-12D. Routers are internetworking devices. They are typically used to connect similar and heterogeneous network segments into Internetworks. For example, two LANs may be connected across a dial-up line, across the Integrated Services Digital Network (“ISDN”), or across a leased line via routers. Routers may also be found throughout the Internet. End users may connect to a local Internet Service Provider (“ISP”) (not shown).
  • As the data traffic on a LAN increases, users are affected by longer response times and slower data transfers, because all users attached to the same LAN segment compete for a share of the available bandwidth of the LAN segment (e.g., 10 Mbps in the case of traditional Ethernet). Moreover, LANs commonly experience a steady increase in traffic even if the number of users remains constant, due to increased network usage of software applications using the LAN. Eventually, performance drops below an acceptable level and it becomes necessary to separate the LAN into smaller, more lightly loaded segments.
  • LANs are becoming increasingly congested and overburdened. In addition to an ever-growing population of network users, several factors have combined to stress the capabilities of traditional LANs, including faster computers, faster operating systems, and more network-intensive software applications.
  • There are two traditional approaches to relieving LAN congestion. The first is to simply install a faster networking technology, such as FDDI, ATM, or Fast Ethernet. However, these approaches are expensive to implement. The other traditional approach is to use bridges and routers to reduce data traffic between networks. This solution is also relatively expensive both in money and configuration time, and is only effective when inter-segment traffic is minimal. When inter-segment traffic is high, some bridges and routers can become a bottleneck due to their limited processing power. They also require extensive setup and manual configuration in order to maintain their performance. In addition, despite large buffers, packet loss is always a possibility.
  • Switching is a technology that alleviates congestion in Ethernet, Token Ring, and Fiber Distributed Data Interface (FDDI) and other similar LANs by reducing traffic and increasing bandwidth. LAN switches are designed to work with existing media infrastructures so that they can be installed with minimal disruption of existing networks.
  • A Media Access Control (“MAC”) address is the unique hexadecimal serial number assigned to each Ethernet network device to identify it on the network. With Ethernet devices, this address is permanently set at the time of manufacture. Each network device has a unique MAC address, so that it will be able to receive only the frames that were sent to it. If MAC addresses were not unique, there would be no way to distinguish between two stations. Devices on a network monitor network traffic and search for their own MAC address in each frame to determine whether they should decode it or not. Special circumstances exist for broadcasting to every device on the network.
  • Ethernet uses variable-length frames of data to transmit information from a source to one or more destinations. Every Ethernet frame has two fields defined as the source and destination addresses, which indicate the MAC addresses of the network devices where a frame originated and where it is ultimately destined, respectively. FIG. 2-A illustrates the structure of an Ethernet frame, as defined by the IEEE. As shown in FIG. 2-A, the Ethernet frame 22 includes a Preamble 24, a Start of Frame Delimiter 26, a Destination Address 28, a Source Address 30, a Length of data field 32, a variable-length Data field 34, a Pad 36, and a Checksum 38. The Preamble 24 is a seven-byte field, with each byte containing the bit pattern 10101010 to allow for clock synchronization between sending and receiving stations (not shown). The Start of Frame Delimiter 26 is a one-byte field containing the bit pattern 10101011 to denote the start of the frame itself. The Destination Address 28 and the Source Address 30 are typically six-byte fields which specify the unique MAC addresses of the receiving and sending stations. Special addresses allow for multicasting to a group of stations and for broadcasting to all stations on the network. The Length of Data field 32 specifies the number of bytes present in the Data field 34, from a minimum of 0 to a maximum of 1500. The Pad field 36 is used to fill out the length of the entire frame 22 to a minimum of 64 bytes when the Data field 34 contains a small number of bytes. Finally, the Checksum field 38 is a 32-bit hash code of the Data field 34, which can be used by the receiving station to detect data transmission errors.
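  • For illustration only (not part of the original disclosure), the following Python sketch unpacks the address and length fields of a raw Ethernet frame laid out as described above; it assumes the preamble and start-of-frame delimiter have already been stripped by the receiving hardware, and the function name is an arbitrary choice.
      import struct

      def parse_ethernet_frame(frame: bytes):
          # Offsets follow the layout of FIG. 2-A after the Preamble 24 and
          # Start of Frame Delimiter 26 have been consumed by the hardware.
          dest_addr = frame[0:6]        # Destination Address 28 (6 bytes)
          src_addr = frame[6:12]        # Source Address 30 (6 bytes)
          (length,) = struct.unpack("!H", frame[12:14])   # Length of Data field 32
          data = frame[14:14 + length]  # variable-length Data field 34
          checksum = frame[-4:]         # 32-bit Checksum field 38 at the end
          return dest_addr, src_addr, length, data, checksum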
  • In the context of the present invention, the term “switching” refers to a technology in which a network device (known as a switch) connects two or more LAN segments. A switch transmits frames of data from one segment to their destinations on the same or other segments. When a switch begins to operate, it examines the MAC address of the frames that flow through it to build a table of known sources. If the switch determines that the destination of a frame is on the same segment as the source of the frame, it drops, or filters, the frame because there is no need to transmit it. If the switch determines that the destination is on another segment, it transmits the frame onto the destination segment only. Finally, using a technique known as flooding, if the destination segment is unknown, the switch transmits the frame on all segments except the source segment.
  • Logically, a LAN switch behaves similarly to a bridge, which is a different kind of network device. The primary difference is that switches have higher data throughput than bridges, because their frame forwarding algorithms are typically performed by application-specific integrated circuits (“ASICs”) especially designed for that purpose, as opposed to the more general purpose (and relatively slower) microprocessors typically used in bridges. Like bridges, switches are designed to divide a large, unwieldy local network into smaller segments, insulating each segment from local traffic on other segments, thus increasing aggregate bandwidth while still retaining full connectivity. Switches typically have higher port counts than bridges, allowing several independent data paths through the device. This higher port count also increases the data throughput capabilities of a switch.
  • Because a switch maintains a table of the source MAC addresses received on every port, it “learns” to which port a station is attached every time the station transmits. Then, each packet that arrives for that station is forwarded only to the correct port, eliminating the waste of bandwidth on the other ports. Since station addresses are relearned every time a station transmits, if stations are relocated the switch will reconfigure its forwarding table immediately upon receiving a transmission from the stations.
  • Referring now to FIG. 2-B, a block diagram of an Ethernet switch according to one aspect of the present invention is shown. As shown in FIG. 2-B, Ethernet switch 200 includes a Layer 1 Physical Interface (“PHY”) 202, 204, and a Layer 2 Media Access Control Interface (“MAC”) 206, 208, for each port on the Ethernet switch 200. A network interface card (“NIC”) consists of a MAC and a PHY. An Ethernet switch also contains a MAC and PHY on every port. Thus, an Ethernet switch may appear to a network as multiple NICs coupled together. Each switch PHY 202, 204, receives the incoming data bit stream and passes it to its corresponding MAC 206, 208, which reassembles the original Ethernet frames.
  • Ethernet switch 200 also includes a frame buffer memory 210, 212, for each port, a source address table memory 220, discovery protocol logic 230, learning logic 240, forwarding logic 250, packet redirection logic 260, and a configuration and management interface 270. During operation, the learning logic 240 will look at the source address (“SA”) within a received Ethernet frame and populate the Source Address Table (“SAT”) memory 220 with three columns: MAC address 280, port number 282, and age 284. The MAC address is the same as the source address that a sender has embedded into the frame. The age item will be a date stamp to indicate when the last frame was received from a particular MAC SA. In the example shown in FIG. 2-B, the port number may be 1 or 2. The SAT is also known as the Switch Forwarding Table (“SFT”).
  • Forwarding logic 250 examines the destination address (“DA”) of a received Ethernet frame. This now becomes the new MAC address, which is then compared with the entries in the SAT. Four different forwarding options are possible. If the destination address is a specific address, known as a “broadcast” address, the frame is destined for all ports on the network. In this case, the Ethernet switch will forward the frame to all ports, except the one on which the frame was received. A broadcast address is six bytes with all ones, or “FF.FF.FF.FF.FF.FF” in hexadecimal notation. If the MAC address is found in the SAT and the corresponding port number is different from the received port, the frame is forwarded to that particular port number only. If the MAC address is found in the SAT and the port number is the same as the received port number, the frame is not forwarded; instead, it is discarded. This is known as “filtering.” The frame is discarded because the transmitting station and the receiving station are connected on the same shared LAN segment on that particular port and the receiver has already tuned into the frame. If the MAC address is not found in the table, the frame is forwarded to all ports. The reason a particular destination address is not present in the SAT table is that the receiving device could be new on the network, or the recipient has been very quiet (has not recently sent a frame). In both cases, the bridge SAT will not have a current entry. Flooding the frame on all ports is the brute-force way of ensuring that the frame is routed to its intended recipient.
  • Ethernet switch 200 uses the “age” entry in the SAT to determine whether that MAC address is still in use on the LAN. If the age has exceeded a certain preset value, the entry is removed. This conserves memory space and makes the bridge faster because fewer entries need to be scanned for address matching. Finally, the frame buffer memories 210, 212 will store frames on each port in case there is a backlog of frames to be forwarded.
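  • As a rough, illustrative software analogue of the learning, forwarding, filtering, flooding, and aging behavior described above (actual switches implement this in ASICs, and all names below are assumptions made for this sketch), the logic might look like the following:
      import time

      BROADCAST = b"\xff\xff\xff\xff\xff\xff"   # "FF.FF.FF.FF.FF.FF"
      MAX_AGE = 300                             # assumed aging threshold, in seconds

      sat = {}   # Source Address Table: MAC address -> (port number, timestamp)

      def learn(src_mac, in_port):
          # Re-learn the source address and refresh its age on every received frame.
          sat[src_mac] = (in_port, time.time())

      def forward(dst_mac, in_port, all_ports):
          if dst_mac == BROADCAST or dst_mac not in sat:
              # Broadcast or unknown destination: flood to all ports except the
              # one on which the frame was received.
              return [p for p in all_ports if p != in_port]
          out_port, _ = sat[dst_mac]
          if out_port == in_port:
              return []         # filtering: destination is on the source segment
          return [out_port]     # forward to the learned port only

      def age_out():
          now = time.time()
          for mac in [m for m, (_, ts) in sat.items() if now - ts > MAX_AGE]:
              del sat[mac]      # entry removed after its age exceeds the preset value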
  • According to embodiments of the present invention, discovery protocol logic 230 receives, processes, and sends Cisco Discovery Protocol (“CDP”) or other discovery protocol packets to neighboring network devices on the network. Packet redirection logic 260 examines the source and destination addresses of Ethernet packets under control of the configuration and management interface 270 and forwards them to other network devices in a cluster configuration. As known to those skilled in the art, the program code corresponding to discovery protocol logic 230, learning logic 240, forwarding logic 250, packet redirection logic 260, configuration and management interface 270, and other necessary functions may all be stored on a computer-readable medium. Depending on each particular application, computer-readable media suitable for this purpose may include, without limitation, floppy diskettes, hard drives, RAM, ROM, EEPROM, nonvolatile RAM, or flash memory.
  • An Ethernet LAN switch improves bandwidth by separating collision domains and selectively forwarding traffic to the appropriate segments. FIG. 3 illustrates the topology of a typical Ethernet network 40 in which a LAN switch 42 has been installed. With reference now to FIG. 3, exemplary Ethernet network 40 includes a LAN switch 42. As shown in FIG. 3, LAN switch 42 has five ports: 44, 46, 48, 50, and 52. The first port 44 is connected to LAN segment 54. The second port 46 is connected to LAN segment 56. The third port 48 is connected to LAN segment 58. The fourth port 50 is connected to LAN segment 60. The fifth port 52 is connected to LAN segment 62. The Ethernet network 40 also includes a plurality of servers 64-A-64-C and a plurality of clients 66-A-66-K, each of which is attached to one of the LAN segments 54, 56, 58, 60, or 62. If server 64-A on port 44 needs to transmit to client 66-D on port 46, the LAN switch 42 forwards Ethernet frames from port 44 to port 46, thus sparing ports 48, 50, and 52 from frames destined for client 66-D. If server 64-C needs to send data to client 66-J at the same time that server 64-A sends data to client 66-D, it can do so because the LAN switch can forward frames from port 48 to port 50 at the same time it is forwarding frames from port 44 to port 46. If server 64-A on port 44 needs to send data to client 66-C, which is also connected to port 44, the LAN switch 42 does not need to forward any frames.
  • Performance improves in LANs in which LAN switches are installed because the LAN switch creates isolated collision domains. Thus, by spreading users over several collision domains, collisions are avoided and performance improves. In addition, many LAN switch installations dedicate certain ports to individual users, giving those users an effective bandwidth of 10 Mbps when using traditional Ethernet.
  • As a LAN grows, either due to additional users or network devices, additional switches must often be added to the LAN and connected together to provide more ports and new network segments. One way to connect multiple LAN switches together is to cascade them using high-speed ports. However, when cascading LAN switches, the interswitch bandwidth is limited by the number of connections between switches.
  • Referring now to FIG. 4, two LAN switches 70-A and 70-B are shown, connected in a cascaded configuration. As shown, each of the LAN switches 70-A and 70-B contains eight ports, 72-A-72-H and 74-A-74-H. On each of the LAN switches 70-A and 70-B, four ports 72-A-72-D and 74-A-74-D are connected to computer workstations 76-A-76-D and 76-E-76-H, respectively. The other four ports on each LAN switch (i.e., ports 72-E-72-H on LAN switch 70-A, and ports 74-E-74-H on LAN switch 70-B) are dedicated to interswitch communication. For example, if each of the four interswitch connections is capable of supporting a 100 Mbps Fast Ethernet channel, the aggregate interswitch communication rate of the switches connected as shown in FIG. 4 is 400 Mbps. However, the total number of ports available for connecting to workstations or other network devices on each LAN switch is diminished due to the dedicated interswitch connections that are necessary to implement the cascaded configuration.
  • As a LAN grows, network devices are typically added to the LAN and interconnected according to the needs of the particular LAN to which they belong. For example, FIG. 5 illustrates an exemplary group of network devices in a LAN 78, and the interconnections between the network devices in the LAN 78. As shown in FIG. 5, the LAN 78 includes seven network devices: six LAN switches 80-A-80-F and a router 82. Each network device is connected to one or more of the other network devices in the LAN 78. Computer workstations, network printers and other network devices are also connected to the LAN 78, but not shown. It is to be understood that the LAN configuration shown in FIG. 5 is exemplary only, and not in any way limiting.
  • Regardless of the method used to interconnect them, network devices such as LAN switches need to be configured and managed, because they typically include a number of programmable features that can be changed by a network administrator for optimal performance in a particular network. Without limitation, such features typically include whether each port on the network device is enabled or disabled, the data transmission speed setting on each port, and the duplex setting on each port. Many commercially-available network devices contain embedded HTML Web servers, which allow the network device to be configured and managed remotely via a Web browser.
  • Traditionally, network device installation includes inserting the device into the network and assigning it an Internet Protocol (“IP”) address, which is a 32-bit number assigned to hosts that want to participate in a TCP/IP Internet. The IP address of a network device is a unique address that specifies the logical location of a host or client on the Internet.
  • Once a network device has been assigned an IP address, a network administrator can enter the device's IP address or URL into a Web browser such as Netscape Navigator™, available from Netscape Communications Corp. of Mountain View, Calif., or Internet Explorer™, available from Microsoft Corporation of Redmond, Wash., to access the network device and configure it from anywhere in the Internet. However, each network device to be configured must have its own IP address, which must be registered with a domain name service (“DNS”). Assigning an IP address to each and every network device is undesirable, because registering IP addresses with a DNS is both costly and cumbersome.
  • Accordingly, it would be convenient for a network administrator to be able to assign a single IP address to one network device in a cluster, and then to be able to configure and manage all of the network devices in the cluster using this single IP address. Unfortunately, no current mechanism exists to enable this activity. Accordingly, it is an object of the present invention to provide a method and apparatus which permits an entire cluster of network devices to share a single IP address, and to provide a commander device which automatically assigns private IP addresses to other network devices in the cluster. Another object of the present invention is to facilitate communication between the commander device and other cluster network devices without having to explicitly assign IP addresses to network devices in the cluster.
  • SUMMARY OF THE INVENTION
  • A group of network devices, such as Ethernet switches, are logically configured as a single cluster, with one commander device and one or more member devices. Each network device in the cluster contains an embedded HTML server that facilitates configuration and management of the network device via a management station running a Web browser. Each device in the cluster is identified by a unique Universal Resource Locator (“URL”). However, only the cluster commander is required to have a public IP address. The cluster commander automatically assigns private IP addresses to the other devices in the cluster. Network devices in the cluster constantly monitor network traffic on all their ports to detect conflicts between the automatically assigned IP addresses and the IP addresses of network devices outside of the cluster. When a conflict is detected, the cluster commander assigns a different private IP address to the cluster network device that caused the conflict. The process of detecting and correcting IP address conflicts operates continuously, enabling the cluster network devices to react automatically to network configuration changes.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an exemplary network connection between a user and a server.
  • FIG. 2-A is a diagram illustrating the structure of an Ethernet data frame.
  • FIG. 2-B is a block diagram of an Ethernet switch in accordance with one aspect of the present invention.
  • FIG. 3 is a block diagram illustrating the topology of an exemplary LAN incorporating a LAN switch.
  • FIG. 4 is a block diagram illustrating an exemplary LAN with two LAN switches interconnected in a cascaded configuration.
  • FIG. 5 is a block diagram illustrating the topology of an exemplary LAN incorporating six LAN switches and a router.
  • FIG. 6 is a block diagram illustrating an exemplary SNMP network.
  • FIG. 7 is a block diagram illustrating a cluster of network devices according to one aspect of the present invention.
  • FIG. 8 is a block diagram illustrating a cluster of network devices in a star configuration according to one aspect of the present invention.
  • FIG. 9 is a block diagram illustrating a cluster of network devices in a daisy chain configuration according to one aspect of the present invention.
  • FIG. 10 is a block diagram illustrating a cluster of network devices in a hybrid configuration according to one aspect of the present invention.
  • FIG. 11 is a sample configuration screen for a switch cluster according to one aspect of the present invention.
  • FIG. 12 is a block diagram of configuration data packet processing by a commander device according to one aspect of the present invention.
  • FIG. 13 is a block diagram illustrating the CMP/RARP packet format according to aspects of the present invention.
  • FIG. 14 is a block diagram illustrating a cluster ADD message format according to aspects of the present invention.
  • FIG. 15A is a block diagram illustrating the format of the CMP/RARP portion of a cluster ADD message according to aspects of the present invention.
  • FIG. 15B is a block diagram illustrating the format of the cluster parameter extension portion of a cluster ADD message according to aspects of the present invention.
  • FIG. 16 is a block diagram illustrating the format of an address conflict detection message according to aspects of the present invention.
  • FIG. 17 is a flow chart illustrating an automatic IP address generation algorithm according to one embodiment of the present invention.
  • FIG. 18 is a flow chart illustrating an automatic IP address conflict correction algorithm according to one embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • Those of ordinary skill in the art will realize that the following description of the present invention is illustrative only and not in any way limiting. Other embodiments of the invention will readily suggest themselves to such skilled persons having the benefit of this disclosure.
  • Network devices, such as LAN switches, may be configured and managed using either out-of-band or in-band techniques. Out-of-band configuration and management are typically performed by connecting to the console port on the network device and using the management console locally from a terminal or remotely through a modem. Alternatively, network devices may be configured and managed “in-band,” either by connecting via Telnet to the network device and using a management console, or by communicating with the network device's in-band management interface using the industry standard Simple Network Management Protocol (“SNMP”). This can be accomplished by using an SNMP-compatible network management application and the network device's Management Information Base (“MIB”) files. Normally, however, in order to perform in-band administrative tasks of a network device, such as configuration and management, the network device must first be assigned an IP address. Additionally, in order to use in-band configuration and management capabilities, the SNMP management platform of the network device must be configured to understand and be able to access the objects contained in the network device's MIB.
  • Embodiments of the present invention use a subset of the Transmission Control Protocol/Internet Protocol (“TCP/IP”) suite as the underlying mechanism to transport the SNMP configuration and management data. Without limitation, the protocols implemented in embodiments of the present invention include the Internet Protocol (“IP”), the Internet Control Message Protocol (“ICMP”), the User Datagram Protocol (“UDP”), the Trivial File Transfer Protocol (“TFTP”), the Bootstrap Protocol (“BOOTP”), the Address Resolution Protocol (“ARP”), and the Reverse Address Resolution Protocol (“RARP”).
  • The Management Information Base (“MIB”) variables of network devices according to embodiments of the present invention are accessible through SNMP. SNMP is an application-layer protocol designed to facilitate the exchange of management information between network devices. SNMP is used to monitor IP gateways and their networks, and defines a set of variables that the gateway must keep and specifies that all operations on the gateway are a side effect of fetching or storing to data variables. SNMP consists of three parts: a Structure of Management Information (“SMI”), a Management Information Base (“MIB”) and the protocol itself. The SMI and MIB define and store the set of managed entities, while SNMP itself conveys information to and from the SMI and the MIB.
  • Instead of defining a large set of commands, SNMP places all operations in a get-request, get-next-request, and set-request format. For example, an SNMP manager can get a value from an SNMP agent or store a value into that SNMP agent. The SNMP manager can be part of a network management system (“NMS”), and the SNMP agent can reside on a networking device such as a LAN switch. The switch MIB files may be compiled with network management software, which then permits the SNMP agent to respond to MIB-related queries being sent by the NMS.
  • An example of an NMS is the CiscoWorks™ network management software, available from Cisco Systems, Inc. of San Jose, Calif. CiscoWorks™ uses the switch MIB variables to set device variables and to poll devices on the network for specific information. Among other tasks, the CiscoWorks™ software permits the results of a poll to be displayed as a graph and analyzed in order to troubleshoot internetworking problems, increase network performance, verify the configuration of devices, and monitor traffic loads. Other products known to those of ordinary skill in the art, available from several other vendors, provide similar functionality.
  • Referring now to FIG. 6, an exemplary SNMP network 84 is shown. The SNMP agent 86 in network device 88 gathers data from the MIB 90, also in network device 88. The MIB 90 is the repository for information about device parameters and network data. The SNMP agent 86 can send traps, or notification of certain events, to the SNMP manager 92, which is part of the Network Management Software (“NMS”) 94 running on the management console 96. The SNMP manager 92 uses information in the MIB 90 to perform the operations described in Table 1.
    TABLE 1
    SNMP Manager Operations
    Operation          Description
    Get-request        Retrieve a value from a specific variable.
    Get-next-request   Retrieve a value from a variable within a table. With this
                       operation, an SNMP manager does not need to know the exact
                       variable name. A sequential search is performed to find the
                       needed variable within a table.
    Get-response       The reply to a get-request, get-next-request, and
                       set-request sent by an NMS.
    Set-request        Store a value in a specific variable.
    trap               An unsolicited message sent by an SNMP agent to an SNMP
                       manager indicating that some event has occurred.
  • Embodiments of the present invention support the following configuration and management interfaces: HTML (web-based) interfaces, SNMP, and a proprietary Internetwork Operating System (“IOS”) command line interpreter (“CLI”). Each of these management interfaces can be used to monitor and configure a LAN switch or a group of switches, known as a cluster. The cluster management tools are web-based, and may be accessed via an ordinary browser, such as Netscape Navigator™ or Microsoft Internet Explorer™. Embedded HTML-based management tools display images of switches and graphical user interfaces.
  • When LAN switches are grouped into clusters, one switch is called the commander switch, and the other switches are called member switches. Referring now to FIG. 7, an exemplary switch cluster 98 is shown which includes a commander switch 100 and one or more member switches 102-A-102-N. Management station 104 is connected to the commander switch 100, which redirects configuration requests to the member switches 102-A-102-N.
  • According to the present invention, a single IP address for the entire cluster 98 is assigned to the commander switch 100, which distributes configuration information to the other switches in the cluster. In one embodiment, a cluster with up to 15 member switches may be configured and managed via the IP address of the commander switch 100. The member switches 102-A-102-N in the cluster do not need individual IP addresses, and may be managed through the IP address of the commander switch. However, if so desired (e.g., if IP addresses are available), any of member switches 102-A-102-N may be assigned its own IP address as well. In such a case, a member switch may be configured and managed either through the IP address of the commander switch or through its own IP address.
  • According to embodiments of the present invention, the web-based management features are based on an embedded HTML web site within the Flash memory of each network device in the cluster. Web-based management uses the Hypertext Transfer Protocol (“HTTP”), an in-band form of communication, which means that the Web-based management features of the network device are accessed through one of the Ethernet ports that are also used to receive and transmit normal data in each network device.
  • HTTP is an application-level protocol for distributed, collaborative, hypermedia information systems. HTTP allows an open-ended set of methods that indicate the purpose of a request. It builds on the discipline of reference provided by the Uniform Resource Identifier (“URI”), as a location (“URL”) or name (“URN”), for indicating the resource to which a method is to be applied. Messages are passed in a format similar to that used by Internet mail as defined by the Multipurpose Internet Mail Extensions (“MIME”).
  • Forming a Cluster of Network Devices
  • According to aspects of the present invention, a cluster is a group of connected network devices such as LAN switches that are managed as a single entity. The switches can be in the same location, or they can be distributed across a network. According to one embodiment of the present invention, all communication with cluster switches is through a single IP address assigned to the commander switch. Clusters may be configured in a variety of topologies. As an example, FIG. 8 illustrates a switch cluster 106 configured in a “star,” or “radial stack,” topology. In this configuration, each of the eight member switches 102-A-102-H in cluster 106 is directly connected to one of the ports 108A-108-H of commander switch 100.
  • A second example of a cluster configuration, known as a “daisy chain” configuration, is shown in FIG. 9. In cluster 110, only member switch 102-A is directly connected to the commander switch 100. Member switches 102-B-102-G are each connected to an “upstream” switch (one that is fewer “hops” away from commander switch 100) and to a “downstream” switch (one that is more “hops” away from commander switch 100). Finally, the last switch in the chain (member switch 102-H) is only connected to its upstream “neighbor” 102-G.
  • As a third example, FIG. 10 illustrates a “hybrid” cluster configuration with one commander switch 100 and seven member switches 102-A-102-G. In cluster 112, member switches 102-A and 102-E are in a star configuration with respect to commander switch 100. Member switch 102-B is in a daisy chain configuration with respect to member switch 102-A, while member switches 102-C and 102-D are in a star configuration with respect to member switch 102-B. Finally, member switches 102-F and 102-G are in a star configuration with respect to member switch 102-E. Thus, hybrid cluster 112 as shown in FIG. 10 consists of a combination of star and daisy chain configurations.
  • It is to be understood that many more cluster configurations are possible, and that the above examples are not in any way limiting.
  • The commander switch is the single point of access used to configure and monitor all the switches in a cluster. According to one embodiment of the present invention, member switches are managed through a commander switch. The commander switch is used to manage the cluster, and is managed directly by the network management station. Member switches operate under the control of the commander. While it is a part of a cluster, a member switch is not managed directly, unless it has been assigned its own IP address, as mentioned earlier. Rather, requests intended for a member switch are first sent to the commander, then forwarded to the appropriate member switch in the cluster.
  • When switches are first installed, they are cabled together according to the network configuration desired for a particular application, and an IP address is assigned to the commander switch. In addition, the commander switch must be enabled as the commander switch of the cluster. Once the commander switch has been enabled, it can use information known about the network topology to identify other network devices in the network that may be added to the cluster. According to one embodiment of the present invention, the commander switch uses the Cisco™ Discovery Protocol (“CDP”) to automatically identify candidate network devices. However, other similar products known to those of ordinary skill in the art are available from other vendors to accomplish the same task. Alternatively, discovery of candidate network devices may be performed manually by inspecting the network topology and the network devices attached to the network.
  • CDP is a media-independent device discovery protocol which can be used by a network administrator to view information about other network devices directly attached to a particular network device. In addition, network management applications can retrieve the device type and SNMP-agent address of neighboring network devices. This enables applications to send SNMP queries to neighboring devices. CDP thus allows network management applications to discover devices that are neighbors of already known devices, such as neighbors running lower-layer, transparent protocols.
  • It is to be understood that the present invention is not limited to devices that are compatible with CDP. CDP runs on all media that support the Subnetwork Access Protocol (“SNAP”), including LAN and Frame Relay. CDP runs over the data link layer only. Each network device sends periodic messages to a multicast address and listens to the periodic messages sent by others in order to learn about neighboring devices and determine when their interfaces to the media go up or down. Each device also advertises at least one address at which it can receive SNMP messages. The advertisements contain holdtime information, which indicates the period of time a receiving device should hold CDP information from a neighbor before discarding it. With CDP, network management applications can learn the device type and the SNMP agent address of neighboring devices. This process enables applications to send SNMP queries to neighboring devices.
  • Once a switch cluster is formed, any of the switches in the cluster may be accessed by entering the IP address of the commander switch into a Web browser. The single password that is entered to log in to the commander switch also grants access to all the member switches in the cluster.
  • The method of creating a cluster of Ethernet switches depends on each particular network configuration. If the switches are arranged in a star topology, as in FIG. 8, with the commander switch at the center, all of the member switches may be added to the cluster at once. On the other hand, if the switches are connected in a daisy-chain topology, as in FIG. 9, the candidate switch that is connected to the commander switch is added first, and then each subsequent switch in the chain is added as it is discovered by CDP. If switches are daisy-chained off a star topology, as in the exemplary hybrid configuration shown in FIG. 10, all the switches that are directly connected to the commander switch may be added first, and then the daisy-chained switches may be added one at a time.
  • In embodiments of the present invention, there can be a maximum of sixteen switches in a cluster: fifteen member switches and one commander switch. If passwords are defined for the candidate member switches, the network administrator must know them all before they can be added to the cluster. In addition, a candidate switch according to embodiments of the present invention must not already be a member switch or a commander switch of another active cluster.
  • If the commander switch of a cluster fails, member switches continue forwarding but cannot be managed through the commander switch. Member switches retain the ability to be managed through normal standalone means, such as the console-port CLI, and they can be managed through SNMP, HTML, and Telnet after they have been assigned an IP address. Recovery from a failed command switch can be accomplished by replacing the failed unit with a cluster member or another switch. To have a cluster member ready to replace the commander switch, the network administrator must assign an IP address to another cluster member, and know the command-switch enable password for that switch.
  • According to embodiments of the present invention, when a cluster is formed, the commander switch automatically changes three parameters on all the member switches in the cluster: the IOS host name, the enable password, and the SNMP community string. If a switch has not been assigned an IOS host name, the commander switch appends a number to the name of the commander switch and assigns it sequentially to the member switches. For example, a commander switch named eng-cluster could name a cluster member switch eng-cluster-5. If an IOS host name has already been assigned to a switch, the switch retains its IOS host name.
  • Once a cluster has been created, network management software such as the Cluster Manager™ program, available from the assignee of the present invention, may be used to monitor and configure the switches in the cluster. FIG. 11 shows a switch cluster with one commander switch 100 and four member switches 102-A-102-D as it is displayed on a sample Cluster Manager™ page.
  • One advantage of the present invention is that a network administrator need set only one IP address, one password, and one system SNMP configuration in order to manage an entire cluster of switches. A cluster can be formed from switches located in several different buildings on a campus, and may be linked by fiber optic, Fast Ethernet, or Gigabit Ethernet connections.
  • Clusters may be managed from a management station through ASCII terminal consoles, telnet sessions, SNMP management stations and Web Consoles. All configuration and management requests are first directed to the cluster commander. Any required authentication is done by the commander. If necessary, the commander acts as a redirector and forwards requests to the appropriate member switch and forwards the reply to the management station. According to embodiments of the present invention, a member switch can be in only one cluster at a time and can have only one commander.
  • There is no restriction on the types of connections between a commander switch and member switches. In one embodiment of the present invention, a cluster can be formed for a fully interconnected group of CDP neighbors. A network device can join a cluster when the network device is a CDP neighbor of the cluster. Without limitation, switches in a cluster may be interconnected using 10 Mbps Ethernet, 100 Mbps Fast Ethernet, or 1000 Mbps Gigabit Ethernet.
  • The primary external configuration and management interface to the cluster is a TCP/IP connection to the commander switch. HTTP, SNMP, and telnet protocols run on top of the IP stack in the operating system. Alternatively, the cluster may also be managed via the console port of the commander.
  • Thus, as shown in FIG. 7, a Web browser on the management station 104 communicates with the switch cluster 98 by establishing an HTTP connection to the commander switch 100. Special CLI commands help present output from the commander switch 100 to the browser in a format that is easily processed on the browser. Communication between the commander switch 100 and member switches 102-A-102-N is accomplished by the commander switch 100 translating the desired actions into commands the member switches 102-A-102-N would be able to interpret if they were acting as stand-alone switches, i.e., if they were not part of a cluster.
  • The commander switch 100 manages SNMP communication for all switches in the cluster 98. The commander switch 100 forwards the set and get requests from SNMP applications to member switches 102-A-102-N, and it forwards traps and other responses from the member switches 102-A-102-N back to the management station 104. In one embodiment of the present invention, read-write and read-only community strings are set up for an entire cluster. Community strings provide authentication in the exchange of SNMP messages. The commander switch appends numbers to the community strings of member switches so that these modified community strings can provide authentication for the member switches. When a new switch is added to the cluster, a community string is created for it from the community string for the cluster. Only the first read-only and read-write community strings are propagated to the cluster.
  • Configuration and management data packets are sent between the commander 100 and member switches 102-A-102-N via the network connection. The commander 100 identifies each member switch 102-A-102-N by the MAC address of the port on the member switch that is connected to the commander 100. FIG. 12 illustrates in block diagram form how a packet intended for a member switch is processed by the commander. A command from the management station 104 is received by the Ethernet module 122 of the commander switch 100. The command is processed at the IP layer 124, UDP or TCP layer 126, and Management Application layer 128 of the commander switch 100. The Management Application layer 128 determines that the command is intended for member switch 102, and performs redirection by translating the port number in the received command to the appropriate port for member switch 102. The redirected command flows down through the UDP or TCP layer 126, the IP layer 124, and the Ethernet layer 122 of the commander switch 100, and is passed on via Ethernet to the member switch 102.
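  • The exact redirection tables used by the commander switch are not detailed here, so the following Python fragment is only a loose sketch, under assumed names and data structures, of the idea that the commander maps a management request onto the member switch identified by the MAC address of its connected port and forwards it out of the corresponding port:
      # Assumed bookkeeping: member MAC address -> egress port on the commander.
      member_ports = {
          "00-e0-1e-aa-bb-01": 3,
          "00-e0-1e-aa-bb-02": 5,
      }

      def redirect(request):
          # 'request' is assumed to identify the target member switch; the
          # Management Application layer 128 would derive this from the port
          # number carried in the received command.
          egress_port = member_ports.get(request["member_mac"])
          if egress_port is None:
              raise KeyError("command does not map to a known member switch")
          request["egress_port"] = egress_port
          return request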
  • Cluster Interface
  • In embodiments of the present invention, Internet Protocol (“IP”) is the transport mechanism used to communicate between the commander switch and member switches in a cluster. To distinguish between normal IP packets and the cluster management IP packets, a special SNAP header is used for the cluster management IP packets. In one embodiment of the present invention, private IP addresses (“10.x.y.z”) are used for intra-cluster communication. Each cluster member, including the commander, is assigned a private IP address, known as the cluster IP address, or Cluster Management Protocol (“CMP”) address. These private IP addresses are maintained internally by the commander.
  • As described below, when a member switch is added to a cluster, the commander generates a unique cluster IP address and assigns it to the member switch. The commander's cluster IP address is also passed to the member switch. These cluster IP addresses are dynamically assigned. When the commander finds a conflict with one of the assigned cluster IP addresses (such as when some other IP station, not part of the cluster, is using the same IP address as one of the cluster IP addresses), then the commander resolves the conflict by selecting another cluster IP address and assigning it to the corresponding member switch.
  • In one embodiment of the present invention, both the commander switch and the member switches use CMP addresses to send and receive management data within the cluster. A CMP address is a private IP address in “10.x.y.z” format, where x, y, and z are integers between 0 and 255. The commander switch automatically generates a CMP address and assigns it to the member switch when the switch first joins the cluster.
  • Since CMP addresses are automatically generated, there can be conflicts between the IP address used by a cluster network device and the IP address of a network device outside the cluster. For example, some other IP station can be using the same address as an automatically assigned CMP address. Thus, both the commander switch and the member switches constantly check for conflicts, and in case of a conflict a new CMP address is generated.
  • The commander switch assigns the CMP address to the member switch using the CMP/RARP protocol. CMP/RARP is a variation of the normal RARP (Reverse ARP) protocol. As described below, CMP/RARP uses a different SNAP encapsulation, and it has provisions to carry a variable list of cluster parameters as Type Length Value (“TLV”) fields.
  • FIG. 13 is a block diagram illustrating the CMP/RARP packet format according to aspects of the present invention. As shown in FIG. 13, a CMP/RARP packet 1300 comprises an Ethernet header 1310, an LLC/SNAP header 1320, and a RARP portion 1330. As known to those skilled in the art, Ethernet header 1310 comprises a 6-byte destination MAC address 1340, a 6-byte source MAC address 1345, and a 2-byte Length field 1350. LLC/SNAP header 1320 comprises a 3-byte header field 1355 (set to equal 0xAA-AA-03 in one embodiment), a 3-byte OUI field 1360 (set to equal 0x00-00-0C in one embodiment), and a 2-byte CMP/RARP identifier field 1365 (set to equal 0x0114 in one embodiment). RARP portion 1330 of the CMP/RARP packet 1300 comprises a 28-byte RARP packet 1370, described below, and a variable length CMP/RARP extension field 1375.
  • As shown in FIG. 13, CMP/RARP packets 1300 use a separate SNAP encapsulation 1320 to distinguish them from normal RARP packets. Also, it should be noted that at the end of the CMP/RARP packet, there is a variable length extension field 1375 to pass cluster parameters according to aspects of the present invention.
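  • As an informal sketch (not the patented implementation itself), the Ethernet and LLC/SNAP headers of a CMP/RARP packet with the field values given above could be assembled in Python as follows; the function name and argument names are assumptions:
      import struct

      def cmp_rarp_headers(dst_mac: bytes, src_mac: bytes, rarp_len: int) -> bytes:
          # Ethernet header 1310: destination MAC 1340, source MAC 1345, and a
          # 2-byte Length field 1350 assumed here to cover the LLC/SNAP header
          # (8 bytes) plus the RARP portion.
          ethernet = dst_mac + src_mac + struct.pack("!H", 8 + rarp_len)
          # LLC/SNAP header 1320: 0xAA-AA-03, OUI 0x00-00-0C, identifier 0x0114.
          llc_snap = bytes([0xAA, 0xAA, 0x03, 0x00, 0x00, 0x0C]) + struct.pack("!H", 0x0114)
          return ethernet + llc_snap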
  • FIG. 14 is a block diagram illustrating a cluster ADD message format according to aspects of the present invention. As shown in FIG. 14, a cluster ADD message 1400 is one specific example of a type of cluster message that may be transmitted in the RARP portion 1330 of the CMP/RARP packet 1300 shown in FIG. 13. Referring back to FIG. 14, cluster ADD message 1400 comprises a 28-byte CMP/RARP part 1370 and a variable length cluster parameter extension part 1375. CMP/RARP part 1370 is used for assigning a CMP address to a cluster member switch, while the cluster parameter extension part 1375 is used to transmit cluster parameters to a member switch. Cluster ADD message 1400 is sent to a member switch when the member switch first joins a cluster.
  • FIG. 15A is a block diagram illustrating the format of the CMP/RARP portion 1370 of a cluster ADD message 1400 according to aspects of the present invention. As shown in FIG. 15A, the CMP/RARP portion 1370 has the same format as a regular RARP packet, and comprises a 2-byte Hardware type field 1510 (set to equal 0x0001, i.e., “ethernet type,” in one embodiment), a 2-byte protocol field 1515 (set to equal 0x0800, i.e., “IP type,” in one embodiment), a 1-byte hardware length field 1520 (set to equal “6,” i.e., “ethernet type,” in one embodiment), a 1-byte protocol length field 1525 (set to equal “4,” i.e., “IP type,” in one embodiment), a 2-byte opcode field 1530 (set to equal 0x04, i.e., “RARP reply,” in one embodiment), a 6-byte source hardware address field 1535 (which equals the MAC address of the cluster commander switch), a 4-byte source protocol address field 1540 (which equals the CMP address of the commander switch), a 6-byte target hardware address field 1545 (which equals the MAC address of the member switch), and a 4-byte target protocol address field 1550 (which equals the CMP address of the member switch).
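  • A minimal sketch of packing the 28-byte CMP/RARP part 1370 of a cluster ADD message, using the field values listed above, is shown below; the helper name is an assumption, and socket.inet_aton() is used only to turn dotted-decimal CMP addresses into 4-byte fields:
      import socket
      import struct

      def cmp_rarp_add_part(commander_mac: bytes, commander_cmp: str,
                            member_mac: bytes, member_cmp: str) -> bytes:
          # Hardware type 0x0001, protocol 0x0800, hardware length 6,
          # protocol length 4, opcode 4 ("RARP reply"), per FIG. 15A.
          fixed = struct.pack("!HHBBH", 0x0001, 0x0800, 6, 4, 0x0004)
          part = (fixed
                  + commander_mac + socket.inet_aton(commander_cmp)   # source addresses
                  + member_mac + socket.inet_aton(member_cmp))        # target addresses
          assert len(part) == 28
          return part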
  • FIG. 15B is a block diagram illustrating the format of the cluster parameter extension portion 1375 of a cluster ADD message 1400 according to aspects of the present invention. The cluster parameter extension portion 1375 of a cluster ADD message 1400 is used to set cluster parameters on a member switch. As shown in FIG. 15B, cluster parameter extension portion 1375 comprises a fixed length portion 1552 and a variable length portion 1554. The fixed length portion 1552 comprises a 2-byte cluster member number field 1555, a 2-byte password length field 1560, a 4-byte command switch management IP address field 1565, and a 4-byte total parameter length field 1570. The variable length portion 1554 comprises a variable length password string field 1575 for authentication, and a variable length list of cluster parameter Type Length Value (“TLV”) fields 1580. Each cluster parameter TLV 1580 further comprises a 1-byte cluster parameter type field 1582, a 1-byte cluster parameter length field 1584, and a variable length (up to 255 bytes) cluster parameter value field 1586.
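  • A corresponding sketch of the cluster parameter extension part 1375 is given below. It encodes one cluster parameter TLV and the fixed length portion 1552; whether the total parameter length field 1570 counts the password string as well as the TLVs is not spelled out above, so that detail is an assumption of this sketch:
      import socket
      import struct

      def cluster_param_tlv(param_type: int, value: bytes) -> bytes:
          # 1-byte type 1582, 1-byte length 1584, up to 255-byte value 1586.
          assert len(value) <= 255
          return struct.pack("!BB", param_type, len(value)) + value

      def cluster_extension(member_number: int, password: bytes,
                            commander_mgmt_ip: str, tlvs: list) -> bytes:
          tlv_bytes = b"".join(tlvs)
          fixed = (struct.pack("!HH", member_number, len(password))       # 1555, 1560
                   + socket.inet_aton(commander_mgmt_ip)                  # 1565
                   + struct.pack("!I", len(password) + len(tlv_bytes)))   # 1570 (assumed)
          return fixed + password + tlv_bytes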
  • FIG. 16 is a block diagram illustrating the format of an address conflict detection message 1600 according to aspects of the present invention. This message format is used when a member switch detects a conflict with one of the CMP addresses (either its own address or the commander switch's address). As shown in FIG. 16, address conflict detection message 1600 comprises a 2-byte hardware type field 1610 (set to equal 0x0001, i.e., “ethernet type,” in one embodiment), a 2-byte protocol field 1620 (set to equal 0x0800, i.e., “IP type,” in one embodiment), a 1-byte hardware length field 1630 (set to equal “6,” i.e., “ethernet type,” in one embodiment), a 1-byte protocol length field 1640 (set to equal “4,” i.e., “IP type,” in one embodiment), a 2-byte opcode field 1650 (set to equal 0x03, i.e., “RARP request,” in one embodiment), a 6-byte source hardware address field 1660 (which equals the MAC address of the cluster commander switch), a 4-byte source protocol address field 1670 (which equals 255.255.255.255 if the member switch found a conflict with its own CMP address), a 6-byte target hardware address field 1680 (which equals the MAC address of the member switch), and a 4-byte target protocol address field 1690 (which equals 255.255.255.255 if the member switch found a conflict with the CMP address of the commander switch).
  • FIG. 17 is a flow chart illustrating an automatic IP address generation algorithm according to one embodiment of the present invention. When a member switch first joins a cluster, the commander switch generates a CMP address for the member switch by adding the last three bytes of the member switch's MAC address to the number “10.0.0.0.” Thus, as shown in FIG. 17, at step 1700 the commander switch reads the MAC address of a member switch from an Ethernet frame received from the member switch. Next, at step 1710, the commander switch adds the last three bytes of the member switch's MAC address to the number “10.0.0.0.” Then, at step 1720, the commander switch assigns the resulting number to be the CMP IP address of the member switch. For example, if the MAC address of the member switch is “00-e0-1e-01-02-03,” then the generated CMP address will be “10.01.02.03.” At step 1730, the commander switch communicates its own CMP address to the member switch. Finally, at step 1740, once a member switch has been assigned a CMP address, the commander switch and the member switch use CMP addresses to communicate with each other.
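  • A sketch of the address generation step in Python is shown below; the function name is an assumption, and the MAC address is taken as a hyphen-separated string as in the example above:
      def generate_cmp_address(mac_address: str) -> str:
          # Use the last three bytes of the member switch's MAC address as the
          # last three bytes of a "10.x.y.z" private address.
          last_three = [int(octet, 16) for octet in mac_address.split("-")[-3:]]
          return "10." + ".".join(str(octet) for octet in last_three)

      # Example from the text: "00-e0-1e-01-02-03" yields "10.1.2.3"
      # (written above with leading zeros as "10.01.02.03").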
  • However, as discussed above, since CMP addresses are dynamically and automatically generated, they are subject to conflicts. To avoid potential conflicts and to correct any conflicts promptly if they occur, once part of a cluster, both the commander switch and member switches constantly monitor for address conflicts. This is done by monitoring all input IP packets destined to each switch and checking whether the source IP address of the input packet matches any of the CMP addresses. If there is a match, then a conflict is declared.
  • If the conflict is found on a member switch, the member switch informs the command switch about the conflict using the CMP/RARP protocol. The conflict is reported by setting the protocol address field to all ‘1s’ (i.e., “255.255.255.255”). The conflict could be either with a member switch's CMP address or with the commander switch's CMP address. If the conflict is with the commander switch's CMP address, the target protocol address field of the CMP/RARP packet is set to “255.255.255.255.” Similarly if the conflict is with the member switch's CMP address, the source protocol address field of the CMP/RARP packet is set to “255.255.255.255.”
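  • The conflict check and the way a member switch flags which CMP address is affected can be sketched as follows; the function names are assumptions, and the contents of the non-conflicting protocol address field are not specified above, so they are simply passed through unchanged here:
      import socket

      ALL_ONES = socket.inet_aton("255.255.255.255")

      def detect_conflict(packet_src_ip: bytes, own_cmp: bytes, commander_cmp: bytes):
          # A conflict is declared when the source IP address of an incoming
          # packet matches one of the CMP addresses.
          if packet_src_ip == own_cmp:
              return "member"
          if packet_src_ip == commander_cmp:
              return "commander"
          return None

      def mark_conflict(conflict_with: str, source_protocol: bytes, target_protocol: bytes):
          # A conflict with the member's own CMP address sets the source protocol
          # address field to all ones; a conflict with the commander's CMP address
          # sets the target protocol address field to all ones.
          if conflict_with == "member":
              source_protocol = ALL_ONES
          elif conflict_with == "commander":
              target_protocol = ALL_ONES
          return source_protocol, target_protocol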
  • FIG. 18 is a flow chart illustrating an automatic IP address conflict correction algorithm according to one embodiment of the present invention. In this embodiment, after detecting the conflict, the commander switch generates a new CMP address according to the algorithm shown in FIG. 18. First, at step 1800, three counters are initialized to zero, each representing the number of address correction attempts for the second byte, third byte, and fourth byte of the IP address, respectively. Next, at step 1805, the value of the second byte counter is compared to the highest possible value (255). If the value is less than 255, then at step 1810, the second byte of the IP address is incremented by one, “modulo 256,” such that the number wraps back to zero if the present number is 255 and the second byte counter is less than 255. At step 1820, a new CMP address corresponding to the result is assigned to the switch that caused the conflict. At step 1830, if a conflict is still detected, the algorithm loops back to step 1805. Otherwise, the algorithm terminates at step 1899.
  • If at step 1805 the value of the second byte counter is determined to be greater than or equal to 255, then at step 1840, the third byte counter is compared to the highest possible value (255). If the value is less than 255, then at step 1850, the third byte of the IP address is incremented by one, “modulo 256,” such that the number wraps back to zero if the present number is 255 and the third byte counter is less than 255. At step 1860, a new CMP address corresponding to the result is assigned to the switch that caused the conflict. At step 1870, if a conflict is still detected, the algorithm loops back to step 1840. Otherwise, the algorithm terminates at step 1899.
  • If at step 1840 the value of the third byte counter is determined to be greater than or equal to 255, then at step 1880, the fourth byte counter is compared to the highest possible value (255). If the value is less than 255, then at step 1885, the fourth byte of the IP address is incremented by one, modulo 256, such that the byte wraps back to zero if its present value is 255, and the fourth byte counter is incremented to record the attempt. At step 1890, a new CMP address corresponding to the result is assigned to the switch that caused the conflict. At step 1895, if a conflict is still detected, the algorithm loops back to step 1880. Otherwise, the algorithm terminates at step 1899.
  • If at step 1880, the value of the fourth byte counter is determined to be greater than or equal to 255 and there is still a conflict, then the algorithm proceeds to step 1900, where an error condition is declared, meaning that the conflict could not be resolved. However, the probability of such an error condition occurring is extremely low, as discussed below.
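  • The correction loop of FIG. 18 may be summarized with the sketch below; the helper conflict_still_exists() stands in for the conflict monitoring described above and is assumed rather than defined here. The sketch restarts from the original address when it moves on to the next byte, matching the sequence of candidate addresses given in the following paragraph:

      from typing import Optional

      def correct_cmp_address(cmp_address: str, conflict_still_exists) -> Optional[str]:
          """Try up to 255 alternative values for the second, then the third,
          then the fourth byte of a conflicting CMP address; return a
          conflict-free address, or None if the conflict cannot be resolved
          (the error condition of step 1900)."""
          original = [int(part) for part in cmp_address.split(".")]
          for byte_index in (1, 2, 3):          # second, third, fourth byte
              for delta in range(1, 256):       # per-byte attempt counter
                  octets = list(original)
                  octets[byte_index] = (octets[byte_index] + delta) % 256
                  candidate = ".".join(str(octet) for octet in octets)
                  if not conflict_still_exists(candidate):
                      return candidate
          return None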
  • In the embodiment described above and illustrated in FIG. 18, a total of (256*3), i.e., 768, different IP address combinations are attempted, including the originally-assigned IP address that caused the conflict. Thus, for example, if the original generated CMP address is “10.x.y.z,” then the next CMP addresses attempted are “10.(x+1).y.z,” “10.(x+2).y.z,” . . . , “10.((x+255)mod256).y.z,” “10.x.(y+1).z,” “10.x.(y+2).z,” . . . , “10.x.((y+255)mod256).z,” “10.x.y.(z+1),” “10.x.y.(z+2),” . . . , “10.x.y.((z+255)mod256).” This method has proven to be satisfactory in field tests. However, those skilled in the art will realize that many other methods for attempting new IP address combinations may be implemented, depending on the requirements of each particular application. For example, a method in which (256^3), i.e., 16,777,216, different IP addresses are attempted may be implemented by “nesting” the incrementing loops of each byte of the IP address. In other words, this can be implemented by first incrementing the second byte of the IP address up to 256 different times, then incrementing the third byte by one and then incrementing the second byte of the IP address up to 256 different times again. This part of the method alone will result in (256^2), i.e., 65,536, attempts. If a conflict is still detected, then the fourth byte may be incremented by one, whereupon the process of incrementing the second byte, then the third byte, may be repeated, thus resulting in a total of (256^3), i.e., 16,777,216, different IP address combinations.
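  • Under the same assumptions (conflict_still_exists() is an assumed helper), the nested variant described above may be sketched as follows; it enumerates every combination of the last three bytes before giving up:

      from itertools import product
      from typing import Optional

      def correct_cmp_address_exhaustive(cmp_address: str, conflict_still_exists) -> Optional[str]:
          """Nest the per-byte loops so that up to 256**3 combinations of the
          last three bytes are attempted; the second byte varies fastest,
          then the third byte, then the fourth byte."""
          first, x, y, z = (int(part) for part in cmp_address.split("."))
          for dz, dy, dx in product(range(256), repeat=3):
              candidate = "%d.%d.%d.%d" % (first, (x + dx) % 256, (y + dy) % 256, (z + dz) % 256)
              if not conflict_still_exists(candidate):
                  return candidate
          return None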
  • Other address correction methods may be employed by those skilled in the art within the spirit of the present invention. After generating the new CMP address, the commander switch uses the CMP/RARP protocol to assign the new address to the switch whose CMP address caused a conflict.
  • While embodiments and applications of this invention have been shown and described, it would be apparent to those of ordinary skill in the art having the benefit of this disclosure that many more modifications than mentioned above are possible without departing from the inventive concepts herein. The invention, therefore, is not to be restricted except in the spirit of the appended claims.

Claims (24)

1-4. (canceled)
5. A method for automatically correcting Internet Protocol (“IP”) address conflicts among network devices, comprising the steps of:
(1) detecting a conflict between a private IP address used by a first network device which is a member of a cluster of network devices and an IP address used by a second network device which is not a member of said cluster;
(2) changing said private IP address;
(3) determining whether said conflict still exists, and;
(4) if said conflict still exists, repeating steps (2)-(3) until said conflict is resolved or until it is determined that said conflict cannot be resolved.
6. The method according to claim 5, wherein said first network device is a LAN switch.
7. The method according to claim 5, wherein step (2) is performed by incrementing the value of the second byte of said private IP address if the value of said second byte is less than the maximum allowed value.
8. The method according to claim 5, wherein step (2) is performed by incrementing the value of the third byte of said private IP address if the value of said third byte is less than the maximum allowed value.
9. The method according to claim 5, wherein step (2) is performed by incrementing the value of the fourth byte of said private IP address if the value of said fourth byte is less than the maximum allowed value.
10. The method according to claim 5, wherein step (2) is performed by incrementing the value of the second byte of said private IP address if the value of a second byte counter is less than the maximum allowed value, then incrementing the value of the third byte of said private IP address if the value of a third byte counter is less than the maximum allowed value and the value of said second byte counter is equal to the maximum allowed value, then incrementing the value of the fourth byte of said private IP address if the value of a fourth byte counter is less than the maximum allowed value and the value of said second byte counter is equal to the maximum allowed value and the value of said third byte counter is equal to the maximum allowed value.
11. The method according to claim 6, wherein step (2) is performed by incrementing the value of the second byte of said private IP address if the value of said second byte is less than the maximum allowed value.
12. The method according to claim 6, wherein step (2) is performed by incrementing the value of the third byte of said private IP address if the value of said third byte is less than the maximum allowed value.
13. The method according to claim 6, wherein step (2) is performed by incrementing the value of the fourth byte of said private IP address if the value of said fourth byte is less than the maximum allowed value.
14. The method according to claim 6, wherein step (2) is performed by incrementing the value of the second byte of said private IP address if the value of a second byte counter is less than the maximum allowed value, then incrementing the value of the third byte of said private IP address if the value of a third byte counter is less than the maximum allowed value and the value of said second byte counter is equal to the maximum allowed value, then incrementing the value of the fourth byte of said private IP address if the value of a fourth byte counter is less than the maximum allowed value and the value of said second byte counter is equal to the maximum allowed value and the value of said third byte counter is equal to the maximum allowed value.
15-20. (canceled)
21. A cluster of network devices, comprising:
a commander network device having a public IP address;
a member network device having a unique private IP address automatically assigned by said commander network device, wherein said commander network device is capable of automatically detecting and correcting IP address conflicts involving said private IP address.
22. The cluster of network devices according to claim 21, wherein said commander network device is a LAN switch.
23. The cluster of network devices according to claim 21, wherein said commander network device is a LAN switch and said member network device is a LAN switch.
24. The cluster of network devices according to claim 21, wherein said commander network device further comprises logic for iteratively modifying said private IP address until said IP address conflict is resolved or until it is determined that said conflict cannot be resolved.
25. The cluster of network devices according to claim 24, wherein said commander network device is a LAN switch.
26. The cluster of network devices according to claim 24, wherein said commander network device is a LAN switch and said member network device is a LAN switch.
27-32. (canceled)
33. A first network device capable of automatically correcting Internet Protocol (“IP”) address conflicts among network devices, comprising:
means for detecting a conflict between a private IP address used by a second network device which is a member of a cluster of network devices and an IP address used by a third network device which is not a member of said cluster;
means for iteratively changing said private IP address and determining whether said conflict still exists until either said conflict is resolved or until it is determined that said conflict cannot be resolved.
34. The apparatus according to claim 33, wherein said first network device is a LAN switch.
35. The apparatus according to claim 33, wherein said first network device is a LAN switch and said second network device is a LAN switch.
36. (canceled)
37. A program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform a method for automatically correcting Internet Protocol (“IP”) address conflicts among network devices, the method comprising the steps of:
(1) detecting a conflict between a private IP address used by a first network device which is a member of a cluster of network devices and an IP address used by a second network device which is not a member of said cluster;
(2) changing said private IP address;
(3) determining whether said conflict still exists, and;
(4) if said conflict still exists, repeating steps (2)-(3) until either said conflict is resolved or until it is determined that said conflict cannot be resolved.
US11/137,937 1999-11-30 2005-05-25 Apparatus and method for automatic cluster network device address assignment Abandoned US20050207414A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/137,937 US20050207414A1 (en) 1999-11-30 2005-05-25 Apparatus and method for automatic cluster network device address assignment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/452,284 US6917626B1 (en) 1999-11-30 1999-11-30 Apparatus and method for automatic cluster network device address assignment
US11/137,937 US20050207414A1 (en) 1999-11-30 2005-05-25 Apparatus and method for automatic cluster network device address assignment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/452,284 Division US6917626B1 (en) 1999-11-30 1999-11-30 Apparatus and method for automatic cluster network device address assignment

Publications (1)

Publication Number Publication Date
US20050207414A1 true US20050207414A1 (en) 2005-09-22

Family

ID=34710005

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/452,284 Expired - Lifetime US6917626B1 (en) 1999-11-30 1999-11-30 Apparatus and method for automatic cluster network device address assignment
US11/137,937 Abandoned US20050207414A1 (en) 1999-11-30 2005-05-25 Apparatus and method for automatic cluster network device address assignment
US11/137,889 Expired - Fee Related US7545820B2 (en) 1999-11-30 2005-05-25 Apparatus and method for automatic cluster network device address assignment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/452,284 Expired - Lifetime US6917626B1 (en) 1999-11-30 1999-11-30 Apparatus and method for automatic cluster network device address assignment

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/137,889 Expired - Fee Related US7545820B2 (en) 1999-11-30 2005-05-25 Apparatus and method for automatic cluster network device address assignment

Country Status (1)

Country Link
US (3) US6917626B1 (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040017782A1 (en) * 2002-07-25 2004-01-29 Moxa Technologies Co., Ltd. Equipment monitoring system line swap fast recovery method
US20040153571A1 (en) * 2003-01-31 2004-08-05 Fujitsu Component Limited Console switch and system using the same
US20050076145A1 (en) * 2003-10-07 2005-04-07 Microsoft Corporation Supporting point-to-point intracluster communications between replicated cluster nodes
US20050138157A1 (en) * 2003-12-23 2005-06-23 Ken-Ju Jung Network device discovery system and method thereof
US20060174036A1 (en) * 2005-01-31 2006-08-03 Dain Joseph W Apparatus, system, and method for automatically mapping a tape library system
US20090135715A1 (en) * 2007-11-27 2009-05-28 International Business Machines Corporation Duplicate internet protocol address resolution in a fragmented switch stack environment
US20090254649A1 (en) * 2008-04-02 2009-10-08 International Business Machines Corporation High availability of internet protocol addresses within a cluster
US20090300218A1 (en) * 2008-05-30 2009-12-03 Asustek Computer Inc. Network sharing method
US20100220701A1 (en) * 2009-02-27 2010-09-02 Ruggedcom Inc. Client/Bridge and Method and System for Using Same
US20100250716A1 (en) * 2009-03-31 2010-09-30 Sony Corporation Network comprising a plurality of devices and root device and method for assigning a network address
US7826452B1 (en) * 2003-03-24 2010-11-02 Marvell International Ltd. Efficient host-controller address learning in ethernet switches
US20110085560A1 (en) * 2009-10-12 2011-04-14 Dell Products L.P. System and Method for Implementing a Virtual Switch
US20110164505A1 (en) * 2010-01-04 2011-07-07 Samer Salam Cfm for conflicting mac address notification
US8001393B2 (en) * 2007-02-16 2011-08-16 Hitachi, Ltd. Storage control device
US20140022937A1 (en) * 2012-07-18 2014-01-23 International Business Machines Corporation Integrated device management over ethernet network
US8718053B2 (en) 2010-11-12 2014-05-06 Cisco Technology, Inc. Packet transport for network device clusters
US8892689B1 (en) * 2008-04-30 2014-11-18 Netapp, Inc. Method and apparatus for a storage server to automatically discover and join a network storage cluster
US9450772B2 (en) 2012-01-06 2016-09-20 Huawei Technologies Co., Ltd. Method, group server, and member device for accessing member resources
US20170026314A1 (en) * 2015-07-23 2017-01-26 Honeywell International Inc. Built-in ethernet switch design for rtu redundant system
US20170070428A1 (en) * 2013-09-05 2017-03-09 Pismo Labs Technology Limited Method and system for converting a broadcast packet to a unicast packet at an access point

Families Citing this family (85)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7299294B1 (en) * 1999-11-10 2007-11-20 Emc Corporation Distributed traffic controller for network data
IL134424A0 (en) * 2000-02-07 2001-04-30 Congruency Inc A switching unit
US6725264B1 (en) * 2000-02-17 2004-04-20 Cisco Technology, Inc. Apparatus and method for redirection of network management messages in a cluster of network devices
US6804232B1 (en) 2000-03-27 2004-10-12 Bbnt Solutions Llc Personal area network with automatic attachment and detachment
DE10030522A1 (en) * 2000-06-28 2002-01-17 Harman Becker Automotive Sys Method for generating a second address
US7617292B2 (en) 2001-06-05 2009-11-10 Silicon Graphics International Multi-class heterogeneous clients in a clustered filesystem
US20040139125A1 (en) 2001-06-05 2004-07-15 Roger Strassburg Snapshot copy of data volume during data access
US7640582B2 (en) 2003-04-16 2009-12-29 Silicon Graphics International Clustered filesystem for mix of trusted and untrusted nodes
US8010558B2 (en) 2001-06-05 2011-08-30 Silicon Graphics International Relocation of metadata server with outstanding DMAPI requests
US7587476B2 (en) * 2001-08-07 2009-09-08 Ricoh Company, Ltd. Peripheral device with a centralized management server, and system, computer program product and method for managing peripheral devices connected to a network
US8868715B2 (en) * 2001-10-15 2014-10-21 Volli Polymer Gmbh Llc Report generation and visualization systems and methods and their use in testing frameworks for determining suitability of a network for target applications
US8543681B2 (en) * 2001-10-15 2013-09-24 Volli Polymer Gmbh Llc Network topology discovery systems and methods
US7515546B2 (en) * 2001-12-19 2009-04-07 Alcatel-Lucent Canada Inc. Method and apparatus for automatic discovery of network devices with data forwarding capabilities
US8040869B2 (en) * 2001-12-19 2011-10-18 Alcatel Lucent Method and apparatus for automatic discovery of logical links between network devices
US7856599B2 (en) * 2001-12-19 2010-12-21 Alcatel-Lucent Canada Inc. Method and system for IP link management
TW569572B (en) * 2002-07-18 2004-01-01 Macronix Int Co Ltd Chip of multi-port Ethernet network switch and daisy chain test method thereof
US7139841B1 (en) 2002-07-24 2006-11-21 Cisco Technology, Inc. Method and apparatus for handling embedded address in data sent through multiple network address translation (NAT) devices
KR100532098B1 (en) * 2002-11-16 2005-11-29 삼성전자주식회사 Incoming and outgoing call system based on duplicate private network
CN1266882C (en) * 2002-12-04 2006-07-26 华为技术有限公司 A management method of network device
US7454525B1 (en) * 2002-12-05 2008-11-18 Cisco Technology, Inc. Enabling communication when signaling protocol packets contain embedded addresses subject to translation
US20040258074A1 (en) * 2003-06-20 2004-12-23 Williams Aidan Michael Method and apparatus for allocating addresses in integrated zero-configured and manually configured networks
US20050010668A1 (en) * 2003-07-07 2005-01-13 Shiwen Chen Traversable network address translation with hierarchical internet addressing architecture
JP2005100270A (en) * 2003-09-26 2005-04-14 Minolta Co Ltd Printing control program and printer
US8782654B2 (en) 2004-03-13 2014-07-15 Adaptive Computing Enterprises, Inc. Co-allocating a reservation spanning different compute resources types
US20050256948A1 (en) * 2004-05-14 2005-11-17 Shaotang Hu Methods and systems for testing a cluster management station
US20050271047A1 (en) * 2004-06-02 2005-12-08 Huonder Russell J Method and system for managing multiple overlapping address domains
US20070266388A1 (en) 2004-06-18 2007-11-15 Cluster Resources, Inc. System and method for providing advanced reservations in a compute environment
US20060015596A1 (en) * 2004-07-14 2006-01-19 Dell Products L.P. Method to configure a cluster via automatic address generation
JP2006040188A (en) * 2004-07-30 2006-02-09 Hitachi Ltd Computer system and method for setting computer
US8176490B1 (en) 2004-08-20 2012-05-08 Adaptive Computing Enterprises, Inc. System and method of interfacing a workload manager and scheduler with an identity manager
CA2827035A1 (en) 2004-11-08 2006-05-18 Adaptive Computing Enterprises, Inc. System and method of providing system jobs within a compute environment
US7729284B2 (en) * 2005-01-19 2010-06-01 Emulex Design & Manufacturing Corporation Discovery and configuration of devices across an Ethernet interface
US8863143B2 (en) 2006-03-16 2014-10-14 Adaptive Computing Enterprises, Inc. System and method for managing a hybrid compute environment
US9075657B2 (en) 2005-04-07 2015-07-07 Adaptive Computing Enterprises, Inc. On-demand access to compute resources
US9225663B2 (en) 2005-03-16 2015-12-29 Adaptive Computing Enterprises, Inc. System and method providing a virtual private cluster
WO2008036058A2 (en) 2005-03-16 2008-03-27 Cluster Resources, Inc. On-demand computing environment
US9231886B2 (en) 2005-03-16 2016-01-05 Adaptive Computing Enterprises, Inc. Simple integration of an on-demand compute environment
US8516171B2 (en) * 2005-04-06 2013-08-20 Raritan Americas Inc. Scalable, multichannel remote device KVM management system for converting received signals into format suitable for transmission over a command network
US8332523B2 (en) * 2005-04-06 2012-12-11 Raritan Americas, Inc. Architecture to enable keyboard, video and mouse (KVM) access to a target from a remote client
US20070058620A1 (en) * 2005-08-31 2007-03-15 Mcdata Corporation Management of a switch fabric through functionality conservation
US20070088630A1 (en) * 2005-09-29 2007-04-19 Microsoft Corporation Assessment and/or deployment of computer network component(s)
US9143841B2 (en) 2005-09-29 2015-09-22 Brocade Communications Systems, Inc. Federated management of intelligent service modules
US7979854B1 (en) 2005-09-29 2011-07-12 Cisco Technology, Inc. Method and system for upgrading software or firmware by using drag and drop mechanism
US7633855B2 (en) * 2005-11-03 2009-12-15 Cisco Technology, Inc. System and method for resolving address conflicts in a network
US20070177597A1 (en) * 2006-02-02 2007-08-02 Yu Ju Ethernet connection-based forwarding process
CN100518087C (en) * 2006-03-03 2009-07-22 鸿富锦精密工业(深圳)有限公司 Apparatus and method for managing user terminal equipment
US7953866B2 (en) 2006-03-22 2011-05-31 Mcdata Corporation Protocols for connecting intelligent service modules in a storage area network
US7917523B2 (en) 2006-04-05 2011-03-29 Cisco Technology, Inc. Method and system for providing improved URL mangling performance using fast re-write
US20070258443A1 (en) * 2006-05-02 2007-11-08 Mcdata Corporation Switch hardware and architecture for a computer network
US20070258380A1 (en) * 2006-05-02 2007-11-08 Mcdata Corporation Fault detection, isolation and recovery for a switch system of a computer network
US8997170B2 (en) 2006-12-29 2015-03-31 Shared Spectrum Company Method and device for policy-based control of radio
US20080126521A1 (en) * 2006-09-21 2008-05-29 Hanes David H Network device management system and method
DE102006050134A1 (en) * 2006-10-25 2008-04-30 Abb Ag Bus system operating method, involves monitoring newly added participants for defined time before transmit access to bus and automatically changing own participant address based on preset pattern upon detection of own address in bus traffic
US7782797B2 (en) * 2007-02-27 2010-08-24 Hatteras Networks Methods and apparatus for self partitioning a data network to prevent address conflicts
US8477771B2 (en) 2007-03-01 2013-07-02 Meraki Networks, Inc. System and method for remote monitoring and control of network devices
US8391354B2 (en) * 2007-05-14 2013-03-05 Broadcom Corporation Method and system for transforming uncompressed video traffic to network-aware ethernet traffic with A/V bridging capabilities and A/V bridging extensions
US8125991B1 (en) * 2007-07-31 2012-02-28 Hewlett-Packard Development Company, L.P. Network switch using managed addresses for fast route lookup
US8645524B2 (en) * 2007-09-10 2014-02-04 Microsoft Corporation Techniques to allocate virtual network addresses
US8041773B2 (en) 2007-09-24 2011-10-18 The Research Foundation Of State University Of New York Automatic clustering for self-organizing grids
US20090106452A1 (en) * 2007-10-19 2009-04-23 Lam Johnny A Address assignment
US8295298B2 (en) * 2008-05-07 2012-10-23 Hid Global Gmbh Device with ethernet switch function and single ethernet connector
FR2931970B1 (en) * 2008-05-27 2010-06-11 Bull Sas METHOD FOR GENERATING HANDLING REQUIREMENTS OF SERVER CLUSTER INITIALIZATION AND ADMINISTRATION DATABASE, DATA CARRIER AND CLUSTER OF CORRESPONDING SERVERS
US8452572B2 (en) * 2008-11-17 2013-05-28 Cisco Technology, Inc. Distributed sample survey technique for data flow reduction in sensor networks
JP5233756B2 (en) * 2009-03-06 2013-07-10 富士通株式会社 Information processing apparatus, identification information setting program, and identification information setting method
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US10877695B2 (en) 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US8737210B2 (en) * 2011-03-09 2014-05-27 Telefonaktiebolaget L M Ericsson (Publ) Load balancing SCTP associations using VTAG mediation
US8948054B2 (en) * 2011-12-30 2015-02-03 Cisco Technology, Inc. System and method for discovering multipoint endpoints in a network environment
US9088477B2 (en) 2012-02-02 2015-07-21 International Business Machines Corporation Distributed fabric management protocol
US9077651B2 (en) 2012-03-07 2015-07-07 International Business Machines Corporation Management of a distributed fabric system
US9077624B2 (en) 2012-03-07 2015-07-07 International Business Machines Corporation Diagnostics in a distributed fabric system
US9215168B2 (en) * 2012-07-23 2015-12-15 Broadcom Corporation Controller area network communications using ethernet
US9185155B2 (en) * 2012-09-07 2015-11-10 Cisco Technology, Inc. Internet presence for a home network
US9001697B2 (en) 2012-12-14 2015-04-07 Western Digital Technologies, Inc. Methods and devices for replacing and configuring a router in a network
US9143929B1 (en) 2012-12-14 2015-09-22 Western Digital Technologies, Inc. Methods and devices configured for IP address conflict detection and resolution upon assignment of WAN IP address
US9525589B2 (en) * 2012-12-17 2016-12-20 Cisco Technology, Inc. Proactive M2M framework using device-level vCard for inventory, identity, and network management
US10277465B2 (en) * 2013-01-22 2019-04-30 Proofpoint, Inc. System, apparatus and method for dynamically updating the configuration of a network device
US9866432B2 (en) * 2013-05-10 2018-01-09 Comcast Cable Communications, Llc Dynamic network awareness
US20150074260A1 (en) * 2013-09-11 2015-03-12 Cisco Technology, Inc. Auto discovery and topology rendering in substation networks
US10205785B2 (en) * 2014-09-11 2019-02-12 Dell Products L.P. Systems and methods for providing virtual crash cart access to an information handling system
US11038887B2 (en) * 2017-09-29 2021-06-15 Fisher-Rosemount Systems, Inc. Enhanced smart process control switch port lockdown
CN109271433A (en) * 2018-09-03 2019-01-25 中新网络信息安全股份有限公司 A kind of company-data synchronous method
US10897417B2 (en) * 2018-09-19 2021-01-19 Amazon Technologies, Inc. Automated route propagation among networks attached to scalable virtual traffic hubs
US11399006B2 (en) * 2020-08-31 2022-07-26 Nokia Solutions And Networks Oy Address generation
CN114301865B (en) * 2021-12-29 2023-07-21 迈普通信技术股份有限公司 Table entry management method, apparatus, network device and computer readable storage medium

Citations (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4644532A (en) * 1985-06-10 1987-02-17 International Business Machines Corporation Automatic update of topology in a hybrid network
US4922486A (en) * 1988-03-31 1990-05-01 American Telephone And Telegraph Company User to network interface protocol for packet communications networks
US4933937A (en) * 1986-11-29 1990-06-12 Kabushiki Kaisha Toshiba Network adapter for connecting local area network to backbone network
US4962497A (en) * 1989-09-21 1990-10-09 At&T Bell Laboratories Building-block architecture of a multi-node circuit-and packet-switching system
US5018137A (en) * 1988-06-27 1991-05-21 Digital Equipment Corporation Transparent load sharing for parallel networks
US5095480A (en) * 1989-06-16 1992-03-10 Fenner Peter R Message routing system for shared communication media networks
US5136580A (en) * 1990-05-16 1992-08-04 Microcom Systems, Inc. Apparatus and method for learning and filtering destination and source addresses in a local area network system
US5150464A (en) * 1990-06-06 1992-09-22 Apple Computer, Inc. Local area network device startup process
US5241682A (en) * 1991-04-18 1993-08-31 International Business Machines Corporation Border node having routing and functional capability in a first network and only local address capability in a second network
US5274631A (en) * 1991-03-11 1993-12-28 Kalpana, Inc. Computer network switching system
US5280480A (en) * 1991-02-21 1994-01-18 International Business Machines Corporation Source routing transparent bridge
US5287103A (en) * 1991-12-30 1994-02-15 At&T Bell Laboratories Method and apparatus for providing local area network clients with internetwork identification data
US5319644A (en) * 1992-08-21 1994-06-07 Synoptics Communications, Inc. Method and apparatus for identifying port/station relationships in a network
US5371852A (en) * 1992-10-14 1994-12-06 International Business Machines Corporation Method and apparatus for making a cluster of computers appear as a single host on a network
US5394402A (en) * 1993-06-17 1995-02-28 Ascom Timeplex Trading Ag Hub for segmented virtual local area network with shared media access
US5430715A (en) * 1993-09-15 1995-07-04 Stratacom, Inc. Flexible destination address mapping mechanism in a cell switching communication controller
US5519706A (en) * 1993-12-03 1996-05-21 International Business Machines Corporation Dynamic user registration method in a mobile communications network
US5526489A (en) * 1993-03-19 1996-06-11 3Com Corporation System for reverse address resolution for remote network device independent of its physical address
US5530963A (en) * 1993-12-16 1996-06-25 International Business Machines Corporation Method and system for maintaining routing between mobile workstations and selected network workstation using routing table within each router device in the network
US5574860A (en) * 1993-03-11 1996-11-12 Digital Equipment Corporation Method of neighbor discovery over a multiaccess nonbroadcast medium
US5591828A (en) * 1989-06-22 1997-01-07 Behringwerke Aktiengesellschaft Bispecific and oligospecific mono-and oligovalent receptors, the preparation and use thereof
US5594732A (en) * 1995-03-03 1997-01-14 Intecom, Incorporated Bridging and signalling subsystems and methods for private and hybrid communications systems including multimedia systems
US5617421A (en) * 1994-06-17 1997-04-01 Cisco Systems, Inc. Extended domain computer network using standard links
US5715394A (en) * 1993-06-29 1998-02-03 Alcatel N.V. Method of supporting the management of a communications network, and network management facility therefor
US5758282A (en) * 1995-06-19 1998-05-26 Sharp Kabushiki Kaisha Radio terminal using allocated addresses
US5793763A (en) * 1995-11-03 1998-08-11 Cisco Technology, Inc. Security system for network address translation systems
US5802047A (en) * 1995-05-31 1998-09-01 Nec Corporation Inter-LAN connecting device with combination of routing and switching functions
US5809483A (en) * 1994-05-13 1998-09-15 Broka; S. William Online transaction processing system for bond trading
US5812529A (en) * 1996-11-12 1998-09-22 Lanquest Group Method and apparatus for network assessment
US5835720A (en) * 1996-05-17 1998-11-10 Sun Microsystems, Inc. IP discovery apparatus and method
US5835725A (en) * 1996-10-21 1998-11-10 Cisco Technology, Inc. Dynamic address assignment and resolution technique
US5854901A (en) * 1996-07-23 1998-12-29 Cisco Systems, Inc. Method and apparatus for serverless internet protocol address discovery using source address of broadcast or unicast packet
US5862348A (en) * 1996-02-09 1999-01-19 Citrix Systems, Inc. Method and apparatus for connecting a client node to a server node based on load levels
US5912891A (en) * 1996-02-28 1999-06-15 Hitachi, Ltd. Virtual network system
US5918016A (en) * 1997-06-10 1999-06-29 Texas Instruments Incorporated System with program for automating protocol assignments when newly connected to varing computer network configurations
US5968116A (en) * 1996-03-27 1999-10-19 Intel Corporation Method and apparatus for facilitating the management of networked devices
US5991828A (en) * 1993-08-25 1999-11-23 Fujitsu Limited System for automatically connecting portable device to network using network environment information including domain name of naming device and community name of network management protocol
US6009103A (en) * 1997-12-23 1999-12-28 Mediaone Group, Inc. Method and system for automatic allocation of resources in a network
US6023724A (en) * 1997-09-26 2000-02-08 3Com Corporation Apparatus and methods for use therein for an ISDN LAN modem that displays fault information to local hosts through interception of host DNS request messages
US6026441A (en) * 1997-12-16 2000-02-15 At&T Corporation Method for establishing communication on the internet with a client having a dynamically assigned IP address
US6046992A (en) * 1991-10-01 2000-04-04 Intermec Ip Corp. Radio frequency local area network
US6055236A (en) * 1998-03-05 2000-04-25 3Com Corporation Method and system for locating network services with distributed network address translation
US6091951A (en) * 1997-05-14 2000-07-18 Telxon Corporation Seamless roaming among multiple networks
US6092196A (en) * 1997-11-25 2000-07-18 Nortel Networks Limited HTTP distributed remote user authentication system
US6119160A (en) * 1998-10-13 2000-09-12 Cisco Technology, Inc. Multiple-level internet protocol accounting
US6141687A (en) * 1998-05-08 2000-10-31 Cisco Technology, Inc. Using an authentication server to obtain dial-out information on a network
US6188691B1 (en) * 1998-03-16 2001-02-13 3Com Corporation Multicast domain virtual local area network
US6266335B1 (en) * 1997-12-19 2001-07-24 Cyberiq Systems Cross-platform server clustering using a network flow switch
US6370584B1 (en) * 1998-01-13 2002-04-09 Trustees Of Boston University Distributed routing
US6425008B1 (en) * 1999-02-16 2002-07-23 Electronic Data Systems Corporation System and method for remote management of private networks having duplicate network addresses
US6470389B1 (en) * 1997-03-14 2002-10-22 Lucent Technologies Inc. Hosting a network service on a cluster of servers using a single-address image
US6480508B1 (en) * 1999-05-12 2002-11-12 Westell, Inc. Router-based domain name system proxy agent using address translation
US6496866B2 (en) * 1996-08-23 2002-12-17 International Business Machines Corporation System and method for providing dynamically alterable computer clusters for message routing
US6636499B1 (en) * 1999-12-02 2003-10-21 Cisco Technology, Inc. Apparatus and method for cluster network device discovery
US6693878B1 (en) * 1999-10-15 2004-02-17 Cisco Technology, Inc. Technique and apparatus for using node ID as virtual private network (VPN) identifiers
US6810010B1 (en) * 1999-04-14 2004-10-26 Nec Corporation Redundant LAN system, active line/stand-by line switching method, and recording medium
US6975631B1 (en) * 1998-06-19 2005-12-13 Juniper Networks, Inc. Network packet forwarding lookup with a reduced number of memory accesses

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6092178A (en) 1998-09-03 2000-07-18 Sun Microsystems, Inc. System for responding to a resource request

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040017782A1 (en) * 2002-07-25 2004-01-29 Moxa Technologies Co., Ltd. Equipment monitoring system line swap fast recovery method
US7289430B2 (en) * 2002-07-25 2007-10-30 Moxa Technologies Co., Ltd. Equipment monitoring system line swap fast recovery method
US20040153571A1 (en) * 2003-01-31 2004-08-05 Fujitsu Component Limited Console switch and system using the same
US7562155B2 (en) * 2003-01-31 2009-07-14 Fujitsu Component Limited System, method, and computer program for a console switch
US8472445B1 (en) 2003-03-24 2013-06-25 Marvell Israel (M.I.S.L) Ltd Efficient host-controller address learning in ethernet switches
US7826452B1 (en) * 2003-03-24 2010-11-02 Marvell International Ltd. Efficient host-controller address learning in ethernet switches
US9294397B1 (en) 2003-03-24 2016-03-22 Marvell Israel (M.I.S.L) Ltd. Apparatus and method for forwarding packets based on approved associations between ports and addresses of received packets
US20050076145A1 (en) * 2003-10-07 2005-04-07 Microsoft Corporation Supporting point-to-point intracluster communications between replicated cluster nodes
US7631100B2 (en) * 2003-10-07 2009-12-08 Microsoft Corporation Supporting point-to-point intracluster communications between replicated cluster nodes
US20050138157A1 (en) * 2003-12-23 2005-06-23 Ken-Ju Jung Network device discovery system and method thereof
US20060174036A1 (en) * 2005-01-31 2006-08-03 Dain Joseph W Apparatus, system, and method for automatically mapping a tape library system
US7962645B2 (en) * 2005-01-31 2011-06-14 International Business Machines Corporation Apparatus, system, and method for automatically mapping a tape library system
US8001393B2 (en) * 2007-02-16 2011-08-16 Hitachi, Ltd. Storage control device
US20090135715A1 (en) * 2007-11-27 2009-05-28 International Business Machines Corporation Duplicate internet protocol address resolution in a fragmented switch stack environment
US8213297B2 (en) 2007-11-27 2012-07-03 International Business Machines Corporation Duplicate internet protocol address resolution in a fragmented switch stack environment
US20090254649A1 (en) * 2008-04-02 2009-10-08 International Business Machines Corporation High availability of internet protocol addresses within a cluster
US8108514B2 (en) 2008-04-02 2012-01-31 International Business Machines Corporation High availability of internet protocol addresses within a cluster
US8892689B1 (en) * 2008-04-30 2014-11-18 Netapp, Inc. Method and apparatus for a storage server to automatically discover and join a network storage cluster
US20090300218A1 (en) * 2008-05-30 2009-12-03 Asustek Computer Inc. Network sharing method
US8145764B2 (en) * 2008-05-30 2012-03-27 Asustek Computer Inc. Network sharing method without conflict
US9077558B2 (en) * 2009-02-27 2015-07-07 Siemens Canada Limited Client/bridge failure recovery method and system
US20100220701A1 (en) * 2009-02-27 2010-09-02 Ruggedcom Inc. Client/Bridge and Method and System for Using Same
US9203797B2 (en) * 2009-03-31 2015-12-01 Sony Corporation Network comprising a plurality of devices and root device and method for assigning a network address
US20100250716A1 (en) * 2009-03-31 2010-09-30 Sony Corporation Network comprising a plurality of devices and root device and method for assigning a network address
US20110085560A1 (en) * 2009-10-12 2011-04-14 Dell Products L.P. System and Method for Implementing a Virtual Switch
US20110164505A1 (en) * 2010-01-04 2011-07-07 Samer Salam Cfm for conflicting mac address notification
US8416696B2 (en) 2010-01-04 2013-04-09 Cisco Technology, Inc. CFM for conflicting MAC address notification
US9019840B2 (en) 2010-01-04 2015-04-28 Cisco Technology, Inc. CFM for conflicting MAC address notification
US8718053B2 (en) 2010-11-12 2014-05-06 Cisco Technology, Inc. Packet transport for network device clusters
US9450772B2 (en) 2012-01-06 2016-09-20 Huawei Technologies Co., Ltd. Method, group server, and member device for accessing member resources
RU2598582C2 (en) * 2012-01-06 2016-09-27 Хуавей Текнолоджиз Ко., Лтд. Method, group server and member device to access to member resources
US8891405B2 (en) * 2012-07-18 2014-11-18 International Business Machines Corporation Integrated device management over Ethernet network
US20140022937A1 (en) * 2012-07-18 2014-01-23 International Business Machines Corporation Integrated device management over ethernet network
US9755892B2 (en) 2012-07-18 2017-09-05 International Business Machines Corporation Integrated device management over Ethernet network
US20170070428A1 (en) * 2013-09-05 2017-03-09 Pismo Labs Technology Limited Method and system for converting a broadcast packet to a unicast packet at an access point
US10298416B2 (en) * 2013-09-05 2019-05-21 Pismo Labs Technology Limited Method and system for converting a broadcast packet to a unicast packet at an access point
US20170026314A1 (en) * 2015-07-23 2017-01-26 Honeywell International Inc. Built-in ethernet switch design for rtu redundant system
CN107852375A (en) * 2015-07-23 2018-03-27 霍尼韦尔国际公司 Built-in ethernet switch design for RTU redundant systems
US9973447B2 (en) * 2015-07-23 2018-05-15 Honeywell International Inc. Built-in ethernet switch design for RTU redundant system

Also Published As

Publication number Publication date
US7545820B2 (en) 2009-06-09
US6917626B1 (en) 2005-07-12
US20050213560A1 (en) 2005-09-29

Similar Documents

Publication Publication Date Title
US6917626B1 (en) Apparatus and method for automatic cluster network device address assignment
US6636499B1 (en) Apparatus and method for cluster network device discovery
US6654796B1 (en) System for managing cluster of network switches using IP address for commander switch and redirecting a managing request via forwarding an HTTP connection to an expansion switch
USRE41750E1 (en) Apparatus and method for redirection of network management messages in a cluster of network devices
US6952421B1 (en) Switched Ethernet path detection
US7568040B2 (en) Techniques for establishing subscriber sessions on an access network using DHCP
AU697935B2 (en) Method for establishing restricted broadcast groups in a switched network
US7039049B1 (en) Method and apparatus for PPPoE bridging in a routing CMTS
Rayes et al. The internet in IoT
Cisco Command Descriptions (part 1)
Cisco Product Overview
Cisco Router Products Command Summary Internetwork Operating System Release 10.2
Cisco Switch Command Quick Reference
Cisco Switch Command Quick Reference
Cisco Product Overview
Cisco X.25 and LAPB Commands
Cisco X.25 and LAPB Commands
Cisco X.25 and LAPB Commands
Cisco X.25 and LAPB Commands
Cisco Configuring CSS Network Protocols
Cisco Product Overview
Cisco X.25 and LAPB Commands
Cisco X.25 and LAPB Commands
Cisco X.25 and LAPB Commands
Cisco Product Overview

Legal Events

Date Code Title Description
AS Assignment

Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DUVVURY, MURALI;REEL/FRAME:016604/0457

Effective date: 20000111

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION