WO2017023247A1 - A networking infrastructure center - Google Patents

A networking infrastructure center

Info

Publication number
WO2017023247A1
Authority
WO
WIPO (PCT)
Prior art keywords
networking
switches
switch
topology
infrastructure
Prior art date
Application number
PCT/US2015/043131
Other languages
French (fr)
Inventor
Patrick A. Raymond
Michael L. Sabotta
Melvin K. Benedict
Han Wang
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Priority to PCT/US2015/043131 priority Critical patent/WO2017023247A1/en
Publication of WO2017023247A1 publication Critical patent/WO2017023247A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/15: Interconnection of switching modules
    • H04L 49/10: Packet switching elements characterised by the switching fabric construction
    • H04L 49/111: Switch interfaces, e.g. port details
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/14: Multichannel or multilink protocols
    • H04L 49/35: Switches specially adapted for specific applications
    • H04L 49/356: Switches specially adapted for specific applications for storage area networks
    • H04L 49/65: Re-configuration of fast packet switches

Definitions

  • a network may include a number of user devices, storage switches, servers and networking switches.
  • the user devices, servers, storage switches, and networking switches may be connected to each other.
  • the user devices and the servers may exchange data in the form of packets. Further, the user devices and the servers may share hardware resources to maximize computing power.
  • Fig. 1A is a diagram of a networking chassis, according to one example of principles described herein.
  • Fig. 1B is a diagram of a system for a networking infrastructure center, according to one example of principles described herein.
  • Fig. 1C is a diagram of a shared input/output networking and storage multi-tier infrastructure, according to one example of principles described herein.
  • Fig. 1D is a diagram of a number of ports on a networking chassis, according to one example of principles described herein.
  • Fig. 1E is a diagram of a networking chassis, according to one example of principles described herein.
  • Fig. 1F is a diagram of a shuffler, according to one example of principles described herein.
  • Fig. 1G is a diagram of a shuffler splitting an infrastructure topology, according to one example of principles described herein.
  • Fig. 2A is a diagram of a fat-tree infrastructure topology, according to one example of principles described herein.
  • Fig. 2B is a diagram of a fat-tree infrastructure topology implemented on a networking chassis, according to one example of principles described herein.
  • Fig. 3A is a diagram of an island-centric infrastructure topology, according to one example of principles described herein.
  • Fig. 3B is a diagram of an island-centric infrastructure topology implemented on a networking chassis, according to one example of principles described herein.
  • Fig. 4A is a diagram of a spine-centric infrastructure topology, according to one example of principles described herein.
  • Fig. 4B is a diagram of a spine-centric infrastructure topology implemented on a networking chassis, according to one example of principles described herein.
  • Fig. 5 is a flowchart of a method for building a networking infrastructure center, according to one example of principles described herein.
  • a network may include a number of user devices, servers, storage switches, and networking switches.
  • the network includes a number of chassis to physically hold the servers, storage devices, storage switches, and networking switches.
  • networking cables are used to connect the servers, storage devices, storage switches, and networking switches together on the chassis.
  • the servers, storage switches, and networking switches are connected, via the networking cables, to each other to realize an infrastructure topology specific for the network.
  • the chassis are designed to realize a single infrastructure topology.
  • the principles described herein include a networking infrastructure center.
  • the networking infrastructure center may include a networking chassis, networking switches implemented in the networking chassis via module slots, the networking switches being swappable in the module slots to create an infrastructure topology, storage switches implemented in the networking chassis via the module slots, the storage switches being swappable to create the infrastructure topology, and a shuffler, the shuffler to detect locations of the networking switches and the storage switches to allow the networking switches and the storage switches to be placed in arbitrary locations on the networking chassis and to realize the infrastructure topology by splitting the infrastructure topology via physical signal multiplexing to physically isolate the networking switches and the storage switches.
  • the networking chassis can be customized to realize a number of infrastructure topologies.
  • the term "networking switch" means a networking device that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device.
  • Networking switches may include net switches, spine switches, leaf switches, or other types of networking switches.
  • the term "spine switch” means a networking device implemented on a second level of an infrastructure topology that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device.
  • the spine switches may be connected to net switches and leaf switches.
  • the spine switches may be similar to a core of a traditional core- aggregate-access model for processing north and south network traffic that travels in and out of a data center. Further, the spine switches may be several high-throughput layer 3 switches with high port density.
  • leaf switch means a networking device implemented on a third level of an infrastructure topology that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device. Further, the leaf switches may be connected to spine switches. The leaf switches may be similar to an access layer of a traditional core-aggregate- access model. As a result, a leaf switch may provide network connection points and uplink for spine switches.
  • the term "net switch” means a networking device implemented on a first level of an infrastructure topology that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device. Further, the net switch may be connected to spine switches. In some examples, the net switch may be a root of the infrastructure topology.
  • the term "pass through” means a mechanism that allows a signal to be sent from one networking switch or storage switch to another networking switch or storage switch without being altered.
  • a net switch may be connected to a pass through and the pass through may be connected to a spine switch.
  • the signal sent from the net switch to the spine switch via the pass through may be unaltered.
  • the term "a number of" or similar language is meant to be understood broadly as any positive number comprising 1 to infinity; zero not being a number, but the absence of a number.
  • Fig. 1A is a diagram of a networking chassis, according to one example of principles described herein.
  • a networking infrastructure center includes a networking chassis, the networking chassis includes three top cascading physical hardware layers and a bottom cascading physical hardware layer.
  • the networking infrastructure center (100) includes a networking chassis (114).
  • the networking chassis (114) may be a networking rack that can accommodate a number of networking components.
  • the networking chassis (114) may be a 4 unit (U) networking chassis.
  • the networking chassis (114) includes a number of module slots (126).
  • the module slots (126) include module slot 1 (126-1), module slot 2 (126-2), module slot 3 (126-3), module slot 4 (126-4), module slot 5 (126-5), module slot 6 (126-6), module slot 7 (126-7), module slot 8 (126-8), module slot 9 (126-9), module slot 10 (126-10), module slot 11 (126-11), module slot 12 (126-12), module slot 13 (126-13), module slot 14 (126-14), module slot 15 (126-15), and module slot 16 (126-16).
  • the module slots (126) may be sized to accommodate networking switches such as leaf switches, spine switches, net switches, and serial attached SCSI (SAS) switches. Further, the module slots (126) may be sized to accommodate storage switches. As will be described in other parts of this specification, the module slots (126) allow the leaf switches, spine switches, as well as other networking switches to be arranged such that a number of different infrastructure topologies can be realized.
  • the networking chassis (114) of the networking infrastructure center (100) may include a number of cascading physical hardware layers (116, 118).
  • the networking chassis (114) may include cascading physical hardware layer one (116-1), cascading physical hardware layer two (116-2), cascading physical hardware layer three (116-3), and cascading physical hardware layer 4 (116-4).
  • Cascading physical hardware layer one (116-1), cascading physical hardware layer two (116-2), and cascading physical hardware layer three (116-3) may make up the three top cascading physical hardware layers.
  • cascading physical hardware layer 4 (116-4) may make up the bottom cascading physical hardware layer.
  • the three top cascading physical hardware layers may include leaf switches implemented in a cartridge tray form factor.
  • the three top cascading physical hardware layers may include spine switches implemented in a cartridge tray form factor.
  • the three top cascading physical hardware layers may include a combination of spine switches, net switches, and leaf switches.
  • the three top cascading physical hardware layers may accommodate networking switches.
  • cascading physical hardware layer 4 (116-4) may be used to accommodate SAS switches and SAS lanes.
  • cascading physical hardware layer 4 (116-4) may be used to accommodate storage switches.
  • the bottom cascading physical hardware layer may be used to accommodate the storage switches.
  • the networking chassis (114) may include cartridge tray form factors for inserting or removing leaf switches, spine switches, net switches, and SAS switches from module slots of the networking chassis.
  • the leaf switches, spine switches, net switches, and SAS switches may be arranged on any of the cascading physical hardware layers (116, 118) to realize any infrastructure topology.
  • the networking chassis may be a 1U networking chassis to a 7U networking chassis.
  • the networking chassis may include more or fewer module slots than illustrated in Fig. 1A.
  • the networking chassis may include any number of cascading physical hardware layers to realize any infrastructure topology.
  • the networking chassis may include three cascading physical hardware layers.
  • the networking chassis may utilize storage switches.
  • the storage switches may be implemented on the bottom cascading physical hardware layer of the networking chassis.
  • the networking chassis may include other combinations of SAS switches and other storage switches.
  • Fig. 1 B is a diagram of a system for a networking infrastructure center, according to one example of principles described herein.
  • the system includes a networking chassis, spine switches implemented in a cartridge tray form factor on the networking chassis, leaf switches implemented in a cartridge tray form factor on the networking chassis, and SAS switches.
  • the networking chassis (114) of the networking infrastructure center (100) may include a number of cascading physical hardware layers (116 and 118).
  • the three top cascading physical hardware layers (116-1, 116-2, and 116-3) may include leaf switches (102) implemented in a cartridge tray form factor.
  • leaf switch 1 (102-1) and leaf switch 4 (102-4) are implemented on cascading physical hardware layer 1 (116-1).
  • Leaf switch 2 (102-2) and leaf switch 5 (102-5) are implemented on cascading physical hardware layer 2 (116-2).
  • leaf switch 3 (102-3) and leaf switch 6 (102-6) are implemented on cascading physical hardware layer 3 (116-3).
  • the three top cascading physical hardware layers may include spine switches (104) implemented in a cartridge tray form factor. As illustrated, spine switch 1 (104-1) and spine switch 4 (104-4) are implemented on cascading physical hardware layer 1 (116-1). Spine switch 2 (104-2) and spine switch 5 (104-5) are implemented on cascading physical hardware layer 2 (116-2). Further, spine switch 3 (104-3) and spine switch 6 (104-6) are implemented on cascading physical hardware layer 3 (116-3).
  • cascading physical hardware layer 4 (116-4) may be used to accommodate SAS switches (108) and SAS lanes (106). As illustrated, cascading physical hardware layer 4 (116-4) includes SAS lanes 1 (106-1), SAS switch 1 (108-1), SAS switch 2 (108-2), and SAS lanes 2 (106-2).
  • the networking switches such as the leaf switches (102) and spine switches (104) may be physically arranged in the networking chassis (114) to realize a number of infrastructure topologies.
  • the infrastructure topologies include a fat-tree topology, an island-centric topology, or a spine-centric topology.
  • Fig. 1C is a diagram of a shared input/output networking and storage multi-tier infrastructure, according to one example of principles described herein.
  • the shared input/output networking and storage multi-tier infrastructure may include a number of trays, a networking chassis, and external storage.
  • the shared input/output networking and storage multi-tier infrastructure (150) includes a number of trays (128).
  • the shared input/output networking and storage multi-tier infrastructure (150) includes tray 1 (128-1), tray 2 (128-2), tray 3 (128-3), tray 4 (128-4), tray 5 (128-5), and tray 6 (128-6).
  • Each of the trays (128) may be cartridge tray form factors for inserting or removing leaf switches, spine switches, net switches, and SAS switches from the module slots of Fig. 1A.
  • the switches associated with the trays (128) may be connected to a leaf switch (102) and an SAS switch (108) as illustrated.
  • the SAS switch (108) may be connected to external storage (130) or a storage switch.
  • the external storage (130) may be just a bunch of disks (JBOD) that connects a series of hard drives by combining multiple hard drives and their capacities into a single logical hard drive. For example, the external storage (130) may connect a 20 gigabyte (GB) hard drive, a 60 GB hard drive, and a 100 GB hard drive together to form a 180 GB hard drive.
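As a minimal illustration of the capacity arithmetic described in the bullet above (editorial sketch, not part of the patent text), the following Python snippet concatenates the 20 GB, 60 GB, and 100 GB example drives into a single JBOD volume:

```python
# Minimal sketch of JBOD-style capacity concatenation (illustrative only).
drives_gb = [20, 60, 100]  # example hard drive capacities from the text

# A JBOD simply appends one drive after another, so the combined
# capacity is the plain sum of the member drives.
combined_gb = sum(drives_gb)

print(f"JBOD combines {drives_gb} GB drives into one {combined_gb} GB volume")
# JBOD combines [20, 60, 100] GB drives into one 180 GB volume
```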
  • the shared input/output networking and storage multi-tier infrastructure (150) includes the networking chassis (114).
  • the networking chassis (114) may accommodate a leaf switch (102).
  • the networking chassis (114) may accommodate a shuffler (112).
  • the shuffler (112) may be implemented on a backplane of the networking chassis (114). As will be described in other parts of this specification, the shuffler is leveraged to build variations of infrastructure topologies such as fat-tree, island-centric, spine-centric, and other infrastructure topologies.
  • the shuffler (112) is connected to a number of spine switches (104).
  • the shuffler (112) is connected to spine switch 1 (104-1), spine switch 2 (104-2), and spine switch n (104-n).
  • the spine switches (104) may be connected to other networking switches.
  • Fig. 1D is a diagram of a number of ports on a networking chassis, according to one example of principles described herein. As will be described below, a number of ports are used for inputs and outputs on the networking chassis. Further, the number of ports is customizable to introduce a number of in-and-out ratios.
  • the networking chassis (114) includes a number of ports (180 to 196).
  • the ports (180 to 196) may be used to connect networking switches such as spine switches, leaf switches, SAS switches, net switches, and other switches to each other.
  • ports (186 to 196) in the center of the networking chassis (114) are used as a constant shuffle for the ports (180, 182, 184) of the left side of the networking chassis (114) and the ports (181, 183, 185) of the right side of the networking chassis (114).
  • ports 1-8 (180) on the left side of the networking chassis (114) and ports 1-8 (181) on the right side of the networking chassis (114) may connect to ports 1 (186-1) to ports 8 (192-4) respectively.
  • port 1 of ports 1-8 (180) on the left side of the networking chassis (114) and port 1 of ports 1-8 (181) on the right side of the networking chassis (114) may connect to ports 1 (186-1).
  • Ports 1 (186-1) may include two ports that allow port 1 of ports 1-8 (180) on the left side of the networking chassis (114) and port 1 of ports 1-8 (181) on the right side of the networking chassis (114) to connect to ports 1 (186-1). Further, each of the ports (180, 182, 184) of the left side of the networking chassis (114) and the ports (181, 183, 185) of the right side of the networking chassis (114) may connect to their respective ports (186 to 196) in the center of the networking chassis (114).
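The bullets above pair each left-side port with the same-numbered right-side port on a shared center port. The short Python sketch below models that pairing for ports 1-8 on each side; the center-port label sequence and dictionary layout are illustrative assumptions, not a mapping taken from the patent figures.

```python
# Illustrative model of the left/right-to-center port shuffle described above.
# Center port labels such as "186-1" follow the reference numerals in the text;
# the exact label sequence is an assumption for illustration.
LEFT_PORTS = [f"L{n}" for n in range(1, 9)]    # ports 1-8 (180), left side
RIGHT_PORTS = [f"R{n}" for n in range(1, 9)]   # ports 1-8 (181), right side

def build_shuffle(center_labels):
    """Pair each left port with the same-numbered right port on one center port."""
    shuffle = {}
    for left, right, center in zip(LEFT_PORTS, RIGHT_PORTS, center_labels):
        shuffle[center] = (left, right)
    return shuffle

# Hypothetical labels for the first eight center ports.
centers = ["186-1", "186-2", "186-3", "186-4", "192-1", "192-2", "192-3", "192-4"]
mapping = build_shuffle(centers)
print(mapping["186-1"])  # ('L1', 'R1'): both side ports 1 land on center ports 1
```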
  • the ports (186 to 196) in the center of the networking chassis (114) may be used as a pass through.
  • a pass through may be a mechanism that allows a signal to be sent from one networking switch to another networking switch without being altered.
  • a net switch may be connected to a pass through and the pass through may be connected to a spine switch. As a result, the signal sent from the net switch to the spine switch via the pass through may be unaltered.
  • ports 2 (186-2), ports 3 (186-3), ports 10 (188-2), ports 11 (188-3), ports 18 (190-2), ports 19 (190-3), ports 6 (192-2), ports 7 (192-3), ports 14 (194-2), ports 15 (194-3), ports 22 (196-2), and ports 23 (196-3) may be used as a pass through.
  • ports 2 (186-2), ports 3 (186-3), ports 10 (188-2), ports 11 (188-3), ports 18 (190-2), ports 19 (190-3), ports 6 (192-2), ports 7 (192-3), ports 14 (194-2), ports 15 (194-3), ports 22 (196-2), and ports 23 (196-3) may be used as a pass through for an infrastructure topology.
  • a number of the ports (186 to 196) in the center of the networking chassis (114) may not be populated.
  • ports 4 (186-4), ports 12 (188-4), ports 20 (190-4), ports 8 (192-4), ports 16 (194-4), and ports 24 (196-4) may not be populated.
  • ports 4 (186-4), ports 12 (188-4), ports 20 (190-4), ports 8 (192-4), ports 16 (194-4), and ports 24 (196-4) may not be used for an infrastructure topology.
  • the number of ports (180 to 196) may be used in a one to one mode.
  • the networking chassis may include 144 ports.
  • the number of ports (180 to 196) may be used in a two to one mode.
  • the networking chassis may include 196 ports.
  • the number of ports (180 to 196) may be used in an eight to one mode.
  • the networking chassis may include 256 ports.
  • the number of ports used for input and output is customizable which introduces a number of in-and-out ratios.
  • any of the ports may be used as a pass through to realize specific infrastructure topologies. Further, while this example has been described with reference to specific ports not being populated, any of the ports may not be populated to realize specific infrastructure topologies.
  • Fig. 1E is a diagram of a networking chassis, according to one example of principles described herein.
  • a networking chassis accommodates a number of networking switches and storage switches. Further, the networking chassis accommodates a shuffler to realize an infrastructure topology.
  • the system (175) includes a networking chassis (114).
  • the networking chassis (114) includes module slots (126).
  • the module slots (126) include module slot 1 (126-1), module slot 2 (126-2), module slot 3 (126-3), module slot 4 (126-4), module slot 5 (126-5), module slot 6 (126-6), module slot 7 (126-7), and module slot 8 (126-8).
  • the system (175) includes networking switches (102 and 104).
  • the networking switches (102 and 104) may include leaf switch 1 (102-1), leaf switch 2 (102-2), leaf switch 3 (102-3), spine switch 1 (104-1), spine switch 2 (104-2), and spine switch 3 (104-3).
  • the networking switches (102 and 104) are implemented in the networking chassis (114) via the module slots (126) as described above. As illustrated, leaf switch 1 (102-1) is inserted into module slot 4 (126-4). Leaf switch 2 (102-2) is inserted into module slot 5 (126-5). Further, leaf switch 3 (102-3) is inserted into module slot 6 (126-6). Spine switch 1 (104-1) is inserted into module slot 1 (126-1).
  • Spine switch 2 (104-2) is inserted into module slot 2 (126-2). Further, spine switch 3 (104-3) is inserted into module slot 3 (126-3). Further, the networking switches (102 and 104) are swappable in the module slots (126) to create an infrastructure topology. For example, spine switch 1 (104-1) may be located in module slot 2 (126-2) instead of module slot 1 (126-1). As a result, the networking switches (102 and 104) may be located in any of the module slots (126).
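To make the slot arrangement above concrete, here is a small, hypothetical Python model of swappable module slots. The slot numbers and switch names mirror the reference numerals in the text, but the class itself is illustrative and not taken from the patent.

```python
# Illustrative model of swappable switch modules in chassis slots.
class NetworkingChassis:
    def __init__(self, num_slots):
        # Slots are empty until a switch tray is inserted.
        self.slots = {n: None for n in range(1, num_slots + 1)}

    def insert(self, slot, switch):
        if self.slots[slot] is not None:
            raise ValueError(f"module slot {slot} is already occupied")
        self.slots[slot] = switch

    def swap(self, slot_a, slot_b):
        # Swapping trays between slots changes the realized topology.
        self.slots[slot_a], self.slots[slot_b] = self.slots[slot_b], self.slots[slot_a]

chassis = NetworkingChassis(num_slots=8)
for slot, switch in [(1, "spine 1"), (2, "spine 2"), (3, "spine 3"),
                     (4, "leaf 1"), (5, "leaf 2"), (6, "leaf 3"),
                     (7, "storage 1"), (8, "storage 2")]:
    chassis.insert(slot, switch)
chassis.swap(1, 2)  # e.g., spine switch 1 moved to module slot 2
```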
  • the system (175) includes storage switches (198).
  • the storage switches (198) include storage switch 1 (198-1) and storage switch 2 (198-2).
  • the storage switches (198) are implemented in the networking chassis (114) via the module slots (126).
  • Storage switch 1 (198-1) is inserted into module slot 7 (126-7) and storage switch 2 (198-2) is inserted into module slot 8 (126-8).
  • the storage switches (198) are swappable to create the infrastructure topology.
  • storage switch 1 (198-1) may be located in module slot 8 (126-8) instead of module slot 7 (126-7).
  • the storage switches (198) may be located in any of the module slots (126).
  • the system (175) further includes a shuffler (112).
  • the shuffler (112) detects locations of the networking switches (102 and 104) and the storage switches (198) to allow the networking switches (102 and 104) and the storage switches (198) to be placed in arbitrary locations on the networking chassis (114).
  • the shuffler (112) may detect that spine switch 1 (104-1) is inserted into module slot 1 (126-1).
  • the shuffler (112) may detect locations of the networking switches (102 and 104) and the storage switches (198) via hardware detection. In hardware detection, an install pin associated with the networking chassis (114) is pulled to logic low when a tray is inserted.
  • the shuffler (112) may detect locations of the networking switches (102 and 104) and the storage switches (198) via internal communication to collect switch information. For example, the shuffler (112) may not enable connection of the networking switches (102 and 104) and the storage switches (198) before the shuffler (112) leverages a system management bus to read device field replaceable unit (FRU) data.
  • FRU: field replaceable unit
  • the FRU data may include switch type, switch version, switch manufacturer, other FRU data, or combinations thereof.
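The following Python sketch illustrates the two detection paths described above: a tray-install pin read as logic low, and an FRU read over a management bus. The pin and bus helper functions are hypothetical placeholders for illustration, not an actual shuffler API.

```python
# Illustrative sketch of how a shuffler might detect inserted switch trays.
# read_install_pin() and read_fru() are hypothetical placeholders for the
# chassis install-pin GPIO and the system management bus (SMBus) FRU read.

def read_install_pin(slot):
    # Placeholder: returns 0 (logic low) when a tray is present in the slot.
    return 0 if slot in (1, 2, 3) else 1

def read_fru(slot):
    # Placeholder: FRU data collected over the system management bus.
    return {"switch_type": "spine", "switch_version": "1.0", "manufacturer": "example"}

def detect_trays(num_slots):
    """Report which slots hold a tray and what kind of switch it carries."""
    detected = {}
    for slot in range(1, num_slots + 1):
        if read_install_pin(slot) == 0:       # hardware detection: pin pulled low
            detected[slot] = read_fru(slot)   # internal communication: FRU data
    return detected

print(detect_trays(num_slots=8))
```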
  • the shuffler (112) realizes the infrastructure topology by splitting the infrastructure topology via physical signal multiplexing to physically isolate the networking switches (102 and 104) and the storage switches (198).
  • the shuffler (112) may use physical signal multiplexing to assign a specific channel between a single input of the shuffler (112) and a corresponding output. Once this assignment is made, it cannot be shared with other inputs or outputs until a switch tray has been inserted or removed from the networking chassis (114). More information about splitting will be described in Fig. 1G.
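As a rough sketch of the exclusive channel assignment just described (assumed behavior, not a defined shuffler interface), the following Python model maps one shuffler input to one output and refuses to share either end until a tray event releases the assignment:

```python
# Illustrative model of exclusive input-to-output channel assignment
# in a physical-signal multiplexer; not an actual shuffler interface.
class Shuffler:
    def __init__(self):
        self.channels = {}   # input port -> output port

    def assign(self, inp, out):
        if inp in self.channels or out in self.channels.values():
            raise ValueError("channel endpoints cannot be shared once assigned")
        self.channels[inp] = out

    def on_tray_event(self, inp):
        # Inserting or removing a switch tray releases that input's channel.
        self.channels.pop(inp, None)

mux = Shuffler()
mux.assign("in-1", "out-3")    # dedicated channel between one input and one output
mux.on_tray_event("in-1")      # tray removed: the channel may now be reassigned
mux.assign("in-1", "out-5")
```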
  • Fig. 1F is a diagram of a shuffler, according to one example of principles described herein. As will be described below, a shuffler includes a number of ports and a central controller.
  • the shuffler (112) may be a type of hardware abstraction layer.
  • the hardware abstraction layer may be a physical (PHY) layer.
  • the shuffler (112) may work as a multi input multi output (MIMO) system whose topology has a bipartite graph nature.
  • MIMO: multi input multi output
  • the shuffler (112) can detect the type of switch it is connected to as well as a plurality of connected ports. As a result, the topology does not need to be a complete bipartite graph.
  • the shuffler (112) provides security features by utilizing infrastructure topology verification.
  • Infrastructure topology verification is used to prevent malicious hacking and attacks on the infrastructure topology.
  • the shuffler (112) may verify the infrastructure topology at a hardware level.
  • the infrastructure topology is encrypted and verified on a regular basis via an encryption engine.
  • the encryption engine may use product hardware information, an encryption key, and customer information to determine an encrypted default topology for the infrastructure topology.
  • Infrastructure topology verification may further include a decryption engine.
  • the decryption engine may utilize the encrypted default topology, the customer information, and the product hardware information with a decryption key to determine the infrastructure topology.
  • the customer information may include a customer identification number.
  • the customer identification number may be associated with information such as a location, a date, other information, or combinations thereof.
  • the product hardware information may include information such as a manufacturing date, a site, a serial number, other information, or combinations thereof.
  • the shuffler (112) may further verify the infrastructure topology at a hardware level by determining infrastructure topology updates, if a default feature is enabled under regular topology verification, and by utilizing a remote topology verification server.
  • Infrastructure topology verification may further include an encryption key generation engine for a passive new topology.
  • the encryption key generation engine may use a new topology, an old encryption key along with the encryption engine above to determine an encrypted new advanced topology for the infrastructure topology.
  • infrastructure topology verification may further include a decryption key generation engine for the passive new topology.
  • the decryption key generation engine may use the product hardware information and the new topology to determine a new decryption key. Using the new decryption key, the customer information, and the encrypted new advanced topology, the decryption engine may use this information as described above for infrastructure topology verification.
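To illustrate the verification flow described in the bullets above, here is a hedged Python sketch: it derives a key from product hardware information and customer information, seals a topology description, and checks it later. The patent does not specify the cryptographic algorithms; the HMAC-based check below is only one possible realization of the encryption/decryption-engine idea.

```python
# Hedged sketch of infrastructure topology verification at a hardware level.
# The keying inputs (product hardware information, customer information, a base
# key) follow the text; the HMAC construction is an illustrative assumption.
import hashlib
import hmac
import json

def derive_key(product_hw_info, customer_info, base_key):
    material = json.dumps([product_hw_info, customer_info, base_key], sort_keys=True)
    return hashlib.sha256(material.encode()).digest()

def seal_topology(topology, key):
    """Produce an authenticated (encrypted-default-topology style) record."""
    payload = json.dumps(topology, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_topology(record, key):
    expected = hmac.new(key, record["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

key = derive_key({"serial": "SN-001", "site": "A"}, {"customer_id": 42}, "base-key")
record = seal_topology({"spine-1": ["leaf-1", "leaf-2"]}, key)
assert verify_topology(record, key)      # regular verification passes
record["payload"] = b'{"spine-1": ["leaf-9"]}'
assert not verify_topology(record, key)  # a tampered topology is rejected
```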
  • the shuffler (112) includes a number of ports (115).
  • the ports (115) may be used for connecting networking switches and storage switches together on a networking chassis to realize an infrastructure topology.
  • the ports (115) may be used for inputs and/or outputs.
  • the ports (115) may have identical bandwidths for inputs and outputs.
  • the infrastructure topology may be a fixed infrastructure topology.
  • the infrastructure topology is fixed when created. As a result, changes can be made by pulling out or pushing in trays with networking switches and storage switches.
  • the infrastructure topology may be a firmware customizable infrastructure topology. For the firmware customizable infrastructure topology, the central controller (117) associated with the shuffler (112) can modify the physical signal multiplexing accordingly.
  • the infrastructure topology may be a passive infrastructure topology adaption.
  • a customer can modify the infrastructure topology on the console user interface that is software definable.
  • the infrastructure topology is transferred to the shuffler (112).
  • the new infrastructure topology may be transferred to memory of the central controller (117).
  • the infrastructure topology may be software definable self-adaptive shuffling.
  • the shuffler (112) can implement self-adapting changes. This includes splitting the infrastructure topology based on tray insertion.
  • the shuffler (112) may detect the switch type in the infrastructure topology.
  • the logic implemented in the central controller (117) of the shuffler (112) may use an optimization feature to realize the infrastructure topology.
  • the optimization feature may be preprogrammed to realize the infrastructure topology.
  • the optimization feature may be fine-tuned by a customer via manual script inputs to further realize the infrastructure topology.
  • the shuffler (112) may include the central controller (117).
  • the central controller (117) may be used to control the logic of the shuffler (112). The logic may be determined via the instructions contained in memory and executed by a processor associated with the central controller (117).
  • the central controller (117) may include the encryption engine, the decryption engine, the encryption key generation engine, and the decryption key generation engine.
  • the engines may refer to program instructions for performing a designated function. The program instructions cause the processor to execute the designated function of the engines.
  • the engines refer to a combination of hardware and program instructions to perform a designated function.
  • Each of the engines may include a processor and memory. The program instructions are stored in the memory and cause the processor to execute the designated function of the engine as described above.
  • Fig. 1G is a diagram of a shuffler splitting an infrastructure topology, according to one example of principles described herein.
  • an infrastructure topology includes a number of networking switches and a number of storage switches.
  • the infrastructure topology (195) includes a number of networking switches (123).
  • the infrastructure topology (195) includes networking switch 1 (123-1), networking switch 2 (123-2), networking switch 3 (123-3), networking switch 4 (123-4), and networking switch 5 (123-5).
  • the infrastructure topology (195) includes a number of storage switches (127).
  • the infrastructure topology (195) includes storage switch 1 (127-1), storage switch 2 (127-2), storage switch 3 (127-3), storage switch 4 (127-4), and storage switch 5 (127-5).
  • the infrastructure topology (195) includes a number of shufflers (129).
  • the infrastructure topology (195) includes shuffler 1 (129-1) and shuffler 2 (129-2).
  • the infrastructure topology (195) can be split.
  • the shufflers (129) can split the infrastructure topology (195) to accommodate both applications.
  • the shufflers (129) use physical signal multiplexing to split the infrastructure topology (195).
  • the infrastructure topology (195) is split via the shufflers (129).
  • shuffler 1 (129-1) is used to connect networking switch 1 (123-1) and networking switch 2 (123-2) to networking switch 3 (123-3).
  • shuffler 2 (129-2) is used to connect networking switch 3 (123-3) to networking switch 4 (123-4) and networking switch 5 (123-5).
  • shuffler 1 (129-1) is used to connect storage switch 1 (127-1) and storage switch 2 (127-2) to storage switch 3 (127-3).
  • shuffler 2 (129-2) is used to connect storage switch 3 (127-3) to storage switch 4 (127-4) and storage switch 5 (127-5).
  • the shufflers (129) split the infrastructure topology (195) to physically isolate the networking switches (123) and the storage switches (127).
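A hedged Python sketch of the splitting just described: each shuffler carries a networking domain and a storage domain over the same crossbar while keeping the two sets of endpoints separate. The connection lists mirror Fig. 1G; the data structures are illustrative only.

```python
# Illustrative sketch of a shuffler splitting one topology into physically
# isolated networking and storage domains (connection lists follow Fig. 1G).
from collections import defaultdict

connections = {
    "shuffler-1": [("net-1", "net-3"), ("net-2", "net-3"),
                   ("sto-1", "sto-3"), ("sto-2", "sto-3")],
    "shuffler-2": [("net-3", "net-4"), ("net-3", "net-5"),
                   ("sto-3", "sto-4"), ("sto-3", "sto-5")],
}

def split_domains(links):
    """Group each shuffler's links into isolated networking/storage domains."""
    domains = defaultdict(list)
    for a, b in links:
        kind = "networking" if a.startswith("net") else "storage"
        # A link must never mix domains; the shuffler keeps them isolated.
        assert b.startswith(a[:3]), "networking and storage must stay isolated"
        domains[kind].append((a, b))
    return dict(domains)

for name, links in connections.items():
    print(name, split_domains(links))
```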
  • spine switches, leaf switches, SAS switches, and other networking switches may be arranged in the networking chassis to create a number of infrastructure topologies.
  • the networking chassis may be flexible to build a number of infrastructure topologies.
  • Fig. 2A is a diagram of a fat-tree topology, according to one example of principles described herein. As will be described below, a number of net switches, spine switches, and leaf switches may be connected to each other such that a fat-tree topology is realized.
  • the fat-tree topology (200) includes a number of net switches (210).
  • the net switches (210) include net switch 1 (210-1), net switch 2 (210-2), net switch 3 (210-3), and net switch 4 (210-4).
  • the net switches (210) may be a networking device that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device.
  • the net switches (210) may be implemented on a first level of the fat-tree topology (200).
  • the fat-tree topology (200) includes a number of spine switches (204).
  • the spine switches (204) may be a networking device that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device.
  • the spine switches (204) include spine switch 1 (204-1), spine switch 2 (204-2), spine switch 3 (204-3), spine switch 4 (204-4), spine switch 5 (204-5), spine switch 6 (204-6), spine switch 7 (204-7), and spine switch 8 (204-8).
  • Each of the net switches (210) may be connected to each of the spine switches (204) via networking cables (232) represented as solid lines in Fig. 2A.
  • net switch 1 (210-1) is connected to spine switch 1 (204-1), spine switch 2 (204-2), spine switch 3 (204-3), spine switch 4 (204-4), spine switch 5 (204-5), spine switch 6 (204-6), spine switch 7 (204-7), and spine switch 8 (204-8).
  • the spine switches (204) may be implemented on a second level of the fat-tree topology (200).
  • the networking cables (232) may be designed to connect to four ports, three ports, or two ports.
  • a shuffler may be located between the net switches (210) and the spine switches (204) to realize the fat-tree topology (200).
  • the fat-tree topology (200) includes a number of leaf switches (202).
  • the leaf switches (202) may be a networking device that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device.
  • the leaf switches (202) include leaf switch 1 (202-1), leaf switch 2 (202-2), leaf switch 3 (202-3), leaf switch 4 (202-4), leaf switch 5 (202-5), leaf switch 6 (202-6), leaf switch 7 (202-7), leaf switch 8 (202-8), leaf switch 9 (202-9), and leaf switch 10 (202-10).
  • the leaf switches (202) may be implemented on a third level of the fat-tree topology (200).
  • each of leaf switch 1 to leaf switch 5 (202-1 to 202-5) may have one connection to spine switch 1 to spine switch 4 (204-1 to 204-4) via the networking cables (232) represented as solid lines in Fig. 2A.
  • leaf switch 1 (202-1) is connected to spine switch 1 (204-1), spine switch 2 (204-2), spine switch 3 (204-3), and spine switch 4 (204-4).
  • each of leaf switch 6 to leaf switch 10 (202-6 to 202-10) may have one connection to spine switch 5 to spine switch 8 (204-5 to 204-8).
  • leaf switch 6 (202-6) is connected, via the networking cables (232) represented as solid lines in Fig. 2A, to spine switch 5 (204-5), spine switch 6 (204-6), spine switch 7 (204-7), and spine switch 8 (204-8).
  • a shuffler may be located between spine switch 1 to spine switch 4 (204-1 to 204-4) and leaf switch 1 to leaf switch 5 (202-1 to 202-5) to realize the fat-tree topology (200).
  • a shuffler may be located between spine switch 5 to spine switch 8 (204-5 to 204-8) and leaf switch 6 to leaf switch 10 (202-6 to 202-10) to realize the fat-tree topology (200).
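The fat-tree wiring described above can be summarized programmatically. The Python sketch below builds the adjacency implied by Fig. 2A: every net switch connects to every spine switch, leaf switches 1-5 connect to spine switches 1-4, and leaf switches 6-10 connect to spine switches 5-8. The helper function and names are illustrative, not part of the patent.

```python
# Illustrative adjacency for the fat-tree topology of Fig. 2A.
def fat_tree():
    links = []
    # First level to second level: each net switch connects to every spine switch.
    for net in range(1, 5):
        for spine in range(1, 9):
            links.append((f"net-{net}", f"spine-{spine}"))
    # Second level to third level: leaves 1-5 connect to spines 1-4,
    # leaves 6-10 connect to spines 5-8.
    for leaf in range(1, 11):
        spines = range(1, 5) if leaf <= 5 else range(5, 9)
        for spine in spines:
            links.append((f"spine-{spine}", f"leaf-{leaf}"))
    return links

links = fat_tree()
print(len(links))                              # 4*8 + 10*4 = 72 links
print([l for l in links if l[1] == "leaf-1"])  # leaf 1 hangs off spines 1-4
```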
  • Fig. 2B is a diagram of a fat-tree topology implemented on a networking chassis, according to one example of principles described herein.
  • the networking chassis implements spine switches, leaf switches, net switches, and SAS switches in a cartridge tray form factor on the networking chassis to realize the fat-tree topology of Fig. 2A.
  • the networking chassis (214) may be illustrated as two networking chassis.
  • the networking chassis (214) is illustrated as networking chassis (214-1) and networking chassis (214-2).
  • the shufflers are illustrated as solid double arrow lines in Fig. 2B.
  • the networking chassis (214-1) includes the net switches (210) depicted in Fig. 2A.
  • net switch 1 (210-1) is implemented on cascading physical hardware layer 1, net switch 2 (210-2) is implemented on cascading physical hardware layer 2, and net switch 3 (210-3) is implemented on cascading physical hardware layer 3 of the networking chassis (214-1) to realize the fat-tree topology of Fig. 2A.
  • the networking chassis (214) includes the spine switches (204) depicted in Fig. 2A.
  • spine switch 1 (204-1) and spine switch 4 (204-4) are implemented on cascading physical hardware layer 1.
  • Spine switch 2 (204-2) and spine switch 5 (204-5) are implemented on cascading physical hardware layer 2.
  • spine switch 3 (204-3) and spine switch 6 (204-6) are implemented on cascading physical hardware layer 3.
  • the networking chassis (214-2) may include a number of pass through (224).
  • the pass through (224) include pass through 1 (224-1), pass through 2 (224-2), pass through 3 (224-3), pass through 4 (224-4), pass through 5 (224-5), and pass through 6 (224-6).
  • pass through 1 (224-1) and pass through 4 (224-4) are implemented on cascading physical hardware layer 1.
  • Pass through 2 (224-2) and pass through 5 (224-5) are implemented on cascading physical hardware layer 2.
  • Pass through 3 (224-3) and pass through 6 (224-6) are implemented on cascading physical hardware layer 3.
  • the networking chassis (214-2) may include a number of leaf switches (202).
  • the leaf switches (202) include leaf switch 1 (202-1), leaf switch 2 (202-2), leaf switch 3 (202-3), leaf switch 4 (202-4), leaf switch 5 (202-5), leaf switch 6 (202-6), leaf switch 7 (202-7), leaf switch 8 (202-8), leaf switch 9 (202-9), and leaf switch 10 (202-10).
  • leaf switch 1 (202-1) and leaf switch 4 (202-4) are implemented on cascading physical hardware layer 1.
  • Leaf switch 2 (202-2) and leaf switch 5 (202-5) are implemented on cascading physical hardware layer 2.
  • leaf switch 3 (202-3) and leaf switch 6 (202-6) are implemented on cascading physical hardware layer 3.
  • the networking chassis (214-2) may include SAS switches (208) and SAS lanes (206). As illustrated, cascading physical hardware layer four includes SAS lanes 1 (206-1), SAS switch 1 (208-1), SAS switch 2 (208-2), and SAS lanes 2 (206-2).
  • the net switches (210), the spine switches (204), the pass through (224), the SAS lanes (206) and SAS switches (208) may be connected together as illustrated in Fig. 2A.
  • the fat-tree topology of Fig. 2A may be realized.
  • Fig. 3A is a diagram of an island-centric topology, according to one example of principles described herein.
  • networking switches such as net switches, spine switches, and leaf switches may be connected to each other such that an island-centric topology is realized.
  • the island-centric topology (300) includes a number of net switches (310).
  • the net switches (310) include net switch 1 (310-1), net switch 2 (310-2), net switch 3 (310-3), and net switch 4 (310-4).
  • the net switches (310) may be implemented on a first level of the island-centric topology (300).
  • the island-centric topology (300) includes a number of spine switches (304).
  • the spine switches (304) include spine switch 1 (304-1), spine switch 2 (304-2), spine switch 3 (304-3), spine switch 4 (304-4), spine switch 5 (304-5), spine switch 6 (304-6), spine switch 7 (304-7), and spine switch 8 (304-8).
  • Each of the net switches (310) may be connected to each of the spine switches (304) via networking cables (332) represented as solid lines in Fig. 3A.
  • net switch 1 (310-1) is connected to spine switch 1 (304-1), spine switch 2 (304-2), spine switch 3 (304-3), spine switch 4 (304-4), spine switch 5 (304-5), spine switch 6 (304-6), spine switch 7 (304-7), and spine switch 8 (304-8).
  • the spine switches (304) may be implemented on a second level of the island-centric topology (300).
  • a shuffler may be located between the net switches (310) and the spine switches (304) to realize the island-centric topology (300).
  • the island-centric topology (300) includes a number of leaf switches (302).
  • the leaf switches (302) include leaf switch 1 (302-1), leaf switch 2 (302-2), leaf switch 3 (302-3), leaf switch 4 (302-4), leaf switch 5 (302-5), leaf switch 6 (302-6), leaf switch 7 (302-7), and leaf switch 8 (302-8).
  • each of leaf switch 1 to leaf switch 4 (302-1 to 302-4) may have one connection to spine switch 1 to spine switch 4 (304-1 to 304-4) via the networking cables (332) represented as solid lines in Fig. 3A.
  • leaf switch 1 (302-1) is connected to spine switch 1 (304-1), spine switch 2 (304-2), spine switch 3 (304-3), and spine switch 4 (304-4).
  • each of leaf switch 5 to leaf switch 8 (302-5 to 302-8) may have one connection to spine switch 5 to spine switch 8 (304-5 to 304-8).
  • leaf switch 5 (302-5) is connected to spine switch 5 (304-5), spine switch 6 (304-6), spine switch 7 (304-7), and spine switch 8 (304-8) via the networking cables (332) represented as solid lines in Fig. 3A.
  • the leaf switches (302) may be implemented on a third level of the island-centric topology (300).
  • a shuffler may be located between leaf switch 1 to leaf switch 4 (302-1 to 302-4) and spine switch 1 to spine switch 4 (304-1 to 304-4) to realize the island-centric topology (300).
  • a shuffler may be located between leaf switch 5 to leaf switch 8 (302-5 to 302-8) and spine switch 5 to spine switch 8 (304-5 to 304-8) to realize the island-centric topology (300).
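For comparison with the fat-tree sketch earlier, here is the island-centric wiring implied by Fig. 3A in the same illustrative Python style: every net switch still reaches every spine switch, but the leaves form two islands of four, each served by its own group of four spines. Names and helper are illustrative, not from the patent.

```python
# Illustrative adjacency for the island-centric topology of Fig. 3A.
def island_centric():
    links = []
    # Each net switch connects to every spine switch.
    for net in range(1, 5):
        for spine in range(1, 9):
            links.append((f"net-{net}", f"spine-{spine}"))
    # Leaves form two islands: leaves 1-4 on spines 1-4, leaves 5-8 on spines 5-8.
    for leaf in range(1, 9):
        spines = range(1, 5) if leaf <= 4 else range(5, 9)
        for spine in spines:
            links.append((f"spine-{spine}", f"leaf-{leaf}"))
    return links

print(len(island_centric()))   # 4*8 + 8*4 = 64 links
```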
  • Fig. 3B is a diagram of an island-centric topology implemented on a networking chassis, according to one example of principles described herein.
  • the networking chassis implements spine switches, leaf switches, net switches, and SAS switches in a cartridge tray form factor on the networking chassis to realize the island-centric topology.
  • the networking chassis (314) may be illustrated as two networking chassis.
  • the networking chassis (314) is illustrated as networking chassis (314-1) and networking chassis (314-2).
  • the shufflers are illustrated as solid double arrow lines in Fig. 3B to realize the island-centric topology (300).
  • the networking chassis (314-1 ) includes the net switches (310) depicted in Fig. 3A.
  • net switch 1 (310-1) is implemented on cascading physical hardware layer 1, net switch 2 (310-2) is implemented on cascading physical hardware layer 2, and net switch 3 (310-3) is implemented on cascading physical hardware layer 3 of the networking chassis (314-1).
  • the networking chassis (314-1) may include a number of pass throughs (324).
  • the pass throughs (324) include pass through 1 (324-1), pass through 2 (324-2), pass through 3 (324-3), pass through 4 (324-4), pass through 5 (324-5), and pass through 6 (324-6).
  • pass through 1 (324-1) and pass through 4 (324-4) are implemented on cascading physical hardware layer 1.
  • Pass through 2 (324-2) and pass through 5 (324-5) are implemented on cascading physical hardware layer 2.
  • Pass through 3 (324-3) and pass through 6 (324-6) are implemented on cascading physical hardware layer 3.
  • the networking chassis (314-2) includes the spine switches (304) depicted in Fig. 3A.
  • spine switch 1 (304-1) and spine switch 4 (304-4) are implemented on cascading physical hardware layer 1.
  • Spine switch 2 (304-2) and spine switch 5 (304-5) are implemented on cascading physical hardware layer 2.
  • spine switch 3 (304-3) and spine switch 6 (304-6) are implemented on cascading physical hardware layer 3.
  • the networking chassis (314-2) may include a number of leaf switches (302) as depicted in Fig. 3A.
  • the leaf switches (302) include leaf switch 1 (302-1), leaf switch 2 (302-2), leaf switch 3 (302-3), leaf switch 4 (302-4), leaf switch 5 (302-5), and leaf switch 6 (302-6).
  • leaf switch 1 (302-1) and leaf switch 4 (302-4) are implemented on cascading physical hardware layer 1.
  • Leaf switch 2 (302-2) and leaf switch 5 (302-5) are implemented on cascading physical hardware layer 2.
  • leaf switch 3 (302-3) and leaf switch 6 (302-6) are implemented on cascading physical hardware layer 3.
  • the networking chassis (314) may include SAS switches (308) and SAS lanes (306).
  • cascading physical hardware layer 4 includes SAS lanes 1 (306-1), SAS switch 1 (308-1), SAS switch 2 (308-2), and SAS lanes 2 (306-2).
  • the net switches (310), the spine switches (304), the pass through (324), the SAS lanes (306) and SAS switches (308) may be connected together as illustrated in Fig. 3A.
  • the island-centric topology of Fig. 3A may be realized.
  • Fig. 4A is a diagram of a spine-centric topology, according to one example of principles described herein.
  • networking switches such as net switches, spine switches, and leaf switches may be connected to each other such that a spine-centric topology is realized.
  • the spine-centric topology (400) includes a number of net switches (410).
  • the net switches (410) include net switch 1 (410-1), net switch 2 (410-2), net switch 3 (410-3), and net switch 4 (410-4).
  • the net switches (410) may be implemented on a first level of the spine-centric topology (400).
  • the spine-centric topology (400) includes a number of spine switches (404).
  • the spine switches (404) include spine switch 1 (404-1), spine switch 2 (404-2), spine switch 3 (404-3), spine switch 4 (404-4), spine switch 5 (404-5), spine switch 6 (404-6), spine switch 7 (404-7), and spine switch 8 (404-8).
  • Net switch 1 and net switch 2 (410-1 and 410-2) may be connected to spine switch 1 to spine switch 4 (404-1 to 404-4) via networking cables (432) represented as solid lines in Fig. 4A.
  • net switch 1 (410-1) is connected to spine switch 1 (404-1), spine switch 2 (404-2), spine switch 3 (404-3), and spine switch 4 (404-4).
  • Net switch 3 (410-3) is connected to spine switch 5 (404-5), spine switch 6 (404-6), spine switch 7 (404-7), and spine switch 8 (404-8).
  • the spine switches (404) may be implemented on a second level of the spine-centric topology (400).
  • a shuffler may be located between the net switches (410) and the spine switches (404) to realize the spine-centric topology (400).
  • the spine-centric topology (400) includes a number of leaf switches (402).
  • the leaf switches (402) include leaf switch 1 (402-1), leaf switch 2 (402-2), leaf switch 3 (402-3), leaf switch 4 (402-4), leaf switch 5 (402-5), leaf switch 6 (402-6), leaf switch 7 (402-7), and leaf switch 8 (402-8).
  • each of the leaf switches (402) may be connected to each spine switch (404) via networking cables (432) represented as solid lines in Fig. 4A.
  • leaf switch 1 (402-1) is connected to spine switch 1 (404-1), spine switch 2 (404-2), spine switch 3 (404-3), spine switch 4 (404-4), spine switch 5 (404-5), spine switch 6 (404-6), spine switch 7 (404-7), and spine switch 8 (404-8).
  • the leaf switches (402) may be implemented on a third level of the spine-centric topology (400).
  • a shuffler may be located between the spine switches (404) and the leaf switches (402) to realize the spine-centric topology (400).
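Continuing the same illustrative Python style, the spine-centric wiring of Fig. 4A differs from the earlier topologies in that the net switches are partitioned across the spines while every leaf connects to every spine. The text states connections for net switches 1-3; net switch 4's attachment to spine switches 5-8 is an assumption made here by symmetry.

```python
# Illustrative adjacency for the spine-centric topology of Fig. 4A.
def spine_centric():
    links = []
    # Net switches are partitioned: nets 1-2 reach spines 1-4,
    # nets 3-4 reach spines 5-8 (net 4 assumed by symmetry).
    for net in range(1, 5):
        spines = range(1, 5) if net <= 2 else range(5, 9)
        for spine in spines:
            links.append((f"net-{net}", f"spine-{spine}"))
    # Every leaf switch connects to every spine switch.
    for leaf in range(1, 9):
        for spine in range(1, 9):
            links.append((f"spine-{spine}", f"leaf-{leaf}"))
    return links

print(len(spine_centric()))   # 4*4 + 8*8 = 80 links
```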
  • Fig. 4B is a diagram of a spine-centric topology implemented on a networking chassis, according to one example of principles described herein.
  • the networking chassis implements spine switches, leaf switches, net switches, and SAS switches in a cartridge tray form factor on the networking chassis to realize the spine-centric topology.
  • the networking chassis (414) may be illustrated as two networking chassis. As a result, the networking chassis (414) is illustrated as networking chassis (414-1) and networking chassis (414-2). Further, the shufflers are illustrated as solid double arrow lines in Fig. 4B to realize the spine-centric topology (400).
  • the networking chassis (414-1) includes the net switches (410) depicted in Fig. 4A.
  • net switch 1 (410-1) is implemented on cascading physical hardware layer 1, net switch 2 (410-2) is implemented on cascading physical hardware layer 2, and net switch 3 (410-3) is implemented on cascading physical hardware layer 3 of the networking chassis (414-1) to realize the spine-centric topology of Fig. 4A.
  • the networking chassis (414-1) includes the spine switches (404) depicted in Fig. 4A.
  • spine switch 1 (404-1) and spine switch 4 (404-4) are implemented on cascading physical hardware layer 1.
  • Spine switch 2 (404-2) and spine switch 5 (404-5) are implemented on cascading physical hardware layer 2.
  • spine switch 3 (404-3) and spine switch 6 (404-6) are implemented on cascading physical hardware layer 3.
  • the networking chassis (414-2) may include a number of pass through (424).
  • the pass through (424) include pass through 1 (424-1), pass through 2 (424-2), pass through 3 (424-3), pass through 4 (424-4), pass through 5 (424-5), and pass through 6 (424-6).
  • pass through 1 (424-1) and pass through 4 (424-4) are implemented on cascading physical hardware layer 1.
  • Pass through 2 (424-2) and pass through 5 (424-5) are implemented on cascading physical hardware layer 2.
  • Pass through 3 (424-3) and pass through 6 (424-6) are implemented on cascading physical hardware layer 3.
  • the networking chassis (414-2) may include a number of leaf switches (402).
  • the leaf switches (402) include leaf switch 1 (402-1), leaf switch 2 (402-2), leaf switch 3 (402-3), leaf switch 4 (402-4), leaf switch 5 (402-5), and leaf switch 6 (402-6).
  • leaf switch 1 (402-1) and leaf switch 4 (402-4) are implemented on cascading physical hardware layer 1.
  • Leaf switch 2 (402-2) and leaf switch 5 (402-5) are implemented on cascading physical hardware layer 2.
  • leaf switch 3 (402-3) and leaf switch 6 (402-6) are implemented on cascading physical hardware layer 3.
  • the networking chassis (414-2) may include SAS switches (408) and SAS lanes (406).
  • cascading physical hardware layer 4 includes SAS lanes 1 (406-1), SAS switch 1 (408-1), SAS switch 2 (408-2), and SAS lanes 2 (406-2).
  • the net switches (410), the spine switches (404), the pass through (424), the SAS lanes (406) and SAS switches (408) may be connected together as illustrated in Fig. 4A.
  • the spine-centric topology of Fig. 4A may be realized.
  • Fig. 5 is a flowchart of a method for building a networking infrastructure center, according to one example of principles described herein.
  • the method (500) includes determining (501) an infrastructure topology and physically (502) arranging networking switches and storage switches on a networking chassis to realize the infrastructure topology via a shuffler, the shuffler to detect locations of the networking switches and the storage switches to allow the networking switches and the storage switches to be placed in arbitrary locations on the networking chassis and to realize the infrastructure topology by splitting the infrastructure topology via signal multiplexing to physically isolate the networking switches and the storage switches.
  • the method (500) includes determining (501) an infrastructure topology.
  • the infrastructure topologies may include a fat-tree topology, an island-centric topology, a spine-centric topology, or other infrastructure topologies.
  • the fat-tree topology, an island-centric topology, or a spine-centric topology may be determined based on the desired characteristics that each of the infrastructure topologies has to offer.
  • the method (500) includes physically (502) arranging networking switches and storage switches on a networking chassis to realize the infrastructure topology via a shuffler, the shuffler to detect locations of the networking switches and the storage switches to allow the networking switches and the storage switches to be placed in arbitrary locations on the networking chassis and to realize the infrastructure topology by splitting the infrastructure topology via signal multiplexing to physically isolate the networking switches and the storage switches.
  • Networking switches such as the spine switches and the leaf switches are implemented on three top cascading physical hardware layers of the networking chassis to realize the infrastructure topology.
  • networking cables may be used to connect the spine switches and the leaf switches together as illustrated in Figs. 2, 3, and 4 for the respective infrastructure topology.
  • the shuffler may operate as described above for splitting the infrastructure topology based on an optimization feature and verifying the infrastructure topology.
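A compact Python sketch of the two-step method (500): determine an infrastructure topology, then arrange the switches and let a shuffler-like step split the result into isolated networking and storage domains. The function names and the selection rule are illustrative assumptions layered on the earlier sketches, not an implementation taken from the patent.

```python
# Illustrative sketch of the method (500): determine a topology, then arrange
# switches on the chassis and split the result into isolated domains.

def determine_topology(desired_characteristics):
    """Block 501: pick a topology based on desired characteristics (illustrative rule)."""
    if desired_characteristics.get("uniform_bisection"):
        return "fat-tree"
    if desired_characteristics.get("isolated_pods"):
        return "island-centric"
    return "spine-centric"

def arrange_switches(topology, networking_switches, storage_switches):
    """Block 502: place switches in arbitrary slots; the shuffler isolates domains."""
    placement = {f"slot-{i}": sw for i, sw in
                 enumerate(networking_switches + storage_switches, start=1)}
    domains = {
        "networking": [sw for sw in placement.values() if sw in networking_switches],
        "storage":    [sw for sw in placement.values() if sw in storage_switches],
    }
    return topology, placement, domains

topology = determine_topology({"isolated_pods": True})
print(arrange_switches(topology, ["spine-1", "leaf-1"], ["sas-1"]))
```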

Abstract

A networking infrastructure center includes a networking chassis, networking switches implemented in the networking chassis via module slots, the networking switches being swappable in the module slots to create an infrastructure topology, storage switches implemented in the networking chassis via the module slots, the storage switches being swappable to create the infrastructure topology, and a shuffler, the shuffler to detect locations of the networking switches and the storage switches to allow the networking switches and the storage switches to be placed in arbitrary locations on the networking chassis and to realize the infrastructure topology by splitting the infrastructure topology via physical signal multiplexing to physically isolate the networking switches and the storage switches.

Description

A NETWORKING INFRASTRUCTURE CENTER
BACKGROUND
[0001] A network may include a number of user devices, storage switches, servers and networking switches. The user devices, servers, storage switches, and networking switches may be connected to each other. By connecting the user devices, servers, storage switches and networking switches to each other, the user devices and the servers may exchange data in the form of packets. Further, the user devices and the servers may share hardware resources to maximize computing power.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] The accompanying drawings illustrate various examples of the principles described herein and are a part of the specification. The examples do not limit the scope of the claims.
[0003] Fig. 1A is a diagram of a networking chassis, according to one example of principles described herein.
[0004] Fig. 1B is a diagram of a system for a networking infrastructure center, according to one example of principles described herein.
[0005] Fig. 1C is a diagram of a shared input/output networking and storage multi-tier infrastructure, according to one example of principles described herein.
[0006] Fig. 1D is a diagram of a number of ports on a networking chassis, according to one example of principles described herein.
[0007] Fig. 1E is a diagram of a networking chassis, according to one example of principles described herein.
[0008] Fig. 1F is a diagram of a shuffler, according to one example of principles described herein.
[0009] Fig. 1G is a diagram of a shuffler splitting an infrastructure topology, according to one example of principles described herein.
[0010] Fig. 2A is a diagram of a fat-tree infrastructure topology, according to one example of principles described herein.
[0011] Fig. 2B is a diagram of a fat-tree infrastructure topology implemented on a networking chassis, according to one example of principles described herein.
[0012] Fig. 3A is a diagram of an island-centric infrastructure topology, according to one example of principles described herein.
[0013] Fig. 3B is a diagram of an island-centric infrastructure topology implemented on a networking chassis, according to one example of principles described herein.
[0014] Fig. 4A is a diagram of a spine-centric infrastructure topology, according to one example of principles described herein.
[0015] Fig. 4B is a diagram of a spine-centric infrastructure topology implemented on a networking chassis, according to one example of principles described herein.
[0016] Fig. 5 is a flowchart of a method for building a networking infrastructure center, according to one example of principles described herein.
[0017] Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
DETAILED DESCRIPTION
[0018] As mentioned above, a network may include a number of user devices, servers, storage switches, and networking switches. Often, the network includes a number of chassis to physically hold the servers, storage devices, storage switches, and networking switches. Networking cables are used to connect the servers, storage devices, storage switches, and networking switches together on the chassis. The servers, storage switches, and networking switches are connected to each other, via the networking cables, to realize an infrastructure topology specific to the network.
[0019] Often, the chassis are designed to realize a single infrastructure topology. As a result, if an infrastructure topology needs to change to accommodate the network's needs, a new chassis is designed to meet those needs. Further, the storage switches and networking switches are often placed on different chassis, which results in needing longer networking cables to connect the storage switches to the networking switches.
[0020] The principles described herein include a networking infrastructure center. The networking infrastructure center may include a networking chassis, networking switches implemented in the networking chassis via module slots, the networking switches being swappable in the module slots to create an infrastructure topology, storage switches implemented in the networking chassis via the module slots, the storage switches being swappable to create the infrastructure topology, and a shuffler, the shuffler to detect locations of the networking switches and the storage switches to allow the networking switches and the storage switches to be placed in arbitrary locations on the networking chassis and to realize the infrastructure topology by splitting the infrastructure topology via physical signal multiplexing to physically isolate the networking switches and the storage switches. As a result, the networking chassis can be customized to realize a number of infrastructure topologies.
[0021] In the present specification and in the appended claims, the term "networking switch" means a networking device that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device. Networking switches may include net switches, spine switches, leaf switches, or other types of networking switches.
[0022] In the present specification and in the appended claims, the term "spine switch" means a networking device implemented on a second level of an infrastructure topology that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device. Further, the spine switches may be connected to net switches and leaf switches. The spine switches may be similar to a core of a traditional core-aggregate-access model for processing north and south network traffic that travels in and out of a data center. Further, the spine switches may be several high-throughput layer 3 switches with high port density.
[0023] In the present specification and in the appended claims, the term "leaf switch" means a networking device implemented on a third level of an infrastructure topology that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device. Further, the leaf switches may be connected to spine switches. The leaf switches may be similar to an access layer of a traditional core-aggregate-access model. As a result, a leaf switch may provide network connection points and uplink for spine switches.
[0024] In the present specification and in the appended claims, the term "net switch" means a networking device implemented on a first level of an infrastructure topology that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device. Further, the net switch may be connected to spine switches. In some examples, the net switch may be a root of the infrastructure topology.
[0025] In the present specification and in the appended claims, the term "pass through" means a mechanism that allows a signal to be sent from one networking switch or storage switch to another networking switch or storage switch without being altered. For example, a net switch may be connected to a pass through and the pass through may be connected to a spine switch. As a result, the signal sent from the net switch to the spine switch via the pass through may be unaltered.
[0026] Further, as used in the present specification and in the appended claims, the term "a number of" or similar language is meant to be understood broadly as any positive number comprising 1 to infinity; zero not being a number, but the absence of a number.
[0027] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present systems and methods. It will be apparent, however, to one skilled in the art that the present apparatus, systems, and methods may be practiced without these specific details. Reference in the specification to "an example" or similar language means that a particular feature, structure, or characteristic described in connection with that example is included as described, but may not be included in other examples.
[0028] Referring now to the figures, Fig. 1A is a diagram of a networking chassis, according to one example of principles described herein. As will be described below, a networking infrastructure center includes a networking chassis, the networking chassis includes three top cascading physical hardware layers and a bottom cascading physical hardware layer.
[0029] As illustrated, the networking infrastructure center (100) includes a networking chassis (1 14). As will be described in other parts of this specification, the networking chassis (1 14) may be a networking rack that can accommodate a number of networking components. The networking components may include leaf switches, spine switches, net switches, storage switches, other types of networking components, or combinations thereof. In some examples, the networking chassis (1 14) may be a 4 unit (U) networking chassis.
[0030] As illustrated, the networking chassis (1 14) includes a number of module slots (126). The module slots (126) include module slot 1 (126-1 ), module slot 2 (126-2), module slot 3 (126-3), module slot 4 (126-4), module slot 5 (126-5), module slot 6 (126-6), module slot 7 (126-7), module slot 8 (126-8), module slot 9 (126-9), module slot 10 (126-10), module slot 11 (126-11 ), module slot 12 (126-12), module slot 13 (126-13), module slot 14 (126-14), module slot 15 (126-15), and module slot 16 (126-16).
[0031] The module slots (126) may be sized to accommodate networking switches such as leaf switches, spine switches, net switches, and serial attached SCSI (SAS) switches. Further, the module slots (126) may be sized to accommodate storage switches. As will be described in other parts of this specification, the module slots (126) allow the leaf switches, spine switches, as well as other networking switches to be arranged such that a number of different infrastructure topologies can be realized.
[0032] As illustrated, the networking chassis (1 14) of the networking infrastructure center (100) may include a number of cascading physical hardware layers (1 16, 1 18). For example, the networking chassis (1 14) may include cascading physical hardware layer one (1 16-1 ), cascading physical hardware layer two (1 16-2), cascading physical hardware layer three (1 16-3), and cascading physical hardware layer 4 (1 16-4). Cascading physical hardware layer one (1 16-1 ), cascading physical hardware layer two (1 16-2), and cascading physical hardware layer three (1 16-3) may make up the three top cascading physical hardware layers. Further, cascading physical hardware layer 4 (1 16-4) may make up the bottom cascading physical hardware layer. As will be described in other parts of this specification, the three top cascading physical hardware layers may include leaf switches implemented in a cartridge tray form factor. Further, the three top cascading physical hardware layers may include spine switches implemented in a cartridge tray form factor. In other examples, the three top cascading physical hardware layers may include a combination of spine switches, net switches, and leaf switches. As a result, the three top cascading physical hardware layers may accommodate networking switches. Further, cascading physical hardware layer 4 (1 16-4) may be used to accommodate SAS switches and SAS lanes. As will be described in other parts of this specification, cascading physical hardware layer 4 (1 16-4) may be used to accommodate storage switches. As a result, the bottom cascading physical hardware layer may be used to accommodate the storage switches.
[0033] Although not illustrated, the networking chassis (1 14) may include cartridge tray form factors for inserting or removing leaf switches, spine switches, net switches, and SAS switches from module slots of the networking chassis. As a result, the leaf switches, spine switches, net switches, and SAS switches may be arranged on any of the cascading physical hardware layers (1 16, 1 18) to realize any infrastructure topology.
[0034] While this example has been described with reference to the networking chassis being 4U, the networking chassis may range from a 1U networking chassis to a 7U networking chassis. As a result, the networking chassis may include more or fewer module slots than illustrated in Fig. 1A. Further, while this example has been described with reference to the networking chassis including four cascading physical hardware layers, the networking chassis may include any number of cascading physical hardware layers to realize any infrastructure topology. For example, the networking chassis may include three cascading physical hardware layers.
[0035] While this example has been described with reference to the networking chassis utilizing SAS switches, the networking chassis may utilize storage switches. The storage switches may be implemented on the bottom cascading physical hardware layer of the networking chassis. Further, the networking chassis may include other combinations of SAS switches and other storage switches.
[0036] Fig. 1 B is a diagram of a system for a networking infrastructure center, according to one example of principles described herein. As will be described below, the system includes a networking chassis, spine switches implemented in a cartridge tray form factor on the networking chassis, leaf switches implemented in a cartridge tray form factor on the networking chassis, and SAS switches.
[0037] As mentioned above, the networking chassis (1 14) of the networking infrastructure center (100) may include a number of cascading physical hardware layers (1 16 and 1 18). The three top cascading physical hardware layers (1 16-1 , 1 16-2, and 1 16-3) may include leaf switches (102) implemented in a cartridge tray form factor. As illustrated, leaf switch 1 (102-1 ) and leaf switch 4 (102-4) are implemented on cascading physical hardware layer 1 (1 16-1 ). Leaf switch 2 (102-2) and leaf switch 5 (102-5) are implemented on cascading physical hardware layer 2 (1 16-2). Further, leaf switch 3 (102-3) and leaf switch 6 (102-6) are implemented on cascading physical hardware layer 3 (1 16-3).
[0038] Further, the three top cascading physical hardware layers may include spine switches (104) implemented in a cartridge tray form factor. As illustrated, spine switch 1 (104-1 ) and spine switch 4 (104-4) are implemented on cascading physical hardware layer 1 (1 16-1 ). Spine switch 2 (104-2) and spine switch 5 (104-5) are implemented on cascading physical hardware layer 2 (1 16-2). Further, spine switch 3 (104-3) and spine switch 6 (104-6) are implemented on cascading physical hardware layer 3 (1 16-3).
[0039] Further, cascading physical hardware layer 4 (1 16-4) may be used to accommodate SAS switches (108) and SAS lanes (106). As illustrated, cascading physical hardware layer 4 (1 16-4) includes SAS lanes 1 (106-1 ), SAS switch 1 (108-1 ), SAS switch 2 (108-2), and SAS lanes 2 (106-2).
[0040] As will be described in other parts of this specification, the networking switches such as the leaf switches (102) and spine switches (104) may be physically arranged in the networking chassis (1 14) to realize a number of infrastructure topologies. In some examples, the infrastructure topologies include a fat-tree topology, an island-centric topology, or a spine-centric topology.
[0041] Fig. 1 C is a diagram of a shared input/output networking and storage multi-tier infrastructure, according to one example of principles described herein. As will be described below, the shared input/output networking and storage multi-tier infrastructure may include a number of trays, a networking chassis, and external storage.
[0042] As illustrated, the shared input/output networking and storage multi-tier infrastructure (150) includes a number of trays (128). For example, the shared input/output networking and storage multi-tier infrastructure (150) includes tray 1 (128-1 ), tray 2 (128-2), tray 3 (128-3), tray 4 (128-4), tray 5 (128-5), and tray 6 (128-6). Each of the trays (128) may be cartridge tray form factors for inserting or removing leaf switches, spine switches, net switches, and SAS switches from the module slots of Fig. 1A. Further, the switches associated with the trays (128) may be connected to a leaf switch (102) and an SAS switch (108) as illustrated. In some examples, the SAS switch (108) may be connected to external storage (130) or a storage switch. The external storage (130) may be just a bunch of disks (JBOD) that connects a series of hard drives by combining multiple hard drives and capacities into a single logical hard drive. For example, the external storage (130) may connect a 20 gigabyte (GB) hard drive, a 60 GB hard drive, and a 100 GB hard drive together to form a 180 GB hard drive.
[0043] As illustrated, the shared input/output networking and storage multi-tier infrastructure (150) includes the networking chassis (1 14). The networking chassis (1 14) may accommodate a leaf switch (102). Further, the networking chassis (1 14) may accommodate a shuffler (1 12). The shuffler (1 12) may be implemented on a backplane of the networking chassis (1 14). As will be described in other parts of this specification, the shuffler is leveraged to build variations of infrastructure topologies such as fat-tree, island-centric, spine-centric, and other infrastructure topologies.
[0044] The shuffler (1 12) is connected to a number of spine switches (104). For example, the shuffler (1 12) is connected to spine switch 1 (104-1 ), spine switch 2 (104-2), and spine switch n (104-n). Although not illustrated, the spine switches (104) may be connected to other networking switches.
[0045] Fig. 1 D is a diagram of a number of ports on a networking chassis, according to one example of principles described herein. As will be described below, a number of ports are used for inputs and outputs on the networking chassis. Further, the number of ports is customizable to introduce a number of in-and-out ratios.
[0046] As illustrated, the networking chassis (1 14) includes a number of ports (180 to 196). The ports (180 to 196) may be used to connect networking switches such as spine switches, leaf switches, SAS switches, net switches, and other switches to each other.
[0047] The ports (186 to 196) in the center of the networking chassis (1 14) are used as a constant shuffle for the ports (180, 182, 184) of the left side of the networking chassis (1 14) and the ports (181 , 183, 185) of the right side of the networking chassis (1 14). In an example, ports 1-8 (180) on the left side of the networking chassis (1 14) and ports 1-8 (181 ) on the right side of the networking chassis (1 14) may connect to ports 1 (186-1 ) to ports 8 (192-4), respectively. For example, port 1 of ports 1-8 (180) on the left side of the networking chassis (1 14) and port 1 of ports 1-8 (181 ) on the right side of the networking chassis (1 14) may connect to ports 1 (186-1 ). Ports 1 (186-1 ) may include two ports that allow port 1 of ports 1-8 (180) on the left side of the networking chassis (1 14) and port 1 of ports 1-8 (181 ) on the right side of the networking chassis (1 14) to connect to ports 1 (186-1 ). Further, each of the ports (180, 182, 184) of the left side of the networking chassis (1 14) and the ports (181 , 183, 185) of the right side of the networking chassis (1 14) may connect to their respective ports (186 to 196) in the center of the networking chassis (1 14).
[0048] In some examples, the ports (186 to 196) in the center of the networking chassis (1 14) may be used as a pass through. A pass through may be a mechanism that allows a signal to be sent from one networking switch to another networking switch without being altered. For example, a net switch may be connected to a pass through and the pass through may be connected to a spine switch. As a result, the signal sent from the net switch to the spine switch via the pass through may be unaltered. In an example, ports 2 (186-2), ports 3 (186-3), ports 10 (188-2), ports 11 (188-3), ports 18 (190-2), ports 19 (190-3), ports 6 (192-2), ports 7 (192-3), ports 14 (194-2), ports 15 (194-3), ports 22 (196-2), and ports 23 (196-3) may be used as pass throughs. As a result, these ports may serve as pass throughs for an infrastructure topology.
[0049] In one example, some of the ports (186 to 196) in the center of the networking chassis (1 14) may not be populated. For example, ports 4 (186-4), ports 12 (188-4), ports 20 (190-4), ports 8 (192-4), ports 16 (194-4), and ports 24 (196-4) may not be populated. As a result, these unpopulated ports may not be used for an infrastructure topology.
[0050] Further, the number of ports (180 to 196) may be used in a one to one mode. As a result, the networking chassis may include 144 ports. In another example, the number of ports (180 to 196) may be used in a two to one mode. As a result, the networking chassis may include 196 ports. In yet another example, the number of ports (180 to 196) may be used in an eight to one mode. As a result, the networking chassis may include 256 ports. Thus, the number of ports used for input and output is customizable, which introduces a number of in-and-out ratios.
[0051] While this example has been described with reference to specific ports being used as pass throughs, any of the ports may be used as pass throughs to realize specific infrastructure topologies. Further, while this example has been described with reference to specific ports not being populated, any of the ports may be left unpopulated to realize specific infrastructure topologies.
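For readers who find the port arrangement easier to follow in code, the short sketch below models the center-port shuffle, the pass-through and unpopulated designations, and the in-and-out ratio counts described above. It is only an illustration: the port labels, the pass-through and unpopulated sets, and the mode table are assumptions chosen to mirror the example of Fig. 1D, not part of the disclosed hardware.

```python
# Minimal sketch of the center-port shuffle of Fig. 1D. The port labels,
# the pass-through and unpopulated sets, and the mode table are illustrative
# assumptions, not the disclosed hardware design.

LEFT_PORTS = [f"L{i}" for i in range(1, 9)]   # ports 1-8 on the left side
RIGHT_PORTS = [f"R{i}" for i in range(1, 9)]  # ports 1-8 on the right side

PASS_THROUGH = {2, 3, 6, 7}   # center ports used as unaltered pass-throughs
UNPOPULATED = {4, 8}          # center ports left unpopulated for this topology

def center_port_for(side_port: str) -> int:
    """Map a left- or right-side port (e.g. 'L3') to its center port number."""
    return int(side_port[1:])

def classify(center_port: int) -> str:
    """Report how a center port is used in this example topology."""
    if center_port in UNPOPULATED:
        return "unpopulated"
    if center_port in PASS_THROUGH:
        return "pass-through"
    return "switched"

# In-and-out ratio modes and the total port counts mentioned in the text.
PORT_COUNT_BY_MODE = {"1:1": 144, "2:1": 196, "8:1": 256}

if __name__ == "__main__":
    for p in LEFT_PORTS + RIGHT_PORTS:
        c = center_port_for(p)
        print(f"{p} -> center port {c} ({classify(c)})")
    print("Total chassis ports by mode:", PORT_COUNT_BY_MODE)
```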
[0052] Fig. 1 E is a diagram of a networking chassis, according to one example of principles described herein. As will be described below, a networking chassis accommodates a number of networking switches and storage switches. Further, the networking chassis accommodates a shuffler to realize an infrastructure topology.
[0053] As illustrated, the system (175) includes a networking chassis (1 14). As described above, the networking chassis (1 14) includes module slots (126). The module slots (126) include module slot 1 (126-1 ), module slot 2 (126-2), module slot 3 (126-3), module slot 4 (126-4), module slot 5 (126-5), module slot 6 (126-6), module slot 7 (126-7), and module slot 8 (126-8).
[0054] Further, the system (175) includes networking switches (102 and 104). The networking switches (102 and 104) may include leaf switch 1 (102-1 ), leaf switch 2 (102-2), leaf switch 3 (102-3), spine switch 1 (104-1 ), spine switch 2 (104-2), and spine switch 3 (104-3). The networking switches (102 and 104) are implemented in the networking chassis (1 14) via the module slots (126) as described above. As illustrated, leaf switch 1 (102-1 ) is inserted into module slot 4 (126-4). Leaf switch 2 (102-2) is inserted into module slot 5 (126-5). Further, leaf switch 3 (102-3) is inserted into module slot 6 (126-6). Spine switch 1 (104-1 ) is inserted into module slot 1 (126-1 ). Spine switch 2 (104-2) is inserted into module slot 2 (126-2). Further, spine switch 3 (104-3) is inserted into module slot 3 (126-3). Further, the networking switches (102 and 104) are swappable in the module slots (126) to create an infrastructure topology. For example, spine switch 1 (104-1 ) may be located in module slot 2 (126-2) instead of module slot 1 (126-1 ). As a result, the networking switches (102 and 104) may be located in any of the module slots (126).
[0055] Further, the system (175) includes storage switches (198). The storage switches (198) include storage switch 1 (198-1 ) and storage switch 2 (198-2). The storage switches (198) are implemented in the networking chassis (1 14) via the module slots (126). Storage switch 1 (198-1 ) is inserted into module slot 7 (126-7) and storage switch 2 (198-2) is inserted into module slot 8 (126-8). Further, the storage switches (198) are swappable to create the infrastructure topology. For example, storage switch 1 (198-1 ) may be located in module slot 8 (126-8) instead of module slot 7 (126-7). As a result, the storage switches (198) may be located in any of the module slots (126).
[0056] The system (175) further includes a shuffler (1 12). As will be described in other parts of this specification, the shuffler (1 12) detects locations of the networking switches (102 and 104) and the storage switches (198) to allow the networking switches (102 and 104) and the storage switches (198) to be placed in arbitrary locations on the networking chassis (1 14). For example, the shuffler (1 12) may detect that spine switch 1 (104-1 ) is inserted into module slot 1 (126-1 ). The shuffler (1 12) may detect locations of the networking switches (102 and 104) and the storage switches (198) via hardware detection. In hardware detection, an install pin associated with the networking chassis (1 14) is pulled to logic low when a tray is inserted. For example, when a networking switch tray is inserted into the networking chassis (1 14) via module slot 1 (126-1 ), install pin one is pulled to logic low. When a storage switch tray is inserted into the networking chassis, install pin two is pulled to logic low. Further, if install pin one or install pin two is pulled to logic high, nothing is inserted into the networking chassis. In another example, the shuffler (1 12) may detect locations of the networking switches (102 and 104) and the storage switches (198) via internal communication to collect switch information. For example, the shuffler (1 12) may not enable connection of the networking switches (102 and 104) and the storage switches (198) before the shuffler (1 12) leverages a system management bus to read device field replaceable unit (FRU) data. The FRU data may include switch type, switch version, switch manufacturer, other FRU data, or combinations thereof.
[0057] Further, the shuffler (1 12) realizes the infrastructure topology by splitting the infrastructure topology via physical signal multiplexing to physically isolate the networking switches (102 and 104) and the storage switches (198). The shuffler (1 12) may use physical signal multiplexing to assign a specific channel between a single input of the shuffler (1 12) and a corresponding output. Once this assignment is made, it cannot be shared with other inputs or outputs until a switch tray has been inserted or removed from the networking chassis (1 14). More information about splitting will be described in Fig. 1 G.
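A minimal software sketch of the two detection paths just described (install-pin sensing followed by an FRU read over the system management bus) is given below. The pin logic values, the read_install_pin and read_fru callbacks, and the SwitchInfo record are hypothetical stand-ins for whatever the central controller actually uses; the sketch only illustrates the sequence of checks, not the disclosed implementation.

```python
# Illustrative sketch of shuffler slot detection, assuming a hypothetical
# install-pin reader and FRU-read callback; these are not the disclosed APIs.
from dataclasses import dataclass
from typing import Callable, Optional

LOGIC_LOW, LOGIC_HIGH = 0, 1

@dataclass
class SwitchInfo:
    slot: int
    switch_type: str      # e.g. "spine", "leaf", "net", "storage"
    version: str
    manufacturer: str

def detect_slot(slot: int,
                read_install_pin: Callable[[int], int],
                read_fru: Callable[[int], dict]) -> Optional[SwitchInfo]:
    """Return switch info for a slot, or None if the slot is empty.

    A tray insertion pulls the slot's install pin to logic low; only then
    does the controller read FRU data over the system management bus and
    enable the slot's connections.
    """
    if read_install_pin(slot) == LOGIC_HIGH:
        return None  # nothing inserted in this slot
    fru = read_fru(slot)  # e.g. {"type": "spine", "version": "1.0", "mfr": "X"}
    return SwitchInfo(slot=slot,
                      switch_type=fru["type"],
                      version=fru["version"],
                      manufacturer=fru["mfr"])

# Example with stubbed hardware accessors: slot 1 holds a spine switch tray.
if __name__ == "__main__":
    pins = {1: LOGIC_LOW, 2: LOGIC_HIGH}
    frus = {1: {"type": "spine", "version": "1.0", "mfr": "ExampleCo"}}
    print(detect_slot(1, pins.get, frus.get))  # SwitchInfo for the spine tray
    print(detect_slot(2, pins.get, frus.get))  # None: pin two is logic high
```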
[0058] Fig. 1 F is a diagram of a shuffler, according to one example of principles described herein. As will be described below, a shuffler includes a number of ports and a central controller.
[0059] In an example, the shuffler (1 12) may be a type of hardware abstraction layer. For example, the hardware abstraction layer may be a physical (PHY) layer. Further, the shuffler (1 12) may work as a multi input multi output (MIMO) system whose topology has a bipartite graph nature. In some examples, the shuffler (1 12) can detect the type of switch it is connected to as well as a plurality of connected ports. As a result the topology doesn't need to be a complete bipartite graph.
[0060] Further, the shuffler (1 12) provides security features by utilizing infrastructure topology verification. Infrastructure topology verification is used to prevent malicious hacking and attacks on the infrastructure topology. The shuffler (1 12) may verify the infrastructure topology at a hardware level. In some examples, the infrastructure topology is encrypted and verified on a regular basis via an encryption engine. Although not illustrated in Fig. 1 F, the encryption engine may use product hardware information, an encryption key, and customer information to determine an encrypted default topology for the infrastructure topology. Infrastructure topology verification may further include a decryption engine. Although not illustrated, the decryption engine may utilize the encrypted default topology, the customer information, and the product hardware information with a decryption key to determine the infrastructure topology. The customer information may include a customer identification number. The customer identification number may be associated with information such as a location, a date, other information, or combinations thereof. The product hardware information may include information such as a manufacturing date, a site, a serial number, other information, or combinations thereof. The shuffler (1 12) may further verify the infrastructure topology at a hardware level by determining infrastructure topology updates, if a default feature is enabled under regular topology verification, and by utilizing a remote topology verification server.
[0061] Infrastructure topology verification may further include an encryption key generation engine for a passive new topology. Although not illustrated, the encryption key generation engine may use a new topology, an old encryption key along with the encryption engine above to determine an encrypted new advanced topology for the infrastructure topology.
[0062] Further, infrastructure topology verification may further include a decryption key generation engine for the passive new topology. Although not illustrated, the decryption key generation engine may use the product hardware information and the new topology to determine a new decryption key. Using the new decryption key, the customer information, and the encrypted new advanced topology, the decryption engine may use this information as described above for infrastructure topology verification.
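One way to picture the verification scheme described in the preceding paragraphs is as a keyed digest computed over the topology description from product hardware information and customer information, and re-checked on a regular schedule. The sketch below uses Python's standard hmac and hashlib modules purely for illustration; the actual encryption, decryption, and key generation engines, their key formats, and the remote verification server are not specified here, and every identifier in the example is made up.

```python
# Illustrative topology-verification sketch using a keyed digest.
# The real encryption/decryption engines, key derivation, and data formats
# are not disclosed here; this only shows the seal-then-reverify idea.
import hashlib
import hmac
import json

def derive_key(product_hw_info: dict, customer_info: dict, secret: bytes) -> bytes:
    """Derive a per-installation key from hardware and customer data (assumed scheme)."""
    material = json.dumps([product_hw_info, customer_info], sort_keys=True).encode()
    return hashlib.sha256(secret + material).digest()

def seal_topology(topology: dict, key: bytes) -> bytes:
    """Produce a tag for the default topology at provisioning time."""
    blob = json.dumps(topology, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha256).digest()

def verify_topology(topology: dict, tag: bytes, key: bytes) -> bool:
    """Re-check the running topology against the sealed tag on a regular basis."""
    return hmac.compare_digest(seal_topology(topology, key), tag)

# Example usage with made-up identifiers:
key = derive_key({"serial": "SN-0001", "mfg_date": "2015-07-01"},
                 {"customer_id": "C-42"}, secret=b"factory-secret")
topo = {"spine1": ["leaf1", "leaf2"], "spine2": ["leaf1", "leaf2"]}
tag = seal_topology(topo, key)
assert verify_topology(topo, tag, key)
```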
[0063] As illustrated, the shuffler (1 12) includes a number of ports (1 15). The ports (1 15) may be used for connecting networking switches and storage switches together on a networking chassis to realize an infrastructure topology. As a result, the ports (1 15) may be used for inputs and/or outputs. Further, the ports (1 15) may have identical bandwidths for inputs and outputs.
[0064] In some examples, there are several ways for the shuffler (1 12) to customize the networking switches and storage switches in the networking chassis to realize an infrastructure topology. The infrastructure topology may be a fixed infrastructure topology. For a fixed infrastructure topology, the infrastructure topology is fixed when created. As a result, changes can be made by pulling out or pushing in trays with networking switches and storage switches.
[0065] Further, the infrastructure topology may be a firmware customizable infrastructure topology. For the firmware customizable infrastructure topology, the infrastructure topology can be updated by adding new firmware to the networking chassis. The central controller (1 17) associated with the shuffler (1 12) can modify the physical signal multiplexing accordingly.
[0066] The infrastructure topology may be a passive infrastructure topology adaptation. For the passive infrastructure topology adaptation, a customer can modify the infrastructure topology on a console user interface that is software definable. Once the infrastructure topology is modified to create a new infrastructure topology, the new infrastructure topology is transferred to the shuffler (1 12). Although not illustrated, the new infrastructure topology may be transferred to memory of the central controller (1 17).
[0067] Further, the infrastructure topology may be software definable self-adaptive shuffling. For the software definable self-adaptive shuffling, the shuffler (1 12) can implement self-adapting changes. This includes splitting the infrastructure topology based on tray insertion. To implement self-adapting changes, the shuffler (1 12) may detect the switch type in the infrastructure topology. The logic implemented in the central controller (1 17) of the shuffler (1 12) may use an optimization feature to realize the infrastructure topology. The optimization feature may be preprogrammed to realize the infrastructure topology. The optimization feature may also be fine-tuned by a customer via manual script inputs to further realize the infrastructure topology.
[0068] Further, the shuffler (1 12) may include the central controller (1 17). The central controller (1 17) may be used to control the logic of the shuffler (1 12). The logic may be determined via the instructions contained in memory and executed by a processor associated with the central controller (1 17). In some examples, the central controller (1 17) may include the encryption engine, the decryption engine, the encryption key generation engine, and the decryption key generation engine. The engines may refer to program instructions for performing a designated function. The program instructions cause the processor to execute the designated function of the engines. In other examples, the engines refer to a combination of hardware and program instructions to perform a designated function. Each of the engines may include a processor and memory. The program instructions are stored in the memory and cause the processor to execute the designated function of the engine as described above.
[0069] Fig. 1 G is a diagram of a shuffler splitting an infrastructure topology, according to one example of principles described herein. As will be described below, an infrastructure topology includes a number of networking switches and a number of storage switches.
[0070] As illustrated, the infrastructure topology (195) includes a number of networking switches (123). For example, the infrastructure topology (195) includes networking switch 1 (123-1 ), networking switch 2 (123-2), networking switch 3 (123-3), networking switch 4 (123-4), and networking switch 5 (123-5). Further, the infrastructure topology (195) includes a number of storage switches (127). For example, the infrastructure topology (195) includes storage switch 1 (127-1 ), storage switch 2 (127-2), storage switch 3 (127-3), storage switch 4 (127-4), and storage switch 5 (127-5).
[0071] Further, the infrastructure topology (195) includes a number of shufflers (129). For example, the infrastructure topology (195) includes shuffler 1 (129-1 ) and shuffler 2 (129-2). The infrastructure topology (195) can be split. For example, if both networking switches and storage switches are inserted into the chassis, the shufflers (129) can split the infrastructure topology (195) to accommodate both applications. In some examples, the shufflers (129) use physical signal multiplexing to split the infrastructure topology (195).
[0072] As illustrated, the infrastructure topology (195) is split via the shufflers (129). For example, shuffler 1 (129-1 ) is used to connect networking switch 1 (123-1 ) and networking switch 2 (123-2) to networking switch 3 (123-3). Further, shuffler 2 (129-2) is used to connect networking switch 3 (123-3) to networking switch 4 (123-4) and networking switch 5 (123-5). Similarly, shuffler 1 (129-1 ) is used to connect storage switch 1 (127-1 ) and storage switch 2 (127- 2) to storage switch 3 (127-3). Further, shuffler 2 (129-2) is used to connect storage switch 3 (127-3) to storage switch 4 (127-4) and storage switch 5 (127- 5). As a result, the shufflers (129) split the infrastructure topology (195) to physically isolate the networking switches (123) and the storage switches (127).
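The splitting shown in Fig. 1G can be modeled in software as assigning every requested link a dedicated channel within its own traffic class, and refusing any link that would mix the networking and storage planes. The sketch below is a simplified model under that assumption; the switch names, class labels, and data structures are illustrative and do not describe the multiplexing hardware itself.

```python
# Simplified model of a shuffler splitting one topology into physically
# isolated networking and storage planes. The switch records and channel
# assignment are illustrative assumptions, not the multiplexing hardware.
from collections import defaultdict

SWITCHES = [
    ("net_sw_1", "networking"), ("net_sw_2", "networking"), ("net_sw_3", "networking"),
    ("stor_sw_1", "storage"), ("stor_sw_2", "storage"), ("stor_sw_3", "storage"),
]

def split_topology(links):
    """Partition requested links into per-class channel assignments.

    A link between a networking switch and a storage switch is rejected,
    mirroring the physical isolation the shuffler enforces.
    """
    switch_class = dict(SWITCHES)
    planes = defaultdict(list)
    for a, b in links:
        if switch_class[a] != switch_class[b]:
            raise ValueError(f"refusing to join {a} and {b}: different planes")
        planes[switch_class[a]].append((a, b))
    # Give each accepted link its own dedicated channel index within its plane.
    return {plane: {link: channel for channel, link in enumerate(plane_links)}
            for plane, plane_links in planes.items()}

if __name__ == "__main__":
    assignments = split_topology([
        ("net_sw_1", "net_sw_3"), ("net_sw_2", "net_sw_3"),
        ("stor_sw_1", "stor_sw_3"), ("stor_sw_2", "stor_sw_3"),
    ])
    print(assignments)
```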
[0073] As will be described in Figs. 2A to 4B, spine switches, leaf switches, SAS switches, and other networking switches may be arranged in the networking chassis to create a number of infrastructure topologies. As a result, the networking chassis may be flexible to build a number of infrastructure topologies.
[0074] Fig. 2A is a diagram of a fat-tree topology, according to one example of principles described herein. As will be described below, a number of net switches, spine switches, and leaf switches may be connected to each other such that a fat-tree topology is realized.
[0075] As illustrated, the fat-tree topology (200) includes a number of net switches (210). The net switches (210) include net switch 1 (210-1 ), net switch 2 (210-2), net switch 3 (210-3), and net switch 4 (210-4). The net switches (210) may be a networking device that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device. The net switches (210) may be implemented on a first level of the fat-tree topology (200).
[0076] Further, the fat-tree topology (200) includes a number of spine switches (204). The spine switches (204) may be a networking device that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device. The spine switches (204) include spine switch 1 (204-1 ), spine switch 2 (204-2), spine switch 3 (204-3), spine switch 4 (204-4), spine switch 5 (204-5), spine switch 6 (204-6), spine switch 7 (204-7), and spine switch 8 (204-8). Each of the net switches (210) may be connected to each of the spine switches (204) via networking cables (232) represented as solid lines in Fig. 2A. For example, net switch 1 (210-1 ) is connected to spine switch 1 (204-1 ), spine switch 2 (204-2), spine switch 3 (204-3), spine switch 4 (204-4), spine switch 5 (204-5), spine switch 6 (204-6), spine switch 7 (204-7), and spine switch 8 (204-8). The spine switches (204) may be implemented on a second level of the fat-tree topology (200). In some examples, the networking cables (232) may be designed to connect to four ports, three ports, or two ports. Although not illustrated, a shuffler may be located between the net switches (210) and the spine switches (204) to realize the fat-tree topology (200).
[0077] Further, the fat-tree topology (200) includes a number of leaf switches (202). The leaf switches (202) may be a networking device that connects devices together on a network by using packet switching to receive, process, and forward data to a destination device. The leaf switches (202) include leaf switch 1 (202-1 ), leaf switch 2 (202-2), leaf switch 3 (202-3), leaf switch 4 (202-4), leaf switch 5 (202-5), leaf switch 6 (202-6), leaf switch 7 (202-7), leaf switch 8 (202-8), leaf switch 9 (202-9), and leaf switch 10 (202-10). The leaf switches (202) may be implemented on a third level of the fat-tree topology (200).
[0078] Further, each of leaf switch 1 to leaf switch 5 (202-1 to 202-5) may have one connection to spine switch 1 to spine switch 4 (204-1 to 204-4) via the networking cables (232) represented as solid lines in Fig. 2A. For example, leaf switch 1 (202-1 ) is connected to spine switch 1 (204-1 ), spine switch 2 (204-2), spine switch 3 (204-3), and spine switch 4 (204-4).
[0079] Further, each of leaf switch 6 to leaf switch 10 (202-6 to 202-10) may have one connection to spine switch 5 to spine switch 8 (204-5 to 204-8). For example, leaf switch 6 (202-6) is connected, via the networking cables (232) represented as solid lines in Fig. 2A, to spine switch 5 (204-5), spine switch 6 (204-6), spine switch 7 (204-7), and spine switch 8 (204-8). Although not illustrated, a shuffler may be located between spine switch 1 to spine switch 4 (204-1 to 204-4) and leaf switch 1 to leaf switch 5 (202-1 to 202-5) to realize the fat-tree topology (200). Further, a shuffler may be located between spine switch 5 to spine switch 8 (204-5 to 204-8) and leaf switch 6 to leaf switch 10 (202-6 to 202-10) to realize the fat-tree topology (200).
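The cabling pattern of Fig. 2A (every net switch to every spine switch, the first half of the leaf switches to the first half of the spine switches, and the second half to the second half) can be written down as a small adjacency list. The builder below is only an illustration of that pattern using the switch counts from the figure; the function and switch names are assumptions, not a disclosed configuration tool.

```python
# Illustrative builder for the fat-tree wiring of Fig. 2A:
# 4 net switches, 8 spine switches, and 10 leaf switches in two groups.
def build_fat_tree(num_net=4, num_spine=8, num_leaf=10):
    nets = [f"net{i + 1}" for i in range(num_net)]
    spines = [f"spine{i + 1}" for i in range(num_spine)]
    leaves = [f"leaf{i + 1}" for i in range(num_leaf)]

    links = []
    # Every net switch connects to every spine switch.
    for n in nets:
        for s in spines:
            links.append((n, s))

    # Leaf switches 1-5 connect to spine switches 1-4;
    # leaf switches 6-10 connect to spine switches 5-8.
    half_spine = num_spine // 2
    half_leaf = num_leaf // 2
    for i, leaf in enumerate(leaves):
        group = spines[:half_spine] if i < half_leaf else spines[half_spine:]
        for s in group:
            links.append((s, leaf))
    return links

if __name__ == "__main__":
    links = build_fat_tree()
    print(len(links), "cabled connections")  # 4*8 + 10*4 = 72
```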
[0080] Fig. 2B is a diagram of a fat-tree topology implemented on a networking chassis, according to one example of principles described herein. As will be described below, the networking chassis implements spine switches, leaf switches, net switches, and SAS switches in a cartridge tray form factor on the networking chassis to realize the fat-tree topology of Fig. 2A.
[0081] For illustrative purposes, the networking chassis (214) may be illustrated as two networking chassis. As a result, the networking chassis (214) is illustrated as networking chassis (214-1 ) and networking chassis (214-2). Further, the shufflers are illustrated as solid double arrow lines in Fig. 2B.
[0082] As illustrated, the networking chassis (214-1 ) includes the net switches (210) depicted in Fig. 2A. In an example, net switch 1 (210-1 ) is implemented on cascading physical hardware layer 1 , net switch 2 (210-2) is implemented on cascading physical hardware layer 2, and net switch 3 (210-3) is implemented on cascading physical hardware layer 3 of the networking chassis (214-1 ) to realize the fat-tree topology of Fig. 2A.
[0083] Further, the networking chassis (214) includes the spine switches (204) depicted in Fig. 2A. To realize the fat-tree topology of Fig. 2A, spine switch 1 (204-1 ) and spine switch 4 (204-4) are implemented on cascading physical hardware layer 1. Spine switch 2 (204-2) and spine switch 5 (204-5) are implemented on cascading physical hardware layer 2. Further, spine switch 3 (204-3) and spine switch 6 (204-6) are implemented on cascading physical hardware layer 3.
[0084] The networking chassis (214-2) may include a number of pass through (224). The pass through (224) include pass through 1 (224-1 ), pass through 2 (224-2), pass through 3 (224-3), pass through 4 (224-4), pass through 5 (224-5), pass through 6 (224-6). To realize the fat-tree topology of Fig. 2A, pass through 1 (224-1 ) and pass through 4 (224-4) are implemented on cascading physical hardware layer 1. Pass through 2 (224-2) and pass through 5 (224-5) are implemented on cascading physical hardware layer 2. Pass through 3 (224-3) and pass through 6 (224-6) are implemented on cascading physical hardware layer 3.
[0085] Further, the networking chassis (214-2) may include a number of leaf switches (202). The leaf switches (202) include leaf switch 1 (202-1 ), leaf switch 2 (202-2), leaf switch 3 (202-3), leaf switch 4 (202-4), leaf switch 5 (202-5), leaf switch 6 (202-6), leaf switch 7 (202-7), leaf switch 8 (202-8), leaf switch 9 (202-9), and leaf switch 10 (202-10). To realize the fat-tree topology of Fig. 2A, leaf switch 1 (202-1 ) and leaf switch 4 (202-4) are implemented on cascading physical hardware layer 1. Leaf switch 2 (202-2) and leaf switch 5 (202-5) are implemented on cascading physical hardware layer 2. Further, leaf switch 3 (202-3) and leaf switch 6 (202-6) are implemented on cascading physical hardware layer 3.
[0086] Further, the networking chassis (214-2) may include SAS switches (208) and SAS lanes (206). As illustrated, cascading physical hardware layer 4 includes SAS lanes 1 (206-1 ), SAS switch 1 (208-1 ), SAS switch 2 (208-2), and SAS lanes 2 (206-2).
[0087] Although not illustrated in Fig. 2B, the net switches (210), the spine switches (204), the pass through (224), the SAS lanes (206) and SAS switches (208) may be connected together as illustrated in Fig. 2A. As a result, the fat-tree topology of Fig. 2A may be realized.
[0088] Fig. 3A is a diagram of an island-centric topology, according to one example of principles described herein. As will be described below, a number of networking switches such as net switches, spine switches, and leaf switches may be connected to each other such that an island-centric topology is realized.
[0089] As illustrated, the island-centric topology (300) includes a number of net switches (310). The net switches (310) include net switch 1 (310-1 ), net switch 2 (310-2), net switch 3 (310-3), and net switch 4 (310-4). The net switches (310) may be implemented on a first level of the island-centric topology (300).
[0090] Further, the island-centric topology (300) includes a number of spine switches (304). The spine switches (304) include spine switch 1 (304-1 ), spine switch 2 (304-2), spine switch 3 (304-3), spine switch 4 (304-4), spine switch 5 (304-5), spine switch 6 (304-6), spine switch 7 (304-7), and spine switch 8 (304-8). Each of the net switches (310) may be connected to each of the spine switches (304) via networking cables (332) represented as solid lines in Fig. 3A. For example, net switch 1 (310-1 ) is connected to spine switch 1 (304-1 ), spine switch 2 (304-2), spine switch 3 (304-3), spine switch 4 (304-4), spine switch 5 (304-5), spine switch 6 (304-6), spine switch 7 (304-7), and spine switch 8 (304-8). The spine switches (304) may be implemented on a second level of the island-centric topology (300). Although not illustrated, a shuffler may be located between the net switches (310) and the spine switches (304) to realize the island-centric topology (300).
[0091] Further, the island-centric topology (300) includes a number of leaf switches (302). The leaf switches (302) include leaf switch 1 (302-1 ), leaf switch 2 (302-2), leaf switch 3 (302-3), leaf switch 4 (302-4), leaf switch 5 (302- 5), leaf switch 6 (302-6), leaf switch 7 (302-7), and leaf switch 8 (302-8).
Further, each of leaf switch 1 to leaf switch 4 (302-1 to 302-4) may have one connection to spine switch 1 to spine switch 4 (304-1 to 304-4) via the networking cables (332) represented as solid lines in Fig. 3A. For example, leaf switch 1 (302-1 ) is connected to spine switch 1 (304-1 ), spine switch 2 (304-2), spine switch 3 (304-3), and spine switch 4 (304-4).
[0092] Further, each of leaf switch 5 to leaf switch 8 (302-5 to 302-8) may have one connection to spine switch 5 to spine switch 8 (304-5 to 304-8). For example, leaf switch 5 (302-5) is connected to spine switch 5 (304-5), spine switch 6 (304-6), spine switch 7 (304-7), and spine switch 8 (304-8) via the networking cables (332) represented as solid lines in Fig. 3A. Further, the leaf switches (302) may be implemented on a third level of the island-centric topology (300). Although not illustrated, a shuffler may be located between leaf switch 1 to leaf switch 4 (302-1 to 302-4) and spine switch 1 to spine switch 4 (304-1 to 304-4) to realize the island-centric topology (300). Further, although not illustrated, a shuffler may be located between leaf switch 5 to leaf switch 8 (302-5 to 302-8) and spine switch 5 to spine switch 8 (304-5 to 304-8) to realize the island-centric topology (300).
[0093] Fig. 3B is a diagram of an island-centric topology implemented on a networking chassis, according to one example of principles described herein. As will be described below, the networking chassis implements spine switches, leaf switches, net switches, and SAS switches in a cartridge tray form factor on the networking chassis to realize the island-centric topology.
[0094] For illustrative purposes, the networking chassis (314) may be illustrated as two networking chassis. As a result, the networking chassis (314) is illustrated as networking chassis (314-1 ) and networking chassis (314-2). Further, the shufflers are illustrated as solid double arrow lines in Fig. 3B to realize the island-centric topology (300).
[0095] As illustrated, the networking chassis (314-1 ) includes the net switches (310) depicted in Fig. 3A. In an example, net switch 1 (310-1 ) is implemented on cascading physical hardware layer 1 , net switch 2 (310-2) is implemented on cascading physical hardware layer 2, and net switch 3 (310-3) is implemented on cascading physical hardware layer 3 of the networking chassis (314-1 ).
[0096] Further, the networking chassis (314-1 ) may include a number of pass throughs (324). The pass throughs (324) include pass through 1 (324-1 ), pass through 2 (324-2), pass through 3 (324-3), pass through 4 (324-4), pass through 5 (324-5), and pass through 6 (324-6). Further, to realize the island-centric topology, pass through 1 (324-1 ) and pass through 4 (324-4) are implemented on cascading physical hardware layer 1. Pass through 2 (324-2) and pass through 5 (324-5) are implemented on cascading physical hardware layer 2. Pass through 3 (324-3) and pass through 6 (324-6) are implemented on cascading physical hardware layer 3.
[0097] Further, the networking chassis (314-2) includes the spine switches (304) depicted in Fig. 3A. To realize the island-centric topology, spine switch 1 (304-1 ) and spine switch 4 (304-4) are implemented on cascading physical hardware layer 1 . Spine switch 2 (304-2) and spine switch 5 (304-5) are implemented on cascading physical hardware layer 2. Further, spine switch 3 (304-3) and spine switch 6 (304-6) are implemented on cascading physical hardware layer 3.
[0098] Further, the networking chassis (314-2) may include a number of leaf switches (302) as depicted in Fig. 3A. The leaf switches (302) include leaf switch 1 (302-1 ), leaf switch 2 (302-2), leaf switch 3 (302-3), leaf switch 4 (302-4), leaf switch 5 (302-5), and leaf switch 6 (302-6). To realize the island-centric topology, leaf switch 1 (302-1 ) and leaf switch 4 (302-4) are implemented on cascading physical hardware layer 1. Leaf switch 2 (302-2) and leaf switch 5 (302-5) are implemented on cascading physical hardware layer 2. Further, leaf switch 3 (302-3) and leaf switch 6 (302-6) are implemented on cascading physical hardware layer 3.
[0099] Further, the networking chassis (314-2) may include SAS switches (308) and SAS lanes (306). As illustrated, cascading physical hardware layer 4 includes SAS lanes 1 (306-1 ), SAS switch 1 (308-1 ), SAS switch 2 (308-2), and SAS lanes 2 (306-2).
[00100] Although not illustrated in Fig. 3B, the net switches (310), the spine switches (304), the pass through (324), the SAS lanes (306) and SAS switches (308) may be connected together as illustrated in Fig. 3A. As a result, the island-centric topology of Fig. 3A may be realized.
[00101] Fig. 4A is a diagram of a spine-centric topology, according to one example of principles described herein. As will be described below, a number of networking switches, such as net switches, spine switches, and leaf switches may be connected to each other such that a spine-centric topology is realized.
[00102] As illustrated, the spine-centric topology (400) includes a number of net switches (410). The net switches (410) include net switch 1 (410-1 ), net switch 2 (410-2), net switch 3 (410-3), and net switch 4 (410-4). The net switches (410) may be implemented on a first level of the spine-centric topology (400).
[00103] Further, the spine-centric topology (400) includes a number of spine switches (404). The spine switches (404) include spine switch 1 (404- 1 ), spine switch 2 (404-2), spine switch 3 (404-3), spine switch 4 (404-4), spine switch 5 (404-5), spine switch 6 (404-6), spine switch 7 (404-7), and spine switch 8 (404-8). Net switch 1 and net switch 2 (410-1 and 410-2) may be connected to spine switch 1 to spine switch 4 (404-1 to 404-4) via networking cables (432) represented as solid lines in Fig. 4A. For example, net switch 1 (410-1 ) is connected to spine switch 1 (404-1 ), spine switch 2 (404-2), spine switch 3 (404-3), and spine switch 4 (404-4). Net switch 3 (410-3) is connected to spine switch 5 (404-5), spine switch 6 (404-6), spine switch 7 (404-7), and spine switch 8 (404-8). The spine switches (404) may be implemented on a second level of the spine-centric topology (400). Although not illustrated, a shuffler may be located between the net switches (410) and the spine switches (404) to realize the spine-centric topology (400).
[00104] Further, the spine-centric topology (400) includes a number of leaf switches (402). The leaf switches (402) include leaf switch 1 (402-1 ), leaf switch 2 (402-2), leaf switch 3 (402-3), leaf switch 4 (402-4), leaf switch 5 (402- 5), leaf switch 6 (402-6), leaf switch 7 (402-7), and leaf switch 8 (402-8).
Further, each of the leaf switches (402) may be connected to each spine switch (404) via networking cables (432) represented as solid lines in Fig. 4A. For example, leaf switch 1 (402-1 ) is connected to spine switch 1 (404-1 ), spine switch 2 (404-2), spine switch 3 (404-3), spine switch 4 (404-4), spine switch 5 (404-5), spine switch 6 (404-6), spine switch 7 (404-7), and spine switch 8 (404-8). The leaf switches (402) may be implemented on a third level of the spine-centric topology (400). Although not illustrated, a shuffler may be located between the spine switches (404) and the leaf switches (402) to realize the spine-centric topology (400).
[00105] Fig. 4B is a diagram of a spine-centric topology implemented on a networking chassis, according to one example of principles described herein. As will be described below, the networking chassis implements spine switches, leaf switches, net switches, and SAS switches in a cartridge tray form factor on the networking chassis to realize the spine-centric topology.
[00106] For illustrative purposes, the networking chassis (414) may be illustrated as two networking chassis. As a result, the networking chassis (414) is illustrated as networking chassis (414-1 ) and networking chassis (414- 2). Further, the shufflers are illustrated as solid double arrow lines in Fig. 4B to realize the spine-centric topology (400).
[00107] As illustrated, the networking chassis (414-1 ) includes the net switches (410) depicted in Fig. 4A. In an example, net switch 1 (410-1 ) is implemented on cascading physical hardware layer 1 , net switch 2 (410-2) is implemented on cascading physical hardware layer 2, and net switch 3 (410-3) is implemented on cascading physical hardware layer 3 of the networking chassis (414) to realize the spine-centric topology of Fig. 4A.
[00108] Further, the networking chassis (414-1 ) includes the spine switches (404) depicted in Fig. 4A. To realize the spine-centric topology, spine switch 1 (404-1 ) and spine switch 4 (404-4) are implemented on cascading physical hardware layer 1. Spine switch 2 (404-2) and spine switch 5 (404-5) are implemented on cascading physical hardware layer 2. Further, spine switch 3 (404-3) and spine switch 6 (404-6) are implemented on cascading physical hardware layer 3.
[00109] Further, the networking chassis (414-2) may include a number of pass through (424). The pass through (424) include pass through 1 (424-1 ), pass through 2 (424-2), pass through 3 (424-3), pass through 4 (424-4), pass through 5 (424-5), pass through 6 (424-6). To realize the spine-centric topology, pass through 1 (424-1 ) and pass through 4 (424-4) are implemented on cascading physical hardware layer 1. Pass through 2 (424-2) and pass through 5 (424-5) are implemented on cascading physical hardware layer 2. Pass through 3 (424-3) and pass through 6 (424-6) are implemented on cascading physical hardware layer 3.
[00110] Further, the networking chassis (414-2) may include a number of leaf switches (402). The leaf switches (402) include leaf switch 1 (402-1 ), leaf switch 2 (402-2), leaf switch 3 (402-3), leaf switch 4 (402-4), leaf switch 5 (402-5), and leaf switch 6 (402-6). To realize the spine-centric topology, leaf switch 1 (402-1 ) and leaf switch 4 (402-4) are implemented on cascading physical hardware layer 1. Leaf switch 2 (402-2) and leaf switch 5 (402-5) are implemented on cascading physical hardware layer 2. Further, leaf switch 3 (402-3) and leaf switch 6 (402-6) are implemented on cascading physical hardware layer 3.
[00111] Further, the networking chassis (414-2) may include SAS switches (408) and SAS lanes (406). As illustrated, cascading physical hardware layer 4 includes SAS lanes 1 (406-1 ), SAS switch 1 (408-1 ), SAS switch 2 (408-2), and SAS lanes 2 (406-2).
[00112] Although not illustrated in Fig. 4B, the net switches (410), the spine switches (404), the pass through (424), the SAS lanes (406) and SAS switches (408) may be connected together as illustrated in Fig. 4A. As a result, the spine-centric topology of Fig. 4A may be realized.
[00113] Fig. 5 is a flowchart of a method for building a networking infrastructure center, according to one example of principles described herein. In this example, the method (500) includes determining (501 ) an infrastructure topology and physically (502) arranging networking switches and storage switches on a networking chassis to realize the infrastructure topology via a shuffler, the shuffler to detect locations of the networking switches and the storage switches to allow the networking switches and the storage switches to be placed in arbitrary locations on the networking chassis and to realize the infrastructure topology by splitting the infrastructure topology via signal multiplexing to physically isolate the networking switches and the storage switches.
[00114] As mentioned above, the method (500) includes determining (501 ) an infrastructure topology. The infrastructure topologies may include a fat-tree topology, an island-centric topology, a spine-centric topology, or other infrastructure topologies. The fat-tree topology, island-centric topology, or spine-centric topology may be selected based on the desired characteristics that each of the infrastructure topologies has to offer.
[00115] As mentioned above, the method (500) includes physically (502) arranging networking switches and storage switches on a networking chassis to realize the infrastructure topology via a shuffler, the shuffler to detect locations of the networking switches and the storage switches to allow the networking switches and the storage switches to be placed in arbitrary locations on the networking chassis and to realize the infrastructure topology by splitting the infrastructure topology via signal multiplexing to physically isolate the networking switches and the storage switches. Networking switches, such as the spine switches and the leaf switches, are implemented on the three top cascading physical hardware layers of the networking chassis to realize the infrastructure topology. Further, networking cables may be used to connect the spine switches and the leaf switches together as illustrated in Figs. 2, 3, and 4 for the respective infrastructure topology. Further, the shuffler may operate as described above for splitting the infrastructure topology based on an optimization feature and verifying the infrastructure topology.
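As a compact, purely illustrative recap of method (500), the sketch below strings the two blocks together: a topology is determined from desired characteristics, and inserted trays are then recorded per traffic class so a shuffler could realize and split the topology. The helper names, the requirements dictionary, and the tray map are hypothetical stand-ins for the determining (501) and arranging (502) steps described above, not a disclosed implementation.

```python
# Compact, illustrative walk-through of method (500): determine a topology,
# then arrange switches and let a shuffler realize it. Names and data
# structures are hypothetical stand-ins, not the disclosed implementation.
SUPPORTED_TOPOLOGIES = {"fat-tree", "island-centric", "spine-centric"}

def determine_topology(requirements: dict) -> str:
    """Block 501: pick a topology from the desired characteristics."""
    if requirements.get("isolated_islands"):
        return "island-centric"
    if requirements.get("full_leaf_spine_mesh"):
        return "spine-centric"
    return "fat-tree"

def arrange_and_realize(topology: str, inserted_trays: dict) -> dict:
    """Block 502: record where trays landed; the shuffler then splits the
    topology so networking and storage traffic stay physically isolated."""
    assert topology in SUPPORTED_TOPOLOGIES
    planes = {"networking": [], "storage": []}
    for slot, switch_class in inserted_trays.items():
        planes[switch_class].append(slot)
    return {"topology": topology, "planes": planes}

if __name__ == "__main__":
    result = arrange_and_realize(
        determine_topology({"full_leaf_spine_mesh": True}),
        {1: "networking", 2: "networking", 7: "storage", 8: "storage"},
    )
    print(result)
```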
[00116] The preceding description has been presented to illustrate and describe examples of the principles described. This description is not intended to be exhaustive or to limit these principles to any precise form disclosed. Many modifications and variations are possible in light of the above teaching.

Claims

WHAT IS CLAIMED IS:
1. A networking infrastructure center comprising:
a networking chassis;
networking switches implemented in the networking chassis via module slots, the networking switches being swappable in the module slots to create an infrastructure topology;
storage switches implemented in the networking chassis via the module slots, the storage switches being swappable to create the infrastructure topology; and
a shuffler, the shuffler to:
detect locations of the networking switches and the storage switches to allow the networking switches and the storage switches to be placed in arbitrary locations on the networking chassis; and
realize the infrastructure topology by splitting the infrastructure topology via physical signal multiplexing to physically isolate the networking switches and the storage switches.
2. The networking infrastructure center of claim 1, wherein three top cascading physical hardware layers of the networking chassis accommodate the networking switches implemented in a cartridge tray form factor.
3. The networking infrastructure center of claim 1, wherein the shuffler detects the locations of the networking switches and the storage switches in the networking chassis via hardware detection.
4. The networking infrastructure center of claim 1, wherein the shuffler verifies the infrastructure topology at a hardware level.
5. The networking infrastructure center of claim 1, wherein the shuffler detects the locations of the networking switches and the storage switches in the networking chassis via internal communication to collect switch information.
6. The networking infrastructure center of claim 1, wherein the infrastructure topology is a fat-tree topology, an island-centric topology, or a spine-centric topology.
7. A shuffler implemented in a networking infrastructure center, the shuffler comprising:
a number of ports for connecting networking switches and storage switches together on a networking chassis to realize an infrastructure topology; and
a central controller;
the shuffler to:
detect locations of the networking switches and the storage switches to allow the networking switches and the storage switches to be placed in arbitrary locations on the networking chassis; and
realize the infrastructure topology by splitting the infrastructure topology via physical signal multiplexing to physically isolate the networking switches and the storage switches.
8. The shuffler of claim 7, wherein the shuffler detects the locations of the networking switches and the storage switches in the networking chassis via internal communication to collect switch information.
9. The shuffler of claim 7, wherein the shuffler detects the locations of the networking switches and the storage switches in the networking chassis via hardware detection.
10. The shuffler of claim 7, wherein the infrastructure topology comprises a fat-tree topology, an island-centric topology, or a spine-centric topology.
11. The shuffler of claim 7, wherein serial attached SCSI (SAS) switches are implemented on a bottom cascading physical hardware layer of the networking chassis.
12. The shuffler of claim 7, wherein the shuffler verifies the infrastructure topology at a hardware level.
13. A method for building a networking infrastructure center, the method comprising:
determining an infrastructure topology; and
physically arranging networking switches and storage switches on a networking chassis to realize the infrastructure topology via a shuffler, the shuffler to detect locations of the networking switches and the storage switches to allow the networking switches and the storage switches to be placed in arbitrary locations on the networking chassis and to realize the infrastructure topology by splitting the infrastructure topology via signal multiplexing to physically isolate the networking switches and the storage switches.
14. The method of claim 13, wherein the splitting is based on an optimization feature of the shuffler.
15. The method of claim 13, wherein the shuffler detects the locations of the networking switches and the storage switches in the networking chassis via hardware detection.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/043131 WO2017023247A1 (en) 2015-07-31 2015-07-31 A networking infrastructure center

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2015/043131 WO2017023247A1 (en) 2015-07-31 2015-07-31 A networking infrastructure center

Publications (1)

Publication Number Publication Date
WO2017023247A1 true WO2017023247A1 (en) 2017-02-09

Family

ID=57943338

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/043131 WO2017023247A1 (en) 2015-07-31 2015-07-31 A networking infrastructure center

Country Status (1)

Country Link
WO (1) WO2017023247A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070067435A1 (en) * 2003-10-08 2007-03-22 Landis John A Virtual data center that allocates and manages system resources across multiple nodes
US7406038B1 (en) * 2002-04-05 2008-07-29 Ciphermax, Incorporated System and method for expansion of computer network switching system without disruption thereof
US20080263185A1 (en) * 2003-11-20 2008-10-23 International Business Machines Corporation Automatic configuration of the network devices via connection to specific switch ports
US20100254703A1 (en) * 2009-04-01 2010-10-07 Kirkpatrick Peter E Optical Network for Cluster Computing
US7953903B1 (en) * 2004-02-13 2011-05-31 Habanero Holdings, Inc. Real time detection of changed resources for provisioning and management of fabric-backplane enterprise servers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15900516

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15900516

Country of ref document: EP

Kind code of ref document: A1