US20100049942A1 - Dragonfly processor interconnect network - Google Patents

Dragonfly processor interconnect network

Info

Publication number
US20100049942A1
Authority
US
United States
Prior art keywords
routers
group
router
computer system
multiprocessor computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/195,198
Inventor
John Kim
Dennis C. Abts
Steven L. Scott
William J. Dally
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Leland Stanford Junior University
Original Assignee
Leland Stanford Junior University
Cray Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Leland Stanford Junior University, Cray Inc filed Critical Leland Stanford Junior University
Priority to US12/195,198
Assigned to CRAY INC. Assignment of assignors interest (see document for details). Assignors: ABTS, DENNIS C.; SCOTT, STEVEN L.
Assigned to THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY. Assignment of assignors interest (see document for details). Assignors: KIM, JOHN; DALLY, WILLIAM J.
Publication of US20100049942A1
Assigned to INTEL CORPORATION. Assignment of assignors interest (see document for details). Assignors: CRAY INC.
Priority to US14/583,588 (US9614786B2)
Priority to US15/435,952 (US10153985B2)
Status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/15 Interconnection of switching modules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 15/163 Interprocessor communication
    • G06F 15/173 Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F 15/17356 Indirect interconnection networks
    • G06F 15/17368 Indirect interconnection networks non hierarchical topologies
    • G06F 15/17375 One dimensional, e.g. linear array, ring
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/40 Bus structure
    • G06F 13/4004 Coupling between buses
    • G06F 13/4027 Coupling between buses using bus bridges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4204 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • G06F 13/4221 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/58 Association of routers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/70 Virtual switches
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/28 Routing or path finding of packets in data switching networks using route fault recovery
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/15 Interconnection of switching modules
    • H04L 49/1515 Non-blocking multistage, e.g. Clos

Definitions

  • the invention relates generally to computer interconnect networks, and more specifically in one embodiment to a dragonfly topology processor interconnect network.
  • Computer systems have long relied on network connections to transfer data, whether from one computer system to another computer system, one computer component to another computer component, or from one processor to another processor in the same computer.
  • Most computer networks link multiple computerized elements to one another, and include various functions such as verification that a message sent over the network arrived at the intended recipient, confirmation of the integrity of the message, and a method of routing a message to the intended recipient on the network.
  • Processor interconnect networks are used in multiprocessor computer systems to transfer data from one processor to another, or from one group of processors to another group.
  • the number of interconnection links can be very large with computer systems having hundreds or thousands of processors, and system performance can vary significantly based on the efficiency of the processor interconnect network.
  • the number of connections, number of intermediate nodes between a sending and receiving processing node, and the speed or type of connection all play a factor in the interconnect network performance.
  • the network topology, or pattern of connections used to tie processing nodes together affects performance, and remains an area of active research. It is impractical to directly link each node to each other node in systems having many tens of processors, and all but impossible as the number of processors reaches the thousands.
  • processor interconnect network designer is thereby challenged to provide fast and efficient communication between the various processing nodes, while controlling the number of overall links, and the cost and complexity of the processor interconnect network.
  • the topology of a network or the method used to determine how to link a processing node to other nodes in a multiprocessor computer system, is therefore an area of interest.
  • the invention comprises in one example a dragonfly topology network, comprising a plurality of processor nodes, a plurality of routers, each router directly coupled to a plurality of terminal nodes, the routers coupled to one another and arranged into a group, and a plurality of groups of routers, such that each group is connected to each other group via at least one direct connection.
  • Network data is routed in some embodiments using at least one of credit round-trip latency as an indicator of channel congestion and selective virtual channel discrimination.
  • FIG. 1 is a block diagram of a dragonfly network topology, consistent with an example embodiment of the invention.
  • FIG. 2 is a graph illustrating scalability of a dragonfly network in nodes for various router radices, consistent with an example embodiment of the invention.
  • FIG. 3 is a block diagram illustrating a dragonfly network topology, consistent with an example embodiment of the invention.
  • FIG. 4 is block diagram of dragonfly network topology groups, consistent with some example embodiments of the invention.
  • FIG. 5 is a block diagram of a dragonfly network illustrating minimal and non-minimal routing using virtual channels, consistent with an example embodiment of the invention.
  • FIG. 6 is a graph illustrating latency v. offered load for a variety of routing algorithms using various traffic patterns, consistent with an example embodiment of the invention.
  • FIG. 7 is a node group diagram of a dragonfly topology network illustrating adaptive routing via global channels using backpressure from intermediate nodes, consistent with an example embodiment of the invention.
  • FIG. 8 is a node diagram illustrating credit round trip latency tracking, consistent with an example embodiment of the invention.
  • Interconnection networks are widely used to connect processors and memories in multiprocessors, as switching fabrics for high-end routers and switches, and for connecting I/O devices.
  • the performance of the interconnection network plays a central role in determining the overall performance of the system.
  • the latency and bandwidth of the network largely establish the remote memory access latency and bandwidth.
  • the Cray Black Widow system, one of the first systems to employ a high-radix network, uses a variant of the folded-Clos topology and radix-64 routers—a significant departure from previous low-radix 3-D torus networks.
  • Recently, the advent of economical optical signaling enables topologies with long channels. However, these long optical channels remain significantly more expensive than short electrical channels.
  • we introduce a Dragonfly topology that exploits emerging optical signaling technology by grouping routers to further increase the effective radix of the network.
  • the topology of an interconnection network largely determines both the performance and the cost of the network.
  • Network cost is dominated by the cost of channels, and in particular the cost of the long, global, inter-cabinet channels.
  • reducing the number of global channels can significantly reduce the cost of the network.
  • the dragonfly topology introduced in this paper reduces the number of global channels traversed per packet using minimal routing to one.
  • very high-radix routers with a radix of approximately 2√N (where N is the size of the network) are used. While radix 64 routers have been introduced, and a radix of 128 is feasible, much higher radices in the hundreds or thousands are needed to build machines that scale to 8K-1M nodes if each packet is limited to only one global hop using traditional very high radix router technology.
  • the Dragonfly network topology proposes using a group of routers connected into a subnetwork as one very high radix virtual router. This very high effective radix in turn allows us to build a network in which all minimal routes traverse at most one global channel. It also increases the physical length of the global channels, exploiting the capabilities of emerging optical signaling technology.
  • Achieving good performance on a wide range of traffic patterns on a dragonfly topology involves selecting a routing algorithm that can effectively balance load across the global channels.
  • Global adaptive routing (UGAL) can perform such load balancing if the load of the global channels is available at the source router, where the routing decision is made.
  • the source router is most often not connected to the global channel in question.
  • the adaptive routing decision is made based on remote or indirect information.
  • High-radix networks reduce the diameter of the network but require longer cables compared to low-radix networks. Advances in signaling technology and the recent development of active optical cables facilitate implementation of high-radix topologies with longer cables.
  • An interconnection network is embedded in a packaging hierarchy. At the lowest level, the routers are connected via circuit boards, which are then connected via a backplane or midplane. One or more backplanes are packaged in a cabinet, with multiple cabinets connected by electrical or optical cables to form a complete system.
  • the global (inter-cabinet) cables and their associated transceivers often dominate the cost of a network. To minimize the network cost, the topology should be matched to the characteristics of the available interconnect technologies, such as cost and performance.
  • optical cables have a higher fixed cost, their ability to transmit data over long distances at several times the data rate of copper cables results in a lower cost per unit distance than electrical cables. Based on the data available using current technologies, the break-even point is at 10 m. For distances shorter than 10 m, electrical signaling is less expensive. Beyond 10 m, optical signaling is more economical. The Dragonfly topology proposed here exploits this relationship between cost and distance. By reducing the number of global cables, it minimizes the effect of the higher fixed overhead of optical signaling, and by making the global cables longer, it maximizes the advantage of the lower per-unit cost of optical fibers.
  • the Dragonfly topology is a hierarchical network with three levels, as shown in FIG. 1: routers (104, 105, and 106), groups (101, 102, and 103), and system.
  • a group consists of a routers connected via an intra-group interconnection network formed from local channels, as shown at 101 in FIG. 1 .
  • This very high radix, k′>>k enables the system level network to be realized with very low global diameter (the maximum number of expensive global channels on the minimum path between any two nodes).
  • a system-level network built directly with radix k routers would require a larger global diameter.
  • the dragonfly parameters a, p, and h can have any values.
  • The scalability of a balanced dragonfly is shown in FIG. 2.
  • the dragonfly topology is highly scalable—with radix-64 routers, the topology scales to over 256k nodes with a network diameter of only three hops.
  • Arbitrary networks can be used for the intra-group and inter-group networks in FIG. 1 .
  • A simple example of the dragonfly is shown in FIG. 3.
  • the global radix, k′ can be increased further by using a higher-dimensional topology for the intra-group network.
  • a network may also exploit intra-group packaging locality.
  • a 2-D flattened butterfly is shown in FIG. 4 at 401 , which has the same k′ as the group shown in FIG. 5 but exploits packaging locality by providing more bandwidth to local routers.
  • channel slicing can be employed. Rather than make the channels wider, which would decrease the router radix, multiple networks can be connected in parallel to add capacity.
  • the dragonfly topology in some embodiments can also utilize parallel networks to add capacity to the network.
  • the dragonfly networks described so far assumed uniform bandwidth to all nodes in the network. However, if such uniform bandwidth is not needed, bandwidth tapering can be implemented by removing inter-group channels among some of the groups.
  • Minimal routing in a dragonfly from source node s attached to router Rs in group Gs to destination node d attached to router Rd in group Gd traverses a single global channel and is accomplished in three steps:
  • Valiant's algorithm can be applied at the system level—routing each packet first to a randomly-selected intermediate group Gi and then to its final destination d. Applying Valiant's algorithm to groups suffices to balance load on both the global and local channels. This randomized non-minimal routing traverses at most two global channels and requires five steps:
  • Two virtual channels (VCs) are needed for minimal routing and three VCs are required for non-minimal routing, as shown in FIG. 5.
  • These virtual router assignments eliminate all channel dependencies due to routing.
  • additional virtual channels may be required to avoid protocol deadlock—e.g., for shared memory systems, separate sets of virtual channels may be required for request and reply messages.
  • Cycle accurate simulations are used to evaluate the performance of the different routing algorithms.
  • Packets are injected using a Bernoulli process.
  • the simulator is warmed up under load without taking measurements until steady-state is reached. Then a sample of injected packets is labeled during a measurement interval. The simulation is run until all labeled packets exit the system.
  • Single flit (flow control unit) packets are used to separate the routing algorithm from flow control issues such as the use of wormhole or virtual cut-through flow control.
  • the input buffers are initially assumed to be 16 flits deep. The impact of different buffer sizes is also evaluated.
  • Latency v. offered load is shown for the four routing algorithms, using both uniform random traffic at 601 and adversarial traffic at 602 .
  • the use of a synthetic traffic pattern allows us to stress the topology and routing algorithm to fully evaluate the network.
  • For benign traffic such as uniform random (UR), MIN is sufficient to provide low latency and high throughput, as shown at 601 of FIG. 6.
  • VAL achieves approximately half of the network capacity because its load-balancing doubles the load on the global channels.
  • Both UGAL-G and UGAL-L approach the throughput of MIN, but with slightly higher latency near saturation.
  • the higher latency is caused by the use of parallel or greedy allocation where the routing decision at each port is made in parallel.
  • the use of sequential allocation will reduce the latency at the expense of a more complex allocator.
  • the evaluation for this WC traffic is shown in FIG. 6 at 602 .
  • Because MIN forwards all of the traffic from each group across a single channel, its throughput is limited to 1/ah.
  • VAL achieves slightly under 50% throughput which is the maximum possible throughput with this traffic.
  • UGAL-G achieves similar throughput as VAL but UGAL-L leads to both limited throughput as well as high average packet latency at intermediate load.
  • Adaptive routing on the dragonfly is challenging because it is the global channels, the group outputs, that need to be balanced, not the router outputs. This leads to an indirect routing problem.
  • Each router must pick a global channel to use based only on local information that depends only indirectly on the state of the global channels.
  • Previous global adaptive routing methods used local queue information, source queues and output queues, to generate accurate estimates of network congestion. In these cases, the local queues were an accurate proxy of global congestion, because they directly indicated congestion on the routes they initiated.
  • a throughput issue with UGAL-L arises due to a single local channel handling both minimal and non-minimal traffic.
  • a packet in R 1 has a minimal path which uses gc 7 and a nonminimal path which uses gc 6 . Both paths share the same local channel from R 1 to R 2 . Because both paths share the same local queue (and hence have the same queue occupancy) and the minimal path is shorter (one global hop vs two), the minimal channel will always be selected, even when it is saturated. This leads to the minimal global channel being overloaded and the non-minimal global channels that share the same router as the minimal channel being under utilized.
  • the minimal channel is preferred and the load is uniformly balanced across all other global channels.
  • the non-minimal channels on the router that contains the minimal global channel are under utilized—resulting in a degradation of network throughput.
  • UGAL-LVC matches the throughput of UGAL-G on a WC traffic pattern, but for UR traffic the throughput is limited, with approximately a 30% reduction.
  • For WC traffic, where most of the traffic must be sent non-minimally, UGAL-LVC performs well since the minimal queue is heavily loaded.
  • individual VCs do not provide an accurate representation of the channel congestion—resulting in throughput degradation.
  • Compared to UGAL-LVC, UGAL-LVC H provides the same throughput on the WC traffic pattern and matches the throughput of UGAL-G on UR traffic, but results in nearly 2× higher latency at an offered load of 0.8, near saturation. For WC traffic, UGAL-LVC H also results in higher intermediate latency compared to UGAL-G.
  • the high intermediate latency of UGAL-L is due to minimally-routed packets having to fill the channel buffers between the source and the point of congestion before congestion is sensed.
  • Our research shows that non-minimally routed packets have a latency curve comparable to UGAL-G while minimally-routed packets see significantly higher latency.
  • the latency of minimally-routed packets increases and is proportional to the depth of the buffers.
  • a histogram of latency distribution shows two clear distributions—one large distribution with low latency for the non-minimal packets and another distribution with a limited number of packets but with much higher latency for the minimal packets.
  • q 1 reflects the state of q 0 and q 2 reflects the state of q 3 .
  • the flow control provides backpressure to q 1 and q 2 as shown with the arrows in FIG. 7 .
  • this local queue information can be used to accurately measure the throughput. Since the throughput is defined as the offered load when the latency goes to infinity (or the queue occupancy goes to infinity), this local queue information is sufficient.
  • q 0 needs to be completely full in order for q 1 to reflect the congestion of gc 0 and allow R 1 to route packets non-minimally.
  • the global queues need to be completely full and provide a stiff backpressure towards the local queues.
  • the stiffness of the backpressure is inversely proportional to the depth of the buffer—with deeper buffers, it takes longer for the backpressure to propagate while with shallower buffers, a much stiffer backpressure is provided.
  • As the buffer size decreases, the latency at intermediate load decreases because of the stiffer backpressure.
  • using smaller buffers comes at the cost of reduced network throughput.
  • To overcome the high intermediate latency, we propose using credit round-trip latency to sense congestion faster and reduce latency.
  • In credit-based flow control, illustrated in FIG. 8, credit counts are maintained for downstream buffers. As packets are sent downstream, the appropriate credit count is decremented, and once the packet leaves the downstream router, credits are sent back upstream and the credit count is incremented. The latency for the credits to return is referred to as credit round-trip latency (tcrt); if there is congestion downstream, the packet cannot be immediately processed, which results in an increase in tcrt.
  • conventional credit flow control is illustrated at 801 .
  • the output credit count is decremented ( 2 ) and credits are sent back upstream ( 3 ).
  • This scheme is modified as shown at 802 to use credit round trip latency to estimate congestion in the network.
  • the time stamp is pushed into the credit time queue, denoted CTQ.
  • the credit is delayed ( 3 ), and when downstream credits are received ( 5 ), the credit count is updated as well as the credit round trip latency tcrt.
  • the delay of returning credits provides the appearance of shallower buffers to create a stiff backpressure.
  • the credits need to be delayed by the variance of td across all outputs.
  • We estimate the variance by finding the minimum td(o) value across the outputs and using the difference.
  • As a result, the upstream router observes congestion at a faster rate (compared to waiting for the queues to fill up), which leads to better global adaptive routing decisions.
  • The UGAL-L routing algorithm using credit round-trip latency (UGAL-LCR) is evaluated for both WC and UR traffic using buffers of depth 16 and 256.
  • UGAL-LCR leads to a significant reduction in latency compared to UGAL-L and approaches the latency of UGAL-G.
  • UGAL-LCR reduces latency by up to 35% with buffers of depth 16 and provides over a 20× reduction in intermediate latency with buffers of depth 256, compared to UGAL-L.
  • the intermediate latency with UGAL-LCR is independent of buffer size.
  • UGAL-LCR provides up to 50% latency reduction near saturation compared to UGAL-LVC H.
  • both UGAL-LCR and UGAL-LVC H fall short of the throughput of UGAL-G with UR traffic because their imprecise local information results in some packets being routed non-minimally.
  • credits are conventionally tracked as a pool of credits in credit flow control—i.e., a single credit counter is maintained for each output VC and increments when a credit is received.
  • the implementation of UGAL-LCR requires tracking each credit individually. This can be done by pushing a timestamp on the tail of a queue each time a flit is sent, as shown at 802 in FIG. 8 with the use of a credit timestamp queue (CTQ), and popping the timestamp off the head of the queue when the corresponding credit arrives. Because flits and credits are 1:1 and maintain ordering, the simple queue suffices to measure round-trip credit latency.
  • the depth of the queue needs to be proportional to the depth of the data buffers, but the queue size can be reduced by using imprecise information to measure congestion—e.g., with a queue that is only ¼ of the data buffer size, only one of every four credits is tracked to measure the congestion.
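  • A sketch of this per-credit tracking is given below; the class and helper names are illustrative, and the congestion test (comparing an output's credit round-trip latency against the minimum observed across outputs, with an arbitrary threshold) only approximates the estimate described above:

from collections import deque

# Credit timestamp queue (CTQ) sketch, one per output VC: push a timestamp when
# a flit is sent and pop it when the matching credit returns. Because flits and
# credits are 1:1 and stay in order, a simple FIFO suffices.
class CreditTracker:
    def __init__(self):
        self.ctq = deque()       # send times of flits with outstanding credits
        self.credit_rtt = 0      # latest credit round-trip latency (tcrt)

    def flit_sent(self, now):
        self.ctq.append(now)

    def credit_returned(self, now):
        self.credit_rtt = now - self.ctq.popleft()
        return self.credit_rtt

# An output whose round-trip latency exceeds the minimum across all outputs by
# more than a threshold is treated as congested (the threshold is a placeholder).
def congested_outputs(trackers, threshold=4):
    baseline = min(t.credit_rtt for t in trackers)
    return [i for i, t in enumerate(trackers) if t.credit_rtt - baseline > threshold]

if __name__ == "__main__":
    a, b = CreditTracker(), CreditTracker()
    a.flit_sent(0); a.credit_returned(6)      # lightly loaded output
    b.flit_sent(0); b.credit_returned(40)     # congested output
    print(congested_outputs([a, b]))          # -> [1]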
  • the cost of a dragonfly topology also compares favorably to a flattened butterfly, as well as to other topologies.
  • the flattened butterfly topology reduces network cost of a butterfly by removing intermediate routers and channels. As a result, the flattened butterfly reduces cost by approximately 50% compared to a folded-Clos on balanced traffic.
  • the dragonfly topology extends the flattened butterfly by increasing the effective radix of the routers to further reduce the cost and increase the scalability of the network.
  • a comparison of dragonfly and flattened butterfly networks of 64k nodes shows that a flattened butterfly uses 50% of the router ports for global channels, while a dragonfly uses 25% of the ports for global connections.
  • the flattened butterfly requires two additional dimensions, while the dragonfly is a single dimension.
  • the dragonfly provides better scalability because the group size can be increased to scale the network whereas scaling the flattened butterfly requires adding additional dimensions. With the hop count nearly identical, the dragonfly trades off longer global cables for smaller number of global cables required to provide a more cost-efficient topology better matched to emerging signaling technologies.
  • the dollar cost of a dragonfly also compares favorably to a flattened butterfly for networks larger than 1k nodes, showing approximately a 10% savings for up to 4k nodes, and approximately a 20% cost savings relative to flattened butterfly topologies for more than 4k nodes as the dragonfly has fewer long, global cables.
  • Folded Clos and 3-d torus networks suffer in comparison, because of the larger number of cables needed to support high network diameters.
  • the dragonfly is 62% the cost of a 3-d torus network and 50% that of a folded Clos network. This reduction in network cost is directly correlated to a reduction in network power consumed, which is a significant advantage for large networks as well as for installations that are desirably environmentally friendly.
  • the example embodiments of a dragonfly network presented here show how use of a group of routers as a virtual router can increase the effective radix of a network, and hence reduce network diameter, cost, and latency. Because the dragonfly topology reduces the number of global cables in a network, while at the same time increasing their length, the dragonfly topology is particularly well suited for implementations using emerging active optical cables, which have a high fixed cost but a low cost per unit length compared to electrical cables. Using active optical cables for the global channels, a dragonfly network reduces cost by 20% compared to a flattened butterfly and by 52% compared to a folded Clos network of the same bandwidth.
  • dragonfly networks described here also comprise two new variants of global adaptive routing that overcome the challenge of indirect adaptive routing presented by the dragonfly.
  • a dragonfly router will typically make a routing decision based on the state of a global channel attached to a different router in the same group.
  • Conventional global adaptive routing algorithms that use local queue occupancies to infer the state of this remote channel give degraded throughput and latency.
  • the combination of these two techniques gives a global adaptive routing algorithm that approaches the performance of an ideal algorithm with perfect knowledge of remote channel state.

Abstract

A multiprocessor computer system comprises a dragonfly processor interconnect network that comprises a plurality of processor nodes, a plurality of routers, each router directly coupled to a plurality of terminal nodes, the routers coupled to one another and arranged into a group, and a plurality of groups of routers, such that each group is connected to each other group via at least one direct connection.

Description

    FIELD OF THE INVENTION
  • The invention relates generally to computer interconnect networks, and more specifically in one embodiment to a dragonfly topology processor interconnect network.
  • LIMITED COPYRIGHT WAIVER
  • A portion of the disclosure of this patent document contains material to which the claim of copyright protection is made. The copyright owner has no objection to the facsimile reproduction by any person of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office file or records, but reserves all other rights whatsoever.
  • BACKGROUND
  • Computer systems have long relied on network connections to transfer data, whether from one computer system to another computer system, one computer component to another computer component, or from one processor to another processor in the same computer. Most computer networks link multiple computerized elements to one another, and include various functions such as verification that a message sent over the network arrived at the intended recipient, confirmation of the integrity of the message, and a method of routing a message to the intended recipient on the network.
  • Processor interconnect networks are used in multiprocessor computer systems to transfer data from one processor to another, or from one group of processors to another group. The number of interconnection links can be very large with computer systems having hundreds or thousands of processors, and system performance can vary significantly based on the efficiency of the processor interconnect network. The number of connections, number of intermediate nodes between a sending and receiving processing node, and the speed or type of connection all play a factor in the interconnect network performance.
  • Similarly, the network topology, or pattern of connections used to tie processing nodes together affects performance, and remains an area of active research. It is impractical to directly link each node to each other node in systems having many tens of processors, and all but impossible as the number of processors reaches the thousands.
  • Further, the cost of communications interfaces, cables, and other factors can add significantly to the cost of poorly designed or inefficient processor interconnect networks, especially where long connections or high-speed fiber optic links are required. A processor interconnect network designer is thereby challenged to provide fast and efficient communication between the various processing nodes, while controlling the number of overall links, and the cost and complexity of the processor interconnect network.
  • The topology of a network, or the method used to determine how to link a processing node to other nodes in a multiprocessor computer system, is therefore an area of interest.
  • SUMMARY
  • The invention comprises in one example a dragonfly topology network, comprising a plurality of processor nodes, a plurality of routers, each router directly coupled to a plurality of terminal nodes, the routers coupled to one another and arranged into a group, and a plurality of groups of routers, such that each group is connected to each other group via at least one direct connection.
  • Network data is routed in some embodiments using at least one of credit round-trip latency as an indicator of channel congestion and selective virtual channel discrimination.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a dragonfly network topology, consistent with an example embodiment of the invention.
  • FIG. 2 is a graph illustrating scalability of a dragonfly network in nodes for various router radices, consistent with an example embodiment of the invention.
  • FIG. 3 is a block diagram illustrating a dragonfly network topology, consistent with an example embodiment of the invention.
  • FIG. 4 is block diagram of dragonfly network topology groups, consistent with some example embodiments of the invention.
  • FIG. 5 is a block diagram of a dragonfly network illustrating minimal and non-minimal routing using virtual channels, consistent with an example embodiment of the invention.
  • FIG. 6 is a graph illustrating latency v. offered load for a variety of routing algorithms using various traffic patterns, consistent with an example embodiment of the invention.
  • FIG. 7 is a node group diagram of a dragonfly topology network illustrating adaptive routing via global channels using backpressure from intermediate nodes, consistent with an example embodiment of the invention.
  • FIG. 8 is a node diagram illustrating credit round trip latency tracking, consistent with an example embodiment of the invention.
  • DETAILED DESCRIPTION
  • In the following detailed description of example embodiments of the invention, reference is made to specific examples by way of drawings and illustrations. These examples are described in sufficient detail to enable those skilled in the art to practice the invention, and serve to illustrate how the invention may be applied to various purposes or embodiments. Other embodiments of the invention exist and are within the scope of the invention, and logical, mechanical, electrical, and other changes may be made without departing from the subject or scope of the present invention. Features or limitations of various embodiments of the invention described herein, however essential to the example embodiments in which they are incorporated, do not limit the invention as a whole, and any reference to the invention, its elements, operation, and application do not limit the invention as a whole but serve only to define these example embodiments. The following detailed description does not, therefore, limit the scope of the invention, which is defined only by the appended claims.
  • Interconnection networks are widely used to connect processors and memories in multiprocessors, as switching fabrics for high-end routers and switches, and for connecting I/O devices. As processor and memory performance continues to increase in a multiprocessor computer system, the performance of the interconnection network plays a central role in determining the overall performance of the system. The latency and bandwidth of the network largely establish the remote memory access latency and bandwidth.
  • A good interconnection network is typically designed around the capabilities and constraints of available technology. Increasing router pin bandwidth, for example, has motivated the use of high-radix routers in which increased bandwidth is used to increase the number of ports per router, rather than maintaining a small number of ports and increasing the bandwidth per port. The Cray Black Widow system, one of the first systems to employ a high-radix network, uses a variant of the folded-Clos topology and radix-64 routers—a significant departure from previous low-radix 3-D torus networks. Recently, the advent of economical optical signaling enables topologies with long channels. However, these long optical channels remain significantly more expensive than short electrical channels. In this paper, we introduce a Dragonfly topology that exploits emerging optical signaling technology by grouping routers to further increase the effective radix of the network.
  • The topology of an interconnection network largely determines both the performance and the cost of the network. Network cost is dominated by the cost of channels, and in particular the cost of the long, global, inter-cabinet channels. Thus, reducing the number of global channels can significantly reduce the cost of the network. To reduce global channels without reducing performance, the number of global channels traversed by the average packet must be reduced. The dragonfly topology introduced in this paper reduces the number of global channels traversed per packet using minimal routing to one.
  • To achieve this global diameter of one, very high-radix routers, with a radix of approximately 2√N (where N is the size of the network), are used. While radix 64 routers have been introduced, and a radix of 128 is feasible, much higher radices in the hundreds or thousands are needed to build machines that scale to 8K-1M nodes if each packet is limited to only one global hop using traditional very high radix router technology. To achieve the benefits of a very high radix without requiring routers with hundreds or thousands of ports per node, the Dragonfly network topology proposes using a group of routers connected into a subnetwork as one very high radix virtual router. This very high effective radix in turn allows us to build a network in which all minimal routes traverse at most one global channel. It also increases the physical length of the global channels, exploiting the capabilities of emerging optical signaling technology.
  • Achieving good performance on a wide range of traffic patterns on a dragonfly topology involves selecting a routing algorithm that can effectively balance load across the global channels. Global adaptive routing (UGAL) can perform such load balancing if the load of the global channels is available at the source router, where the routing decision is made. With the Dragonfly topology, however, the source router is most often not connected to the global channel in question. Hence, the adaptive routing decision is made based on remote or indirect information.
  • The indirect nature of this decision leads to degradation in both latency and throughput when conventional UGAL (which uses local queue occupancy to make routing decisions) is used. We propose two modifications to the UGAL routing algorithm for the Dragonfly network topology that overcome this limitation with performance results approaching an ideal implementation using global information. Adding selective virtual-channel discrimination to UGAL (UGAL-LVC H) eliminates bandwidth degradation due to local channel sharing between minimal and non-minimal paths. Using credit round-trip latency to both sense global channel congestion and to propagate this congestion information upstream (UGAL-LCR) eliminates latency degradation by providing much stiffer backpressure than is possible using only queue occupancy for congestion sensing.
  • High-radix networks reduce the diameter of the network but require longer cables compared to low-radix networks. Advances in signaling technology and the recent development of active optical cables facilitate implementation of high-radix topologies with longer cables.
  • An interconnection network is embedded in a packaging hierarchy. At the lowest level, the routers are connected via circuit boards, which are then connected via a backplane or midplane. One or more backplanes are packaged in a cabinet, with multiple cabinets connected by electrical or optical cables to form a complete system. The global (inter-cabinet) cables and their associated transceivers often dominate the cost of a network. To minimize the network cost, the topology should be matched to the characteristics of the available interconnect technologies, such as cost and performance.
  • The maximum bandwidth of an electrical cable drops with increasing cable length because signal attenuation due to skin effect and dielectric absorption increases linearly with distance. For typical high-performance signaling rates (10-20 Gb/s) and technology parameters, electrical signaling paths are limited to about 1 m in circuit boards and 10 m in cables. At longer distances, either the signaling rate must be reduced or repeaters inserted to overcome attenuation.
  • Historically, the high cost of optical signaling limited its use to very long distances or applications that demanded performance regardless of cost. Recent advances in silicon photonics and their application to active optical cables such as Intel Connects Cables and Luxtera Blazar have provided designers with economical optical interconnects. These active optical cables have electrical connections at either end and electrooptical and optoelectrical modules integrated into the cable itself.
  • Although optical cables have a higher fixed cost, their ability to transmit data over long distances at several times the data rate of copper cables results in a lower cost per unit distance than electrical cables. Based on the data available using current technologies, the break-even point is at 10 m. For distances shorter than 10 m, electrical signaling is less expensive. Beyond 10 m, optical signaling is more economical. The Dragonfly topology proposed here exploits this relationship between cost and distance. By reducing the number of global cables, it minimizes the effect of the higher fixed overhead of optical signaling, and by making the global cables longer, it maximizes the advantage of the lower per-unit cost of optical fibers.
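  • As an illustration of this cost relationship (not part of the disclosed embodiments), the short sketch below evaluates a simple linear cost model in which electrical cables have a low fixed cost but a high per-meter cost and optical cables have the opposite; the dollar values are hypothetical placeholders chosen only so that the break-even length falls near 10 m:

# Hypothetical cable-cost model: cost = fixed + per_meter * length.
# The constants are illustrative; only the qualitative relationship
# (optics: higher fixed cost, lower per-meter cost, break-even near 10 m)
# follows the description above.
def electrical_cost(length_m, fixed=5.0, per_meter=2.0):
    return fixed + per_meter * length_m

def optical_cost(length_m, fixed=20.0, per_meter=0.5):
    return fixed + per_meter * length_m

if __name__ == "__main__":
    for length in (1, 5, 10, 20, 40):
        e, o = electrical_cost(length), optical_cost(length)
        print(f"{length:>3} m: electrical={e:6.1f}  optical={o:6.1f}  -> "
              + ("electrical" if e < o else "optical"))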
  • To show an example Dragonfly network topology, the following symbols are used in the description of the dragonfly topology and in example routing algorithms presented later:
      • N Number of network terminals
      • p Number of terminals connected to each router
      • a Number of routers in each group
      • k Radix of the routers
      • k_ Effective radix of the group (or the virtual router)
      • h Number of channels within each router used to connect to other groups
      • g Number of groups in the system
      • q Queue depth of an output port
      • qvc Queue depth of an individual output VC
      • H Hop count
      • Outi Router output port i
  • The Dragonfly topology is a hierarchical network with three levels, as shown in FIG. 1: routers (104, 105, and 106), groups (101, 102, and 103), and system. At the router level, each router has connections to p nodes, a−1 local channels—to other routers in the same group—and h global channels—to routers in other groups. Therefore the radix (or degree) of each router is defined as k=p+a+h−1. A group consists of a routers connected via an intra-group interconnection network formed from local channels, as shown at 101 in FIG. 1. Each group has ap connections to terminals and ah connections to global channels, and all of the routers in a group collectively act as a virtual router with radix k′=a(p+h). This very high radix, k′>>k enables the system level network to be realized with very low global diameter (the maximum number of expensive global channels on the minimum path between any two nodes). Up to g=ah+1 groups (N=ap(ah+1) terminals) can be connected with a global diameter of one. In contrast, a system-level network built directly with radix k routers would require a larger global diameter.
  • In a maximum-size (N=ap(ah+1)) dragonfly, there is exactly one connection between each pair of groups. In smaller dragonflies, there are more global connections out of each group than there are other groups. These extra global connections are distributed over the groups with each pair of groups connected by at least ⌊(ah+1)/g⌋ channels.
  • The dragonfly parameters a, p, and h can have any values. However, to balance channel load, the network in this example has a=2p=2h. Because each packet traverses two local channels along its route (one at each end of the global channel) for one global channel and one terminal channel, this ratio maintains balance. Because global channels are expensive, deviations from this 2:1 ratio are done in some embodiments in a manner that overprovisions local and terminal channels, so that the expensive global channels remain fully utilized. That is, the network is balanced in such examples so that a≧2h, 2p≧2h.
  • The scalability of a balanced dragonfly is shown in FIG. 2. By increasing the effective radix, the dragonfly topology is highly scalable—with radix-64 routers, the topology scales to over 256k nodes with a network diameter of only three hops. Arbitrary networks can be used for the intra-group and inter-group networks in FIG. 1. In the example presented here, we use a 1-D flattened butterfly or a completely-connected topology for both networks. A simple example of the dragonfly is shown in FIG. 3 with p=h=2 (two processing nodes per router and two channels within each router coupled to other groups), a=4 (four routers in each group) that scales to N=72 (72 nodes in the network) with k=7 (radix 7) routers. By using virtual routers, the effective radix is increased from k=7 to k′=16, as group G0 of FIG. 3 has eight global connections and eight node connections.
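  • The sizing relationships above can be checked with a short script; the sketch below (the function name is ours, not part of the disclosure) simply evaluates k=p+a+h−1, k′=a(p+h), g=ah+1, and N=ap(ah+1), and reproduces the FIG. 3 example with p=h=2 and a=4:

# Evaluate the dragonfly sizing formulas from the description:
#   k  = p + a + h - 1        router radix
#   k' = a * (p + h)          effective radix of a group (virtual router)
#   g  = a * h + 1            maximum number of groups (global diameter one)
#   N  = a * p * (a * h + 1)  maximum number of terminals
def dragonfly_size(p, a, h):
    k = p + a + h - 1
    k_eff = a * (p + h)
    g_max = a * h + 1
    n_max = a * p * g_max
    balanced = (a == 2 * p == 2 * h)  # balance condition a = 2p = 2h
    return {"k": k, "k_eff": k_eff, "g_max": g_max, "N_max": n_max,
            "balanced": balanced}

if __name__ == "__main__":
    # FIG. 3 example: p = h = 2, a = 4 -> k = 7, k' = 16, N = 72.
    print(dragonfly_size(p=2, a=4, h=2))
    # 1K-node evaluation configuration used later: p = h = 4, a = 8 -> N = 1056.
    print(dragonfly_size(p=4, a=8, h=4))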
  • The global radix, k′, can be increased further by using a higher-dimensional topology for the intra-group network. Such a network may also exploit intra-group packaging locality. For example, a 2-D flattened butterfly is shown in FIG. 4 at 401, which has the same k′ as the group shown in FIG. 5 but exploits packaging locality by providing more bandwidth to local routers. A 3-dimensional flattened butterfly is used in FIG. 4 at 402 to increase the effective radix from k′=16 to k′=32—allowing the topology to scale up to N=1056 using the same k=7 routers as in FIG. 1.
  • To increase the terminal bandwidth of a high-radix network such as a dragonfly, channel slicing can be employed. Rather than make the channels wider, which would decrease the router radix, multiple networks can be connected in parallel to add capacity. Similarly, the dragonfly topology in some embodiments can also utilize parallel networks to add capacity to the network. In addition, the dragonfly networks described so far assumed uniform bandwidth to all nodes in the network. However, if such uniform bandwidth is not needed, bandwidth tapering can be implemented by removing inter-group channels among some of the groups.
  • A variety of minimal and non-minimal routing algorithms can be implemented using the dragonfly topology. Some embodiments of global adaptive routing using local information lead to limited throughput and very high latency at intermediate loads. To overcome these problems, we introduce new mechanisms to global adaptive routing, which provide performance that approaches an ideal implementation of global adaptive routing.
  • Minimal routing in a dragonfly from source node s attached to router Rs in group Gs to destination node d attached to router Rd in group Gd traverses a single global channel and is accomplished in three steps:
      • Step 1: If Gs≠Gd and Rs does not have a connection to Gd, route within Gs from Rs to Ra, a router that has a global channel to Gd.
      • Step 2: If Gs≠Gd, traverse the global channel from Ra to reach router Rb in Gd.
      • Step 3: If Rb≠Rd, route within Gd from Rb to Rd.
  • This minimal routing works well for load-balanced traffic, but results in poor performance on adversarial traffic patterns. To load-balance adversarial traffic patterns, Valiant's algorithm can be applied at the system level—routing each packet first to a randomly-selected intermediate group Gi and then to its final destination d. Applying Valiant's algorithm to groups suffices to balance load on both the global and local channels. This randomized non-minimal routing traverses at most two global channels and requires five steps:
      • Step 1: If Gs≠Gi and Rs does not have a connection to Gi, route within Gs from Rs to Ra, a router that has a global channel to Gi.
      • Step 2: If Gs≠Gi, traverse the global channel from Ra to reach router Rx in Gi.
      • Step 3: If Gi≠Gd and Rx does not have a connection to Gd, route within Gi from Rx to Ry, a router that has a global channel to Gd.
      • Step 4: If Gi≠Gd, traverse the global channel from Ry to router Rb in Gd.
      • Step 5: If Rb≠Rd, route within Gd from Rb to Rd.
  • To prevent routing deadlock, two virtual channels (VCs) are needed for minimal routing and three VCs are required for non-minimal routing, as shown in FIG. 5. These virtual router assignments eliminate all channel dependencies due to routing. For some applications, additional virtual channels may be required to avoid protocol deadlock—e.g., for shared memory systems, separate sets of virtual channels may be required for request and reply messages.
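  • A group-level sketch of the two routing schemes and the virtual-channel assignment is given below. The gateway wiring rule, helper names, and node labeling are illustrative assumptions rather than the arrangement required by the disclosure; the code only enumerates the hops a packet would take, advancing the VC index after each global hop so that minimal routes use two VCs and Valiant routes use three:

import random

# A node location is (group, router). gateway(gs, gd, a) names the router in
# group gs that owns the global channel to group gd; the modular rule is
# illustrative wiring only.
def gateway(gs, gd, a):
    return gd % a

def route(src, dst, a, vc=0):
    """Three-step minimal route; the VC index advances after the global hop."""
    (gs, rs), (gd, rd) = src, dst
    hops = []
    if gs != gd:
        ra = gateway(gs, gd, a)
        if rs != ra:
            hops.append(("local", gs, rs, ra, f"VC{vc}"))   # step 1
        hops.append(("global", gs, ra, gd, f"VC{vc}"))      # step 2
        vc += 1                              # new VC after the global hop
        rs = gateway(gd, gs, a)              # router reached in the destination group
    if rs != rd:
        hops.append(("local", gd, rs, rd, f"VC{vc}"))       # step 3
    return hops, vc

def minimal_route(src, dst, a):
    return route(src, dst, a)[0]

def valiant_route(src, dst, a, g):
    """Route via a randomly selected intermediate group (at most two global hops)."""
    gi = random.randrange(g)
    mid = (gi, gateway(gi, dst[0], a))       # any router in Gi works for the sketch
    first, vc = route(src, mid, a)
    second, _ = route(mid, dst, a, vc)
    return first + second

if __name__ == "__main__":
    print(minimal_route((0, 1), (5, 3), a=4))
    print(valiant_route((0, 1), (5, 3), a=4, g=9))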
  • A variety of routing algorithms for the dragonfly topology have been evaluated, including:
      • Minimal (MIN): The minimal path is taken as described previously.
      • Valiant (VAL) [32]: Randomized non-minimal routing as described previously.
      • Universal Globally-Adaptive Load-balanced (UGAL) [29] (UGAL-G, UGAL-L): UGAL chooses between MIN and VAL on a packet-by-packet basis to load-balance the network. The choice is made by using queue length and hop count to estimate network delay and choosing the path with minimum delay. We implement two versions of UGAL.
      • UGAL-L—uses local queue information at the current router node.
      • UGAL-G—uses queue information for all the global channels in Gs—assuming knowledge of queue lengths on other routers. While difficult to implement, this represents an ideal implementation of UGAL since the load-balancing is required of the global channels, not the local channels.
  • Cycle-accurate simulations are used to evaluate the performance of the different routing algorithms. We simulate a single-cycle, input-queued router switch but provide sufficient speedup in order to generalize the results and ensure that routers do not become the bottleneck of the network. Packets are injected using a Bernoulli process. The simulator is warmed up under load without taking measurements until steady-state is reached. Then a sample of injected packets is labeled during a measurement interval. The simulation is run until all labeled packets exit the system. Unless otherwise noted, the simulation results are shown for a dragonfly of size 1K nodes using the parameters p=h=4 and a=8. Simulations of other size networks follow the same trend and are not presented due to space constraints. Single flit (flow control unit) packets are used to separate the routing algorithm from flow control issues such as the use of wormhole or virtual cut-through flow control. The input buffers are initially assumed to be 16 flits deep. The impact of different buffer sizes is also evaluated.
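  • For illustration only, a Bernoulli injection process of the kind used in these simulations can be sketched as follows (the offered load, cycle count, and seed are arbitrary example values, not parameters taken from the evaluation):

import random

# Bernoulli injection: in each cycle, every terminal independently injects a
# packet with probability equal to the offered load (0.0 - 1.0).
def injected_packet_count(num_terminals, offered_load, cycles, seed=1):
    rng = random.Random(seed)
    return sum(1 for _ in range(cycles) for _ in range(num_terminals)
               if rng.random() < offered_load)

if __name__ == "__main__":
    # 1K-node configuration (N = 1056) at 60% offered load over 100 cycles.
    print(injected_packet_count(num_terminals=1056, offered_load=0.6, cycles=100))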
  • The different routing algorithms are evaluated using both benign and adversarial synthetic traffic patterns, as shown in FIG. 6. Latency v. offered load is shown for the four routing algorithms, using both uniform random traffic at 601 and adversarial traffic at 602. The use of a synthetic traffic pattern allows us to stress the topology and routing algorithm to fully evaluate the network. For benign traffic such as uniform random (UR), MIN is sufficient to provide low latency and high throughput, as shown at 601 of FIG. 6. VAL achieves approximately half of the network capacity because its load-balancing doubles the load on the global channels. Both UGAL-G and UGAL-L approach the throughput of MIN, but with slightly higher latency near saturation. The higher latency is caused by the use of parallel or greedy allocation where the routing decision at each port is made in parallel. The use of sequential allocation will reduce the latency at the expense of a more complex allocator.
  • To test the load-balancing ability of a routing algorithm, we use a worst-case (WC) traffic pattern where each node in group Gi sends traffic to a randomly selected node in group Gi+1. With minimal routing, this pattern will cause all nodes in each group Gi to send all of their traffic across the single global channel to group Gi+1. Non-minimal routing is required to load balance this traffic pattern by spreading the bulk of the traffic across the other global channels.
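  • A sketch of a generator for this worst-case pattern is shown below; the group-major node numbering is an assumed convention used only for the example:

import random

# Worst-case (WC) traffic: each node in group Gi sends to a randomly selected
# node in group Gi+1 (modulo the number of groups g). Nodes are numbered
# group-major: node = group * (a * p) + index within the group.
def wc_destination(node, a, p, g, rng=random):
    nodes_per_group = a * p
    src_group = node // nodes_per_group
    dst_group = (src_group + 1) % g
    return dst_group * nodes_per_group + rng.randrange(nodes_per_group)

if __name__ == "__main__":
    # 1K-node configuration (p = h = 4, a = 8, g = 33): node 0 targets group 1.
    print(wc_destination(node=0, a=8, p=4, g=33))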
  • The evaluation for this WC traffic is shown in FIG. 6 at 602. Because MIN forwards all of the traffic from each group across a single channel, its throughput is limited to 1/ah. VAL achieves slightly under 50% throughput which is the maximum possible throughput with this traffic. UGAL-G achieves similar throughput as VAL but UGAL-L leads to both limited throughput as well as high average packet latency at intermediate load. In the following section, we show how the indirect nature of adaptive routing on the dragonfly leads to performance degradation. We identify the issues with UGAL-L and present mechanisms that can overcome these problems.
  • Adaptive routing on the dragonfly is challenging because it is the global channels, the group outputs, that need to be balanced, not the router outputs. This leads to an indirect routing problem. Each router must pick a global channel to use based only on local information that depends only indirectly on the state of the global channels. Previous global adaptive routing methods used local queue information, source queues and output queues, to generate accurate estimates of network congestion. In these cases, the local queues were an accurate proxy of global congestion, because they directly indicated congestion on the routes they initiated.
  • With the dragonfly topology, however, local queues only sense congestion on a global channel via backpressure over the local channels. If the local channels are overprovisioned, significant numbers of packets must be enqueued on the overloaded minimal route before the source router will sense the congestion. This results in a degradation in throughput and latency as shown earlier in FIG. 6 at 602.
  • A throughput issue with UGAL-L arises due to a single local channel handling both minimal and non-minimal traffic. For example, in FIG. 7, a packet in R1 has a minimal path which uses gc7 and a nonminimal path which uses gc6. Both paths share the same local channel from R1 to R2. Because both paths share the same local queue (and hence have the same queue occupancy) and the minimal path is shorter (one global hop vs two), the minimal channel will always be selected, even when it is saturated. This leads to the minimal global channel being overloaded and the non-minimal global channels that share the same router as the minimal channel being under utilized. With UGAL-G, the minimal channel is preferred and the load is uniformly balanced across all other global channels. With UGAL-L, on the other hand, the non-minimal channels on the router that contains the minimal global channel are under utilized—resulting in a degradation of network throughput.
  • To overcome this limitation, we modify the UGAL algorithm to separate the queue occupancy into minimal and non-minimal components by using individual VCs (UGAL-LVC):
  • if (qm^vc × Hm ≦ qnm^vc × Hnm)
       route minimally;
    else
       route nonminimally;

    where the subscripts m and nm denote the minimal and non-minimal paths. If the VC assignment of FIG. 5 is used, qm^vc = q(VC1) and qnm^vc = q(VC0).
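  • A minimal Python sketch of this decision rule follows; it assumes only that per-VC queue occupancies are available at the shared output, with VC1 carrying minimal traffic and VC0 carrying non-minimal traffic as described above (the function and argument names are hypothetical):

     def ugal_lvc(q_vc, h_min, h_nonmin):
         # UGAL-LVC: compare per-VC queue occupancies weighted by hop count.
         # q_vc maps a VC index to its occupancy; q_vc[1] = q(VC1) is seen by
         # the minimal path and q_vc[0] = q(VC0) by the non-minimal path.
         if q_vc[1] * h_min <= q_vc[0] * h_nonmin:
             return "minimal"
         return "nonminimal"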
  • When compared, UGAL-LVC matches the throughput of UGAL-G on a WC traffic pattern but for UR traffic, the throughput is limited, with approximately 30% reduction in throughput. For the WC traffic, where most of the traffic needs to be sent non-minimally, UGALLVC performs well since the minimal queue is heavily loaded. However, for load-balanced traffic when most traffic should be sent minimally, individual VCs do not provide an accurate representation of the channel congestion—resulting in throughput degradation.
  • To overcome this limitation, we further modify the UGAL algorithm to separate the queue occupancy into minimal and non-minimal components only when the minimal and non-minimal paths start with the same output port. Our hybrid modified UGAL routing algorithm (UGAL-LVC H) is:
  • if ((qm × Hm ≦ qnm × Hnm && Outm ≠ Outnm) ||
        (qm^vc × Hm ≦ qnm^vc × Hnm && Outm = Outnm))
       route minimally;
    else
       route nonminimally;
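  • A minimal Python sketch of the hybrid rule follows; q_total is assumed to give the total occupancy of each output port's queue and q_vc the per-VC occupancies, with all names hypothetical:

     def ugal_lvc_h(out_min, out_nonmin, q_total, q_vc, h_min, h_nonmin):
         # UGAL-LVC H: when the two candidate paths leave through different
         # output ports, compare total queue occupancies; when they share an
         # output port, fall back to the per-VC comparison of UGAL-LVC.
         if out_min != out_nonmin:
             go_minimal = q_total[out_min] * h_min <= q_total[out_nonmin] * h_nonmin
         else:
             go_minimal = q_vc[1] * h_min <= q_vc[0] * h_nonmin
         return "minimal" if go_minimal else "nonminimal"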
  • Compared to UGAL-LVC, UGAL-LVC H provides the same throughput on the WC traffic pattern and matches the throughput of UGAL-G on UR traffic, but results in nearly 2× higher latency at an offered load of 0.8, near saturation. For WC traffic, UGAL-LVC H also results in higher intermediate latency compared to UGAL-G.
  • The high intermediate latency of UGAL-L is due to minimally-routed packets having to fill the channel buffers between the source and the point of congestion before congestion is sensed. Our research shows that non-minimally routed packets have a latency curve comparable to UGAL-G while minimally-routed packets see significantly higher latency. As input buffers are increased, the latency of minimally-routed packets increases and is proportional to the depth of the buffers. A histogram of latency distribution shows two clear distributions—one large distribution with low latency for the non-minimal packets and another distribution with a limited number of packets but with much higher latency for the minimal packets.
  • To understand this problem with UGAL-L, consider the example dragonfly group shown in FIG. 7, and assume a packet in R1 is making its global adaptive routing decision of routing either minimally through gc0 or non-minimally through gc7. The routing decision needs to load balance global channel utilization, and ideally the channel utilization could be obtained from the queues associated with the global channels, q0 and q3. However, the q0 and q3 queue information is only available at R0 and R2 and is not readily available at R1; thus, the routing decision can only be made indirectly through the local queue information available at R1.
  • In this example, q1 reflects the state of q0 and q2 reflects the state of q3. When either q0 or q3 is full, the flow control provides backpressure to q1 and q2, as shown with the arrows in FIG. 7. As a result, in steady-state measurement, this local queue information can be used to accurately measure the throughput. Since the throughput is defined as the offered load at which the latency (or, equivalently, the queue occupancy) goes to infinity, this local queue information is sufficient. However, q0 needs to be completely full in order for q1 to reflect the congestion of gc0 and allow R1 to route packets non-minimally. Thus, using local information requires sacrificing some packets to properly detect the congestion, resulting in packets sent minimally having much higher latency. As the load increases, although minimally routed packets continue to increase in latency, more packets are sent non-minimally, resulting in a decrease in average latency until saturation.
  • In order for local queues to provide a good estimate of global congestion, the global queues need to be completely full and provide a stiff backpressure towards the local queues. The stiffness of the backpressure is inversely proportional to the depth of the buffers: with deeper buffers, it takes longer for the backpressure to propagate, while shallower buffers provide a much stiffer backpressure. As the buffer size decreases, the latency at intermediate load decreases because of the stiffer backpressure. However, using smaller buffers comes at the cost of reduced network throughput.
  • To overcome the high intermediate latency, we propose using credit round-trip latency to sense congestion faster and reduce latency. In credit-based flow control, illustrated in FIG. 8, credit counts are maintained for downstream buffers. As packets are sent downstream, the appropriate credit count is decremented, and once a packet leaves the downstream router, a credit is sent back upstream and the credit count is incremented. The latency for the credits to return is referred to as the credit round-trip latency (tcrt); if there is congestion downstream, the packet cannot be processed immediately, which increases tcrt.
  • Referring to FIG. 8, conventional credit flow control is illustrated at 801. As packets are sent downstream (1), the output credit count is decremented (2) and credits are sent back upstream (3). This scheme is modified as shown at 802 to use credit round-trip latency to estimate congestion in the network. In addition to the output credit count being decremented (2), a timestamp is pushed into the credit timestamp queue (CTQ). Before being sent back upstream (4), the credit is delayed (3), and when credits are received from downstream (5), both the credit count and the credit round-trip latency tcrt are updated.
  • The value of tcrt can be used to estimate the congestion of global channels. By using this information to delay upstream credits, we stiffen the backpressure and more rapidly propagate congestion information upstream. For each output O, tcrt(O) is measured and the quantity td(O) = tcrt(O) − tcrt0 is stored in a register. Then, when a flit is sent to output O, instead of immediately sending a credit back upstream, the credit is delayed by td(O) − min[td(o)]. The credits sent across the global channels are not delayed. This ensures that there is no cyclic loop in this mechanism and allows the global channels to be fully utilized.
  • Delaying the returning credits provides the appearance of shallower buffers and thereby creates a stiff backpressure. However, to ensure that the entire buffer gets utilized and throughput is not reduced at high load, the credits need to be delayed by only the variance of td across all outputs. We estimate the variance by finding the minimum value, min[td(o)], and using the difference. By delaying credits, the upstream routers observe congestion sooner (compared to waiting for the queues to fill up), which leads to better global adaptive routing decisions.
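  • A minimal Python sketch of the credit-delay computation follows, reading tcrt0 as the congestion-free (zero-load) credit round-trip latency; the function and argument names are hypothetical:

     def credit_delay(output, upstream_is_global, t_crt, t_crt0):
         # Delay applied before returning, to the upstream router, the credit
         # for a flit that is being forwarded to 'output'.
         # t_crt maps each output to its measured credit round-trip latency.
         if upstream_is_global:
             return 0                                 # credits sent across global channels are not delayed
         t_d = {o: t_crt[o] - t_crt0 for o in t_crt}  # per-output congestion estimate td(o)
         return t_d[output] - min(t_d.values())       # delay by the spread above the least congested output

    Delaying only by the spread above min[td(o)], rather than by td(O) itself, is what keeps the full buffer usable at high load.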
  • The UGAL-L routing algorithm using credit latency (UGAL-LCR) is evaluated for both WC and UR traffic using buffers of depth 16 and 256. UGAL-LCR leads to a significant reduction in latency compared to UGAL-L and approaches the latency of UGAL-G. For WC traffic, UGAL-LCR reduces latency by up to 35% with 16-entry buffers and by over 20× in intermediate latency with 256-entry buffers compared to UGAL-L. Unlike UGAL-L, the intermediate latency with UGAL-LCR is independent of buffer size. For UR traffic, UGAL-LCR provides up to 50% latency reduction near saturation compared to UGAL-LVC H. However, both UGAL-LCR and UGAL-LVC H fall short of the throughput of UGAL-G with UR traffic because their imprecise local information results in some packets being routed non-minimally.
  • The implementation of this scheme results in minimal complexity overhead as the following three features are needed at each router:
      • tracking credits individually to measure tcrt
      • registers to store td values
      • a delay mechanism in returning credits
        The amount of storage required for td is minimal, as only O(k) registers are required. Credits are often returned by piggybacking on data flits, so a mechanism that delays credits while waiting for the next upstream data flit is already required; the proposed mechanism only adds additional delay.
  • As for tracking individual credits, credits are conventionally tracked as a pool of credits in credit flow control; that is, a single credit counter is maintained for each output VC and is incremented when a credit is received. The implementation of UGAL-LCR requires tracking each credit individually. This can be done by pushing a timestamp onto the tail of a queue each time a flit is sent, as shown at 802 of FIG. 8 with the credit timestamp queue (CTQ), and popping the timestamp off the head of the queue when the corresponding credit arrives. Because flits and credits correspond 1:1 and maintain ordering, this simple queue suffices to measure round-trip credit latency. The depth of the queue needs to be proportional to the depth of the data buffers, but the queue size can be reduced by using imprecise information to measure congestion; for example, with a queue only ¼ of the data buffer size, only one of every four credits is tracked to measure the congestion.
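  • A minimal Python sketch of such per-credit tracking follows; the class name is hypothetical and the structure simply mirrors the push/pop behavior described above:

     from collections import deque

     class CreditTimestampQueue:
         # Credit timestamp queue (CTQ): because flits and credits are 1:1 and
         # stay in order, a FIFO of send times suffices to measure the credit
         # round-trip latency for an output.
         def __init__(self):
             self.pending = deque()

         def on_flit_sent(self, now):
             self.pending.append(now)          # push the send timestamp

         def on_credit_received(self, now):
             sent_at = self.pending.popleft()  # pop the matching send timestamp
             return now - sent_at              # measured credit round-trip latency tcrt

    A reduced-size queue that records, say, only every fourth flit would give the coarser congestion estimate mentioned above at a fraction of the storage.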
  • The cost of a dragonfly topology also compares favorably to a flattened butterfly, as well as to other topologies. The flattened butterfly topology reduces the network cost of a butterfly by removing intermediate routers and channels. As a result, the flattened butterfly reduces cost by approximately 50% compared to a folded Clos on balanced traffic. The dragonfly topology extends the flattened butterfly by increasing the effective radix of the routers to further reduce the cost and increase the scalability of the network.
  • A comparison of dragonfly and flattened butterfly networks of 64k nodes shows that a flattened butterfly uses 50% of the router ports for global channels, while a dragonfly uses 25% of the ports for global connections. The flattened butterfly requires two additional dimensions, while the dragonfly is a single dimension. In addition, the dragonfly provides better scalability because the group size can be increased to scale the network, whereas scaling the flattened butterfly requires adding dimensions. With hop count nearly identical, the dragonfly trades longer global cables for a smaller number of them, providing a more cost-efficient topology better matched to emerging signaling technologies.
  • The dollar cost of a dragonfly also compares favorably to a flattened butterfly for networks larger than 1k nodes, showing approximately a 10% savings for up to 4k nodes, and approximately a 20% cost savings relative to flattened butterfly topologies for more than 4k nodes, as the dragonfly has fewer long global cables. Folded Clos and 3-d torus networks suffer in comparison because of the larger number of cables needed to support their higher network diameters. For a network of only 1k nodes, the dragonfly is 62% the cost of a 3-d torus network and 50% that of a folded Clos network. This reduction in network cost is directly correlated with a reduction in network power consumed, which is a significant advantage for large networks as well as for installations where environmental friendliness is desired.
  • The example embodiments of a dragonfly network presented here show how use of a group of routers as a virtual router can increase the effective radix of a network, and hence reduce network diameter, cost, and latency. Because the dragonfly topology reduces the number of global cables in a network while at the same time increasing their length, the dragonfly topology is particularly well suited for implementations using emerging active optical cables, which have a high fixed cost but a low cost per unit length compared to electrical cables. Using active optical cables for the global channels, a dragonfly network reduces cost by 20% compared to a flattened butterfly and by 52% compared to a folded Clos network of the same bandwidth.
  • Various embodiments of dragonfly networks described here also comprise two new variants of global adaptive routing that overcome the challenge of indirect adaptive routing presented by the dragonfly. A dragonfly router will typically make a routing decision based on the state of a global channel attached to a different router in the same group. Conventional global adaptive routing algorithms that use local queue occupancies to infer the state of this remote channel give degraded throughput and latency. We introduce the selective use of virtual channel discrimination to overcome the bandwidth degradation. We also introduce the use of credit round-trip latency to both sense and signal channel congestion. The combination of these two techniques gives a global adaptive routing algorithm that approaches the performance of an ideal algorithm with perfect knowledge of remote channel state.
  • Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiments shown. This application is intended to cover any adaptations or variations of the example embodiments of the invention described herein. It is intended that this invention be limited only by the claims, and the full scope of equivalents thereof.

Claims (20)

1. A multiprocessor computer system comprising a dragonfly processor interconnect network, the dragonfly processor interconnect network comprising:
a plurality of processor nodes;
a plurality of routers, each router directly coupled to a plurality of terminal nodes, the routers coupled to one another and arranged into a group,
and a plurality of groups of routers, such that each group is connected to each other group via at least one direct connection.
2. The multiprocessor computer system of claim 1, wherein each group acts as a virtual router with a radix of approximately 2 times the square root of the number of nodes in the network.
3. The multiprocessor computer system of claim 1, wherein the virtual radix of each group is the product of the number of routers in each group times the sum of the number of processor nodes connected to each router plus the number of global channels.
4. The multiprocessor computer system of claim 1, wherein the number of routers per group is equal to twice the number of processor nodes per router, and wherein the number of processor nodes per router is equal to the number of channels per router connected to other groups.
5. The multiprocessor computer system of claim 1, wherein the number of routers in a group is greater than twice the number of global channels per router.
6. The multiprocessor computer system of claim 1, wherein the number of processor nodes per router is greater than the number of global channels per router.
7. The multiprocessor computer system of claim 1, wherein the routers within a group are connected via a flattened butterfly network.
8. The multiprocessor computer system of claim 1, wherein the routers route data using selective virtual channel discrimination.
9. The multiprocessor computer system of claim 1, wherein the routers route data using credit round-trip latency as an indicator of channel congestion.
10. A method of operating a multiprocessor computer system, comprising:
communicating a message from a processor node to a router, the router coupled to a plurality of processor nodes;
communicating the message between two or more routers, the routers coupled to one another and arranged into a group, and
communicating the message between two groups of routers, such that each group is connected to each other group via at least one direct connection.
11. The method of operating a multiprocessor computer system of claim 10, wherein each group acts as a virtual router with a radix of approximately 2 times the square root of the number of nodes in the network.
12. The method of operating a multiprocessor computer system of claim 10, wherein the virtual radix of each group is the product of the number of routers in each group times the sum of the number of processor nodes connected to each router plus the number of global channels.
13. The method of operating a multiprocessor computer system of claim 10, wherein the number of routers per group is equal to twice the number of processor nodes per router, and wherein the number of processor nodes per router is equal to the number of channels per router connected to other groups.
14. The method of operating a multiprocessor computer system of claim 10, wherein the number of routers in a group is greater than twice the number of global channels per router.
15. The method of operating a multiprocessor computer system of claim 10, wherein the number of processor nodes per router is greater than the number of global channels per router.
16. The method of operating a multiprocessor computer system of claim 10, wherein the routers within a group are connected via a flattened butterfly network.
17. The method of operating a multiprocessor computer system of claim 10, wherein the routers route data using selective virtual channel discrimination.
18. The method of operating a multiprocessor computer system of claim 10, wherein the routers route data using credit round-trip latency as an indicator of channel congestion.
19. A multiprocessor computer system, comprising a Dragonfly processor interconnect network.
20. A method of communicating data between processing nodes in a multiprocessor computer system, comprising routing the data over a Dragonfly processor interconnect network.
US12/195,198 2008-08-20 2008-08-20 Dragonfly processor interconnect network Abandoned US20100049942A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/195,198 US20100049942A1 (en) 2008-08-20 2008-08-20 Dragonfly processor interconnect network
US14/583,588 US9614786B2 (en) 2008-08-20 2014-12-27 Dragonfly processor interconnect network
US15/435,952 US10153985B2 (en) 2008-08-20 2017-02-17 Dragonfly processor interconnect network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/195,198 US20100049942A1 (en) 2008-08-20 2008-08-20 Dragonfly processor interconnect network

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/583,588 Continuation US9614786B2 (en) 2008-08-20 2014-12-27 Dragonfly processor interconnect network

Publications (1)

Publication Number Publication Date
US20100049942A1 true US20100049942A1 (en) 2010-02-25

Family

ID=41697399

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/195,198 Abandoned US20100049942A1 (en) 2008-08-20 2008-08-20 Dragonfly processor interconnect network
US14/583,588 Active US9614786B2 (en) 2008-08-20 2014-12-27 Dragonfly processor interconnect network
US15/435,952 Active US10153985B2 (en) 2008-08-20 2017-02-17 Dragonfly processor interconnect network

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/583,588 Active US9614786B2 (en) 2008-08-20 2014-12-27 Dragonfly processor interconnect network
US15/435,952 Active US10153985B2 (en) 2008-08-20 2017-02-17 Dragonfly processor interconnect network

Country Status (1)

Country Link
US (3) US20100049942A1 (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012105265A (en) * 2010-11-05 2012-05-31 Cray Inc Table-driven routing in dragonfly processor interconnect network
US20120144064A1 (en) * 2010-11-05 2012-06-07 Cray Inc. Progressive adaptive routing in a dragonfly processor interconnect network
EP2549388A1 (en) * 2011-04-01 2013-01-23 Huawei Technologies Co., Ltd. Computer system
US8489718B1 (en) * 2010-05-19 2013-07-16 Amazon Technologies, Inc. Torroidal backbone connections for network deployment
US20140032701A1 (en) * 2009-02-19 2014-01-30 Micron Technology, Inc. Memory network methods, apparatus, and systems
US20140140341A1 (en) * 2012-11-19 2014-05-22 Cray Inc. Increasingly minimal bias routing
CN103973564A (en) * 2013-01-31 2014-08-06 清华大学 Interconnection network system and self-adaptation routing method
US20140245324A1 (en) * 2013-02-22 2014-08-28 International Business Machines Corporation All-to-all message exchange in parallel computing systems
CN104079490A (en) * 2014-06-27 2014-10-01 清华大学 Multi-layer dragonfly interconnecting network and self-adaptive routing method
WO2015196461A1 (en) * 2014-06-27 2015-12-30 Tsinghua University Deadlock-free adaptive routing of interconnect network
US20160028613A1 (en) * 2014-07-22 2016-01-28 Mellanox Technologies Ltd. Dragonfly Plus: Communication Over Bipartite Node Groups Connected by a Mesh Network
US20160140071A1 (en) * 2014-11-13 2016-05-19 Cavium, Inc. Arbitrated Access To Resources Among Multiple Devices
US9519605B2 (en) 2014-07-08 2016-12-13 International Business Machines Corporation Interconnection network topology for large scale high performance computing (HPC) systems
US9614786B2 (en) 2008-08-20 2017-04-04 Intel Corporation Dragonfly processor interconnect network
US9729473B2 (en) 2014-06-23 2017-08-08 Mellanox Technologies, Ltd. Network high availability using temporary re-routing
US9806994B2 (en) 2014-06-24 2017-10-31 Mellanox Technologies, Ltd. Routing via multiple paths with efficient traffic distribution
US9894005B2 (en) 2015-03-31 2018-02-13 Mellanox Technologies, Ltd. Adaptive routing controlled by source node
US9973435B2 (en) 2015-12-16 2018-05-15 Mellanox Technologies Tlv Ltd. Loopback-free adaptive routing
US10178029B2 (en) 2016-05-11 2019-01-08 Mellanox Technologies Tlv Ltd. Forwarding of adaptive routing notifications
US10200294B2 (en) 2016-12-22 2019-02-05 Mellanox Technologies Tlv Ltd. Adaptive routing based on flow-control credits
US10229230B2 (en) * 2015-01-06 2019-03-12 International Business Machines Corporation Simulating a large network load
CN110324243A (en) * 2018-03-28 2019-10-11 清华大学 The dragonfly network architecture and its broadcast routing method
CN110324249A (en) * 2018-03-28 2019-10-11 清华大学 A kind of dragonfly network architecture and its multicast route method
US10644995B2 (en) 2018-02-14 2020-05-05 Mellanox Technologies Tlv Ltd. Adaptive routing in a box
US10708219B2 (en) 2013-10-06 2020-07-07 Mellanox Technologies, Ltd. Simplified packet routing
US10819621B2 (en) 2016-02-23 2020-10-27 Mellanox Technologies Tlv Ltd. Unicast forwarding of adaptive-routing notifications
EP3758318A1 (en) * 2019-06-28 2020-12-30 INTEL Corporation Shared memory mesh for switching
US11005724B1 (en) 2019-01-06 2021-05-11 Mellanox Technologies, Ltd. Network topology having minimal number of long connections among groups of network elements
US11411911B2 (en) 2020-10-26 2022-08-09 Mellanox Technologies, Ltd. Routing across multiple subnetworks using address mapping
US11575594B2 (en) 2020-09-10 2023-02-07 Mellanox Technologies, Ltd. Deadlock-free rerouting for resolving local link failures using detour paths
DE102022212766A1 (en) 2021-12-01 2023-06-01 Mellanox Technologies Ltd. LARGE-SCALE NETWORK WITH HIGH PORT UTILIZATION
US11765041B1 (en) * 2022-09-15 2023-09-19 Huawei Technologies Co., Ltd. Methods and systems for implementing a high radix network topology
US11870682B2 (en) 2021-06-22 2024-01-09 Mellanox Technologies, Ltd. Deadlock-free local rerouting for handling multiple local link failures in hierarchical network topologies

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10477288B2 (en) 2018-02-05 2019-11-12 David I-Keong Wong Data center interconnect as a switch
US10742513B2 (en) 2018-02-05 2020-08-11 David I-Keong Wong Network interconnect as a switch
US11165686B2 (en) 2018-08-07 2021-11-02 International Business Machines Corporation Switch-connected Dragonfly network
CN113728596A (en) 2019-05-23 2021-11-30 慧与发展有限责任合伙企业 System and method for facilitating efficient management of idempotent operations in a Network Interface Controller (NIC)
CN110784406B (en) * 2019-10-23 2021-07-13 上海理工大学 Dynamic self-adaptive on-chip network threshold routing method based on power perception
CN113076280B (en) 2019-12-18 2024-03-01 华为技术有限公司 Data transmission method and related equipment
CN111597141B (en) * 2020-05-13 2022-02-08 中国人民解放军国防科技大学 Hierarchical exchange structure and deadlock avoidance method for ultrahigh-order interconnection chip

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030088696A1 (en) * 1999-01-11 2003-05-08 Fastforward Networks, Inc. Performing multicast communication in computer networks by using overlay routing
US7139926B1 (en) * 2002-08-30 2006-11-21 Lucent Technologies Inc. Stateful failover protection among routers that provide load sharing using network address translation (LSNAT)
US20080285562A1 (en) * 2007-04-20 2008-11-20 Cray Inc. Flexible routing tables for a high-radix router

Family Cites Families (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4970658A (en) 1989-02-16 1990-11-13 Tesseract Corporation Knowledge engineering tool
US5079738A (en) 1989-09-29 1992-01-07 Rockwell International Corporation Processor interconnect network for printing press system forming a star network
US5249283A (en) 1990-12-24 1993-09-28 Ncr Corporation Cache coherency method and apparatus for a multiple path interconnection network
DK0702873T3 (en) 1992-06-15 1998-05-11 British Telecomm Service Platform
US5425029A (en) 1993-09-20 1995-06-13 Motorola, Inc. Fast packet adaptation method for ensuring packet portability across diversified switching type networks
US5864738A (en) 1996-03-13 1999-01-26 Cray Research, Inc. Massively parallel processing system using two data paths: one connecting router circuit to the interconnect network and the other connecting router circuit to I/O controller
JP2998688B2 (en) 1997-04-09 2000-01-11 日本電気株式会社 Disaster recovery system
US6212636B1 (en) 1997-05-01 2001-04-03 Itt Manufacturing Enterprises Method for establishing trust in a computer network via association
JP3553398B2 (en) 1999-01-08 2004-08-11 日本電信電話株式会社 Routing apparatus and routing method
US6766424B1 (en) 1999-02-09 2004-07-20 Hewlett-Packard Development Company, L.P. Computer architecture with dynamic sub-page placement
US6643764B1 (en) 2000-07-20 2003-11-04 Silicon Graphics, Inc. Multiprocessor system utilizing multiple links to improve point to point bandwidth
US6477618B2 (en) 2000-12-28 2002-11-05 Emc Corporation Data storage system cluster architecture
US7035202B2 (en) 2001-03-16 2006-04-25 Juniper Networks, Inc. Network routing using link failure information
US8018860B1 (en) 2003-03-12 2011-09-13 Sprint Communications Company L.P. Network maintenance simulator with path re-route prediction
GB2421158B (en) 2003-10-03 2007-07-11 Avici Systems Inc Rapid alternate paths for network destinations
US7852836B2 (en) 2003-11-19 2010-12-14 Cray Inc. Reduced arbitration routing system and method
US20050177344A1 (en) 2004-02-09 2005-08-11 Newisys, Inc. A Delaware Corporation Histogram performance counters for use in transaction latency analysis
US20050289101A1 (en) 2004-06-25 2005-12-29 Doddaballapur Jayasimha Methods and systems for dynamic partition management of shared-interconnect partitions
US20070198675A1 (en) 2004-10-25 2007-08-23 International Business Machines Corporation Method, system and program product for deploying and allocating an autonomic sensor network ecosystem
JP2006185348A (en) 2004-12-28 2006-07-13 Fujitsu Ltd Multiprocessor system and method for operating lock flag
US8260922B1 (en) 2005-09-16 2012-09-04 Cisco Technology, Inc. Technique for using OER with an ECT solution for multi-homed sites
US7675857B1 (en) 2006-05-03 2010-03-09 Google Inc. Method and apparatus to avoid network congestion
WO2008011712A1 (en) 2006-07-28 2008-01-31 Michael Tin Yau Chan Wide-area wireless network topology
US8285789B2 (en) 2007-10-05 2012-10-09 Intel Corporation Flattened butterfly processor interconnect network
US20100049942A1 (en) 2008-08-20 2010-02-25 John Kim Dragonfly processor interconnect network
US9100269B2 (en) 2008-10-28 2015-08-04 Rpx Clearinghouse Llc Provisioned provider link state bridging (PLSB) with routed back-up
US8301654B2 (en) 2009-02-24 2012-10-30 Hitachi, Ltd. Geographical distributed storage system based on hierarchical peer to peer architecture
US8576715B2 (en) 2009-10-26 2013-11-05 Mellanox Technologies Ltd. High-performance adaptive routing
US8639885B2 (en) 2009-12-21 2014-01-28 Oracle America, Inc. Reducing implementation costs of communicating cache invalidation information in a multicore processor
US20110191088A1 (en) 2010-02-01 2011-08-04 Yar-Sun Hsu Object-oriented network-on-chip modeling
US8489718B1 (en) 2010-05-19 2013-07-16 Amazon Technologies, Inc. Torroidal backbone connections for network deployment
US20120059938A1 (en) 2010-06-28 2012-03-08 Cray Inc. Dimension-ordered application placement in a multiprocessor computer
US8495194B1 (en) 2010-06-29 2013-07-23 Amazon Technologies, Inc. Connecting network deployment units
US20120020349A1 (en) 2010-07-21 2012-01-26 GraphStream Incorporated Architecture for a robust computing system
US8427980B2 (en) 2010-07-21 2013-04-23 Hewlett-Packard Development Company, L. P. Methods and apparatus to determine and implement multidimensional network topologies
US8837517B2 (en) 2010-09-22 2014-09-16 Amazon Technologies, Inc. Transpose boxes for network interconnection
US8621111B2 (en) 2010-09-22 2013-12-31 Amazon Technologies, Inc. Transpose box based network scaling
JP5913912B2 (en) 2010-11-05 2016-04-27 インテル コーポレイション Innovative Adaptive Routing in Dragonfly Processor Interconnect Network
JP5860670B2 (en) 2010-11-05 2016-02-16 インテル コーポレイション Table-driven routing in a Dragonfly processor interconnect network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030088696A1 (en) * 1999-01-11 2003-05-08 Fastforward Networks, Inc. Performing multicast communication in computer networks by using overlay routing
US7139926B1 (en) * 2002-08-30 2006-11-21 Lucent Technologies Inc. Stateful failover protection among routers that provide load sharing using network address translation (LSNAT)
US20080285562A1 (en) * 2007-04-20 2008-11-20 Cray Inc. Flexible routing tables for a high-radix router

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Flattened butterfly: a cost-efficient topology for high-radix networks ISCA '07 Proceedings of the 34th annual international symposium on Computer architecture Pages 126-137 ACM New York, NY, USA ©2007 table of contents ISBN: 978-1-59593-706-3 *
Kim et al.; Flattened Butterfly : A Cost-Efficient Topology for High-Radix Networks; 6-2007; Retrieved from the Internet ; pp. 1-12 as printed. *

Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9614786B2 (en) 2008-08-20 2017-04-04 Intel Corporation Dragonfly processor interconnect network
US10153985B2 (en) 2008-08-20 2018-12-11 Intel Corporation Dragonfly processor interconnect network
US10681136B2 (en) 2009-02-19 2020-06-09 Micron Technology, Inc. Memory network methods, apparatus, and systems
US20140032701A1 (en) * 2009-02-19 2014-01-30 Micron Technology, Inc. Memory network methods, apparatus, and systems
US8489718B1 (en) * 2010-05-19 2013-07-16 Amazon Technologies, Inc. Torroidal backbone connections for network deployment
US20120144064A1 (en) * 2010-11-05 2012-06-07 Cray Inc. Progressive adaptive routing in a dragonfly processor interconnect network
US20120144065A1 (en) * 2010-11-05 2012-06-07 Cray Inc. Table-driven routing in a dragonfly processor interconnect network
US20150208145A1 (en) * 2010-11-05 2015-07-23 Intel Corporation Progressive adaptive routing in a dragonfly processor interconnect network
US10469380B2 (en) 2010-11-05 2019-11-05 Intel Corporation Table-driven routing in a dragonfly processor interconnect network
JP2012105265A (en) * 2010-11-05 2012-05-31 Cray Inc Table-driven routing in dragonfly processor interconnect network
EP3128438A3 (en) * 2010-11-05 2017-02-15 Intel Corporation Table-driven routing in a dragonfly processor interconnect network
EP2461254A1 (en) * 2010-11-05 2012-06-06 Cray Inc. Table-driven routing in a dragonfly processor interconnect network
JP2015080274A (en) * 2010-11-05 2015-04-23 インテル コーポレイション Table-driven routing in dragonfly processor interconnect network
US9282037B2 (en) * 2010-11-05 2016-03-08 Intel Corporation Table-driven routing in a dragonfly processor interconnect network
US9137143B2 (en) * 2010-11-05 2015-09-15 Intel Corporation Progressive adaptive routing in a dragonfly processor interconnect network
JP2015165716A (en) * 2010-11-05 2015-09-17 インテル コーポレイション Progressive adaptive routing in dragonfly processor interconnect network
EP2549388A1 (en) * 2011-04-01 2013-01-23 Huawei Technologies Co., Ltd. Computer system
EP2549388A4 (en) * 2011-04-01 2013-05-01 Huawei Tech Co Ltd Computer system
US10757022B2 (en) 2012-11-19 2020-08-25 Hewlett Packard Enterprise Development Lp Increasingly minimal bias routing
US9577918B2 (en) * 2012-11-19 2017-02-21 Cray Inc. Increasingly minimal bias routing
US20140140341A1 (en) * 2012-11-19 2014-05-22 Cray Inc. Increasingly minimal bias routing
CN103973564A (en) * 2013-01-31 2014-08-06 清华大学 Interconnection network system and self-adaptation routing method
US10042683B2 (en) 2013-02-22 2018-08-07 International Business Machines Corporation All-to-all message exchange in parallel computing systems
US20140245324A1 (en) * 2013-02-22 2014-08-28 International Business Machines Corporation All-to-all message exchange in parallel computing systems
US9329914B2 (en) * 2013-02-22 2016-05-03 International Business Machines Corporation All-to-all message exchange in parallel computing systems
US9740542B2 (en) 2013-02-22 2017-08-22 International Business Machines Corporation All-to-all message exchange in parallel computing systems
US10708219B2 (en) 2013-10-06 2020-07-07 Mellanox Technologies, Ltd. Simplified packet routing
US9729473B2 (en) 2014-06-23 2017-08-08 Mellanox Technologies, Ltd. Network high availability using temporary re-routing
US9806994B2 (en) 2014-06-24 2017-10-31 Mellanox Technologies, Ltd. Routing via multiple paths with efficient traffic distribution
CN104079490A (en) * 2014-06-27 2014-10-01 清华大学 Multi-layer dragonfly interconnecting network and self-adaptive routing method
WO2015196461A1 (en) * 2014-06-27 2015-12-30 Tsinghua University Deadlock-free adaptive routing of interconnect network
US9626322B2 (en) 2014-07-08 2017-04-18 International Business Machines Corporation Interconnection network topology for large scale high performance computing (HPC) systems
US9519605B2 (en) 2014-07-08 2016-12-13 International Business Machines Corporation Interconnection network topology for large scale high performance computing (HPC) systems
US20160028613A1 (en) * 2014-07-22 2016-01-28 Mellanox Technologies Ltd. Dragonfly Plus: Communication Over Bipartite Node Groups Connected by a Mesh Network
US9699067B2 (en) * 2014-07-22 2017-07-04 Mellanox Technologies, Ltd. Dragonfly plus: communication over bipartite node groups connected by a mesh network
US10002099B2 (en) * 2014-11-13 2018-06-19 Cavium, Inc. Arbitrated access to resources among multiple devices
US20160140071A1 (en) * 2014-11-13 2016-05-19 Cavium, Inc. Arbitrated Access To Resources Among Multiple Devices
US10229230B2 (en) * 2015-01-06 2019-03-12 International Business Machines Corporation Simulating a large network load
US9894005B2 (en) 2015-03-31 2018-02-13 Mellanox Technologies, Ltd. Adaptive routing controlled by source node
US9973435B2 (en) 2015-12-16 2018-05-15 Mellanox Technologies Tlv Ltd. Loopback-free adaptive routing
US10819621B2 (en) 2016-02-23 2020-10-27 Mellanox Technologies Tlv Ltd. Unicast forwarding of adaptive-routing notifications
US10178029B2 (en) 2016-05-11 2019-01-08 Mellanox Technologies Tlv Ltd. Forwarding of adaptive routing notifications
US10200294B2 (en) 2016-12-22 2019-02-05 Mellanox Technologies Tlv Ltd. Adaptive routing based on flow-control credits
US10644995B2 (en) 2018-02-14 2020-05-05 Mellanox Technologies Tlv Ltd. Adaptive routing in a box
CN110324249A (en) * 2018-03-28 2019-10-11 清华大学 A kind of dragonfly network architecture and its multicast route method
CN110324243A (en) * 2018-03-28 2019-10-11 清华大学 The dragonfly network architecture and its broadcast routing method
US11005724B1 (en) 2019-01-06 2021-05-11 Mellanox Technologies, Ltd. Network topology having minimal number of long connections among groups of network elements
EP3758318A1 (en) * 2019-06-28 2020-12-30 INTEL Corporation Shared memory mesh for switching
US11641326B2 (en) 2019-06-28 2023-05-02 Intel Corporation Shared memory mesh for switching
US11575594B2 (en) 2020-09-10 2023-02-07 Mellanox Technologies, Ltd. Deadlock-free rerouting for resolving local link failures using detour paths
US11411911B2 (en) 2020-10-26 2022-08-09 Mellanox Technologies, Ltd. Routing across multiple subnetworks using address mapping
US11870682B2 (en) 2021-06-22 2024-01-09 Mellanox Technologies, Ltd. Deadlock-free local rerouting for handling multiple local link failures in hierarchical network topologies
DE102022212766A1 (en) 2021-12-01 2023-06-01 Mellanox Technologies Ltd. LARGE-SCALE NETWORK WITH HIGH PORT UTILIZATION
US11765103B2 (en) 2021-12-01 2023-09-19 Mellanox Technologies, Ltd. Large-scale network with high port utilization
US11765041B1 (en) * 2022-09-15 2023-09-19 Huawei Technologies Co., Ltd. Methods and systems for implementing a high radix network topology

Also Published As

Publication number Publication date
US10153985B2 (en) 2018-12-11
US20150186318A1 (en) 2015-07-02
US20170353401A1 (en) 2017-12-07
US9614786B2 (en) 2017-04-04

Similar Documents

Publication Publication Date Title
US10153985B2 (en) Dragonfly processor interconnect network
US10469380B2 (en) Table-driven routing in a dragonfly processor interconnect network
US9137143B2 (en) Progressive adaptive routing in a dragonfly processor interconnect network
Kim et al. Technology-driven, highly-scalable dragonfly topology
Mellette et al. Rotornet: A scalable, low-complexity, optical datacenter network
Jiang et al. Indirect adaptive routing on large scale interconnection networks
US7133399B1 (en) System and method for router central arbitration
JP2016503594A (en) Non-uniform channel capacity in the interconnect
Guay et al. vFtree-A fat-tree routing algorithm using virtual lanes to alleviate congestion
Sancho et al. Effective methodology for deadlock-free minimal routing in InfiniBand networks
Ferraz et al. A two-phase multipathing scheme based on genetic algorithm for data center networking
CN112825512A (en) Load balancing method and device
EP4109854A1 (en) Telemetry-based load-balanced fine-grained adaptive routing in high-performance system interconnect
Zhang et al. Congestion-aware adaptive forwarding in datacenter networks
Bwalya et al. Performance evaluation of buffer size for access networks in first generation optical networks
EP4109852A1 (en) Load-balanced fine-grained adaptive routing in high-performance system interconnect
EP4109853A1 (en) Filter with engineered damping for load-balanced fine-grained adaptive routing in high-performance system interconnect
Chen et al. Alleviating flow interference in data center networks through fine-grained switch queue management
Riadi et al. An opportunistic burst cloning scheme for optical burst switching over star networks
Pan et al. CQPPS: A scalable multi‐path switch fabric without back pressure
손성민 Adaptive Load Balancing Mechanism for Multipath Transmission in Data Center
Lei et al. An Efficient Label Routing on High-Radix Interconnection Networks
Guay et al. using Virtual Lanes to Alleviate Congestion
Fourneau et al. Convergence Routing under Bursty Traffic: Instability and an AIMD Controller
Row Large Valency Serial Wormhole Routing Networks as a Scalable Multimedia Switching Infrastructure

Legal Events

Date Code Title Description
AS Assignment

Owner name: CRAY INC.,WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ABTS, DENNIS C.;SCOTT, STEVEN L.;SIGNING DATES FROM 20081210 TO 20081212;REEL/FRAME:022055/0100

Owner name: BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIM, JOHN;DALLY, WILLIAM J.;SIGNING DATES FROM 20081120 TO 20081204;REEL/FRAME:022055/0136

AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CRAY INC.;REEL/FRAME:028545/0797

Effective date: 20120502

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE