US20080273539A1 - System for performing a packet header lookup - Google Patents
- Publication number
- US20080273539A1 (application US 12/165,623)
- Authority
- US
- United States
- Prior art keywords
- lookup
- packet
- header
- resultant
- cache
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/3009—Header conversion, routing tables or routing tags
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/74—Address processing for routing
- H04L45/742—Route cache; Operation thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/25—Routing or path finding in a switch fabric
Definitions
- the present invention relates to computer networks, and more particularly to a method and system for improving the latency due to packet header lookups.
- FIG. 1 depicts a conventional system 10 for performing a packet header lookup.
- a lookup corresponding to the packet header is performed in order to determine how the packet is to be treated. For example, based upon the information contained in the packet header, typically in the form of an IP five-tuple, the packet may be routed or processed differently.
- the conventional system 10 includes a network processor 12 , a main store 14 , a network adapter 16 , and an adapter memory 18 .
- the packet is received in the network adapter 16 and provided to the adapter memory 18 .
- the packet may then be copied to a temporary buffer (not explicitly shown) in the main store 14 .
- the processor 12 parses the packet to obtain the header. Based upon the header, the processor 12 performs a lookup. Typically, a hash of portions of the five-tuple in the header is used as a key to perform the lookup.
- the processor 12 processes the data using the resultant and copies the packet data to the destination buffer (not shown) in the main store 14 .
- the packet is completely received and stored in the main store 14 before the processor 12 commences operations on the packet related to performing the header lookup. Once the header is obtained, the main store 14 is searched in order to obtain the appropriate data. Each of these operations consumes time.
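The key used in the conventional lookup above is a hash of portions of the IP five-tuple. The following sketch illustrates that step; the packing layout, the CRC-32 hash, and the table size are assumptions for illustration, not details taken from the patent.

```python
import struct
import zlib

def five_tuple_key(protocol, src_ip, dst_ip, src_port, dst_port):
    """Pack an IP five-tuple and hash it into a lookup-table index.

    The field order, hash function, and 1K table size are illustrative
    assumptions; real hardware would use its own fixed layout.
    """
    packed = struct.pack("!B4s4sHH",
                         protocol,
                         bytes(map(int, src_ip.split("."))),
                         bytes(map(int, dst_ip.split("."))),
                         src_port,
                         dst_port)
    # mask the CRC down to an index into a hypothetical 1K-entry table
    return zlib.crc32(packed) & 0x3FF

key = five_tuple_key(6, "10.0.0.1", "10.0.0.2", 49152, 80)
```

The same five-tuple always yields the same index, so repeated packets of one connection map to one table entry.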
- the present invention provides a method and system for performing a lookup for a packet in a computer network.
- the packet includes a header.
- the method and system include providing a parser, providing a lookup engine coupled with the parser, and providing a processor coupled with the lookup engine.
- the parser is for parsing the packet for the header prior to receipt of the packet being completed.
- the lookup engine performs a lookup for the header and returns a resultant.
- the lookup includes performing a local lookup of a cache that includes resultants of previous lookups.
- the processor processes the resultant.
- the present invention may improve the efficiency of the header lookup, resulting in a lower latency.
- FIG. 1 is a diagram of a conventional system for performing a packet header lookup.
- FIG. 2 is a block diagram of a server system in accordance with the present invention.
- FIG. 3 is a simple block diagram of one embodiment of the host Ethernet adapter in accordance with the present invention.
- FIG. 4 is a block diagram of one embodiment of the host Ethernet adapter in accordance with the present invention with a more detailed view of the MAC and Serdes Layer.
- FIG. 5 shows the components and dataflow for one embodiment of RxNet in accordance with the present invention.
- FIG. 6 shows the components and dataflow for one embodiment of TxEnet in accordance with the present invention.
- FIG. 7 is a block diagram of one embodiment of the host Ethernet adapter in accordance with the present invention with a more detailed view of the Packet Acceleration and Virtualization Layer.
- FIG. 8 shows one embodiment of the RxAccel unit in accordance with the present invention.
- FIG. 9 shows one embodiment of the TxAccel unit in accordance with the present invention.
- FIG. 10 is a block diagram of one embodiment of the host Ethernet adapter in accordance with the present invention with a more detailed view of the Host Interface Layer.
- FIG. 11 is a block diagram of one embodiment of the host Ethernet adapter in accordance with the present invention with a more detailed view of the components used in packet header lookup.
- FIG. 12 is a block diagram of one embodiment of the lookup engine in accordance with the present invention.
- FIG. 13 is a flow chart depicting one embodiment of a method for performing a packet header lookup in accordance with the present invention.
- FIG. 14 is a flow chart depicting another embodiment of a method for performing a packet header lookup in accordance with the present invention.
- the present invention relates to computer networks.
- the following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements.
- Various modifications to the preferred embodiments and the generic principles and features described herein will be readily apparent to those skilled in the art.
- the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features described herein.
- the present invention provides a method and system for performing a lookup for a packet in a computer network.
- the packet includes a header.
- the method and system include providing a parser, providing a lookup engine coupled with the parser, and providing a processor coupled with the lookup engine.
- the parser is for parsing the packet for the header prior to receipt of the packet being completed.
- the lookup engine performs a lookup for the header and returns a resultant.
- the lookup includes performing a local lookup of a cache that includes resultants of previous lookups.
- the processor processes the resultant.
- FIG. 2 is a block diagram of a server system 100 in accordance with the present invention.
- the server system 100 includes a processor 104 which is coupled between a system memory 102 and an interface adapter chip 106 .
- the interface adapter chip 106 includes an interface 108 to the private (Gx) bus of the processor 104 and a Host Ethernet Adapter (HEA) 110 .
- the HEA 110 receives and transmits signals from and to the processor 104 .
- the HEA 110 is an integrated Ethernet adapter.
- a set of accelerator features are provided such that a TCP/IP stack within the servers uses those features when and as required.
- the interface between the processor 104 and the interface adapter chip 106 has been streamlined by bypassing the PCI bus and providing interface techniques that enable demultiplexing, multiqueueing and packet header separation. In so doing, an Ethernet adapter is provided that allows for improved functionality with high-speed systems while allowing for compatibility with legacy server environments. Some of the key features of this improved functionality are described hereinbelow.
- the HEA 110 supports advanced acceleration features.
- One key observation is that the current acceleration functions do a good job on the transmit side (e.g. transmitting packets from the processor) but not a very good job on the receive side (e.g. receiving packets via the adapter).
- the HEA 110 addresses this gap by introducing new features such as Packet Demultiplexing and Multiqueueing, and Header separation.
- All of the HEA 110 new features are optional; it is up to the TCP/IP stack to take advantage of them if and when required.
- a vanilla TCP/IP stack can use the HEA 110 without using the per-connection queueing feature and yet take advantage of the other features of the HEA such as throughput, low latency and virtualization support.
- Multiqueueing and Demultiplexing is the key feature to support functions such as virtualization, per connection queueing, and OS bypass.
- HEA demultiplexing uses the concept of Queue Pairs, Completion Queues and Event Queues. Enhancements have been added to better address OS protocol stack requirements and short-packet latency reduction.
- HEA can demultiplex incoming packets based on:
- Destination MAC address typically one MAC address and one default queue per partition
- Connection identifier for established connections (Protocol, Source IP address, Destination IP address, Source port, Destination port).
- Destination port and optionally destination IP address for TCP connection setup packet (SYN).
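The demultiplexing order in the list above can be sketched as follows: an established-connection match is tried first, then a SYN lookup by destination port (with or without destination IP), and finally the default queue for the destination MAC address. The table shapes and field names are illustrative assumptions, not the patent's hardware structures.

```python
def demultiplex(pkt, conn_table, syn_table, default_queues):
    """Resolve a queue number for an incoming packet.

    conn_table maps established 5-tuples to queue numbers, syn_table maps
    (dst_port, dst_ip) or (dst_port, None) for connection-setup packets,
    and default_queues maps destination MAC addresses to default queues.
    """
    conn_key = (pkt["proto"], pkt["src_ip"], pkt["dst_ip"],
                pkt["src_port"], pkt["dst_port"])
    if conn_key in conn_table:          # established connection
        return conn_table[conn_key]
    if pkt.get("syn"):                  # TCP connection-setup (SYN) packet
        for syn_key in ((pkt["dst_port"], pkt["dst_ip"]),
                        (pkt["dst_port"], None)):
            if syn_key in syn_table:
                return syn_table[syn_key]
    return default_queues[pkt["dst_mac"]]   # default queue per MAC address
```

A stack that never registers per-connection entries simply sees every packet fall through to its default queue.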
- HEA is optionally capable of separating the TCP/IP header from the data payload. This feature allows the header to be directed to the protocol stack for processing without polluting the received buffers posted by the applications. This feature is a component required for enabling zero-copy operations.
- the queue pair concept is extended to support more than one receive queue per pair. This enables the stack to better manage its buffer pool memory. For example, one queue can be assigned to small packets, one to medium packets and one to large packets. The HEA will select the ad hoc queue according to the received packet size.
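The size-based sub-queue selection described above might look like the following sketch; the two threshold values are invented for illustration and are not specified by the patent.

```python
def select_receive_queue(packet_len, thresholds=(128, 1024)):
    """Pick the small/medium/large receive sub-queue for a packet.

    The byte thresholds are illustrative assumptions; the HEA would apply
    whatever boundaries the stack configured for its buffer pools.
    """
    small, medium = thresholds
    if packet_len <= small:
        return "small"
    if packet_len <= medium:
        return "medium"
    return "large"
```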
- a descriptor may contain immediate data, in which case no indirection, i.e., no additional DMA from system memory, is required to get the data to be sent.
- low latency queues do not supply buffers but rather receive immediate packet data.
- the HEA writes to the receive queue rather than reading. Short packets take advantage of this feature leading to a dramatic reduction of DMA operations: one single DMA write per packet as opposed to one DMA read and one DMA write per packet.
- Receive low latency queues are also used to support the packet header separation: the header is written in the low latency queue while the payload is DMAed to a buffer indicated in the ad-hoc receive queues.
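The header separation just described can be modeled as below: the header lands in the low-latency queue as immediate data while the payload is written into a posted receive buffer. The function and container names are assumptions for illustration.

```python
def separate(packet, header_len, low_latency_queue, receive_buffers):
    """Split a packet: header to the low-latency queue, payload to a buffer.

    Models the dataflow only; in hardware both writes would be DMA
    operations, and buffer selection would follow the ad-hoc queue rules.
    """
    header, payload = packet[:header_len], packet[header_len:]
    low_latency_queue.append(header)   # immediate data: no buffer fetch
    buf = receive_buffers.pop(0)       # a buffer posted by the application
    buf[:len(payload)] = payload       # payload written separately
    return header, payload
```

Because the header never touches the posted buffers, the application's buffers receive only payload bytes, which is the property zero-copy operation relies on.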
- Demultiplexing and Multiqueueing, Address Translation and Packet Header Separation are the basic building blocks to virtualization and provide low latency in operation. Furthermore, it should be noted that these features can also be used to improve traditional OS protocol stack performance, for example, per-connection queueing allows for the removal of code and more importantly the memory accesses—and associated stalls/cache pollution—consumed to locate the TCP connection control block (TCB) in the system memory.
- FIG. 3 is a simple block diagram of the HEA 110 in accordance with the present invention.
- the HEA 110 has a three layer architecture.
- the first layer comprises a Media Access Controller (MAC) and Serialization/Deserialization (Serdes) Layer 202 which provides a plurality of interfaces from and to other devices on the Ethernet network.
- the same chip I/Os are used to provide a plurality of interfaces.
- the same chip I/Os are utilized to provide either a 10 Gigabit interface or a 1 Gigabit interface.
- the second layer comprises a Packet Acceleration and Virtualization Layer 204 .
- the layer 204 provides for receiving packets and demultiplexing the flow of packets for enabling virtualization.
- the layer 204 enables virtualization or partitioning of the operating system of a server based upon the packets.
- the layer 204 also provides packet header separation to enable zero copy operation. Also since layer 204 interacts directly with the private bus (Gx) through the Host Interface Layer 206 , a low latency, high bandwidth connection is provided.
- the third layer comprises the Host Interface Layer 206 .
- the Host Interface Layer 206 provides the interface to the Gx or private bus of the processor.
- the layer 206 provides for multiple receive sub-queues per Queue Pair (QP) to enable effective buffer management for a TCP stack.
- the host layer 206 provides the context management for a given flow of data packets.
- FIG. 4 is a block diagram of the HEA 110 with a more detailed view of the MAC and Serdes Layer 202 .
- the MACs 302 , 304 a and 304 b include analog coding units 308 a , 308 b and 308 c for aligning and coding the packets received.
- the MACs 302 , 304 a and 304 b are coupled to a High Speed Serializer/deserialization (HSS) 306 .
- the HSS 306 is capable of receiving data from one 10 Gigabit source or four 1 Gigabit sources.
- This section shows the high level structure and flow through the receive Ethernet function within layer 202 .
- the Rx accelerator unit 400 ( FIG. 5 ) as will be explained in more detail hereinafter is part of Packet Acceleration and Virtualization layer 204 .
- FIG. 5 shows the components and dataflow for one embodiment of RxNet.
- Data arrives on the XAUI interface and is processed by the HSS 306 , analog coding units 308 a and 308 b and the MAC, which assembles and aligns the packet data, in this embodiment on a 64-bit (10 G) or 32-bit (1 G) parallel data bus.
- Control signals are also generated which indicate start and end of frame and other packet information.
- the data and control pass through the RxAccel unit 400 which performs parsing, filtering, checksum and lookup functions in preparation for processing by the Receive Packet Processor (RPP) of the layer 206 ( FIG. 2 ).
- the clock is converted to a 4.6 ns clock and the data width is converted to 128 bits as the data enters the RxAccel unit 400 .
- As data flows through the RxAccel unit 400 to the Virtual Lane Input Manager (VLIM) data buffers, the RxAccel unit 400 snoops on the control and data and starts its processing. The data flow is delayed in the RxAccel unit 400 such that the results of the RxAccel unit 400 are synchronized with the end of the packet. At this time, the results of the RxAccel unit 400 are passed to the VLIM command queue along with some original control information from the MAC. This control information is stored along with the data in the VLIM.
- the RxAccel unit 400 may need to go to main memory through the GX bus interface.
- the GX bus operates at 4.6 ns.
- the VLIM can asynchronously read the queue pair resolution information from the RxAccel unit 400 .
- Tx accelerator unit 500 ( FIG. 6 ) as will be explained in more detail hereinafter is part of Packet Acceleration and Virtualization layer 204 .
- FIG. 6 shows the components and dataflow for one TxEnet.
- Packet data and control arrives from the ENop component of the HEA 110 .
- the Tx Accelerator (TxAccel) unit 500 interprets the control information and modifies fields in the Packet Header. It makes the wrap versus port decision based on control information or information found in the Packet Header. It also generates the appropriate controls for the TxMAC 302 and 304 .
- the data flow is delayed in the TxAccel unit 500 such that the TxAccel unit 500 can update Packet Headers before flowing to the MAC 302 and 304 .
- the data width is converted from 128 bits to 64 bits (10 G) or 32 bits (1 G).
- the data and control pass through a clock conversion function in the TxAccel unit 500 in order to enter the differing clock domain of the MAC 302 .
- the MAC 302 and 304 , analog converters 508 a and 508 b and HSS 306 format packets for the Ethernet XAUI interface.
- FIG. 7 is a block diagram of the HEA 110 with a more detailed view of the Packet Acceleration and Virtualization Layer 204 .
- the HEA Layer 204 comprises a receive (RxAccel) acceleration unit 400 and a transmit acceleration (TxAccel) unit 500 .
- the RxAccel unit 400 comprises a receive backbone (RBB) 402 , a parser filter checksum unit (PFC) 404 , a lookup engine (LUE) 406 and a MIB database 408 .
- the TxAccel unit 500 comprises the transmit backbone 502 , lookup checks 504 and an MIB engine 506 .
- the operation of the Rx acceleration unit 400 and the Tx acceleration unit 500 will be described in more detail hereinbelow.
- FIG. 8 shows that the RxAccel unit 400 is composed of the Receive Backbone (RBB) 402 , the Parser, Filter and Checksum Unit (PFC) 404 , the Local Lookup Unit (LLU) 406 , the Remote Lookup Unit (RLU) 408 and an MIB database 410 .
- the RBB 402 manages the flow of data and is responsible for the clock and data bus width conversion functions. Control and Data received from the RxMAC is used by the PFC 404 to perform acceleration functions and to make a discard decision.
- the PFC 404 passes control and data extracted from the frame, including the 5-tuple key, to the LLU 406 in order to resolve a Queue Pair number (QPN) for the RBB 402 .
- the LLU 406 either finds the QPN immediately or allocates a cache entry to reserve the slot. If the current key is not in the cache, the LLU 406 searches for the key in main store.
- the PFC 404 interfaces to the MIB database 410 to store packet statistics.
- This section describes the high level structure and flow through the Transmit Acceleration unit 500 (TxAccel).
- FIG. 9 shows that the TxAccel unit 500 is composed of two Transmit Backbones (XBB) 502 a and 502 b , two Transmit Checksum units (XCS) 504 a and 504 b , two Transmit MIBs 506 a and 506 b , one Wrap Unit (WRP) 508 and one Pause Unit (PAU) logic 510 .
- Data flows through the TxAccel from the ENop and is modified to adjust the IP and TCP checksum fields.
- the XBB 502 a and 502 b manages the flow of data and is responsible for the clock and data bus width conversion functions. Control and Data received from the ENop is used by the XCS 504 a and 504 b to perform checksum functions.
- the XBB 502 transforms the information to the clock domain of the TxAccel.
- the status information is merged with original information obtained from the packet by the XCS 504 and passed to the MIB Counter logic 506 a and 506 b .
- the MIB logic 506 a and 506 b updates the appropriate counters in the MIB array.
- the Wrap Unit (WRP) 508 is responsible for transferring to the receive side the packets that the XCSs 504 a and 504 b have decided to wrap.
- the Pause Unit (PAU) 510 orders the MAC to transmit pause frames based on the receive buffer's occupancy.
- FIG. 10 is a block diagram of the HEA 110 with a more detailed view of the Host Interface Layer 206 .
- the Host Interface Layer 206 includes input and output buffers 602 and 604 for receiving packets from the layer 204 and providing packets to layer 204 .
- the layer 206 includes a Receive Packet Processor (RPP) 606 for appropriately processing the packets in the input buffer.
- the context management mechanism 908 provides multiple sub-queues per queue pair to enable effective buffer management for the TCP stack.
- the Rx unit 400 of layer 204 in conjunction with components of the host interface layer 206 provides the packets to the appropriate portion of the processor. Accordingly, the received packets must be demultiplexed to ensure that they flow to the appropriate portion of the server.
- Before the Receive Packet Processor (RPP) 606 can work on a received packet, the queue pair context must be retrieved.
- the QP connection manager does this using a QP number. Since QP numbers are not transported in TCP/IP packets, the QP number must be determined by other means. There are two general classes of QPs: a per-connection QP and a default QP.
- Per-connection QPs are intended to be used for long-lived connections where fragmentation of the IP packets is not expected and for which low latency is expected. They require that the application utilize a user-space sockets library which supports the user-space queueing mechanism provided by the HEA 110 . The logical port must first be found using the destination MAC address. Three types of lookup exist for per-connection QP:
- New TCP connections for a particular destination IP address and destination TCP port are performed based on the TCP/IP (DA, DP, Logical port) if the packet was a TCP SYN packet.
- New TCP connections for a particular destination TCP port only (disregarding DA).
- a lookup is performed based on the TCP/IP (DP, Logical port) if the packet was a TCP SYN packet.
- Default QP are used if no per-connection QP can be found for the packet or if per-connection lookup is not enabled for a MAC address or if the packet is a recirculated multicast/broadcast packet.
- Generally default QP are handled by the kernel networking stack in the OS or hypervisor. These types of default QP exist in the HEA 110 :
- a logical port corresponds to a logical Ethernet interface with its own default queue. Each logical port has a separate port on the logical switch. There could be one or more logical ports belonging to an LPAR.
- a lookup is performed based on MAC address.
- a direct index (logical port number) to the default OS queue is provided with recirculated (wrapped) multicast/broadcast packets.
- a default UC QPN may be used.
- This mechanism allows for flexibility between the two extremes of queueing per connection and queueing per logical port (OS queue). Both models can operate together with some connections having their own queueing and some connections being queued with the default logical port queues.
- Connection lookup is performed by the RxAccel unit 400 .
- One such unit exists for each port group.
- each component performs a portion of the process.
- the PFC extracts the needed fields from the packet header and determines the logical port number based on the destination MAC address.
- the Local Lookup Unit (LLU) 406 and Remote Lookup Unit (RLU) 408 are then responsible for resolving the QP number.
- the LLU 406 attempts to find a QPN using local resources only (cache and registers).
- the purpose of the LLU 406 is to attempt to determine the QP number associated with the received packet.
- the QP number is required by the VLIM and the RPP 606 . The LLU 406 performs this task locally if possible (i.e., without going to system memory).
- the QP number can be found locally in one of several ways:
- the LLU 406 communicates with the RBB 402 providing the QP number and/or the queue index to use for temporary queueing. If no eligible entries are available in the cache, the LLU 406 indicates to the RBB 402 that the search is busy. The packet must be dropped in this case.
- the LLU 406 provides the QPN to the VLIM/unloader when a queue index resolution is requested and has been resolved.
- the RLU attempts to find a QPN using system memory tables.
- the LLU 406 utilizes a local 64-entry cache in order to find the QPN for TCP/UDP packets. If the entry is found in the cache, the RLU 408 does not need to be invoked. If the entry is not found in the cache, a preliminary check is made in the negative cache to see if the entry might be in the connection table.
- the negative cache is useful for eliminating unnecessary accesses to main memory when there are a small number of configured queues (e.g., when using mostly OS queues). Note that since the negative cache is small, it is only useful when the number of entries in the table is significantly less than 1K; as the number of entries approaches and exceeds 1K, the negative cache becomes all 1s and is no longer useful.
- the purpose of the negative cache is to avoid penalizing OS queries when there are a small number of QP. A problem may arise when there are a small number of active QP but a large number of configured QP; the OS queues will suffer in this case.
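The negative cache idea above can be sketched as a small bit vector over hashed keys: a clear bit proves the key is absent from the connection table, so no memory access is needed, while the vector saturating to all 1s is exactly the failure mode described. The class shape and vector size are illustrative assumptions.

```python
class NegativeCache:
    """A tiny membership filter: False means 'definitely not in the table'.

    Size and hashing are assumptions for illustration; the hardware
    structure would be fixed-width and updated on connection setup.
    """
    def __init__(self, bits=256):
        self.bits = bits
        self.vector = [0] * bits

    def mark_present(self, key):
        # called when a connection is added to the main-store table
        self.vector[hash(key) % self.bits] = 1

    def might_contain(self, key):
        # False => the key cannot be in the table; skip the remote lookup
        return self.vector[hash(key) % self.bits] == 1
```

With few configured queues most bits stay 0 and most misses are answered locally; as entries grow, ever more bits read 1 and the filter stops filtering.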
- the RLU 408 uses a hash of the 6-tuple (including the logical port number) to fetch a 128-byte Direct Table (DT) entry.
- This DT entry contains up to eight 6-tuple patterns and associated QPNs. If a match is found, no further action is required. If there are more than eight patterns associated with this hash value, then a Collision Overflow Table (COT) entry may need to be fetched for additional patterns. If a match is found, the LLU 406 cache is updated with the found QPN.
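The DT/COT search just described might be modeled as below; the dictionaries stand in for the 128-byte table entries, and the bucket count is an assumption for illustration.

```python
def remote_lookup(six_tuple, direct_table, collision_table, num_buckets=1024):
    """Search the Direct Table, then the Collision Overflow Table.

    Each direct_table bucket models one DT entry holding up to eight
    (pattern, QPN) pairs; collision_table models the COT overflow chain.
    Returns the QPN, or None if no pattern matches.
    """
    bucket = hash(six_tuple) % num_buckets
    for pattern, qpn in direct_table.get(bucket, []):     # up to 8 patterns
        if pattern == six_tuple:
            return qpn
    for pattern, qpn in collision_table.get(bucket, []):  # overflow patterns
        if pattern == six_tuple:
            return qpn
    return None
```

On a hit, the caller would install the (pattern, QPN) pair in the LLU cache so later packets of the flow resolve locally.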
- the QPN cannot be determined on the fly as the packet is being placed into the input buffers. In fact, the QPN may be determined several packets later. For this reason, the RxAccel unit 400 may provide either a QPN or a queue index to the VLIM for packet queueing. If a QPN is provided, then the VLIM (unloader) may queue the packet directly for work by the RPP. If a queue index is provided, then the VLIM (unloader) must hold the packet to wait for resolution of the QPN. The QPN is always determined by the time the RPP is dispatched.
- FIG. 11 is a block diagram of one embodiment of a portion of a HEA 110 in accordance with the present invention with a more detailed view of the components used in packet header lookup.
- the HEA 110 includes an adapter 600 that includes a receive buffer 602 , a parser 604 , a lookup engine 606 , and a processor 608 , as well as a main store 612 and a processor 610 .
- the packet is received and provided, preferably in parallel, to the parser 604 and the receive buffer 602 .
- the receive buffer 602 stores the packet, while the parser 604 parses the packet.
- the parser 604 parses the packet to obtain the header while the packet is still being received in the adapter 600 . Stated differently, the parser 604 starts parsing the packet to obtain the header before the packet has been completely received by the HEA 110 .
- the parser 604 provides the header to the lookup engine 606 .
- the lookup engine 606 utilizes the header to perform a lookup.
- the lookup engine 606 determines a hash of the header and uses this hash for the lookup.
- the lookup engine 606 performs a local lookup of a local cache (not shown in FIG. 11 ) and only performs a remote lookup when the resultant cannot be obtained using local resources only.
- the packet header lookup can be performed more efficiently.
- the parser 604 commences parsing the packet before the receipt of the packet is completed.
- the delay in performing the header lookup may be reduced.
- a local lookup is preferably performed. Latencies for packets having headers which include repeat information may be further reduced.
- FIG. 12 is a block diagram of one embodiment of the lookup engine 606 ′ in accordance with the present invention.
- the lookup engine 606 ′ is thus preferably used in the system 600 depicted in FIG. 11 .
- the lookup engine 606 ′ includes a local lookup unit 620 and a remote lookup unit 630 .
- the local lookup unit 620 and the remote lookup unit 630 preferably correspond to the units 406 and 408 described above.
- the local lookup unit 620 includes a cache memory 622 and a counter 624 .
- the cache 622 stores resultants from previous lookups for previously processed packets and the corresponding hashes.
- the remote lookup unit 630 includes a main store read memory 632 and a search pattern storage 634 .
- the local lookup unit 620 utilizes local resources, such as the cache 622 , to perform a lookup corresponding to the header of the packet.
- the remote lookup unit 630 performs a lookup of the main store 612 by storing the appropriate search pattern (e.g. the hash corresponding to the header) in the search pattern storage 634 .
- the resultant returned may be stored in the main store lookup memory 632 .
- the local lookup unit 620 determines whether the resultant of the lookup can be obtained from the cache 622 . If so, the resultant is either already stored in the cache 622 or the cache 622 is waiting for the remote lookup unit 630 to return the resultant from a search of the main store 612 due to a previous packet. If the cache 622 is waiting, the entry in the cache corresponding to the resultant is not yet resolved. If the resultant is already in the cache 622 , the local lookup unit 620 returns the resultant. Thus, the processor 608 may quickly perform any processing desired and the packet data stored in the appropriate destination buffer of the main store 612 . If the entry corresponding to the resultant has not been resolved, then the index of that entry of the cache 622 is returned and the counter 624 incremented to track the packet. Once the entry is resolved, the counter 624 can be decremented and the resultant returned.
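The resolved/pending cache behaviour above, including the counter 624 tracking packets that hit an unresolved entry, can be modeled as a sketch; the class and method names are assumptions, not the patent's structures.

```python
class LookupCache:
    """Model of the cache 622 plus counter 624 described above.

    An entry is either resolved (QPN known) or pending a remote lookup of
    the main store; packets hitting a pending entry are counted until the
    resultant arrives.
    """
    def __init__(self):
        self.entries = {}   # key -> {"qpn": int or None, "pending": count}

    def lookup(self, key):
        entry = self.entries.get(key)
        if entry is None:
            # reserve a slot; a remote lookup of the main store would start
            self.entries[key] = {"qpn": None, "pending": 1}
            return ("miss", key)
        if entry["qpn"] is not None:
            return ("hit", entry["qpn"])    # resultant already cached
        entry["pending"] += 1               # waiting on an unresolved entry
        return ("pending", key)

    def resolve(self, key, qpn):
        """Remote lookup completed: record the QPN, release waiters."""
        entry = self.entries[key]
        entry["qpn"] = qpn
        waiting, entry["pending"] = entry["pending"], 0
        return waiting      # number of packets now eligible for dispatch
```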
- the packet header lookup can be performed more efficiently.
- the parser 604 still commences parsing the packet before the receipt of the packet is completed.
- the delay in performing the header lookup may be reduced.
- the local lookup unit 620 performs a local lookup. If a resultant is in the cache 622 , delays due to a remote lookup may be avoided. Latencies for packets having headers which include repeat information may thus be further reduced.
- FIG. 13 is a flow chart depicting of one embodiment of a method 700 for performing a packet header lookup in accordance with the present invention.
- the method 700 is described in connection with the system 600 depicted in FIGS. 11 and 12 . However, another system might be used to implement the method 700 .
- the packet is parsed to obtain the header while the packet is still being received by the system 600 , via step 702 .
- Step 702 is preferably performed by providing the packet both to the parser 604 and the receive buffer 602 .
- Step 702 thus also includes utilizing the parser 604 to start parsing the packet before the receipt is complete and to obtain the header.
- a lookup is performed using the header, via step 704 .
- Step 704 includes providing the header to the lookup engine 606 / 606 ′.
- the lookup engine 606 / 606 ′ preferably utilizes the data in the header, such as the five-tuple, to provide a hash used to search for the resultant corresponding to the header.
- a local search is performed using the local lookup unit 620 . If the resultant is not available locally, then a remote search of the main store 612 is performed. Thus, the resultant can be obtained, and the packet processed.
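The flow described above, in which a hash derived from the header's five-tuple is used to search the local cache first and the main store 612 on a miss, can be sketched as follows. This is an illustrative Python sketch; the hash scheme and the table representation are assumptions, not the disclosed implementation.

```python
# Illustrative sketch: a hash of the five-tuple serves as the search key;
# the local cache is consulted first, and the main store only on a miss.

import hashlib

def five_tuple_hash(protocol, src_ip, dst_ip, src_port, dst_port):
    # Assumed hash scheme for illustration only.
    key = f"{protocol}|{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big")

def lookup(header, cache, main_store_table):
    h = five_tuple_hash(*header)
    if h in cache:                        # local search (fast path)
        return cache[h]
    resultant = main_store_table.get(h)   # remote search of the main store
    if resultant is not None:
        cache[h] = resultant              # fill the cache for later packets
    return resultant
```

Packets whose headers repeat earlier five-tuples then resolve from the cache without the remote search.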
- the packet header lookup can be performed more efficiently. Parsing of the packet starts in step 702 before the receipt of the packet is completed. Thus, the delay in performing the header lookup may be reduced. Moreover, a local lookup may be performed in step 704 . Latencies for packet headers having their resultants stored locally may thus be reduced.
- FIG. 14 is a flow chart depicting another embodiment of a method 710 for performing a packet header lookup in accordance with the present invention.
- the method 710 is described in the context of the system 600 described in FIGS. 11 and 12 . However, another system might be used to implement the method 710 .
- Step 712 is preferably performed by providing the packet both to the parser 604 and the receive buffer 602 .
- Step 712 also includes utilizing the parser 604 to parse the packet and to obtain the header. It is determined whether the resultant corresponding to the header can be obtained using local resources, such as the cache, via step 714 .
- Step 714 preferably includes obtaining a hash based on the header and providing the hash to the local lookup unit 620 . If it is determined that the resultant is not available through local resources, then the remote lookup unit 630 is accessed to perform a remote lookup, via step 716 .
- the local lookup unit 620 enqueues the remote lookup and writes the hash into the cache 622 so that the corresponding entry will store the resultant once the remote lookup is completed.
- a lookup of the main store is performed, via step 718 .
- Step 718 includes writing the hash to the search pattern memory 634 and obtaining space in the main store lookup memory 632 to store the resultant of the search of the main store 612 .
- the resultant is provided to the remote lookup unit 630 , as well as to the local lookup unit 620 , via step 720 .
- the local lookup unit 620 writes the resultant to the appropriate entry of the cache 622 .
- the resultant is also provided to the processor 608 for processing in step 720 .
- the packet may be processed and the data stored in the destination buffer of the main store 612 .
- the resultant may be available locally, then there is an entry in the cache 622 corresponding to the header. However the entry may not be resolved. It is determined by the local lookup unit 620 whether the entry is resolved, via step 722 . If the entry is resolved, then the resultant is stored in the entry corresponding to the header. Thus, the resultant is obtained from the cache, via step 724 . If the entry is not resolved, then a previous packet has a hash corresponding to the hash currently being searched. However, the remote lookup for this previous packet has not yet been completed. Consequently, the index of the appropriate cache entry is provided, via step 726 . The counter 624 is also incremented, via step 728 .
- the resultant is provided, via step 730 .
- the counter 624 is decremented because the packet header lookup has been completed for the unresolved entry.
- the processor 608 receives the resultant of the header lookup. Consequently, processing of the packet can be completed and the packet data stored in the destination buffer of the main store 612 .
- latencies in packet header lookup may be reduced. Because the packet is parsed before receipt is complete, the packet header may be obtained more rapidly and delays reduced. Furthermore, because a local lookup is possible, the latency of the packet header lookup may be reduced when the resultant is available locally. Thus, delays may be further reduced.
- a method and system for more efficiently performing a packet header lookup has been disclosed.
- the present invention has been described in accordance with the embodiments shown, and one of ordinary skill in the art will readily recognize that there could be variations to the embodiments, and any variations would be within the spirit and scope of the present invention.
- the present invention can be implemented using hardware, software, a computer readable medium containing program instructions, or a combination thereof.
- the computer readable medium may be a computer-readable storage medium, such as a memory or CD-ROM, containing program instructions to be executed by a processor. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.
Abstract
A system for performing a lookup for a packet in a computer network is disclosed. The packet includes a header. The system includes a parser, a lookup engine coupled with the parser, and a processor coupled with the lookup engine. The parser parses the packet for the header prior to receipt of the packet being completed. The lookup engine performs a lookup for the header and returns a resultant. In one aspect, the lookup includes performing a local lookup of a cache that includes resultants of previous lookups. The processor processes the resultant.
Description
- Under 35 USC §120, this application is a continuation application and claims the benefit of priority to U.S. patent application Ser. No. 11/096,362, filed Apr. 1, 2005 entitled “Method for Performing a Packet Header Lookup” which is incorporated herein by reference.
- The present application is also related to the following copending U.S. patent applications:
- U.S. patent application Ser. No. 11/097,608, entitled “Host Ethernet Adapter for Networking Offload in Server Environment”, filed on even date herewith and assigned to the assignee of the present invention.
- U.S. patent application Ser. No. 11/096,363, entitled “Method and System for Accommodating Several Ethernet Ports and a Wrap Transmitted Flow Handled by a Simplified Frame-By-Frame Upper Structure”, filed on even date herewith and assigned to the assignee of the present invention.
- U.S. patent application Ser. No. 11/096,571, entitled “Method and Apparatus for Providing a Network Connection Table”, filed on even date herewith and assigned to the assignee of the present invention.
- U.S. patent application Ser. No. 11/097,051, entitled “Network Communications for Operating System Partitions”, filed on even date herewith and assigned to the assignee of the present invention.
- U.S. patent application Ser. No. 11/097,652, entitled “Configurable Ports for a Host Ethernet Adapter”, filed on even date herewith and assigned to the assignee of the present invention.
- U.S. patent application Ser. No. 11/096,365, entitled “System and Method for Parsing, Filtering, and Computing the Checksum in a Host Ethernet Adapter (HEA)”, filed on even date herewith and assigned to the assignee of the present invention.
- U.S. patent application Ser. No. 11/096,353, entitled “System and Method for a Method for Reducing Latency in a Host Ethernet Adapter (HEA)”, filed on even date herewith and assigned to the assignee of the present invention.
- U.S. patent application Ser. No. 11/097,055, entitled “Method and Apparatus for Blind Checksum and Correction for Network Transmissions”, filed on even date herewith and assigned to the assignee of the present invention.
- U.S. patent application Ser. No. 11/097,430, entitled “System and Method for Computing a Blind Checksum in a Host Ethernet Adapter (HEA)”, filed on even date herewith and assigned to the assignee of the present invention.
- The present invention relates to computer networks, and more particularly to a method and system for improving the latency due to packet header lookups.
-
FIG. 1 depicts a conventional system 10 for performing a packet header lookup. A lookup corresponding to the packet header is performed in order to determine how the packet is to be treated. For example, based upon the information contained in the packet header, typically in the form of an IP five-tuple, the packet may be routed or processed differently. The conventional system 10 includes a network processor 12, a main store 14, a network adapter 16, and an adapter memory 18. - The packet is received in the
network adapter 16 and provided to the adapter memory 18. The packet may then be copied to a temporary buffer (not explicitly shown) in the main store 14. Once the packet has been received and copied into the main store 14, the processor 12 then parses the packet to obtain the header. Based upon the header, the processor 12 performs a lookup. Typically, a hash of portions of the five-tuple in the header is used as a key to perform the lookup. Once the processor 12 receives the resultant of the lookup, the processor 12 processes the data using the resultant and copies the packet data to the destination buffer (not shown) in the main store 14. - Although the conventional system functions, one of ordinary skill in the art will readily recognize that it is relatively slow. In particular, the packet is completely received and stored in the
main store 14 before the processor 12 commences operations on the packet related to performing the header lookup. Once the header is obtained, the main store 14 is searched in order to obtain the appropriate data. Each of these operations consumes time. - Accordingly, what is needed is a more efficient method and system for performing a packet header lookup. The present invention addresses such a need.
- The present invention provides a method and system for performing a lookup for a packet in a computer network. The packet includes a header. The method and system include providing a parser, providing a lookup engine coupled with the parser, and providing a processor coupled with the lookup engine. The parser is for parsing the packet for the header prior to receipt of the packet being completed. The lookup engine performs a lookup for the header and returns a resultant. In one aspect, the lookup includes performing a local lookup of a cache that includes resultants of previous lookups. The processor processes the resultant.
- According to the method and system disclosed herein, the present invention may improve the efficiency of the header lookup, resulting in a lower latency.
-
FIG. 1 is a diagram of a conventional system for performing a packet header lookup. -
FIG. 2 is a block diagram of a server system in accordance with the present invention. -
FIG. 3 is a simple block diagram of one embodiment of the host Ethernet adapter in accordance with the present invention. -
FIG. 4 is a block diagram of one embodiment of the host Ethernet adapter in accordance with the present invention with a more detailed view of the MAC and Serdes Layer. -
FIG. 5 shows the components and dataflow for one embodiment of RxNet in accordance with the present invention. -
FIG. 6 shows the components and dataflow for one embodiment of TxEnet in accordance with the present invention. -
FIG. 7 is a block diagram of one embodiment of the host Ethernet adapter in accordance with the present invention with a more detailed view of the Packet Acceleration and Visualization Layer. -
FIG. 8 shows one embodiment of the RxAccel unit in accordance with the present invention. -
FIG. 9 shows one embodiment of the TxAccel unit in accordance with the present invention. -
FIG. 10 is a block diagram of one embodiment of the host Ethernet adapter in accordance with the present invention with a more detailed view of the Host Interface Layer. -
FIG. 11 is a block diagram of one embodiment of the host Ethernet adapter in accordance with the present invention with a more detailed view of the components used in packet header lookup. -
FIG. 12 is a block diagram of one embodiment of the lookup engine in accordance with the present invention. -
FIG. 13 is a flow chart depicting one embodiment of a method for performing a packet header lookup in accordance with the present invention. -
FIG. 14 is a flow chart depicting another embodiment of a method for performing a packet header lookup in accordance with the present invention. - The present invention relates to computer networks. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the preferred embodiments and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features described herein.
- The present invention provides a method and system for performing a lookup for a packet in a computer network. The packet includes a header. The method and system include providing a parser, providing a lookup engine coupled with the parser, and providing a processor coupled with the lookup engine. The parser is for parsing the packet for the header prior to receipt of the packet being completed. The lookup engine performs a lookup for the header and returns a resultant. In one aspect, the lookup includes performing a local lookup of a cache that includes resultants of previous lookups. The processor processes the resultant.
- The present invention will be described in terms of a particular computer system. However, one of ordinary skill in the art will readily recognize that the method and system in accordance with the present invention can be incorporated into another computer system having different and/or other components.
-
FIG. 2 is a block diagram of a server system 100 in accordance with the present invention. The server system 100 includes a processor 104 which is coupled between a system memory 102 and an interface adapter chip 106. The interface adapter chip 106 includes an interface 108 to the private (Gx) bus of the processor 104 and a Host Ethernet Adapter (HEA) 110. The HEA 110 receives and transmits signals from and to the processor 104. - The
HEA 110 is an integrated Ethernet adapter. A set of accelerator features is provided such that a TCP/IP stack within the servers uses those features when and as required. The interface between the processor 104 and the interface adapter chip 106 has been streamlined by bypassing the PCI bus and providing interface techniques that enable demultiplexing, multiqueueing and packet header separation. In so doing, an Ethernet adapter is provided that allows for improved functionality with high speed systems while allowing for compatibility with legacy server environments. Some of the key features of this improved functionality are described hereinbelow. - The
HEA 110 supports advanced acceleration features. One key observation is that the current acceleration functions do a good job on the transmit side (e.g. transmitting packets from the processor) but not a very good job on the receive side (e.g. receiving packets via the adapter). The HEA 110 addresses this gap by introducing new features such as Packet Demultiplexing and Multiqueueing, and Header separation. - All of the
HEA 110 new features are optional; it is up to the TCP/IP stack to take advantage of them if and when required. For example, a vanilla TCP/IP stack can use the HEA 110 without using the per-connection queueing feature and yet take advantage of the other features of the HEA, such as throughput, low latency and virtualization support.
- Depending upon system requirements and configuration, HEA can demultiplex incoming packets based on:
- Destination MAC address (typically one MAC address and one default queue per partition)
- Connection identifier for established connections (Protocol, Source IP address, Destination IP address, Source port, Destination port).
- Destination port and optionally destination IP address for TCP connection setup packet (SYN).
- HEA is optionally capable of separating the TCP/IP header from the data payload. This feature allows the header to be directed to the protocol stack for processing without polluting the received buffers posted by the applications. This feature is a component required for enabling zero-copy operations.
- Many enhanced features are provided by the
HEA 110 in the server environment. Some of these features are listed below. - (a) Multiple Receive Queue: The queue pair concept is extended to support more than one receive queue per pair. This enables the stack to better manage its buffer pool memory. For example, one queue can be assigned to small packets, one to medium packets and one to large packets. The HEA will select the ad hoc queue according to the received packet size.
- (b) Low Latency Queue: On the transmit side a descriptor (WQE) may contain immediate data, in such case no indirection, i.e., no additional DMA from system memory is required to get the data to be sent. On the receive side, low latency queues do not supply buffers but rather receive immediate packet data. The HEA writes to the receive queue rather than reading. Short packets take advantage of this feature leading to a dramatic reduction of DMA operations: one single DMA write per packet as opposed to one DMA read and one DMA write per packet.
- (c) Receive low latency queues are also used to support the packet header separation: the header is written in the low latency queue while the payload is DMAed to a buffer indicated in the ad-hoc receive queues.
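The size-based queue selection of item (a) above can be sketched as follows. This is an illustrative Python sketch; the size thresholds are assumptions, not values from the disclosure.

```python
# Illustrative sketch of selecting a receive queue by packet size: the stack
# registers one queue per size class, and the adapter picks the smallest
# class that fits the received packet.

def select_receive_queue(packet_len, queues):
    """queues: list of (max_len, queue_name) sorted by ascending max_len."""
    for max_len, name in queues:
        if packet_len <= max_len:
            return name
    return queues[-1][1]   # largest class catches anything oversized

# Assumed example size classes (not from the disclosure).
queues = [(128, "small"), (1024, "medium"), (9000, "large")]
```

Assigning small, medium and large packets to separate queues lets the stack size its buffer pools per class instead of padding every buffer to the maximum frame size.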
- In summary, Demultiplexing and Multiqueueing, Address Translation and Packet Header Separation are the basic building blocks to virtualization and provide low latency in operation. Furthermore, it should be noted that these features can also be used to improve traditional OS protocol stack performance, for example, per-connection queueing allows for the removal of code and more importantly the memory accesses—and associated stalls/cache pollution—consumed to locate the TCP connection control block (TCB) in the system memory.
- To describe the features of the
HEA 110 in more detail refer now to the following description in conjunction with the accompanying figures. -
FIG. 3 is a simple block diagram of the HEA 110 in accordance with the present invention. As is seen, the HEA 110 has a three layer architecture. The first layer comprises a Media Access Controller (MAC) and Serialization/Deserialization (Serdes) Layer 202 which provides a plurality of interfaces from and to other devices on the Ethernet network. In the layer 202 the same chip I/Os are used to provide a plurality of interfaces. For example, in a preferred embodiment, the same chip I/Os are utilized to provide either a 10 Gigabit interface or a 1 Gigabit interface. - The second layer comprises a Packet Acceleration and
Virtualization Layer 204. The layer 204 provides for receiving packets and demultiplexing the flow of packets for enabling virtualization. The layer 204 enables virtualization or partitioning of the operating system of a server based upon the packets. The layer 204 also provides packet header separation to enable zero copy operation. Also, since the layer 204 interacts directly with the private bus (Gx) through the Host Interface Layer 206, a low latency, high bandwidth connection is provided. - The third layer comprises the
Host Interface Layer 206. The Host Interface Layer 206 provides the interface to the Gx or private bus of the processor. The layer 206 provides for multiple receive sub-queues per Queue Pair (QP) to enable effective buffer management for a TCP stack. The host layer 206 provides the context management for a given flow of data packets. - To describe the features of each of the
layers of the HEA 110 in more detail, refer now to the following discussions in conjunction with the accompanying figures. -
FIG. 4 is a block diagram of the HEA 110 with a more detailed view of the MAC and Serdes Layer 202. As is seen, in this embodiment there is one 10 Gigabit MAC 302 and four 1 Gigabit MACs. The MACs are coupled to analog coding units, and the HSS 306 is capable of receiving data from one 10 Gigabit source or four 1 Gigabit sources. - This section shows the high level structure and flow through the receive Ethernet function within
layer 202. The Rx accelerator unit 400 (FIG. 5 ), as will be explained in more detail hereinafter, is part of the Packet Acceleration and Virtualization layer 204. -
FIG. 5 shows the components and dataflow for one RxNet. Data arrives on the XAUI interface and is processed by the HSS 304 and the analog coding units before entering the RxAccel unit 400, which performs parsing, filtering, checksum and lookup functions in preparation for processing by the Receive Packet Processor (RPP) of the layer 206 (FIG. 2 ). In this embodiment, the clock is converted to a 4.6 ns clock and the data width is converted to 128 b as it enters the RxAccel unit 400. - As data flows through the
RxAccel unit 400 to the Virtual Lane Input Manager (VLIM) data buffers, the RxAccel unit 400 snoops on the control and data and starts its processing. The data flow is delayed in the RxAccel unit 400 such that the results of the RxAccel unit 400 are synchronized with the end of the packet. At this time, the results of the RxAccel unit 400 are passed to the VLIM command queue along with some original control information from the MAC. This control information is stored along with the data in the VLIM. - If the
RxAccel unit 400 does not have the lookup entry cached, it may need to go to main memory through the GX bus interface. The GX bus operates at 4.6 ns. The VLIM can asynchronously read the queue pair resolution information from the RxAccel unit 400. - This section provides an overview of the transmit structure and flow through the Ethernet and Acceleration functions. The Tx accelerator unit 500 (FIG. 6 ), as will be explained in more detail hereinafter, is part of the Packet Acceleration and Virtualization layer 204. -
FIG. 6 shows the components and dataflow for one TxEnet. Packet data and control arrive from the ENop component of the HEA 110. The Tx Accelerator (TxAccel) unit 500 interprets the control information and modifies fields in the Packet Header. It makes the wrap versus port decision based on control information or information found in the Packet Header. It also generates the appropriate controls for the TxMAC. Data and control are buffered in the TxAccel unit 500 such that the TxAccel unit 500 can update Packet Headers before they flow to the MAC, and are staged out of the TxAccel unit 500 in order to enter the differing clock domain of the MAC 302. The MAC, analog converters and HSS 306 format packets for the Ethernet XAUI interface. -
FIG. 7 is a block diagram of the HEA 110 with a more detailed view of the Packet Acceleration and Virtualization Layer 204. The HEA Layer 204 comprises a receive acceleration (RxAccel) unit 400 and a transmit acceleration (TxAccel) unit 500. The RxAccel unit 400 comprises a receive backbone (RBB) 402, a parser filter checksum unit (PFC) 404, a lookup engine (LUE) 406 and an MIB database 408. The TxAccel unit 500 comprises the transmit backbone 502, lookup checks 504 and an MIB engine 506. The operation of the Rx acceleration unit 400 and the Tx acceleration unit 500 will be described in more detail hereinbelow. -
FIG. 8 shows that the RxAccel unit 400 is composed of the Receive Backbone (RBB) 402, the Parser, Filter and Checksum Unit (PFC) 404, the Local Lookup Unit (LLU) 406, the Remote Lookup Unit (RLU) 408 and an MIB database 410. - Data flows through the RxAccel from the RxMAC unaltered. The
RBB 402 manages the flow of data and is responsible for the clock and data bus width conversion functions. Control and data received from the RxMAC are used by the PFC 404 to perform acceleration functions and to make a discard decision. The PFC 404 passes control and data extracted from the frame, including the 5-tuple key, to the LLU 406 in order to resolve a Queue Pair number (QPN) for the RBB 402. The LLU 406 either finds the QPN immediately or allocates a cache entry to reserve the slot. If the current key is not in the cache, the LLU 406 searches for the key in the main store. The PFC 404 interfaces to the MIB database 410 to store packet statistics.
-
FIG. 9 shows that the TxAccel unit 500 is composed of two Transmit Backbones (XBB) 502 a and 502 b, two Transmit Checksum units (XCS) 504 a and 504 b, two Transmit MIBs, and logic 510. Data flows through the TxAccel from the ENop and is modified to adjust the IP and TCP checksum fields. The XBB 502 transforms the information to the clock domain of the TxAccel. The status information is merged with original information obtained from the packet by the XCS 504 and passed to the MIB Counter logic. -
FIG. 10 is a block diagram of the HEA 110 with a more detailed view of the Host Interface Layer 206. The Host Interface Layer 206 includes input and output buffers for receiving packets from the layer 204 and providing packets to the layer 204. The layer 206 includes a Receive Packet Processor (RPP) 606 for appropriately processing the packets in the input buffer. The context management mechanism 908 provides multiple sub-queues per queue pair to enable effective buffer management for the TCP stack. - The
Rx unit 400 of layer 204 in conjunction with components of the host interface layer 206 provides the packets to the appropriate portion of the processor. Accordingly, the received packets must be demultiplexed to ensure that they flow to the appropriate portion of the server. - To describe the details of this demultiplexing function refer now to the following in conjunction with
FIG. 8 and FIG. 9 . - Before the Receive Packet Processor (RPP) 606 can work on a received packet, the queue pair context must be retrieved. The QP connection manager does this using a QP number. Since QP numbers are not transported in TCP/IP packets, the QP number must be determined by other means. There are two general classes of QPs, a per-connection QP and a default QP.
- Per-connection QP are intended to be used for long-lived connections where fragmentation of the IP packets is not expected and for which low latency is expected. They require that the application utilize a user-space sockets library which supports the user-space queueing mechanism provided by the
HEA 110. The logical port must first be found using the destination MAC address. Three types of lookup exist for per-connection QP: - 1. New TCP connections for a particular destination IP address and destination TCP port. A lookup is performed based on the TCP/IP (DA, DP, Logical port) if the packet was a TCP SYN packet.
- 2. New TCP connections for a particular destination TCP port only (disregarding DA). A lookup is performed based on the TCP/IP (DP, Logical port) if the packet was a TCP SYN packet.
- 3. Existing TCP/UDP connection. A lookup is performed based on the TCP/IP 5-tuple plus the logical port if the packet was a non-fragmented unicast TCP or UDP packet.
- Default QP are used if no per-connection QP can be found for the packet or if per-connection lookup is not enabled for a MAC address or if the packet is a recirculated multicast/broadcast packet. Generally default QP are handled by the kernel networking stack in the OS or hypervisor. These types of default QP exist in the HEA 110:
- 1. Default OS queue per logical port. (A logical port corresponds to a logical Ethernet interface with its own default queue. Each logical port has a separate port on the logical switch. There could be one or more logical ports belonging to an LPAR.)
- A lookup is performed based on MAC address.
- A direct index (logical port number) to the default OS queue is provided with recirculated (wrapped) multicast/broadcast packets.
- 2. MC or BC queue.
- A configured value if the packet is a multicast or broadcast packet which does not match one of the MAC addresses in the MAC lookup table.
- 3. Super-default UC queue.
- If a UC packet does not match one of the configured MAC addresses, a default UC QPN may be used.
- This mechanism allows for flexibility between the two extremes of queueing per connection and queueing per logical port (OS queue). Both models can operate together with some connections having their own queueing and some connections being queued with the default logical port queues.
- Connection lookup is performed by the
RxAccel unit 400. One such unit exists for each port group. Within the RxAccel unit 400, each component performs a portion of the process. The PFC extracts the needed fields from the packet header and determines the logical port number based on the destination MAC address. The Local Lookup Unit (LLU) 406 and Remote Lookup Unit (RLU) 408 are then responsible for resolving the QP number. The LLU 406 attempts to find a QPN using local resources only (cache and registers). - The purpose of the
LLU 406 is to attempt to determine the QP number associated with the received packet. The QP number is required by the VLIM and the RPP 606. The LLU 406 performs this task locally if possible (i.e. without going to system memory). - The QP number can be found locally in one of several ways:
- Lookup in TS cache
- Default partition QP
- Default UC QP
- If no match is found locally, then a preliminary check is made on the negative cache to see if the entry might be present in system memory. If so, the
RLU 408 is invoked to perform the search. If the RLU 408 is busy, a queue of requests can be formed which will be provided to the RLU 408 as it becomes free. - The
LLU 406 communicates with the RBB 402, providing the QP number and/or the queue index to use for temporary queueing. If no eligible entries are available in the cache, the LLU 406 indicates to the RBB 402 that the search is busy. The packet must be dropped in this case. - The
LLU 406 provides the QPN to the VLIM/unloader when a queue index resolution is requested and has been resolved. The RLU attempts to find a QPN using system memory tables. - The LLU utilizes a local 64 entry cache in order to find the QPN for TCP/UDP packets. If the entry is found in the cache, the
RLU 408 does not need to be invoked. If the entry is not found in the cache, a preliminary check is made in the negative cache to see if the entry might be in the connection table. The negative cache is useful for eliminating unnecessary accesses to main memory when there is a small number of configured queues (note: since the size of the negative cache is small, it is only useful when the number of entries in the table is relatively small, that is, significantly less than 1K. As the number of entries approaches and exceeds 1K, the negative cache will become all 1s, thus making it non-useful. The purpose of the negative cache is to not penalize the OS queues when there is a small number of QP. A problem may arise when there is a small number of active QP but a large number of configured QP. The OS queues will suffer in this case.) (e.g., when using most OS queues). - If the
RLU 408 is invoked, it uses a hash of the 6-tuple (including the logical port number) to fetch a 128-byte Direct Table (DT) entry. This DT entry contains up to eight 6-tuple patterns and their associated QPNs. If a match is found, no further action is required. If there are more than 8 patterns associated with this hash value, then a Collision Overflow Table (COT) entry may need to be fetched for additional patterns. If a match is found, the LLU 406 cache is updated with the found QPN. - When the
RLU 408 must be invoked, the QPN cannot be determined on the fly as the packet is being placed into the input buffers. In fact, the QPN may be determined several packets later. For this reason, the RxAccel unit 400 may provide either a QPN or a queue index to the VLIM for packet queueing. If a QPN is provided, then the VLIM (unloader) may queue the packet directly for work by the RPP. If a queue index is provided, then the VLIM (unloader) must hold this packet to wait for resolution of the QPN. The QPN is always determined by the time the RPP is dispatched. - SYN packet lookup (2- or 3-tuple) uses the same cache and lookup tables as the 6-tuple lookup. Here is the rationale and the key design points:
- Performance requirements are relaxed (SYN traffic is not true steady state), so multiple accesses to system memory are acceptable
- Reuse the 6-tuple lookup resources (tables)
- Use the 3-tuple to find the cache index for SYN packets, to ensure that all packets added to this cache list belong to the same QP, whether they match the 3-tuple, the 2-tuple, or neither. Using the 6-tuple is unsuitable: if a non-SYN packet came in, it would be added to the list and routed to the 3/2-tuple QP. Using the 2-tuple would not work either, since a packet may end up not matching the 2-tuple; multiple packets with the same 2-tuple could be added to the list in this cache entry and end up being moved to the wrong QP.
- A check is NOT made for a 6-tuple match when the packet is a SYN. It is left to the host to check for a connection that is already open on a SYN.
- For 2-tuple SYN routing (LPAR, DP), the pattern is installed in the table as <logical_port#, DA=0, DP, SA=0, SP=0, prot=0> (TCP=0)
- For 3-tuple SYN routing (LPAR, DP, DA), the pattern is installed in the table as <logical_port#, DA, DP, SA=0, SP=0, prot=0>, BUT it is installed in the DT at the index given by the 2-tuple (i.e. DA=0).
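The 2-tuple indexing rule above can be illustrated with a small sketch. The table layout, field order, and Python's built-in `hash` stand in for the hardware's DT structure and hash function; they are illustrative assumptions only.

```python
def syn_pattern(logical_port, dp, da=0):
    # Wildcarded fields are zeroed: SA=0, SP=0, prot=0 (TCP encoded as 0).
    return (logical_port, da, dp, 0, 0, 0)

def syn_dt_index(logical_port, dp, table_size):
    # The index is always computed from the 2-tuple form, i.e. with DA=0,
    # so 2-tuple and 3-tuple SYN patterns share a single DT entry.
    return hash(syn_pattern(logical_port, dp, da=0)) % table_size

def install_syn_route(direct_table, logical_port, dp, qpn, da=0):
    # direct_table: list of DT entries, each a list of (pattern, qpn) pairs.
    idx = syn_dt_index(logical_port, dp, len(direct_table))
    direct_table[idx].append((syn_pattern(logical_port, dp, da), qpn))
    return idx
```

Installing a 2-tuple route and a 3-tuple route for the same (logical port, DP) pair lands both patterns in the same DT entry, which is the property the design point above relies on.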
- To more particularly describe the present invention, refer to
FIG. 11. FIG. 11 is a block diagram of one embodiment of a portion of a HEA 110 in accordance with the present invention, with a more detailed view of the components used in packet header lookup. The HEA 110 includes an adapter 600 that includes a receive buffer 602, a parser 604, a lookup engine 606, and a processor 608, as well as a main store 612 and a processor 610. - The packet is received and provided, preferably in parallel, to the
parser 604 and the receive buffer 602. The receive buffer 602 stores the packet, while the parser 604 parses the packet. The parser 604 parses the packet to obtain the header while the packet is still being received in the adapter 600. Stated differently, the parser 604 starts parsing the packet to obtain the header before the packet has been completely received by the HEA 110. The parser 604 provides the header to the lookup engine 606. The lookup engine 606 utilizes the header to perform a lookup. Preferably, the lookup engine 606 determines a hash of the header and uses this hash for the lookup. Also in a preferred embodiment, the lookup engine 606 performs a local lookup of a local cache (not shown in FIG. 11) and only performs a remote lookup when the resultant cannot be obtained using local resources only. - Using the
system 600, the packet header lookup can be performed more efficiently. The parser 604 commences parsing the packet before the receipt of the packet is completed. Thus, the delay in performing the header lookup may be reduced. Moreover, a local lookup is preferably performed. Latencies for packets having headers which include repeat information may be further reduced. -
FIG. 12 is a block diagram of one embodiment of the lookup engine 606′ in accordance with the present invention. The lookup engine 606′ is thus preferably used in the system 600 depicted in FIG. 11. Referring to FIGS. 11 and 12, the lookup engine 606′ includes a local lookup unit 620 and a remote lookup unit 630. The local lookup unit 620 and the remote lookup unit 630 preferably correspond to the units 406 and 408, respectively. The local lookup unit 620 includes a cache memory 622 and a counter 624. The cache 622 stores resultants from previous lookups for previously processed packets and the corresponding hashes. The remote lookup unit 630 includes a main store read memory 632 and a search pattern storage 634. As their names suggest, the local lookup unit 620 utilizes local resources, such as the cache 622, to perform a lookup corresponding to the header of the packet. The remote lookup unit 630 performs a lookup of the main store 612 by storing the appropriate search pattern (e.g. the hash corresponding to the header) in the search pattern storage 634. The resultant returned may be stored in the main store lookup memory 632. - The
local lookup unit 620 determines whether the resultant of the lookup can be obtained from the cache 622. If so, the resultant is either already stored in the cache 622, or the cache 622 is waiting for the remote lookup unit 630 to return the resultant from a search of the main store 612 due to a previous packet. If the cache 622 is waiting, the entry in the cache corresponding to the resultant is not yet resolved. If the resultant is already in the cache 622, the local lookup unit 620 returns the resultant. Thus, the processor 608 may quickly perform any processing desired, and the packet data may be stored in the appropriate destination buffer of the main store 612. If the entry corresponding to the resultant has not been resolved, then the index of that entry of the cache 622 is returned and the counter 624 is incremented to track the packet. Once the entry is resolved, the counter 624 can be decremented and the resultant returned. - Using the
lookup engine 606′, the packet header lookup can be performed more efficiently. The parser 604 still commences parsing the packet before the receipt of the packet is completed. Thus, the delay in performing the header lookup may be reduced. Moreover, the local lookup unit 620 performs a local lookup. If a resultant is in the cache 622, delays due to a remote lookup may be avoided. Latencies for packets having headers which include repeat information may thus be further reduced. -
FIG. 13 is a flow chart depicting one embodiment of a method 700 for performing a packet header lookup in accordance with the present invention. The method 700 is described in connection with the system 600 depicted in FIGS. 11 and 12. However, another system might be used to implement the method 700. The packet is parsed to obtain the header while the packet is still being received by the system 600, via step 702. Step 702 is preferably performed by providing the packet both to the parser 604 and the receive buffer 602. Step 702 thus also includes utilizing the parser 604 to start parsing the packet before the receipt is complete and to obtain the header. A lookup is performed using the header, via step 704. Step 704 includes providing the header to the lookup engine 606/606′. The lookup engine 606/606′ preferably utilizes the data in the header, such as the five-tuple, to provide a hash used to search for the resultant corresponding to the header. In a preferred embodiment, a local search is performed using the local lookup unit 620. If the resultant is not available locally, then a remote search of the main store 612 is performed. Thus, the resultant can be obtained, and the packet processed. - Using the
method 700, the packet header lookup can be performed more efficiently. Parsing of the packet starts in step 702 before the receipt of the packet is completed. Thus, the delay in performing the header lookup may be reduced. Moreover, a local lookup may be performed in step 704. Latencies for packet headers having their resultants stored locally may thus be reduced. -
FIG. 14 is a flow chart depicting another embodiment of a method 710 for performing a packet header lookup in accordance with the present invention. The method 710 is described in the context of the system 600 described in FIGS. 11 and 12. However, another system might be used to implement the method 710. - The packet is parsed to obtain the header while the packet is still being received by the
system 600, via step 712. Step 712 is preferably performed by providing the packet both to the parser 604 and the receive buffer 602. Step 712 also includes utilizing the parser 604 to parse the packet and to obtain the header. It is determined whether the resultant corresponding to the header can be obtained using local resources, such as the cache, via step 714. Step 714 preferably includes obtaining a hash based on the header and providing the hash to the local lookup unit 620. If it is determined that the resultant is not available through local resources, then the remote lookup unit 630 is accessed to perform a remote lookup, via step 716. In a preferred embodiment, the local lookup unit 620 enqueues the remote lookup and writes the hash into the cache 622 so that the corresponding entry will store the resultant once the remote lookup is completed. A lookup of the main store is performed, via step 718. Step 718 includes writing the hash to the search pattern memory 634 and obtaining space in the main store lookup memory 632 to store the resultant of the search of the main store 612. Once a resultant has been obtained, the resultant is provided to the remote lookup unit 630, as well as to the local lookup unit 620, via step 720. The local lookup unit 620 writes the resultant to the appropriate entry of the cache 622. The resultant is also provided to the processor 608 for processing in step 720. Thus, the packet may be processed and the data stored in the destination buffer of the main store 612. - If the resultant may be available locally, then there is an entry in the
cache 622 corresponding to the header. However, the entry may not be resolved. It is determined by the local lookup unit 620 whether the entry is resolved, via step 722. If the entry is resolved, then the resultant is stored in the entry corresponding to the header. Thus, the resultant is obtained from the cache, via step 724. If the entry is not resolved, then a previous packet has a hash corresponding to the hash currently being searched. However, the remote lookup for this previous packet has not yet been completed. Consequently, the index of the appropriate cache entry is provided, via step 726. The counter 624 is also incremented, via step 728. Once the entry has been resolved by a remote lookup, such as in steps 716-720, the resultant is provided, via step 730. In addition, the counter 624 is decremented because the packet header lookup has been completed for the unresolved entry. Thus, the processor 608 receives the resultant of the header lookup. Consequently, processing of the packet can be completed and the packet data stored in the destination buffer of the main store 612. - Using the
method 710, latencies in packet header lookup may be reduced. Because the packet is parsed before receipt is complete, the packet header may be obtained more rapidly and delays reduced. Furthermore, because a local lookup is possible, the packet header lookup latency may be reduced when the resultant is available locally. Thus, delays may be further reduced. - A method and system for more efficiently performing a packet header lookup has been disclosed. The present invention has been described in accordance with the embodiments shown, and one of ordinary skill in the art will readily recognize that there could be variations to the embodiments, and any variations would be within the spirit and scope of the present invention. For example, the present invention can be implemented using hardware, software, a computer readable medium containing program instructions, or a combination thereof. The computer readable medium may be a computer-readable storage medium, such as memory or CD-ROM, and is to be executed by a processor. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.
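The flow of steps 712 through 730 can be sketched in a few lines. This is a simplified, synchronous illustration: the real adapter overlaps the remote lookup with later packets, whereas this sketch resolves it inline, and the cache, main store, and counter structures are assumptions rather than the hardware's actual layout.

```python
def header_lookup(h, cache, main_store, counters):
    """cache maps a header hash to a resultant, or to None while a remote
    lookup for that hash is still outstanding (an unresolved entry)."""
    if h not in cache:
        cache[h] = None                   # step 716: reserve an unresolved entry
        resultant = main_store.get(h)     # step 718: search the main store
        cache[h] = resultant              # step 720: write back and return
        return ("resultant", resultant)
    if cache[h] is not None:
        return ("resultant", cache[h])    # step 724: entry already resolved
    counters["pending"] += 1              # steps 726/728: return index, count up
    return ("index", h)                   # packet waits for step 730
```

A second lookup with the same hash hits the resolved cache entry and skips the main-store search, which is the latency reduction the paragraph above describes.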
Claims (2)
1. A system for performing a lookup for a packet in a computer network, the packet having a header, the system comprising:
a parser, wherein the parser:
parses the packet for the header prior to receipt of the packet being completed, wherein the header is separated from data payload;
demultiplexes the packet; and
hashes the header; and
a lookup engine coupled with the parser, wherein the lookup engine:
uses the hashed header to perform a lookup corresponding to the header, wherein the lookup is a local lookup of a cache;
returns a resultant corresponding to the header if the cache includes an entry corresponding to the header and the entry stores the resultant;
provides an index for the packet if the cache includes an entry corresponding to the header but the entry does not store the resultant;
increments a counter if the index is provided;
decrements the counter if the resultant has been resolved;
performs a remote lookup of a memory if the cache does not have an entry corresponding to the header; and
stores the resultant of the memory lookup in the entry of the cache corresponding to the header.
2. A computer-readable medium including program instructions for performing a lookup for a packet in a computer system, the packet having a header, the program instructions which when executed by a computer system cause the computer system to execute a method comprising:
parsing the packet for the header prior to receipt of the packet being completed, wherein the header is separated from data payload;
demultiplexing the packet;
hashing the header;
using the hashed header, performing a lookup corresponding to the header, wherein the lookup is a local lookup of a cache;
returning a resultant corresponding to the header if the cache includes an entry corresponding to the header and the entry stores the resultant;
providing an index for the packet if the cache includes an entry corresponding to the header but the entry does not store the resultant;
incrementing a counter if the index is provided;
decrementing the counter if the resultant has been resolved;
performing a remote lookup of a memory if the cache does not have an entry corresponding to the header; and
storing the resultant of the memory lookup in the entry of the cache corresponding to the header.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/165,623 US20080273539A1 (en) | 2005-04-01 | 2008-06-30 | System for performing a packet header lookup |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/096,362 US7492771B2 (en) | 2005-04-01 | 2005-04-01 | Method for performing a packet header lookup |
US12/165,623 US20080273539A1 (en) | 2005-04-01 | 2008-06-30 | System for performing a packet header lookup |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/096,362 Continuation US7492771B2 (en) | 2005-04-01 | 2005-04-01 | Method for performing a packet header lookup |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080273539A1 true US20080273539A1 (en) | 2008-11-06 |
Family
ID=37070374
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/096,362 Expired - Fee Related US7492771B2 (en) | 2005-04-01 | 2005-04-01 | Method for performing a packet header lookup |
US12/165,623 Abandoned US20080273539A1 (en) | 2005-04-01 | 2008-06-30 | System for performing a packet header lookup |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/096,362 Expired - Fee Related US7492771B2 (en) | 2005-04-01 | 2005-04-01 | Method for performing a packet header lookup |
Country Status (1)
Country | Link |
---|---|
US (2) | US7492771B2 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090073978A1 (en) * | 2007-09-14 | 2009-03-19 | International Business Machines Corporation | Low Latency Multicast for Infiniband® Host Channel Adapters |
US20090077567A1 (en) * | 2007-09-14 | 2009-03-19 | International Business Machines Corporation | Adaptive Low Latency Receive Queues |
US20140164553A1 (en) * | 2012-12-12 | 2014-06-12 | International Business Machines Corporation | Host ethernet adapter frame forwarding |
US11671280B2 (en) * | 2020-02-20 | 2023-06-06 | Nxp B.V. | Network node with diagnostic signalling mode |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8296354B2 (en) | 2004-12-03 | 2012-10-23 | Microsoft Corporation | Flexibly transferring typed application data |
US8424020B2 (en) * | 2006-01-31 | 2013-04-16 | Microsoft Corporation | Annotating portions of a message with state properties |
US7715428B2 (en) * | 2007-01-31 | 2010-05-11 | International Business Machines Corporation | Multicore communication processing |
US8103764B2 (en) * | 2008-10-14 | 2012-01-24 | CacheIQ, Inc. | Method and apparatus for matching trigger pattern |
US8964748B2 (en) * | 2009-04-17 | 2015-02-24 | Genband Us Llc | Methods, systems, and computer readable media for performing flow compilation packet processing |
US8291058B2 (en) * | 2010-02-19 | 2012-10-16 | Intrusion, Inc. | High speed network data extractor |
US8902890B2 (en) | 2011-05-27 | 2014-12-02 | International Business Machines Corporation | Memory saving packet modification |
CN102780619B (en) * | 2012-07-23 | 2015-03-11 | 北京星网锐捷网络技术有限公司 | Method and device for processing message |
CN107515775B (en) * | 2016-06-15 | 2021-11-19 | 华为技术有限公司 | Data transmission method and device |
Citations (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US1724198A (en) * | 1927-06-30 | 1929-08-13 | Utica Products Inc | Electric heater |
US4825406A (en) * | 1981-10-05 | 1989-04-25 | Digital Equipment Corporation | Secondary storage facility employing serial communications between drive and controller |
US5172371A (en) * | 1990-08-09 | 1992-12-15 | At&T Bell Laboratories | Growable switch |
US5359659A (en) * | 1992-06-19 | 1994-10-25 | Doren Rosenthal | Method for securing software against corruption by computer viruses |
US5442802A (en) * | 1992-01-03 | 1995-08-15 | International Business Machines Corporation | Asynchronous co-processor data mover method and means |
US5983274A (en) * | 1997-05-08 | 1999-11-09 | Microsoft Corporation | Creation and use of control information associated with packetized network data by protocol drivers and device drivers |
US5991299A (en) * | 1997-09-11 | 1999-11-23 | 3Com Corporation | High speed header translation processing |
US6041058A (en) * | 1997-09-11 | 2000-03-21 | 3Com Corporation | Hardware filtering method and apparatus |
US6266700B1 (en) * | 1995-12-20 | 2001-07-24 | Peter D. Baker | Network filtering system |
US20020048270A1 (en) * | 1999-08-27 | 2002-04-25 | Allen James Johnson | Network switch using network processor and methods |
US20030022792A1 (en) * | 1998-08-13 | 2003-01-30 | Erwing Hacker | Herbicidal compositions for tolerant or resistant cereal crops |
US20030026252A1 (en) * | 2001-07-31 | 2003-02-06 | Thunquest Gary L. | Data packet structure for directly addressed multicast protocol |
US20030103459A1 (en) * | 2001-11-16 | 2003-06-05 | Connors Dennis P. | Method and implementation for a flow specific modified selective-repeat ARQ communication system |
US20030154399A1 (en) * | 2002-02-08 | 2003-08-14 | Nir Zuk | Multi-method gateway-based network security systems and methods |
US6678746B1 (en) * | 2000-08-01 | 2004-01-13 | Hewlett-Packard Development Company, L.P. | Processing network packets |
US20040117600A1 (en) * | 2002-12-12 | 2004-06-17 | Nexsil Communications, Inc. | Native Lookup Instruction for File-Access Processor Searching a Three-Level Lookup Cache for Variable-Length Keys |
US20040128398A1 (en) * | 2001-02-15 | 2004-07-01 | Banderacom | Work queue to TCP/IP translation |
US20040177275A1 (en) * | 2003-03-06 | 2004-09-09 | Rose Kenneth M. | Apparatus and method for filtering IP packets |
US20050022017A1 (en) * | 2003-06-24 | 2005-01-27 | Maufer Thomas A. | Data structures and state tracking for network protocol processing |
US20050174153A1 (en) * | 2004-02-09 | 2005-08-11 | Nec Electronics Corporation | Fractional frequency divider circuit and data transmission apparatus using the same |
US6937574B1 (en) * | 1999-03-16 | 2005-08-30 | Nortel Networks Limited | Virtual private networks and methods for their operation |
US6970419B1 (en) * | 1998-08-07 | 2005-11-29 | Nortel Networks Limited | Method and apparatus for preserving frame ordering across aggregated links between source and destination nodes |
US20060031600A1 (en) * | 2004-08-03 | 2006-02-09 | Ellis Jackson L | Method of processing a context for execution |
US7003118B1 (en) * | 2000-11-27 | 2006-02-21 | 3Com Corporation | High performance IPSEC hardware accelerator for packet classification |
US7062570B2 (en) * | 2000-08-04 | 2006-06-13 | Avaya Technology, Corp. | High performance server farm with tagging and pipelining |
US20060216958A1 (en) * | 2005-03-25 | 2006-09-28 | Cisco Technology, Inc. (A California Corporation) | Carrier card converter for 10 gigabit ethernet slots |
US7131140B1 (en) * | 2000-12-29 | 2006-10-31 | Cisco Technology, Inc. | Method for protecting a firewall load balancer from a denial of service attack |
US20070169179A1 (en) * | 1998-06-15 | 2007-07-19 | Intel Corporation | Tightly coupled scalar and boolean processor |
US7286557B2 (en) * | 2001-11-16 | 2007-10-23 | Intel Corporation | Interface and related methods for rate pacing in an ethernet architecture |
US7298761B2 (en) * | 2003-05-09 | 2007-11-20 | Institute For Information Industry | Link path searching and maintaining method for a bluetooth scatternet |
US7349399B1 (en) * | 2002-09-20 | 2008-03-25 | Redback Networks, Inc. | Method and apparatus for out-of-order processing of packets using linked lists |
US7360217B2 (en) * | 2001-09-28 | 2008-04-15 | Consentry Networks, Inc. | Multi-threaded packet processing engine for stateful packet processing |
US7366194B2 (en) * | 2001-04-18 | 2008-04-29 | Brocade Communications Systems, Inc. | Fibre channel zoning by logical unit number in hardware |
Family Cites Families (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5058110A (en) | 1989-05-03 | 1991-10-15 | Ultra Network Technologies | Protocol processor |
US5430842A (en) | 1992-05-29 | 1995-07-04 | Hewlett-Packard Company | Insertion of network data checksums by a network adapter |
US5752078A (en) * | 1995-07-10 | 1998-05-12 | International Business Machines Corporation | System for minimizing latency data reception and handling data packet error if detected while transferring data packet from adapter memory to host memory |
US6226680B1 (en) | 1997-10-14 | 2001-05-01 | Alacritech, Inc. | Intelligent network interface system method for protocol processing |
US6434620B1 (en) | 1998-08-27 | 2002-08-13 | Alacritech, Inc. | TCP/IP offload network interface device |
US6658002B1 (en) | 1998-06-30 | 2003-12-02 | Cisco Technology, Inc. | Logical operation unit for packet processing |
US6650640B1 (en) | 1999-03-01 | 2003-11-18 | Sun Microsystems, Inc. | Method and apparatus for managing a network flow in a high performance network interface |
US6400730B1 (en) | 1999-03-10 | 2002-06-04 | Nishan Systems, Inc. | Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network |
GB2352360B (en) | 1999-07-20 | 2003-09-17 | Sony Uk Ltd | Network terminator |
US6427169B1 (en) | 1999-07-30 | 2002-07-30 | Intel Corporation | Parsing a packet header |
US6724769B1 (en) | 1999-09-23 | 2004-04-20 | Advanced Micro Devices, Inc. | Apparatus and method for simultaneously accessing multiple network switch buffers for storage of data units of data frames |
US6788697B1 (en) | 1999-12-06 | 2004-09-07 | Nortel Networks Limited | Buffer management scheme employing dynamic thresholds |
US6822968B1 (en) | 1999-12-29 | 2004-11-23 | Advanced Micro Devices, Inc. | Method and apparatus for accounting for delays caused by logic in a network interface by integrating logic into a media access controller |
US7308006B1 (en) | 2000-02-11 | 2007-12-11 | Lucent Technologies Inc. | Propagation and detection of faults in a multiplexed communication system |
US6735670B1 (en) | 2000-05-12 | 2004-05-11 | 3Com Corporation | Forwarding table incorporating hash table and content addressable memory |
US6754662B1 (en) | 2000-08-01 | 2004-06-22 | Nortel Networks Limited | Method and apparatus for fast and consistent packet classification via efficient hash-caching |
US8019901B2 (en) | 2000-09-29 | 2011-09-13 | Alacritech, Inc. | Intelligent network storage interface system |
US7218632B1 (en) * | 2000-12-06 | 2007-05-15 | Cisco Technology, Inc. | Packet processing engine architecture |
US6954463B1 (en) | 2000-12-11 | 2005-10-11 | Cisco Technology, Inc. | Distributed packet processing architecture for network access servers |
US7023811B2 (en) | 2001-01-17 | 2006-04-04 | Intel Corporation | Switched fabric network and method of mapping nodes using batch requests |
US6728929B1 (en) | 2001-02-16 | 2004-04-27 | Spirent Communications Of Calabasas, Inc. | System and method to insert a TCP checksum in a protocol neutral manner |
US7292586B2 (en) | 2001-03-30 | 2007-11-06 | Nokia Inc. | Micro-programmable protocol packet parser and encapsulator |
US7274706B1 (en) * | 2001-04-24 | 2007-09-25 | Syrus Ziai | Methods and systems for processing network data |
JP3936550B2 (en) | 2001-05-14 | 2007-06-27 | 富士通株式会社 | Packet buffer |
US7164678B2 (en) | 2001-06-25 | 2007-01-16 | Intel Corporation | Control of processing order for received network packets |
US6976205B1 (en) | 2001-09-21 | 2005-12-13 | Syrus Ziai | Method and apparatus for calculating TCP and UDP checksums while preserving CPU resources |
US7124198B2 (en) | 2001-10-30 | 2006-10-17 | Microsoft Corporation | Apparatus and method for scaling TCP off load buffer requirements by segment size |
US6907466B2 (en) | 2001-11-08 | 2005-06-14 | Extreme Networks, Inc. | Methods and systems for efficiently delivering data to a plurality of destinations in a computer network |
EP1464144A4 (en) | 2001-11-09 | 2010-09-22 | Vitesse Semiconductor Corp | A means and a method for switching data packets or frames |
US7236492B2 (en) | 2001-11-21 | 2007-06-26 | Alcatel-Lucent Canada Inc. | Configurable packet processor |
WO2003049488A1 (en) | 2001-12-03 | 2003-06-12 | Vitesse Semiconductor Company | Interface to operate groups of inputs/outputs |
US7269661B2 (en) | 2002-02-12 | 2007-09-11 | Bradley Richard Ree | Method using receive and transmit protocol aware logic modules for confirming checksum values stored in network packet |
US7047374B2 (en) | 2002-02-25 | 2006-05-16 | Intel Corporation | Memory read/write reordering |
US7283528B1 (en) | 2002-03-22 | 2007-10-16 | Raymond Marcelino Manese Lim | On the fly header checksum processing using dedicated logic |
US7031304B1 (en) | 2002-09-11 | 2006-04-18 | Redback Networks Inc. | Method and apparatus for selective packet Mirroring |
KR100486713B1 (en) | 2002-09-17 | 2005-05-03 | 삼성전자주식회사 | Apparatus and method for streaming multimedia data |
US7271706B2 (en) | 2002-10-09 | 2007-09-18 | The University Of Mississippi | Termite acoustic detection |
KR100454681B1 (en) | 2002-11-07 | 2004-11-03 | 한국전자통신연구원 | An Ethernet switching Apparatus and Method using Frame Multiplexing and Demultiplexing |
KR100460672B1 (en) | 2002-12-10 | 2004-12-09 | 한국전자통신연구원 | Line interface apparatus for 10 gigabit ethernet and control method thereof |
US20040218623A1 (en) | 2003-05-01 | 2004-11-04 | Dror Goldenberg | Hardware calculation of encapsulated IP, TCP and UDP checksums by a switch fabric channel adapter |
US7098685B1 (en) | 2003-07-14 | 2006-08-29 | Lattice Semiconductor Corporation | Scalable serializer-deserializer architecture and programmable interface |
US8776050B2 (en) | 2003-08-20 | 2014-07-08 | Oracle International Corporation | Distributed virtual machine monitor for managing multiple virtual resources across multiple physical nodes |
JP4437650B2 (en) | 2003-08-25 | 2010-03-24 | 株式会社日立製作所 | Storage system |
US7441179B2 (en) | 2003-10-23 | 2008-10-21 | Intel Corporation | Determining a checksum from packet data |
US20050114710A1 (en) | 2003-11-21 | 2005-05-26 | Finisar Corporation | Host bus adapter for secure network devices |
US7292591B2 (en) | 2004-03-30 | 2007-11-06 | Extreme Networks, Inc. | Packet processing system architecture and method |
US7502474B2 (en) | 2004-05-06 | 2009-03-10 | Advanced Micro Devices, Inc. | Network interface with security association data prefetch for high speed offloaded security processing |
US7134796B2 (en) | 2004-08-25 | 2006-11-14 | Opnext, Inc. | XFP adapter module |
US7436773B2 (en) | 2004-12-07 | 2008-10-14 | International Business Machines Corporation | Packet flow control in switched full duplex ethernet networks |
US8040903B2 (en) | 2005-02-01 | 2011-10-18 | Hewlett-Packard Development Company, L.P. | Automated configuration of point-to-point load balancing between teamed network resources of peer devices |
US20050174153A1 (en) * | 2004-02-09 | 2005-08-11 | Nec Electronics Corporation | Fractional frequency divider circuit and data transmission apparatus using the same |
US20060031600A1 (en) * | 2004-08-03 | 2006-02-09 | Ellis Jackson L | Method of processing a context for execution |
US20060216958A1 (en) * | 2005-03-25 | 2006-09-28 | Cisco Technology, Inc. (A California Corporation) | Carrier card converter for 10 gigabit ethernet slots |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090073978A1 (en) * | 2007-09-14 | 2009-03-19 | International Business Machines Corporation | Low Latency Multicast for Infiniband® Host Channel Adapters |
US20090077567A1 (en) * | 2007-09-14 | 2009-03-19 | International Business Machines Corporation | Adaptive Low Latency Receive Queues |
US7899050B2 (en) * | 2007-09-14 | 2011-03-01 | International Business Machines Corporation | Low latency multicast for infiniband® host channel adapters |
US8265092B2 (en) | 2007-09-14 | 2012-09-11 | International Business Machines Corporation | Adaptive low latency receive queues |
US20140164553A1 (en) * | 2012-12-12 | 2014-06-12 | International Business Machines Corporation | Host ethernet adapter frame forwarding |
US9137167B2 (en) * | 2012-12-12 | 2015-09-15 | International Business Machines Corporation | Host ethernet adapter frame forwarding |
US11671280B2 (en) * | 2020-02-20 | 2023-06-06 | Nxp B.V. | Network node with diagnostic signalling mode |
Also Published As
Publication number | Publication date |
---|---|
US7492771B2 (en) | 2009-02-17 |
US20060221966A1 (en) | 2006-10-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7492771B2 (en) | Method for performing a packet header lookup | |
US7586936B2 (en) | Host Ethernet adapter for networking offload in server environment | |
EP2019360B1 (en) | Data processing apparatus and data transfer method | |
US9727508B2 (en) | Address learning and aging for network bridging in a network processor | |
Honda et al. | mSwitch: a highly-scalable, modular software switch | |
JP3645734B2 (en) | Network relay device and network relay method | |
US9154442B2 (en) | Concurrent linked-list traversal for real-time hash processing in multi-core, multi-thread network processors | |
US6731652B2 (en) | Dynamic packet processor architecture | |
JP3640299B2 (en) | A proposal and response architecture for route lookup and packet classification requests | |
US20060227811A1 (en) | TCP engine | |
US20100023626A1 (en) | Method and apparatus for reducing host overhead in a socket server implementation | |
US20210160350A1 (en) | Generating programmatically defined fields of metadata for network packets | |
US20130089099A1 (en) | Modifying Data Streams without Reordering in a Multi-Thread, Multi-Flow Network Communications Processor Architecture | |
CN101227296B (en) | Method, system for transmitting PCIE data and plate card thereof | |
JP2002510452A (en) | Search engine architecture for high performance multilayer switch elements | |
US7903687B2 (en) | Method for scheduling, writing, and reading data inside the partitioned buffer of a switch, router or packet processing device | |
US9274586B2 (en) | Intelligent memory interface | |
US20220206957A1 (en) | Methods and systems for using a packet processing pipeline to accelerate infiniband administrative operations | |
US20030147394A1 (en) | Network switch with parallel working of look-up engine and network processor | |
Ding et al. | A split architecture approach to terabyte-scale caching in a protocol-oblivious forwarding switch | |
JP2001237881A (en) | Table type data retrieval device and packet processing system using it, and table type data retrieval method | |
US10506044B1 (en) | Statistics collecting architecture | |
US7706409B2 (en) | System and method for parsing, filtering, and computing the checksum in a host Ethernet adapter (HEA) | |
JP2003218907A (en) | Processor with reduced memory requirements for high- speed routing and switching of packets | |
JP3645733B2 (en) | Network relay device and network relay method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |