US20050195832A1 - Method and system for performing longest prefix matching for network address lookup using bloom filters - Google Patents

Method and system for performing longest prefix matching for network address lookup using bloom filters Download PDF

Info

Publication number
US20050195832A1
Authority
US
United States
Prior art keywords
prefixes
prefix
bloom filters
hash
lookup
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/055,767
Other versions
US7602785B2
Inventor
Sarang Dharmapurikar
Praveen Krishnamurthy
David Taylor
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Washington University in St Louis WUSTL
Original Assignee
Washington University in St Louis WUSTL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Washington University in St Louis WUSTL filed Critical Washington University in St Louis WUSTL
Priority to US11/055,767 priority Critical patent/US7602785B2/en
Assigned to WASHINGTON UNIVERSITY reassignment WASHINGTON UNIVERSITY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DHARMAPURIKAR, SARANG, KRISHNAMURTHY, PRAVEEN, TAYLOR, DAVID E.
Publication of US20050195832A1 publication Critical patent/US20050195832A1/en
Assigned to NATIONAL SCIENCE FOUNDATION reassignment NATIONAL SCIENCE FOUNDATION CONFIRMATORY LICENSE (SEE DOCUMENT FOR DETAILS). Assignors: WASHINGTON UNIVERSITY
Priority to US12/566,150 priority patent/US20100098081A1/en
Application granted granted Critical
Publication of US7602785B2 publication Critical patent/US7602785B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/74: Address processing for routing
    • H04L45/745: Address table lookup; Address filtering
    • H04L45/74591: Address table lookup; Address filtering using content-addressable memories [CAM]

Definitions

  • the present invention relates to network communication routing and, in particular, to a method and system of performing longest prefix matching for network address lookup using Bloom filters.
  • LPM: Longest Prefix Matching
  • IPv4: Internet Protocol Version 4
  • This protocol requires Internet routers to search variable-length address prefixes in order to find the longest matching prefix of the network destination address of each packet traveling through the router and retrieve the corresponding forwarding information.
  • This computationally intensive task, commonly referred to as network address lookup, is often the performance bottleneck in high-performance Internet routers due to the number of off-chip memory accesses required per lookup.
  • TCAM: Ternary Content Addressable Memory
  • TCAMs are less dense than SRAM and support roughly 100 M random accesses per second, which is over 3.3 times slower than SRAMs capable of 333 M random accesses per second, due to the capacitive loading induced by their parallelism. Further, TCAM power consumption per bit of storage is four orders of magnitude higher than SRAM.
  • Techniques such as Trie-based systems, Tree Bitmap, Multiway and Multicolumn Search, and Binary Search on Prefix Length may make use of commodity SRAM and SDRAM devices. However, these techniques have not provided performance advantages that are independent of IP address length, nor improved scalability.
  • Methods and systems consistent with the present invention employ Bloom filters for Longest Prefix Matching. Bloom filters, which are efficient data structures for membership queries with tunable false positive errors, are typically used for efficient exact match searches. The probability of a false positive depends upon the number of entries stored in the filter, the size of the filter, and the number of hash functions used to probe the filter.
  • Methods consistent with the present invention perform a network address lookup by sorting forwarding table entries by prefix length, associating a Bloom filter with each unique prefix length, and “programming” each Bloom filter with prefixes of its associated length.
  • a network address lookup search in accordance with methods consistent with the present invention begins by performing parallel membership queries to the Bloom filters by using the appropriate segments of the input IP address.
  • the result of this step is a vector of matching prefix lengths, some of which may be false matches.
  • a hash table corresponding to each prefix length may then be probed in the order of longest match in the vector to shortest match in the vector, terminating when a match is found or all of the lengths represented in the vector are searched.
  • One aspect of the present invention is that the performance, as determined by the number of dependent memory accesses per lookup, may be held constant for longer address lengths or additional unique address prefix lengths in the forwarding table given that memory resources scale linearly with the number of prefixes in the forwarding table.
  • Methods consistent with the present invention may include optimizations, such as asymmetric Bloom filters that dimension filters according to prefix length distribution, to provide optimal average case performance for a network address lookup while limiting worst case performance. Accordingly, with a modest amount of embedded RAM for Bloom filters, the average number of hash probes to tables stored in a separate memory device approaches one. By employing a direct lookup array and properly configuring the Bloom filters, the worst case may be held to two hash probes and one array access per lookup while maintaining near optimal average performance of one hash probe per lookup.
  • Implementation with current technology is capable of average performance of over 300 M lookups per second and worst case performance of over 100 M lookups per second using a commodity SRAM device operating at 333 MHz.
  • Methods consistent with the present invention offer better performance, scalability, and lower cost than TCAMs, given that commodity SRAM devices are denser, cheaper, and operate more than three times faster than TCAM-based solutions.
  • Specifically, in accordance with methods consistent with the present invention, a method of performing a network address lookup comprises: grouping forwarding entries from a routing table by prefix length; associating each of a plurality of Bloom filters with a unique prefix length; programming said plurality of Bloom filters with said associated set of prefixes; and performing membership probes to said Bloom filters by using predetermined prefixes of a network address.
  • In accordance with systems consistent with the present invention, a system is provided for performing a network address lookup. The system comprises means for sorting forwarding entries from a routing table by prefix length, means for associating each of a plurality of Bloom filters with a unique prefix length, means for programming said plurality of Bloom filters with said associated set of prefixes, and means for performing membership queries to said Bloom filters by using predetermined prefixes of a network address.
  • FIG. 1 depicts an exemplary system for performing longest prefix matching using Bloom filters according to one embodiment consistent with the present invention
  • FIG. 2 depicts an average prefix length distribution for IPv4 Border Gateway Protocol (“BGP”) table snapshots according to one embodiment consistent with the present invention
  • FIG. 3 depicts an expected number of hash probes per lookup, Eexp, versus total embedded memory size, M, for various values of total prefixes, N, using a basic configuration for IPv4 with 32 asymmetric Bloom filters, according to one embodiment consistent with the present invention
  • FIG. 4 depicts a direct lookup array for the first three prefix lengths according to one embodiment consistent with the present invention
  • FIG. 5 depicts an expected number of hash probes per lookup, Eexp, versus total embedded memory size, M, for various values of total prefixes, N, using a direct lookup array for prefix lengths 1 . . . 20 and 12 Bloom filters for prefix lengths 21 . . . 32, according to one embodiment consistent with the present invention
  • FIG. 6 depicts an expected number of hash probes per lookup, Eexp, versus total embedded memory size, M, for various values of total prefixes, N, using a direct lookup array for prefix lengths 1 . . . 20, and two Bloom filters for prefix lengths 21 . . . 24 and 25 . . . 32, according to one embodiment consistent with the present invention;
  • FIG. 8 depicts a combined prefix length distribution for Internet Protocol Version 6 (“IPv6”) BGP table snapshots, according to one embodiment consistent with the present invention
  • FIG. 9 depicts a plurality of Mini-Bloom filters which allow the system, according to one embodiment consistent with the present invention, to adapt to prefix distribution; the dashed line shows a programming path for a prefix of length 2, and the solid line illustrates query paths for an input IP address.
  • Methods consistent with the present invention employ an LPM technique that provides better performance and scalability than conventional TCAM-based techniques for IP network address lookup.
  • the present invention exhibits several advantages over conventional techniques, since the number of dependent memory accesses required for a lookup is virtually independent of the length of the IP network address and the number of unique prefix lengths (in other words, statistical performance may be held constant for arbitrary address lengths provided ample memory resources). Scaling the present invention to IPv6 does not degrade lookup performance and requires more on-chip memory for Bloom filters only if the number of stored unique prefix lengths increases.
  • FIG. 1 depicts an exemplary system 100 consistent with the present invention for performing a network address lookup using longest prefix matching that employs Bloom filters.
  • The system 100 is operatively connected to a router 50 to receive an IP address 52, such as a destination network address, from the payload of a packet (not shown in figures) traversing the router 50.
  • the system 100 may be incorporated into the router 50 .
  • the system 100 includes a group of Bloom filters 101 that are operatively configured to determine IP network address prefix memberships in sets of prefixes that are sorted by prefix length.
  • The system 100 may also include a group of Counting Bloom filters 102, each of which is operatively connected to a respective Bloom filter 101, and a hash table 103, preferably an off-chip hash table, that is operatively connected to the Bloom filters 101.
  • a network address lookup search executed by the system 100 in accordance with methods consistent with the present invention begins by performing parallel membership queries to the Bloom filters 101 , which are organized by prefix length. The result is a vector 104 in FIG. 1 of matching prefix lengths, some of which may be false matches.
  • the hash table 103 has all the prefixes in the routing table and is operatively configured to be probed in order of the longest match in the vector 104 to the shortest match in the vector 104 , terminating when a match is found or all of the lengths represented in the vector are searched.
  • the hash table 103 may be one of a multiple of hash tables, each containing prefixes of a particular length, operatively configured to be probed.
  • For a modest amount of on-chip resources for Bloom filters, the expected number of off-chip memory accesses required by the system 100 per network address lookup approaches one, providing better performance, scalability, and lower cost than TCAMs, given that commodity SRAM devices are denser, cheaper, and operate more than three times faster than TCAM-based solutions.
  • Each Bloom filter 101 is a data structure used for representing a set of messages succinctly (see B. Bloom, "Space/time trade-offs in hash coding with allowable errors", Communications of the ACM, 13(7):422-426, July 1970).
  • Each Bloom filter 101 includes a bit-vector of length m used to efficiently represent a set of messages, such as IP addresses that the router 50 may be expected to receive in a packet payload.
  • Given a set of messages X with n members, for each message xi in X, the Bloom filter 101 may compute k hash functions on xi, producing k hash values each ranging from 1 to m.
  • Each of these values addresses a single bit in the m-bit vector; hence each message xi causes k bits in the m-bit long vector to be set to 1. Note that if one of the k hash values addresses a bit that is already set to 1, that bit is not changed. This same procedure is repeated for all the members of the set, and is referred to herein as “programming” the Bloom filter.
  • Querying the filter for membership of a given message x is similar to programming. Given message x, the Bloom filter generates k hash values using the same hash functions it used to program the filter. The bits in the m-bit long vector at the locations corresponding to the k hash values are checked. If at least one of these k bits is 0, then the message is declared to be a non-member of the set of messages. If all the k bits are found to be 1, then the message is said to belong to the set with a certain probability. If all the k bits are found to be 1 and x is not a member of X, then it is said to be a false positive.
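  • For illustration only, the programming and query procedures described above may be sketched in software. The following Python sketch is the editor's, not the patent's implementation: the double-hashing scheme, the SHA-256 digest, and all names are assumptions standing in for the k hardware hash functions.

      import hashlib

      class BloomFilter:
          def __init__(self, m, k):
              self.m = m              # length of the bit-vector
              self.k = k              # number of hash functions
              self.bits = [0] * m

          def _hash_values(self, message):
              # Derive k hash positions in [0, m) by double hashing a digest
              # (an illustrative stand-in for k independent hash functions).
              digest = hashlib.sha256(message.encode()).digest()
              h1 = int.from_bytes(digest[:8], "big")
              h2 = int.from_bytes(digest[8:16], "big") | 1
              return [(h1 + i * h2) % self.m for i in range(self.k)]

          def program(self, message):
              for pos in self._hash_values(message):
                  self.bits[pos] = 1   # bits already set to 1 are unchanged

          def query(self, message):
              # All k bits set -> possible member (false positives possible);
              # any bit clear -> definitely not a member (no false negatives).
              return all(self.bits[pos] for pos in self._hash_values(message))

      bf = BloomFilter(m=1024, k=6)
      bf.program("10.1.0.0/16")
      assert bf.query("10.1.0.0/16")   # a programmed member always matches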
  • In the derivation of the false positive probability, the probability that a random bit of the m-bit vector is set to 1 by a single hash function is simply 1/m.
  • The probability that it is not set is 1 − 1/m.
  • The probability that it is not set by any of the n members of X is (1 − 1/m)^n. Since each of the messages sets k bits in the vector, this becomes (1 − 1/m)^nk. Hence, the probability that this bit is found to be 1 is 1 − (1 − 1/m)^nk.
  • the ratio m/n may be interpreted as the average number of bits consumed by a single member of the set of messages. It should be noted that this space requirement is independent of the actual size of the member. In the optimal case, the false positive probability is decreased exponentially with a linear increase in the ratio m/n. In addition, this implies that the number of hash functions k, and hence the number of random lookups in the bit vector required to query membership of one message in the set of messages is proportional to m/n.
  • If the false positive probability is to be held fixed, the amount of memory resources, m, needs to scale linearly with the size of the message set, n.
  • One property of Bloom filters is that it is not possible to delete a member stored in the filter. Deleting a particular message entry from the set programmed into the Bloom filter requires that the corresponding k hashed bits in the bit vector (e.g., vector 104) be set to zero. This could disturb other members programmed into the Bloom filter which hash to (or set to one) any of these bits.
  • each Counting Bloom filter 102 has a vector of counters corresponding to each bit in the bit-vector. Whenever a member or message (e.g., IP address 52 prefix) is added to or deleted from the set of messages (or prefixes) programmed in the filter 102 , the counters corresponding to the k hash values are incremented or decremented, respectively. When a counter changes from zero to one, the corresponding bit in the bit-vector is set. When a counter changes from one to zero, the corresponding bit in the bit-vector is cleared.
  • The counters are changed only during addition and deletion of prefixes in the Bloom filter. These updates are relatively infrequent compared with the query process itself. Hence, the counters may be maintained in software while the bit corresponding to each counter is maintained in hardware; by avoiding counter implementation in hardware, memory resources may be saved.
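  • Extending the hypothetical BloomFilter sketch above, the counter-per-bit update rule might look as follows; keeping counters and bits as separate structures loosely models the software/hardware split just described.

      class CountingBloomFilter(BloomFilter):
          def __init__(self, m, k):
              super().__init__(m, k)
              self.counters = [0] * m      # maintained in software

          def add(self, message):
              for pos in self._hash_values(message):
                  self.counters[pos] += 1
                  if self.counters[pos] == 1:
                      self.bits[pos] = 1   # counter 0 -> 1 sets the hardware bit

          def delete(self, message):
              for pos in self._hash_values(message):
                  self.counters[pos] -= 1
                  if self.counters[pos] == 0:
                      self.bits[pos] = 0   # counter 1 -> 0 clears the hardware bit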
  • An important property of Bloom filters is that the computation time involved in performing a query is independent of the number of prefixes programmed in the filter, provided, as stated above, that the memory m used by the data structure varies linearly with the number of strings n stored in it. Further, the amount of storage required by the Bloom filter for each prefix is independent of the prefix length. Still further, the computation, which requires generation of hash values, may be performed in special-purpose hardware.
  • the present invention leverages advances in modern hardware technology along with the efficiency of Bloom filters to perform longest prefix matching using a custom logic device with a modest amount of embedded SRAM and a commodity off-chip SRAM device.
  • a commodity DRAM (Dynamic Random Access Memory) device could also be used, further reducing cost and power consumption but increasing the “off-chip” memory access period.
  • the network address lookup performance is independent of address length, prefix length, and the number of unique prefix lengths in the database, and the average number of “off-chip” memory accesses per lookup approaches one. Hence, lookup throughput scales directly with the memory device access period.
  • the plurality of IP address 52 prefixes (e.g., forwarding prefixes) from a routing table 58 in FIG. 1 that are expected to be received by the system are grouped into sets according to prefix length.
  • the system 100 employs a set of W Bloom filters 101 , where W is the number of unique prefix lengths of the prefixes in the routing table, and associates one filter 101 with each unique prefix length.
  • the Bloom filters 101 are Counting Bloom filters. Each filter 101 is “programmed” with the associated set of prefixes according to the previously described procedure.
  • Although the bit-vectors associated with each Bloom filter 101 are stored in embedded memory 105, the counters 102 associated with each filter 101 may be maintained, for example, by a separate control processor (not shown in figures) responsible for managing route updates. Separate control processors with ample memory are common features of high-performance routers.
  • the hash table 103 is also constructed for all the prefixes where each hash entry is a [prefix, next hop] pair. Although it is assumed, for example, that the result of a match is the next hop for the packet being traversed through the router 50 , more elaborate information may be associated with each prefix if desired.
  • the hash table 103 may be one of a group of hash tables each containing the prefixes of a particular length. However, a single hash table 103 is preferred.
  • the single hash table 103 or the set of hash tables 103 may be stored off-chip in a separate memory device; for example, a large, high-speed SRAM.
  • a network address lookup search executed by the system 100 in accordance with methods consistent with the present invention may proceed as follows.
  • the input IP address 52 is used to probe the set of W Bloom filters 101 in parallel.
  • the one-bit prefix of the address 52 is used to probe the respective filter 101 associated with length one prefixes
  • the two-bit prefix of the address is used to probe the respective filter 101 associated with length two prefixes, and so on.
  • Each filter 101 indicates a “match” or “no match.”
  • a vector 104 of potentially matching prefix lengths for the given address is composed, referenced herein as the “match vector.”
  • Bloom filters 101 may produce false positives, but never produce false negatives; therefore, if a matching prefix exists in the database, it will be represented in the match vector.
  • the network address lookup search executed by the system 100 in accordance with methods consistent with the present invention then proceeds by probing the hash table 103 with the prefixes represented in the vector 104 in order from the longest prefix to the shortest until a match is found or the vector 104 is exhausted.
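  • The overall search might be summarized by the following Python sketch (editor's illustration: the per-length filters, the "prefix/length" key encoding, and a single hash table keyed by (prefix bits, length) are assumed data layouts, and the filter probes that run in parallel in hardware appear here as a loop):

      def lpm_lookup(addr, filters, hash_table, W=32):
          # filters: dict mapping prefix length -> BloomFilter, programmed
          #   with the same f"{prefix}/{length}" keys used to query below.
          # hash_table: dict mapping (prefix_bits, length) -> next hop.
          match_vector = []
          for length in sorted(filters):        # parallel probes in hardware
              prefix = addr >> (W - length)     # leading `length` bits
              if filters[length].query(f"{prefix}/{length}"):
                  match_vector.append(length)
          for length in sorted(match_vector, reverse=True):  # longest first
              next_hop = hash_table.get((addr >> (W - length), length))
              if next_hop is not None:          # true match, not a false positive
                  return next_hop
          return None                           # fall through to default route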
  • the number of hash probes required to determine the correct prefix length for an IP address is determined by the number of matching Bloom filters 101 .
  • All Bloom filters 101 are tuned to the same false positive probability, f. This may be achieved by selecting appropriate values for m for each filter 101.
  • Let B_l represent the number of Bloom filters 101 for the prefixes of length greater than l. The probability P_l that exactly i of those B_l filters generate false positives follows the binomial distribution:

    P_l = C(B_l, i) × f^i × (1 − f)^(B_l − i)  (5)
  • Equation 9 gives the expected number of hash probes for a longest prefix match
  • Equation 10 provides the maximum number of hash probes for a worst case lookup.
  • the value of B is equal to W.
  • the system 100 provides high performance independent of prefix database characteristics and input address patterns, with a search engine (e.g., search engine 110 in FIG. 1 ) that achieves, for example, an average of one hash probe per lookup, bounds the worst case search, and utilizes a small amount of embedded memory.
  • Let N be the target number of prefixes supported by the system, and let f_i be the false positive probability of the i-th Bloom filter.
  • the expected number of hash probes executed by the system 100 per lookup depends only on the total amount of memory resources, M, and the total number of supported prefixes, N. This is independent from the number of unique prefix lengths and the distribution of prefixes among the prefix lengths.
  • The expected number of hash probes per lookup, Eexp, is plotted versus total embedded memory size M for various values of N in FIG. 3.
  • the expected number of hash probes per lookup is less than two for 250,000 prefixes.
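  • As a rough sanity check of these figures (an editor's calculation assuming each of the B filters adds at most f to the expected probe count, so Eexp ≤ 1 + B·f, and using the optimal-case relations k = (M/N) ln 2 and f = (1/2)^k; Equations 9 and 10 themselves are not reproduced here):

      import math

      M = 2 * 2**20                         # 2 Mb of embedded memory, in bits
      N = 250_000                           # supported prefixes
      B = 32                                # one filter per IPv4 prefix length
      k = round((M / N) * math.log(2))      # Equation (3): k = 6
      f = 0.5 ** k                          # Equation (4): f = 1/64
      print(1 + B * f)                      # 1.5 expected probes, i.e. < 2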
  • the present exemplary system 100 is also memory efficient as it only requires 8 bits of embedded memory per prefix. Doubling the size of the embedded memory to 4 Mb, for example, provides near optimal average performance of one hash probe per lookup.
  • the worst case number of dependent memory accesses is simply 33.
  • the term for the access for the matching prefix may be omitted, because the default route may be stored internally.
  • the worst case number of dependent memory accesses is 32.
  • the system 100 may use a direct lookup array device ( 112 in FIG. 1 ) for the first few prefix lengths as an efficient way to represent shorter prefixes while reducing the number of Bloom filters 101 .
  • the number of worst case hash probes is reduced by one.
  • Use of the direct lookup array device 112 also reduces the amount of embedded memory required by the Bloom filters 101 to achieve optimal average performance, as the number of prefixes represented by Bloom filters is decreased.
  • This implementation of the direct lookup array device includes a direct lookup array 400 that is operatively connected to a binary trie device 402 and a controlled prefix expansion (CPE) trie 404 .
  • The prefixes of length ≤ a are stored in the binary trie 402.
  • The CPE trie 404 performs CPE with a stride length equal to a.
  • the next hop associated with each leaf at level a of the CPE trie is written to a respective array slot of the direct lookup array 400 addressed by the bits labeling the path from the root to the leaf.
  • the direct lookup array 400 is searched by using the first a bits of the IP destination address 52 to index into the array 400 . For example, as shown in FIG. 4 , an address 52 with initial bits 101 would result in a next hop of 4.
  • The direct lookup array 400 requires 2^a × NH_len bits of memory, where NH_len is the number of bits required to represent the next hop.
  • For a 256-port router (e.g., router 50), 8 bits are required to represent the next hop value, and for a = 20 the direct lookup array 400 requires 2^20 × 8 bits = 1 MB of memory.
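  • A minimal Python sketch of building and searching such an array (editor's illustration; writing longer prefixes after shorter ones is one way to leave the longest match in each slot):

      def build_direct_array(prefixes, a):
          # prefixes: list of (prefix_bits, length, next_hop) with length <= a
          array = [None] * (1 << a)
          for prefix, length, next_hop in sorted(prefixes, key=lambda p: p[1]):
              base = prefix << (a - length)          # expand to length a
              for slot in range(base, base + (1 << (a - length))):
                  array[slot] = next_hop
          return array

      def direct_lookup(array, addr, a, W=32):
          return array[addr >> (W - a)]              # index with first a bits

      # Mirrors the FIG. 4 example: initial bits 101 yield next hop 4.
      array = build_direct_array([(0b1, 1, 1), (0b101, 3, 4)], a=3)
      assert direct_lookup(array, 0b101 << 29, a=3) == 4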
  • Use of a direct lookup array 400 for the first 20 prefix lengths leaves prefix lengths 21 . . . 32 to Bloom filters 101 .
  • N[1:20] is the total number of prefixes with lengths [1:20].
  • the N [1:20] prefixes constitute 24.6% of the total prefixes in the sample IPv4 BGP tables. Therefore, 75.4% of the total prefixes N are represented in the Bloom filters 101 in this implementation.
  • the expected number of hash probes per lookup versus total embedded memory size for various values of N is shown in FIG. 5 .
  • the expected number of hash probes per lookup for databases containing 250,000 prefixes is less than two when using a small 1 Mb embedded memory. Doubling the size of the memory to 2 Mb, for example, reduces the expected number of hash probes per lookup to less than 1.1 for 250,000 prefix databases.
  • However, the worst case number of hash probes per lookup is still large.
  • the worst case is 13 dependent memory accesses per lookup.
  • a high-performance implementation option for the system 100 is to make the direct lookup array device 112 the final stage in a pipelined search architecture. IP destination addresses 52 that reach this stage with a null next hop value would use the next hop retrieved from the direct lookup array 400 of the device 112 .
  • a pipelined architecture requires a dedicated memory bank or port for the direct lookup array 400 .
  • the number of remaining Bloom filters 101 may be reduced by limiting the number of distinct prefix lengths via further use of Controlled Prefix Expansion (CPE). It is desirable to limit the worst case hash probes to as few as possible without prohibitively large embedded memory requirements.
  • The choice of CPE strides depends on the prefix distribution. As illustrated in the average distribution of IPv4 prefixes shown in FIG. 2, for example, in all of the sample databases that may be used to hold a routing table 58 of IP address 52 prefixes, there is a significant concentration of prefixes from lengths 21 to 24. On average, 75.2% of the N prefixes fall in the range of 21 to 24.
  • prefixes in the 25 to 32 range are extremely sparse. Specifically, 0.2% of the N prefixes fall in the range 25 to 32. (Note that 24.6% of the prefixes fall in the range of 1 to 20.)
  • the prefixes not covered by the direct lookup array 400 are divided into 2 groups, G 1 and G 2 , for example, corresponding to prefix lengths 21-24 and 25-32, respectively.
  • Each exemplary group is expanded out to the upper limit of the group so that G 1 contains only length 24 prefixes and G 2 contains only length 32 prefixes.
  • N [21:24] is the number of prefixes of length 21 to 24 before expansion
  • N [25:32] is the number of prefixes of length 25 to 32 before expansion.
  • the system 100 may have two Bloom filters 101 and the direct lookup array 400 , bounding the worst case lookup to two hash probes and one array lookup.
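  • The expansion into the two groups might be sketched as follows (editor's illustration; routing_entries and the tuple layout are hypothetical):

      def cpe_expand(prefix, length, target):
          # All target-bit prefixes covered by the original prefix.
          base = prefix << (target - length)
          return [base + i for i in range(1 << (target - length))]

      routing_entries = [(0b101010101010101010101, 21, 7)]  # sample (bits, len, hop)
      g1, g2 = [], []        # length-24 and length-32 entries after expansion
      for prefix, length, next_hop in routing_entries:
          if 21 <= length <= 24:
              g1 += [(p, next_hop) for p in cpe_expand(prefix, length, 24)]
          elif 25 <= length <= 32:
              g2 += [(p, next_hop) for p in cpe_expand(prefix, length, 32)]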
  • the expected number of hash probes per lookup versus total embedded memory M for various values of N is shown in FIG. 6 .
  • the expected number of hash probes per lookup for databases containing 250,000 prefixes is less than 1.6 when using a small 1 Mb embedded memory. Doubling the size of the memory to 2 Mb reduces the expected number of hash probes per lookup to less than 1.2 for 250,000 prefix databases.
  • the use of CPE to reduce the number of Bloom filters 101 allows the system 100 to perform a maximum of two hash probes and one array access per network address lookup, for example, while maintaining near optimal average network address lookup performance with modest use of embedded memory resources.
  • In the simulations, M = 2 Mb, for example, and m_i is adjusted for each asymmetric Bloom filter 101 according to the distribution of prefixes of the database under test.
  • the ANSI C rand function was used to generate hash values for the Bloom filters 101 , as well as the prefix hash tables 103 .
  • The collisions in the prefix hash tables 103 were around 0.8%, which is negligible.
  • IP addresses 52 were generated in proportion to the prefix distribution.
  • IP addresses corresponding to a 24 bit prefix in the database dominated the input traffic.
  • IP addresses were applied for each test run.
  • Input traffic patterns with randomly generated IP addresses generated no false positives in any of the tests for the three schemes or system 100 configurations. The false positives increased as the traffic pattern contained more IP addresses corresponding to the prefixes in the database.
  • the average number of hash probes per lookup over all test databases was found to be 1.003, which corresponds to a lookup rate of about 332 million lookups per second with a commodity SRAM device operating at 333 MHz. This is an increase in speed of 3.3 times over state-of-the-art TCAM-based solutions.
  • Scheme 3 had a worst case performance of 2 hash probes and one array access per lookup. Assuming that the array 400 is stored in the same memory device as the tables 103 , worst case performance is 110 million lookups per second, which exceeds current TCAM performance. Note that the values of the expected hash probes per lookup as shown by the simulations generally agree with the values predicted by the equations.
  • the number of dependent memory accesses per lookup may be held constant given that memory resources scale linearly with database size.
  • a network address lookup system and method consistent with the present invention is suitable for high-speed IPv6 route lookups.
  • FIG. 8 shows the combined distribution for a total of 1,550 prefix entries. A significant result is that the total number of unique prefix lengths in the combined distribution is 14, less than half of the number for the IPv4 tables studied.
  • IPv6 unicast network addresses may be aggregated with arbitrary prefix lengths like IPv4 network addresses under CIDR. Although this provides extensive flexibility, the flexibility does not necessarily result in a large increase in unique prefix lengths.
  • the global unicast network address format has three fields: a global routing prefix; a subnet ID; and an interface ID. All global unicast network addresses, other than those that begin with 000, must have a 64-bit interface ID in the Modified EUI-64 format. These interface IDs may be of global or local scope; however, the global routing prefix and subnet ID fields must consume a total of 64 bits. Global unicast network addresses that begin with 000 do not have any restrictions on interface ID size; however, these addresses are intended for special purposes such as embedded IPv4 addresses. Embedded IPv4 addresses provide a mechanism for tunneling IPv6 packets over IPv4 routing infrastructure. This special class of global unicast network addresses should not contribute a significant number of unique prefix lengths to IPv6 routing tables.
  • IPv6 Internet Registries must meet several criteria in order to receive an address allocation, including a plan to provide IPv6 connectivity by assigning /48 address blocks. During the assignment process, /64 blocks are assigned when only one subnet ID is required and /128 addresses are assigned when only one device interface is required. Although it is not clear how much aggregation will occur due to Internet Service Providers assigning multiple /48 blocks, the allocation and assignment policy provides significant structure. Thus, IPv6 routing tables will not contain significantly more unique prefix lengths than current IPv4 tables.
  • systems and methods consistent with the present invention provide a longest prefix matching approach that is a viable mechanism for IPv6 routing lookups. Due to the longer “strides” between hierarchical boundaries of IPv6 addresses, use of Controlled Prefix Expansion (CPE) to reduce the number of Bloom filters 101 may not be practical. In this case, a suitable pipelined architecture may be employed to limit the worst case memory accesses.
  • Mini-Bloom filters (902 in FIG. 9) may be built for the system 100 in lieu of Bloom filters 101.
  • Let the dimensions of each mini-Bloom filter 902 be an m′-bit long vector with a capacity of n′ prefixes.
  • mini-Bloom filters were proportionally allocated according to the prefix distribution.
  • on-chip resources were allocated to individual Bloom filters in units of mini-Bloom filters 902 instead of bits.
  • the prefixes of a particular length across the set of mini-Bloom filters 902 allocated to it were uniformly distributed, and each prefix is stored in only one mini-Bloom filter 902 .
  • This uniform random distribution of prefixes was achieved within a set of mini-Bloom filters by calculating a primary hash over the prefix. The prefix is stored in the mini-Bloom filter 902 pointed to by this primary hash value, within the set of mini-bloom filters, as illustrated by the dashed line in FIG. 9 .
  • a given IP address is dispatched to all sets of mini-Bloom filters 902 for distinct prefix lengths on a tri-state bus 904 .
  • the same primary hash function is calculated on the IP address to find out which one of the mini-Bloom filters 902 within the corresponding set should be probed with the given prefix. This mechanism ensures that an input IP address probes only one mini-Bloom filter 902 in the set associated with a particular prefix length as shown by the solid lines in FIG. 9 .
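  • The dispatch mechanism might be sketched as follows, reusing the hypothetical BloomFilter class above (m′, k, and the use of Python's built-in hash as the primary hash are illustrative assumptions; the built-in hash is stable within a single run, which suffices here):

      class MiniBloomSet:
          def __init__(self, num_minis, m_prime=256, k=4):
              self.minis = [BloomFilter(m_prime, k) for _ in range(num_minis)]

          def _primary(self, prefix):
              # Same primary hash for programming and querying, so each
              # prefix lives in, and each probe touches, one mini-filter.
              return hash(prefix) % len(self.minis)

          def program(self, prefix):       # dashed path in FIG. 9
              self.minis[self._primary(prefix)].program(prefix)

          def query(self, prefix):         # solid path in FIG. 9
              return self.minis[self._primary(prefix)].query(prefix)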
  • the aggregate false positive probability of a particular set of mini-Bloom filters 902 is the same as the false positive probability of an individual mini-Bloom filter.
  • the false positive probability of the present embodiment remains unchanged if the average memory bits per prefix in the mini-Bloom filter 902 is the same as the average memory bits per prefix in the original scheme.
  • The importance of the scheme shown in FIG. 9 is that the allocation of mini-Bloom filters to different prefix lengths may be changed, unlike in the case of hardwired memory.
  • the tables which indicate the prefix length set and its corresponding mini-Bloom filters may be maintained on-chip with reasonable hardware resources.
  • the resource distribution among different sets of mini-Bloom filters 902 may be reconfigured by updating these tables. This flexibility makes the present invention independent from prefix length distribution.
  • the number of hash functions k is essentially the lookup capacity of the memory storing a Bloom filter 101 .
  • on-chip memories need to support at least k reading ports. Fabrication of 6 to 8 read ports for an on-chip Random Access Memory is attainable with existing embedded memory technology.
  • In that case, a single memory with the desired lookup capacity may be realized by employing multiple smaller memories with fewer ports. For instance, if the technology limits the number of ports on a single memory to 4, then 2 such smaller memories are required to achieve a lookup capacity of 8, as shown in FIG. 10 b.
  • the Bloom filter 101 allows any hash function to map to any bit in the vector. It is possible that for some member, more than 4 hash functions map to the same memory segment, thereby exceeding the lookup capacity of the memory. This problem may be solved by restricting the range of each hash function to a given memory. This avoids collision among hash functions across different memory segments.
  • In general, if h is the maximum lookup capacity of a RAM as limited by the technology, then k/h such memories, each of size m/(k/h), may be combined to realize the desired capacity of m bits and k hash functions.
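  • Concretely (an editor's sketch with illustrative numbers), restricting hash function i to segment i // h keeps every RAM within its port budget:

      m, k, h = 4096, 8, 4        # vector bits, hash functions, ports per RAM
      num_segments = k // h       # 2 memories of m / (k/h) = 2048 bits each
      seg_size = m // num_segments

      def segmented_positions(hash_values):
          # hash_values: the k raw hash values for one message. Hash i is
          # confined to segment i // h, so no RAM sees more than h lookups.
          return [(i // h, hv % seg_size) for i, hv in enumerate(hash_values)]

      print(segmented_positions(range(100, 108)))  # 4 lookups per segment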
  • a Longest Prefix Matching (LPM) system consistent with the present invention employs Bloom filters to efficiently narrow the scope of the network address lookup search.
  • asymmetric Bloom filters 101 may be used that allocate memory resources according to prefix distribution and provide viable means for their implementation.
  • By using Controlled Prefix Expansion (CPE), worst case performance is limited to two hash probes and one array access per lookup.
  • Performance analysis and simulations show that average performance approaches one hash probe per lookup with modest embedded memory resources, less than 8 bits per prefix. The future viability for IPv6 route lookups is assured with the present invention.
  • the present system could achieve average performance of over 300 million lookups per second and worst case performance of over 100 million lookups per second.
  • state-of-the-art TCAM-based solutions for LPM provide 100 million lookups per second, consume 150 times more power per bit of storage than SRAM, and cost approximately 30 times as much per bit of storage than SRAM.

Abstract

The present invention relates to a method and system of performing parallel membership queries to Bloom filters for Longest Prefix Matching, where address prefix memberships are determined in sets of prefixes sorted by prefix length. Hash tables corresponding to each prefix length are probed from the longest to the shortest match in the vector, terminating when a match is found or all of the lengths are searched. The performance, as determined by the number of dependent memory accesses per lookup, is held constant for longer address lengths or additional unique address prefix lengths in the forwarding table given that memory resources scale linearly with the number of prefixes in the forwarding table. For less than 2 Mb of embedded RAM and a commodity SRAM, the present technique achieves average performance of one hash probe per lookup and a worst case of two hash probes and one array access per lookup.

Description

  • This application claims the benefit of the filing date of U.S. Provisional Application No. 60/543,222, entitled “Method And Apparatus For Performing Longest Prefix Matching For In Packet Payload Using Bloom Filters,” filed on Feb. 9, 2004, which is incorporated herein by reference to the extent allowable by law.
  • BACKGROUND OF THE INVENTION
  • The present invention relates to network communication routing and, in particular, to a method and system of performing longest prefix matching for network address lookup using Bloom filters.
  • Longest Prefix Matching (LPM) techniques have received significant attention due to the fundamental role LPM plays in the performance of Internet routers. Classless Inter-Domain Routing (CIDR) has been widely adopted to prolong the life of Internet Protocol Version 4 (IPv4). This protocol requires Internet routers to search variable-length address prefixes in order to find the longest matching prefix of the network destination address of each packet traveling through the router and retrieve the corresponding forwarding information. This computationally intensive task, commonly referred to as network address lookup, is often the performance bottleneck in high-performance Internet routers due to the number of off-chip memory accesses required per lookup.
  • Although significant advances have been made in algorithmic LPM techniques, most commercial router designers use Ternary Content Addressable Memory (TCAM) devices in order to keep pace with optical link speeds despite their larger size, cost, and power consumption relative to Static Random Access Memory (SRAM).
  • However, current TCAMs are less dense than SRAM and support roughly 100 M random accesses per second, which is over 3.3 times slower than SRAMs capable of 333 M random accesses per second, due to the capacitive loading induced by their parallelism. Further, TCAM power consumption per bit of storage is four orders of magnitude higher than SRAM.
  • Techniques such as Trie-based systems, Tree Bitmap, Multiway and Multicolumn Search, and Binary Search on Prefix Length may make use of commodity SRAM and SDRAM devices. However, these techniques have not provided performance advantages that are independent of IP address length, nor improved scalability.
  • Therefore, a need exists for a method and system that overcome the problems noted above and others previously experienced.
  • SUMMARY OF THE INVENTION
  • Methods and systems consistent with the present invention employ Bloom filters for Longest Prefix Matching. Bloom filters, which are efficient data structures for membership queries with tunable false positive errors, are typically used for efficient exact match searches. The probability of a false positive is dependent upon the number of entries stored in the filter, the size of the filter, and the number of hash functions used to probe the filter. Methods consistent with the present invention perform a network address lookup by sorting forwarding table entries by prefix length, associating a Bloom filter with each unique prefix length, and “programming” each Bloom filter with prefixes of its associated length. A network address lookup search in accordance with methods consistent with the present invention begins by performing parallel membership queries to the Bloom filters by using the appropriate segments of the input IP address. The result of this step is a vector of matching prefix lengths, some of which may be false matches. A hash table corresponding to each prefix length may then be probed in the order of longest match in the vector to shortest match in the vector, terminating when a match is found or all of the lengths represented in the vector are searched.
  • One aspect of the present invention is that the performance, as determined by the number of dependent memory accesses per lookup, may be held constant for longer address lengths or additional unique address prefix lengths in the forwarding table given that memory resources scale linearly with the number of prefixes in the forwarding table.
  • Methods consistent with the present invention may include optimizations, such as asymmetric Bloom filters that dimension filters according to prefix length distribution, to provide optimal average case performance for a network address lookup while limiting worst case performance. Accordingly, with a modest amount of embedded RAM for Bloom filters, the average number of hash probes to tables stored in a separate memory device approaches one. By employing a direct lookup array and properly configuring the Bloom filters, the worst case may be held to two hash probes and one array access per lookup while maintaining near optimal average performance of one hash probe per lookup.
  • Implementation with current technology is capable of average performance of over 300 M lookups per second and worst case performance of over 100 M lookups per second using a commodity SRAM device operating at 333 MHz. Methods consistent with the present invention offer better performance, scalability, and lower cost than TCAMs, given that commodity SRAM devices are denser, cheaper, and operate more than three times faster than TCAM-based solutions.
  • Specifically, in accordance with methods consistent with the present invention, a method of performing a network address lookup is provided. The method comprises: grouping forwarding entries from a routing table by prefix length; associating each of a plurality of Bloom filters with a unique prefix length; programming said plurality of Bloom filters with said associated set of prefixes; and performing membership probes to said Bloom filters by using predetermined prefixes of a network address.
  • In accordance with systems consistent with the present invention, a system is provided for performing a network address lookup. The system comprises means for sorting forwarding entries from a routing table by prefix length, means for associating each of a plurality of Bloom filters with a unique prefix length, means for programming said plurality of Bloom filters with said associated set of prefixes, and means for performing membership queries to said Bloom filters by using predetermined prefixes of a network address.
  • Other systems, methods, features, and advantages of the present invention will be or will become apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims.
  • BRIEF DESCRIPTION OF THE DRAWING
  • FIG. 1 depicts an exemplary system for performing longest prefix matching using Bloom filters according to one embodiment consistent with the present invention;
  • FIG. 2 depicts an average prefix length distribution for IPv4 Border Gateway Protocol (“BGP”) table snapshots according to one embodiment consistent with the present invention;
  • FIG. 3 depicts an expected number of hash probes per lookup, Eexp, versus total embedded memory size, M, for various values of total prefixes, N, using a basic configuration for IPv4 with 32 asymmetric Bloom filters, according to one embodiment consistent with the present invention;
  • FIG. 4 depicts a direct lookup array for the first three prefix lengths according to one embodiment consistent with the present invention;
  • FIG. 5 depicts an expected number of hash probes per lookup, Eexp, versus total embedded memory size, M, for various values of total prefixes, N, using a direct lookup array for prefix lengths 1 . . . 20 and 12 Bloom filters for prefix lengths 21 . . . 32, according to one embodiment consistent with the present invention;
  • FIG. 6 depicts an expected number of hash probes per lookup, Eexp, versus total embedded memory size, M, for various values of total prefixes, N, using a direct lookup array for prefix lengths 1 . . . 20, and two Bloom filters for prefix lengths 21 . . . 24 and 25 . . . 32, according to one embodiment consistent with the present invention;
  • FIG. 7 depicts an average number of hash probes per lookup for Scheme 3 programmed with database 1, where N=116,819 for various embedded memory sizes M, according to one embodiment consistent with the present invention;
  • FIG. 8 depicts a combined prefix length distribution for Internet Protocol Version 6 (“IPv6”) BGP table snapshots, according to one embodiment consistent with the present invention;
  • FIG. 9 depicts a plurality of Mini-Bloom filters which allow the system, according to one embodiment consistent with the present invention, to adapt to prefix distribution. The dashed line shows a programming path for a prefix of length 2, and the solid line illustrates query paths for an input IP address;
  • FIG. 10 a depicts a Bloom filter with single memory vector with k=8, according to one embodiment consistent with the present invention; and
  • FIG. 10 b depicts two Bloom Filters of length m/2 with k=4, combined to realize an m-bit long Bloom filter with k=8, according to one embodiment consistent with the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Methods consistent with the present invention employ an LPM technique that provides better performance and scalability than conventional TCAM-based techniques for IP network address lookup. The present invention exhibits several advantages over conventional techniques, since the number of dependent memory accesses required for a lookup is virtually independent of the length of the IP network address and the number of unique prefix lengths (in other words, statistical performance may be held constant for arbitrary address lengths provided ample memory resources). Scaling the present invention to IPv6 does not degrade lookup performance and requires more on-chip memory for Bloom filters only if the number of stored unique prefix lengths increases. Although logic operations and accesses to embedded memory increase operating costs, the amount of parallelism and embedded memory employed by the present invention are well within the capabilities of modern Application-Specific Integrated Circuit (“ASIC”) technology. Finally, by avoiding significant precomputation, such as typically exhibited using a known “leaf pushing” technique, the present invention is able to retain its network address lookup performance even when the network prefix databases are incrementally updated.
  • FIG. 1 depicts an exemplary system 100 consistent with the present invention for performing a network address lookup using longest prefix matching that employs Bloom filters. In the implementation shown in FIG. 1, the system 100 is operatively connected to a router 50 to receive an IP address 52, such as a destination network address, from the payload of a packet (not shown in figures) traversing the router 50. In one implementation, the system 100 may be incorporated into the router 50. The system 100 includes a group of Bloom filters 101 that are operatively configured to determine IP network address prefix memberships in sets of prefixes that are sorted by prefix length. The system 100 may also include a group of Counting Bloom filters 102, each of which is operatively connected to a respective Bloom filter 101, and a hash table 103, preferably an off-chip hash table, that is operatively connected to the Bloom filters 101. As discussed below, a network address lookup search executed by the system 100 in accordance with methods consistent with the present invention begins by performing parallel membership queries to the Bloom filters 101, which are organized by prefix length. The result is a vector 104 in FIG. 1 of matching prefix lengths, some of which may be false matches. The hash table 103 has all the prefixes in the routing table and is operatively configured to be probed in order of the longest match in the vector 104 to the shortest match in the vector 104, terminating when a match is found or all of the lengths represented in the vector are searched. In one implementation, the hash table 103 may be one of a multiple of hash tables, each containing prefixes of a particular length, operatively configured to be probed. For a modest amount of on-chip resources for Bloom filters 101, the expected number of off-chip memory accesses required by the system 100 per network address lookup approaches one, providing better performance, scalability, and lower cost than TCAMs, given that commodity SRAM devices are denser, cheaper, and operate more than three times faster than TCAM-based solutions.
  • In general, each Bloom filter 101 is a data structure used for representing a set of messages succinctly (see B. Bloom, "Space/time trade-offs in hash coding with allowable errors", Communications of the ACM, 13(7):422-426, July 1970). Each Bloom filter 101 includes a bit-vector of length m used to efficiently represent a set of messages, such as IP addresses that the router 50 may be expected to receive in a packet payload. Given a set of messages X with n members, for each message xi in X, the Bloom filter 101 may compute k hash functions on xi, producing k hash values each ranging from 1 to m. Each of these values addresses a single bit in the m-bit vector; hence each message xi causes k bits in the m-bit long vector to be set to 1. Note that if one of the k hash values addresses a bit that is already set to 1, that bit is not changed. This same procedure is repeated for all the members of the set, and is referred to herein as “programming” the Bloom filter.
  • Querying the Bloom filters 101 for membership of a given message x in the set of messages is similar to the programming process. Given message x, the Bloom filter generates k hash values using the same hash functions it used to program the filter. The bits in the m-bit long vector at the locations corresponding to the k hash values are checked. If at least one of these k bits is 0, then the message is declared to be a non-member of the set of messages. If all the k bits are found to be 1, then the message is said to belong to the set with a certain probability. If all the k bits are found to be 1 and x is not a member of X, then it is said to be a false positive. This ambiguity in membership comes from the fact that the k bits in the m-bit vector may be set by any of the n members of X. Thus, finding a bit set to 1 does not necessarily imply that it was set by the particular message being queried. However, finding a 0 bit certainly implies that the string does not belong to the set, since if it were a member then all the k bits would definitely have been set to 1 when the Bloom filter 101 was programmed with that message.
  • In the derivation of the false positive probability (i.e., the probability that, for a message that is not programmed, all k bits that it hashes to are 1), the probability that a random bit of the m-bit vector is set to 1 by a single hash function is simply 1/m. The probability that it is not set is 1 − 1/m. The probability that it is not set by any of the n members of X is (1 − 1/m)^n. Since each of the messages sets k bits in the vector, this becomes (1 − 1/m)^nk. Hence, the probability that this bit is found to be 1 is 1 − (1 − 1/m)^nk. For a message to be detected as a possible member of the set, all k bit locations generated by the hash functions need to be 1. The probability that this happens, f, is given by:
    f = (1 − (1 − 1/m)^nk)^k  (1)
  • For large values of m, the above equation approaches the limit:
    f ≈ (1 − e^(−nk/m))^k  (2)
  • This explains the presence of false positives in this scheme, and the absence of any false negatives.
  • Because this probability is independent of the input message, it is termed the “false positive” probability. The false positive probability may be reduced by choosing appropriate values for m and k for a given size of the member set, n. It is clear that the size of the bit-vector, m, needs to be quite large compared to the size of the message set, n. For a given ratio of m/n, the false positive probability may be reduced by increasing the number of hash functions, k. In the optimal case, when false positive probability is minimized with respect to k, the following relationship is obtained: k = (m/n) ln 2  (3)
  • The ratio m/n may be interpreted as the average number of bits consumed by a single member of the set of messages. It should be noted that this space requirement is independent of the actual size of the member. In the optimal case, the false positive probability is decreased exponentially with a linear increase in the ratio m/n. In addition, this implies that the number of hash functions k, and hence the number of random lookups in the bit vector required to query membership of one message in the set of messages is proportional to m/n.
  • The false positive probability at this optimal point is: f = (1/2)^k  (4)
  • If the false positive probability is to be fixed, then the amount of memory resources, m, needs to scale linearly with the size of the message set, n.
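  • A brief numeric check of Equations (1) through (4) (editor's illustration with arbitrary m and n):

      import math

      m, n = 8192, 1024
      k = round((m / n) * math.log(2))            # Equation (3): k = 6
      f_exact = (1 - (1 - 1/m) ** (n * k)) ** k   # Equation (1)
      f_approx = (1 - math.exp(-n * k / m)) ** k  # Equation (2)
      f_optimal = 0.5 ** k                        # Equation (4)
      print(f_exact, f_approx, f_optimal)         # all roughly 0.016 - 0.022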
  • One property of Bloom filters is that it is not possible to delete a member stored in the filter. Deleting a particular message entry from the set programmed into the Bloom filter 101 requires that the corresponding k hashed bits in the bit vector (e.g., vector 104) be set to zero. This could disturb other members programmed into the Bloom filter which hash to (or set to one) any of these bits.
  • To overcome this drawback, each Counting Bloom filter 102 has a vector of counters corresponding to each bit in the bit-vector. Whenever a member or message (e.g., IP address 52 prefix) is added to or deleted from the set of messages (or prefixes) programmed in the filter 102, the counters corresponding to the k hash values are incremented or decremented, respectively. When a counter changes from zero to one, the corresponding bit in the bit-vector is set. When a counter changes from one to zero, the corresponding bit in the bit-vector is cleared.
  • The counters are changed only during addition and deletion of prefixes in the Bloom filter. These updates are relatively less frequent than the actual query process itself. Hence, counters may be maintained in software and the bit corresponding to each counter is maintained in hardware. Thus, by avoiding counter implementation in hardware, memory resources may be saved.
  • An important property of Bloom filters is that the computation time involved in performing the query is independent from the number of the prefixes programmed in it, provided, as stated above, that the memory m used by the data structure varies linearly with the number of strings n stored in it. Further, the amount of storage required by the Bloom filter for each prefix is independent from its length. Still further, the computation, which requires generation of hash values, may be performed in special purpose hardware.
  • The present invention leverages advances in modern hardware technology along with the efficiency of Bloom filters to perform longest prefix matching using a custom logic device with a modest amount of embedded SRAM and a commodity off-chip SRAM device. A commodity DRAM (Dynamic Random Access Memory) device could also be used, further reducing cost and power consumption but increasing the “off-chip” memory access period. In the present invention, by properly dimensioning the amount and allocation of embedded memory for Bloom filters 101, the network address lookup performance is independent of address length, prefix length, and the number of unique prefix lengths in the database, and the average number of “off-chip” memory accesses per lookup approaches one. Hence, lookup throughput scales directly with the memory device access period.
  • In one implementation, the plurality of IP address 52 prefixes (e.g., forwarding prefixes) from a routing table 58 in FIG. 1 that are expected to be received by the system are grouped into sets according to prefix length. As shown in FIG. 1, the system 100 employs a set of W Bloom filters 101, where W is the number of unique prefix lengths of the prefixes in the routing table, and associates one filter 101 with each unique prefix length. In one embodiment, the Bloom filters 101 are Counting Bloom filters. Each filter 101 is “programmed” with the associated set of prefixes according to the previously described procedure.
  • Although the bit-vectors associated with each Bloom filter 101 are stored in embedded memory 105, the counters 102 associated with each filter 101 may be maintained, for example, by a separate control processor (not shown in figures) responsible for managing route updates. Separate control processors with ample memory are common features of high-performance routers.
• The hash table 103 is also constructed for all the prefixes, where each hash entry is a [prefix, next hop] pair. Although it is assumed, for example, that the result of a match is the next hop for the packet traversing the router 50, more elaborate information may be associated with each prefix if desired. As mentioned above, the hash table 103 may be one of a group of hash tables each containing the prefixes of a particular length. However, a single hash table 103 is preferred. The single hash table 103 or the set of hash tables 103 may be stored off-chip in a separate memory device; for example, a large, high-speed SRAM.
• Using the approximation that probing a hash table 103 stored in off-chip memory requires one memory access, the design goal is to minimize the number of hash probes per lookup, as described below.
  • A network address lookup search executed by the system 100 in accordance with methods consistent with the present invention may proceed as follows. The input IP address 52 is used to probe the set of W Bloom filters 101 in parallel. The one-bit prefix of the address 52 is used to probe the respective filter 101 associated with length one prefixes, the two-bit prefix of the address is used to probe the respective filter 101 associated with length two prefixes, and so on. Each filter 101 indicates a “match” or “no match.” By examining the outputs of all filters 101, a vector 104 of potentially matching prefix lengths for the given address is composed, referenced herein as the “match vector.”
• For example, for packets following IPv4, when the input address produces matches in the Bloom filters 101 associated with prefix lengths 8, 17, 23, and 30, the resulting match vector is [8, 17, 23, 30]. Bloom filters may produce false positives, but never produce false negatives; therefore, if a matching prefix exists in the database, it will be represented in the match vector.
  • The network address lookup search executed by the system 100 in accordance with methods consistent with the present invention then proceeds by probing the hash table 103 with the prefixes represented in the vector 104 in order from the longest prefix to the shortest until a match is found or the vector 104 is exhausted.
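• The search just described may be sketched as follows (an illustrative software model: filters maps each prefix length to an object with a query() method, such as the counting-filter sketch above, and hash_table maps prefix bit-strings to next hops; in hardware the W filter probes occur in parallel):

    def longest_prefix_match(addr_bits: str, filters: dict, hash_table: dict):
        """addr_bits is the destination address as a bit string, e.g. '1100...'."""
        # Compose the match vector from the outputs of all W Bloom filters.
        match_vector = [l for l in filters if filters[l].query(addr_bits[:l])]
        # Probe the off-chip hash table longest-first; a false positive simply
        # misses in the table and the search continues with the next length.
        for l in sorted(match_vector, reverse=True):
            next_hop = hash_table.get(addr_bits[:l])
            if next_hop is not None:
                return next_hop
        return None   # fall back to the default route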
• The number of hash probes required to determine the correct prefix length for an IP address is determined by the number of matching Bloom filters 101. In one implementation of system 100, all Bloom filters 101 are tuned to the same false positive probability, f. This may be achieved by selecting appropriate values of m for each filter 101. Let B_l represent the number of Bloom filters 101 for the prefixes of length greater than l. The probability P_l that exactly i filters associated with prefix lengths greater than l generate false positives is given by:
    P_l = \binom{B_l}{i} f^i (1 - f)^{B_l - i}  (5)
• For each value of i, i additional hash probes are required. Hence, the expected number of additional hash probes required when matching a length-l prefix is:
    E_l = \sum_{i=1}^{B_l} i \binom{B_l}{i} f^i (1 - f)^{B_l - i}  (6)
• which is the mean of a binomial distribution with B_l elements and probability of success f. Hence,
    E_l = B_l f  (7)
• The equation above shows that the expected number of additional hash probes for the prefixes of a particular length is equal to the number of Bloom filters for the longer prefixes times the false positive probability (which is the same for all the filters). Let B be the total number of Bloom filters in the system for a given configuration. The worst case value of E_l, denoted E_add, may be expressed as:
    E_add = B f  (8)
  • This is the maximum number of additional hash probes per lookup, independent of input address (e.g., IP address 52). Since these are the expected additional probes due to the false positives, the total number of expected hash probes per lookup for any input address is:
    E_exp = E_add + 1 = B f + 1  (9)
• where the additional probe accounts for the probe at the matching prefix length. However, there is a possibility that the IP address 52 may create false positive matches in all the filters 101 in the system 100. In this case, the number of required hash probes is:
    E_worst = B + 1  (10)
  • Thus, Equation 9 gives the expected number of hash probes for a longest prefix match, and Equation 10 provides the maximum number of hash probes for a worst case lookup.
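• As a worked example of Equations 9 and 10, assuming all filters are tuned per Equation 4 (the chosen values of B and k are illustrative):

    B = 32        # one Bloom filter per IPv4 prefix length
    k = 6         # hash functions per filter, so f = (1/2)^k by Equation 4
    f = 0.5 ** k

    E_exp = B * f + 1      # Equation 9: 32/64 + 1 = 1.5 expected hash probes
    E_worst = B + 1        # Equation 10: 33 probes if every filter false-positives
    print(E_exp, E_worst)  # -> 1.5 33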
  • Since both values depend on B, the number of filters 101 in the system 100, reducing B is important to limit the worst case. In one implementation of the system 100, the value of B is equal to W.
  • Accordingly, the system 100 provides high performance independent of prefix database characteristics and input address patterns, with a search engine (e.g., search engine 110 in FIG. 1) that achieves, for example, an average of one hash probe per lookup, bounds the worst case search, and utilizes a small amount of embedded memory.
  • Several variables affect system performance and resource utilization:
• N, the target number of prefixes supported by the system;
  • M, the total amount of embedded memory available for the Bloom filters;
  • W, the number of unique prefix lengths supported by the system;
• m_i, the size of each Bloom filter;
• k_i, the number of hash functions computed in each Bloom filter; and
• n_i, the number of prefixes stored in each Bloom filter.
  • For clarity in the discussion, IPv4 addresses (e.g., IP address 52) are assumed to be 32-bits long. Therefore, in the worst case, W=32. Given that current IPv4 BGP tables are in excess of 100,000 entries, N=200,000 may be used in one implementation of system 100. Further, the number of hash functions per filter 101 may be set, for example, such that the false positive probability ƒ is a minimum for a filter 101 of length m. The feasibility of designing system 100 to have selectable values of k is discussed below.
• As long as the false positive probability is kept the same for all the Bloom filters 101, the performance of the system 100 is independent of the prefix distribution. Let f_i be the false positive probability of the i-th Bloom filter. Given that the filter is allocated m_i bits of memory, stores n_i prefixes, and performs k_i = (m_i/n_i) ln 2 hash functions, the expression for f_i becomes:
    f_i = f = (1/2)^{(m_i/n_i) ln 2},  ∀ i ∈ [1, 32]  (11)
  • This implies that:
    m_1/n_1 = m_2/n_2 = ... = m_i/n_i = ... = m_32/n_32 = Σm_i/Σn_i = M/N  (12)
• Therefore, the false positive probability f_i for a given filter i may be expressed as:
    f_i = f = (1/2)^{(M/N) ln 2}  (13)
• Based on the preceding analysis, the expected number of hash probes executed by the system 100 per lookup depends only on the total amount of memory resources, M, and the total number of supported prefixes, N. It is independent of the number of unique prefix lengths and of the distribution of prefixes among the prefix lengths.
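• The equal bits-per-prefix allocation of Equation 12 may be sketched as follows (an illustrative helper; a real design would round each m_i to a feasible embedded-memory size):

    import math

    def allocate(M: int, counts: dict) -> dict:
        """Split M bits among filters so each gets roughly M/N bits per prefix."""
        N = sum(counts.values())
        plan = {}
        for length, n_i in counts.items():
            m_i = M * n_i // N           # keeps m_i/n_i near M/N for every i
            k_i = max(1, round((m_i / n_i) * math.log(2))) if n_i else 0
            plan[length] = (m_i, k_i)
        return plan

    # e.g., 2 Mb shared among three prefix lengths with a skewed distribution
    print(allocate(2 * 1024 * 1024, {8: 5_000, 16: 20_000, 24: 175_000}))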
• The preceding analysis indicates that memory (not shown in figures) may be proportionally allocated to each Bloom filter 101 based on its share of the total number of prefixes. Given a static, uniform distribution of prefixes, each Bloom filter 101 may be allocated m = M/B bits of memory. Examination of standard IP forwarding tables reveals that the distribution of prefixes is not uniform over the set of prefix lengths. Routing protocols also distribute periodic updates; hence, forwarding tables are not static. For example, statistics on prefix length distributions gathered from 15 snapshots of IPv4 BGP tables show, as expected, common trends such as large numbers of 24-bit prefixes and few prefixes of length less than 8 bits. An average prefix distribution for all of the tables in this example is shown in FIG. 2.
• In an exemplary static system configured for uniformly distributed prefix lengths searching a database with a non-uniform prefix length distribution, some filters are “over-allocated” memory while others are “under-allocated.” Thus, the false positive probabilities for the Bloom filters are no longer equal. In this example, the amount of embedded memory per filter is proportionally allocated based on its current share of the total prefixes, and the number of hash functions is adjusted to maintain a minimal false positive probability. This exemplary configuration is termed “asymmetric Bloom filters,” and a device architecture capable of supporting it is discussed below. Using Equation 9 for the case of IPv4, the expected number of hash probes per lookup, E_exp, may be expressed as:
    E_exp = 32 × (1/2)^{(M/N) ln 2} + 1  (14)
• Given the feasibility of asymmetric Bloom filters, the expected number of hash probes per lookup, E_exp, is plotted versus total embedded memory size M for various values of N in FIG. 3. With a modest 2 Mb embedded memory, for example, the expected number of hash probes per lookup is less than two for 250,000 prefixes. The present exemplary system 100 is also memory efficient, as it only requires 8 bits of embedded memory per prefix. Doubling the size of the embedded memory to 4 Mb, for example, provides near optimal average performance of one hash probe per lookup. Using Equation 10, the worst case number of dependent memory accesses is simply 33. The term for the access for the matching prefix may be omitted, because the default route may be stored internally. Hence, in this implementation of system 100, the worst case number of dependent memory accesses is 32.
  • The preceding analysis illustrates how asymmetric Bloom filters 101 consistent with the present invention may achieve near optimal average performance for large numbers of prefixes with a modest amount of embedded memory.
  • Since the distribution statistics shown in FIG. 2 indicate that sets associated with the first few prefix lengths are typically empty and the first few non-empty sets hold few prefixes, the system 100 may use a direct lookup array device (112 in FIG. 1) for the first few prefix lengths as an efficient way to represent shorter prefixes while reducing the number of Bloom filters 101. For every prefix length represented in the direct lookup array device 112, the number of worst case hash probes is reduced by one. Use of the direct lookup array device 112 also reduces the amount of embedded memory required by the Bloom filters 101 to achieve optimal average performance, as the number of prefixes represented by Bloom filters is decreased.
• One implementation of the direct lookup array device 112 for the first a = 3 prefix lengths is shown in FIG. 4. This implementation of the direct lookup array device includes a direct lookup array 400 that is operatively connected to a binary trie device 402 and a controlled prefix expansion (CPE) trie 404. The prefixes of length ≤ a are stored in the binary trie 402. The CPE trie 404 performs a CPE with a stride length equal to a. The next hop associated with each leaf at level a of the CPE trie is written to a respective array slot of the direct lookup array 400 addressed by the bits labeling the path from the root to the leaf. The direct lookup array 400 is searched by using the first a bits of the IP destination address 52 to index into the array 400. For example, as shown in FIG. 4, an address 52 with initial bits 101 would result in a next hop of 4. The direct lookup array 400 requires 2^a × NHlen bits of memory, where NHlen is the number of bits required to represent the next hop.
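• A software sketch of this construction for a = 3 follows (hypothetical names; the expanded dictionary stands in for the leaves at level a of the CPE trie 404):

    A = 3   # stride: number of leading address bits used to index the array

    def build_array(expanded: dict) -> list:
        """expanded maps length-A prefixes (bit strings) to next hops."""
        array = [None] * (1 << A)
        for prefix, next_hop in expanded.items():
            array[int(prefix, 2)] = next_hop
        return array

    array = build_array({"100": 4, "101": 4, "110": 7})
    addr = "10110010"                    # destination address bits
    print(array[int(addr[:A], 2)])       # -> 4, as in the FIG. 4 example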
  • For example, a=20 results in a direct lookup array 400 with 1 M slots. For a 256 port router (e.g., router 50) where the next hop corresponds to the output port, 8 bits are required to represent the next hop value and the direct lookup array 400 requires 1 MB of memory. Use of a direct lookup array 400 for the first 20 prefix lengths leaves prefix lengths 21 . . . 32 to Bloom filters 101. Thus, the expression for the expected number of hash probes per lookup performed by the search engine 110 of the system 100 becomes:
    E_exp = 12 × (1/2)^{(M/(N - N_[1:20])) ln 2} + 1  (15)
• where N_[1:20] is the sum of the prefixes with lengths [1:20].
• On average, the N_[1:20] prefixes constitute 24.6% of the total prefixes in the sample IPv4 BGP tables. Therefore, 75.4% of the total prefixes N are represented in the Bloom filters 101 in this implementation. Given this distribution of prefixes, the expected number of hash probes per lookup versus total embedded memory size for various values of N is shown in FIG. 5. The expected number of hash probes per lookup for databases containing 250,000 prefixes is less than two when using a small 1 Mb embedded memory. Doubling the size of the memory to 2 Mb, for example, reduces the expected number of hash probes per lookup to less than 1.1 for 250,000-prefix databases. Although the amount of memory required to achieve good average performance has decreased to only 4 bits per prefix, the worst case number of hash probes per lookup is still large. Using Equation 10, the worst case number of dependent memory accesses becomes E_worst = (32 - 20) + 1 = 13. For an IPv4 database containing the maximum of 32 unique prefix lengths, for example, the worst case is 13 dependent memory accesses per lookup.
  • A high-performance implementation option for the system 100 is to make the direct lookup array device 112 the final stage in a pipelined search architecture. IP destination addresses 52 that reach this stage with a null next hop value would use the next hop retrieved from the direct lookup array 400 of the device 112. A pipelined architecture requires a dedicated memory bank or port for the direct lookup array 400.
  • The number of remaining Bloom filters 101 may be reduced by limiting the number of distinct prefix lengths via further use of Controlled Prefix Expansion (CPE). It is desirable to limit the worst case hash probes to as few as possible without prohibitively large embedded memory requirements. Clearly, the appropriate choice of CPE strides depends on the prefix distribution. As illustrated in the average distribution of IPv4 prefixes shown in FIG. 2, for example, in all of the sample databases that may be used to hold a routing table 58 of IP address 52 prefixes, there is a significant concentration of prefixes from lengths 21 to 24. On average, 75.2% of the N prefixes fall in the range of 21 to 24.
• Likewise, in all of the sample databases, prefixes in the range of 25 to 32 are extremely sparse; specifically, only 0.2% of the N prefixes fall in that range. (Note that 24.6% of the prefixes fall in the range of 1 to 20.)
• Based on these observations, in one implementation of the system 100, the prefixes not covered by the direct lookup array 400 are divided into two groups, G1 and G2, for example, corresponding to prefix lengths 21-24 and 25-32, respectively. Each exemplary group is expanded out to the upper limit of the group so that G1 contains only length-24 prefixes and G2 contains only length-32 prefixes. For example, N_[21:24] is the number of prefixes of length 21 to 24 before expansion and N_[25:32] is the number of prefixes of length 25 to 32 before expansion. Use of CPE operations by the system 100, such as shown in FIG. 4, increases the number of prefixes in each group by “expansion factors” α_[21:24] and α_[25:32], respectively. In one example, Applicants observed an average value of 1.8 for α_[21:24] and an average value of 49.9 for α_[25:32] in the sample databases. Such a large value of α_[25:32] is tolerable due to the small number of prefixes in G2. By dividing the prefixes not covered by the direct lookup array 400 in this manner and using CPE, the system 100 may have two Bloom filters 101 and the direct lookup array 400, bounding the worst case lookup to two hash probes and one array lookup. The expression for the expected number of hash probes per lookup becomes:
    E_exp = 2 × (1/2)^{(M/(α_[21:24] N_[21:24] + α_[25:32] N_[25:32])) ln 2} + 1  (16)
  • Using the observed average distribution of prefixes and observed average values of α[21:24] and α[25:32], the expected number of hash probes per lookup versus total embedded memory M for various values of N is shown in FIG. 6. In this example, the expected number of hash probes per lookup for databases containing 250,000 prefixes is less than 1.6 when using a small 1 Mb embedded memory. Doubling the size of the memory to 2 Mb reduces the expected number of hash probes per lookup to less than 1.2 for 250,000 prefix databases. The use of CPE to reduce the number of Bloom filters 101 allows the system 100 to perform a maximum of two hash probes and one array access per network address lookup, for example, while maintaining near optimal average network address lookup performance with modest use of embedded memory resources.
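• The expansion of a group to its upper-bound length may be sketched as below (a simplified CPE; overlaps are resolved so that the longest original prefix wins in each expanded entry, and all names are illustrative):

    from itertools import product

    def expand_group(prefixes: dict, L: int) -> dict:
        """prefixes maps bit-string prefixes (lengths <= L) to next hops;
        returns only length-L prefixes."""
        out, source_len = {}, {}
        for prefix, next_hop in prefixes.items():
            for tail in product("01", repeat=L - len(prefix)):
                full = prefix + "".join(tail)
                # Longest-match semantics: the longer source prefix prevails.
                if full not in out or len(prefix) > source_len[full]:
                    out[full], source_len[full] = next_hop, len(prefix)
        return out

    g1 = expand_group({"110100": 9, "11010011": 5}, 8)
    print(len(g1))   # 4 length-8 entries from 2 originals: expansion factor 2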
  • The following provides simulation results for each of three embodiments of system 100 consistent with the present invention, each of which use forwarding or routing tables (e.g., table 58) constructed from standard IPv4 BGP tables. The exemplary embodiments of the present invention are termed:
      • Scheme 1: This first exemplary scheme is the system 100 configuration which uses asymmetric Bloom filters 101 for all prefix lengths as described previously;
      • Scheme 2: This second exemplary scheme that may be employed by system 100 uses a direct lookup array device 112 for prefix lengths [1 . . . 20] and asymmetric Bloom filters 101 for prefix lengths [21 . . . 32] as described previously; and
      • Scheme 3: This third exemplary scheme that may be employed by system 100 uses a direct lookup array device 112 for prefix lengths [1 . . . 20] and two asymmetric Bloom filters 101 for CPE prefix lengths 24 and 32 which represent prefix lengths [21 . . . 24] and [25 . . . 32], respectively, as described above.
• For each of the three schemes, M = 2 Mb, for example, and m_i is adjusted for each asymmetric Bloom filter 101 according to the distribution of prefixes of the database under test. Fifteen IPv4 BGP tables were collected, and for each combination of database and system 100 configuration, the theoretical value of E_exp was computed using Equations 14, 15, and 16. A simulation was run for every combination of database and system 100 configuration. The ANSI C rand function was used to generate hash values for the Bloom filters 101, as well as for the prefix hash tables 103. The collision rate in the prefix hash tables 103 was around 0.8%, which is negligible.
• In order to investigate the effects of input addresses on system 100 network address lookup performance, traffic patterns ranging from completely random addresses to only addresses with a valid prefix in the database were tested. In the latter case, the IP addresses 52 were generated in proportion to the prefix distribution; thus, IP addresses corresponding to a 24-bit prefix in the database dominated the input traffic. One million IP addresses were applied for each test run. Input traffic patterns with randomly generated IP addresses produced no false positives in any of the tests of the three schemes or system 100 configurations. False positives increased as the traffic pattern contained more IP addresses corresponding to the prefixes in the database.
• Maximum false positives were observed when the traffic pattern consisted only of IP addresses corresponding to the prefixes in the database. Hence, the following results correspond to this input traffic pattern. The average number of hash probes per lookup from the test runs with each of the databases on all three schemes or system 100 configurations, along with the corresponding theoretical values, are shown in Table 1. The maximum number of memory accesses (hash probes and direct lookups) per lookup was recorded for each test run of all the schemes. While the theoretical worst case memory accesses per lookup for Scheme 1 and Scheme 2 are 32 and 13, respectively, the worst observed lookups required fewer than four memory accesses in all test runs. For Scheme 3, in most test runs, the worst observed lookups required three memory accesses.
    TABLE 1
                             Scheme 1                Scheme 2                Scheme 3
    Database  Prefixes   Theoretical  Observed   Theoretical  Observed   Theoretical  Observed
     1        116,819     1.008567    1.008047    1.000226    1.000950    1.000504    1.003227
     2        101,707     1.002524    1.005545    1.000025    1.000777    1.002246    1.001573
     3        102,135     1.002626    1.005826    1.000026    1.000793    1.002298    1.001684
     4        104,968     1.003385    1.006840    1.000089    1.000734    1.004430    1.003020
     5        110,678     1.005428    1.004978    1.000100    1.000687    1.003104    1.000651
     6        116,757     1.008529    1.006792    1.000231    1.000797    1.004334    1.000831
     7        117,058     1.008712    1.007347    1.000237    1.000854    1.008014    1.004946
     8        119,326     1.010183    1.009998    1.000297    1.001173    1.012303    1.007333
     9        119,503     1.010305    1.009138    1.000303    1.001079    1.008529    1.005397
    10        120,082     1.010712    1.009560    1.000329    1.001099    1.016904    1.010076
    11        117,221     1.008806    1.007218    1.000239    1.000819    1.004494    1.002730
    12        117,062     1.008714    1.006885    1.000235    1.000803    1.004439    1.000837
    13        117,346     1.008889    1.006843    1.000244    1.000844    1.004515    1.000835
    14        117,322     1.008874    1.008430    1.000240    1.001117    1.004525    1.003111
    15        117,199     1.008798    1.007415    1.000239    1.000956    1.004526    1.002730
    Average   114,344     1.007670    1.007390    1.000204    1.000898    1.006005    1.003265
  • Using Scheme 3 or the third system 100 configuration, the average number of hash probes per lookup over all test databases was found to be 1.003, which corresponds to a lookup rate of about 332 million lookups per second with a commodity SRAM device operating at 333 MHz. This is an increase in speed of 3.3 times over state-of-the-art TCAM-based solutions.
  • At the same time, Scheme 3 had a worst case performance of 2 hash probes and one array access per lookup. Assuming that the array 400 is stored in the same memory device as the tables 103, worst case performance is 110 million lookups per second, which exceeds current TCAM performance. Note that the values of the expected hash probes per lookup as shown by the simulations generally agree with the values predicted by the equations.
• A direct comparison was made between the theoretical performance and the observed performance for each scheme or system 100 configuration. To see the effect of the total embedded memory resources (M) for Bloom filters 101, Scheme 3 was simulated with database 1 (N = 116,819 prefixes) for various values of M between 500 kb and 4 Mb. FIG. 7 shows theoretical and observed values for the average number of hash probes per lookup for each value of M. Simulation results show slightly better performance than the corresponding theoretical values. This improvement may be attributed to the fact that the distribution of input addresses 52 has been matched to the distribution of prefixes in the database under test. Since length-24 prefixes dominate real databases, arriving packets are more likely to match the second Bloom filter 101 and less likely to require an array 400 access.
  • Thus, the number of dependent memory accesses per lookup may be held constant given that memory resources scale linearly with database size. Given this characteristic of the system 100, and the memory efficiency demonstrated for IPv4, a network address lookup system and method consistent with the present invention is suitable for high-speed IPv6 route lookups.
  • In order to assess the current state of IPv6 tables, five IPv6 BGP table snapshots were collected from several sites. Since the tables are relatively small, a combined distribution of prefix lengths was computed. FIG. 8 shows the combined distribution for a total of 1,550 prefix entries. A significant result is that the total number of unique prefix lengths in the combined distribution is 14, less than half of the number for the IPv4 tables studied.
  • IPv6 unicast network addresses may be aggregated with arbitrary prefix lengths like IPv4 network addresses under CIDR. Although this provides extensive flexibility, the flexibility does not necessarily result in a large increase in unique prefix lengths.
  • The global unicast network address format has three fields: a global routing prefix; a subnet ID; and an interface ID. All global unicast network addresses, other than those that begin with 000, must have a 64-bit interface ID in the Modified EUI-64 format. These interface IDs may be of global or local scope; however, the global routing prefix and subnet ID fields must consume a total of 64 bits. Global unicast network addresses that begin with 000 do not have any restrictions on interface ID size; however, these addresses are intended for special purposes such as embedded IPv4 addresses. Embedded IPv4 addresses provide a mechanism for tunneling IPv6 packets over IPv4 routing infrastructure. This special class of global unicast network addresses should not contribute a significant number of unique prefix lengths to IPv6 routing tables.
  • In the future, IPv6 Internet Registries must meet several criteria in order to receive an address allocation, including a plan to provide IPv6 connectivity by assigning /48 address blocks. During the assignment process, /64 blocks are assigned when only one subnet ID is required and /128 addresses are assigned when only one device interface is required. Although it is not clear how much aggregation will occur due to Internet Service Providers assigning multiple /48 blocks, the allocation and assignment policy provides significant structure. Thus, IPv6 routing tables will not contain significantly more unique prefix lengths than current IPv4 tables.
  • Accordingly, systems and methods consistent with the present invention provide a longest prefix matching approach that is a viable mechanism for IPv6 routing lookups. Due to the longer “strides” between hierarchical boundaries of IPv6 addresses, use of Controlled Prefix Expansion (CPE) to reduce the number of Bloom filters 101 may not be practical. In this case, a suitable pipelined architecture may be employed to limit the worst case memory accesses.
• The ability to support a lookup table of a certain size, irrespective of the prefix length distribution, is a desirable feature of the system 100. Instead of building distribution-dependent memories of customized size, for example, a number of small fixed-size Bloom filters called mini-Bloom filters (902 in FIG. 9) may be built for the system 100 in lieu of Bloom filters 101. For example, let the dimensions of each mini-Bloom filter 902 be an m′-bit long vector with a capacity of n′ prefixes. The false positive probability of the mini-Bloom filter 902 is:
    f′ = (1/2)^{(m′/n′) ln 2}  (17)
• In this implementation, instead of allocating a fixed amount of memory to each of the Bloom filters 101, multiple mini-Bloom filters are proportionally allocated according to the prefix distribution. In other words, on-chip resources are allocated to individual Bloom filters in units of mini-Bloom filters 902 instead of bits. While building the database, the prefixes of a particular length are uniformly distributed across the set of mini-Bloom filters 902 allocated to that length, and each prefix is stored in only one mini-Bloom filter 902. This uniform random distribution of prefixes is achieved within a set of mini-Bloom filters by calculating a primary hash over the prefix. The prefix is stored in the mini-Bloom filter 902 pointed to by this primary hash value, within the set of mini-Bloom filters, as illustrated by the dashed line in FIG. 9.
  • In the membership query process, a given IP address is dispatched to all sets of mini-Bloom filters 902 for distinct prefix lengths on a tri-state bus 904. The same primary hash function is calculated on the IP address to find out which one of the mini-Bloom filters 902 within the corresponding set should be probed with the given prefix. This mechanism ensures that an input IP address probes only one mini-Bloom filter 902 in the set associated with a particular prefix length as shown by the solid lines in FIG. 9.
• Since the prefix is hashed or probed in only one of the mini-Bloom filters 902 in each set, the aggregate false positive probability of a particular set of mini-Bloom filters 902 is the same as the false positive probability of an individual mini-Bloom filter. Hence, the false positive probability of the present embodiment remains unchanged if the average number of memory bits per prefix in the mini-Bloom filter 902 is the same as the average number of memory bits per prefix in the original scheme. The importance of the scheme shown in FIG. 9 is that the allocation of the mini-Bloom filters for different prefix lengths may be changed, unlike in the case of hardwired memory. The tables which indicate each prefix length set and its corresponding mini-Bloom filters may be maintained on-chip with reasonable hardware resources. The resource distribution among different sets of mini-Bloom filters 902 may be reconfigured by updating these tables. This flexibility makes the present invention independent of the prefix length distribution.
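• A sketch of the mini-Bloom filter dispatch follows (illustrative class names and hash constructions; the primary-hash choice here is an assumption, not mandated by the embodiment):

    import hashlib

    class MiniBloom:
        """One fixed-size m'-bit Bloom filter with k hash functions."""
        def __init__(self, m_prime: int, k: int):
            self.m, self.k, self.bits = m_prime, k, bytearray(m_prime)

        def _positions(self, prefix: str):
            for i in range(self.k):
                d = hashlib.sha256(f"{i}:{prefix}".encode()).hexdigest()
                yield int(d, 16) % self.m

        def add(self, prefix: str):
            for p in self._positions(prefix):
                self.bits[p] = 1

        def query(self, prefix: str) -> bool:
            return all(self.bits[p] for p in self._positions(prefix))

    class MiniBloomSet:
        """The set of mini-Bloom filters allocated to one prefix length."""
        def __init__(self, num_filters: int, m_prime: int, k: int):
            self.minis = [MiniBloom(m_prime, k) for _ in range(num_filters)]

        def _pick(self, prefix: str) -> int:
            # The primary hash selects exactly one mini-Bloom filter in the set.
            return int(hashlib.md5(prefix.encode()).hexdigest(), 16) % len(self.minis)

        def add(self, prefix: str):
            self.minis[self._pick(prefix)].add(prefix)

        def query(self, prefix: str) -> bool:
            # Only the selected mini filter is probed, so the aggregate false
            # positive probability equals that of a single mini-Bloom filter.
            return self.minis[self._pick(prefix)].query(prefix)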
• The number of hash functions, k, is essentially the lookup capacity of the memory storing a Bloom filter 101. Thus, k = 6 implies that six random locations must be accessed in the time allotted for a Bloom filter query. For single-cycle Bloom filter queries, on-chip memories need to support at least k read ports. Fabrication of 6 to 8 read ports for an on-chip random access memory is attainable with existing embedded memory technology.
• For designs with values of k higher than what may be realized by technology, a single memory with the desired lookup capacity may be realized by employing multiple smaller memories with fewer ports. For instance, if the technology limits the number of ports on a single memory to 4, then two such smaller memories are required to achieve a lookup capacity of 8, as shown in FIG. 10b. The Bloom filter 101 allows any hash function to map to any bit in the vector, so it is possible that, for some member, more than 4 hash functions map to the same memory segment, thereby exceeding the lookup capacity of that memory. This problem may be solved by restricting the range of each hash function to a given memory, which avoids collisions among hash functions across different memory segments.
• In general, if h is the maximum lookup capacity of a RAM as limited by the technology, then k/h such memories of size m/(k/h) may be combined to realize the desired capacity of m bits and k hash functions. When only h hash functions are allowed to map to a single memory, the false positive probability may be expressed as:
    f′ = [1 - (1 - 1/(m/(k/h)))^{hn}]^{(k/h)·h} ≈ [1 - e^{-nk/m}]^k  (18)
• Comparing Equation 18 with Equation 2 shows that restricting the number of hash functions mapping to a particular memory does not affect the false positive probability, provided the memories are sufficiently large.
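• The segmentation may be sketched as follows (illustrative names: k hash functions are partitioned into k/h groups of h, and each group's range is restricted to its own segment of m/(k/h) bits, so no segment ever sees more than h lookups per query):

    import hashlib

    def segmented_positions(item: str, m: int, k: int, h: int):
        """Yield (segment, offset) pairs; each segment receives exactly h probes."""
        segments = k // h                # number of physical memories
        seg_size = m // segments         # bits per memory segment
        for i in range(k):
            seg = i // h                 # hash i is wired to segment i // h
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield seg, int(digest, 16) % seg_size

    # 8 hash functions on 4-port memories: 2 segments, 4 single-cycle probes each
    for seg, off in segmented_positions("10110010", m=1 << 16, k=8, h=4):
        print(seg, off)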
• Accordingly, a Longest Prefix Matching (LPM) system consistent with the present invention employs Bloom filters to efficiently narrow the scope of the network address lookup search. In order to optimize average network address lookup performance, asymmetric Bloom filters 101 may be used that allocate memory resources according to prefix distribution and provide viable means for their implementation. By using a direct lookup array 400 and Controlled Prefix Expansion (CPE), worst case performance is limited to two hash probes and one array access per lookup. Performance analysis and simulations show that average performance approaches one hash probe per lookup with modest embedded memory resources, less than 8 bits per prefix. The present invention also remains viable for future IPv6 route lookups. If implemented in current semiconductor technology and coupled with a commodity SRAM device operating at 333 MHz, the present system could achieve average performance of over 300 million lookups per second and worst case performance of over 100 million lookups per second. In comparison, state-of-the-art TCAM-based solutions for LPM provide 100 million lookups per second, consume 150 times more power per bit of storage than SRAM, and cost approximately 30 times as much per bit of storage as SRAM.
  • It should be emphasized that the above-described embodiments of the invention are merely possible examples of implementations set forth for a clear understanding of the principles of the invention. Variations and modifications may be made to the above-described embodiments of the invention without departing from the spirit and principles of the invention. All such modifications and variations are intended to be included herein within the scope of the invention and protected by the following claims.

Claims (38)

1. A method of performing a network address lookup, comprising:
grouping forwarding prefixes from a routing table by prefix length;
associating each of a plurality of Bloom filters with a unique prefix length;
programming each of said plurality of Bloom filters with said prefixes corresponding to said associated unique prefix length; and
performing membership probes to said Bloom filters by using predetermined prefixes of a network address.
2. The method according to claim 1, further comprising:
storing said prefixes in a hash table.
3. The method according to claim 2, wherein said hash table comprises a single hash table containing all of the prefixes.
4. The method according to claim 2, wherein said hash table comprises a plurality of hash tables, each containing prefixes of a particular length.
5. The method according to claim 1 wherein the Bloom filters comprise a bit vector of a plurality of bits.
6. The method according to claim 5 further comprising providing a plurality of counting Bloom filters, each corresponding to one of the plurality of Bloom filters and each counting Bloom filter comprising a plurality of counters corresponding to the plurality of bits in its corresponding Bloom filter.
7. The method according to claim 1, wherein said Bloom filters are characterized by a false positive probability greater than 0 and a false negative probability of zero.
8. The method according to claim 2, wherein the step of performing membership probes comprises the step of probing the hash table corresponding to said prefix lengths represented in a match vector in an order of longest prefix to shortest prefix.
9. The method according to claim 8, wherein probing of said hash tables is terminated when a match is found or all of said prefix lengths represented in said match vector are searched.
10. The method according to claim 7, wherein the false positive probability is the same for all of said Bloom filters such that performance is independent of prefix distribution.
11. The method according to claim 5, further comprising:
providing asymmetric Bloom filters by proportionally allocating an amount of an embedded memory per Bloom filter based on said Bloom filter's current share of a total number of prefixes while adjusting a number of hash functions of said Bloom filters to maintain a minimal false positive probability.
12. The method according to claim 9, wherein a number of hash probes to said hash table per lookup is held constant for network address lengths in said routing table that are greater than a predetermined length.
13. The method according to claim 9, wherein a number of dependent memory accesses per network lookup is held constant for additional unique prefix lengths in a forwarding table, provided that memory resources scale linearly with a number of prefixes in said routing table.
14. The method according to claim 1, further comprising:
utilizing a direct lookup array for initial prefix lengths and asymmetric Bloom filters for the rest of the prefix lengths.
15. The method according to claim 14, wherein for every prefix length represented in said direct lookup array, a number of worst case hash probes is reduced by one.
16. The method according to claim 14, wherein said direct lookup array comprises:
storing prefixes of not more than a predetermined number for a predetermined length, in a binary trie;
performing Controlled Prefix Expansion (CPE) in a CPE trie for a stride length equal to said predetermined number;
writing a next hop associated with each leaf at a level of said CPE trie corresponding to said predetermined number to an array slot addressed by bits that label a path from a root of said CPE trie to said leaf; and
searching said array using bits of said network address of said predetermined number to index into said array.
17. The method according to claim 5, further comprising:
uniformly distributing prefixes of a predetermined length across a set of mini-Bloom filters; and
storing each of said prefixes in only one of said mini-Bloom filters.
18. The method according to claim 17, further comprising:
calculating a primary hash value over said one of said prefixes.
19. The method according to claim 18, further comprising:
storing said one of said prefixes in said one of said mini-Bloom filters pointed to by said primary hash value, within said set.
20. The method according to claim 19, further comprising:
dispatching a given network address to all sets of mini-Bloom filters for distinct prefix lengths on a tri-state bus in said probing process.
21. The method according to claim 19, wherein a same primary hash value is calculated on said network address to determine which of said mini-Bloom filters within a corresponding set should be probed with a given prefix.
22. A system for performing a network address lookup, comprising:
means for sorting forwarding prefixes from a routing table by prefix length;
means for associating each of a plurality of Bloom filters with a unique prefix length;
means for programming each of said plurality of Bloom filters with said prefixes corresponding to said associated unique prefix length; and
means for performing membership queries to said Bloom filters by using predetermined prefixes of a network address.
23. The system according to claim 22, further comprising a hash table operable to store said prefixes.
24. The system according to claim 23 wherein said hash table comprises a single hash table containing all of the prefixes.
25. The system according to claim 23, wherein said hash table comprises a plurality of hash tables, each containing prefixes of a particular length.
26. The system according to claim 22, wherein the Bloom filters comprise a bit vector of a plurality of bits.
27. The system according to claim 26 further comprising a plurality of counting Bloom filters, each corresponding to one of the plurality of Bloom filters and each counting Bloom filter comprising a plurality of counters corresponding to the plurality of bits in its corresponding Bloom filter.
28. The system according to claim 23, wherein the means for performing membership queries comprises means for probing the hash table corresponding to said prefix lengths represented in a match vector in an order of longest prefix to shortest prefix.
29. The system according to claim 22, further comprising:
a direct lookup array for initial prefix lengths and asymmetric Bloom filters for the rest of the prefix lengths.
30. The system according to claim 29, wherein for every prefix length represented in said direct lookup array, a number of worst case hash probes is reduced by one.
31. The system according to claim 29, wherein said direct lookup array comprises:
prefixes of not more than a predetermined number for a predetermined length, in a binary trie;
means for performing Controlled Prefix Expansion (CPE) in a CPE trie for a stride length equal to said predetermined number;
means for writing a next hop associated with each leaf at a level of said CPE trie corresponding to said predetermined number to an array slot addressed by bits labeling a path from a root of said CPE trie to said leaf; and
means for searching said array using bits of said network address of said predetermined number to index into said array.
32. The system according to claim 31, further comprising:
means for utilizing CPE to reduce a number of said Bloom filters such that a maximum of two hash probes and one array access per network lookup is achieved.
33. The system according to claim 22, wherein multiple mini-Bloom filters are proportionally allocated according to a prefix distribution.
34. The system according to claim 33, wherein on-chip resources are allocated to individual Bloom filters in units of mini-Bloom filters instead of bits.
35. The system according to claim 34, further comprising:
means for uniformly distributing prefixes of a predetermined length across a set of mini-Bloom filters; and
means for storing each of said prefixes in only one of said mini-Bloom filters.
36. The system according to claim 35, further comprising:
means for calculating a primary hash value over said one of said prefixes.
37. The system according to claim 36, further comprising:
means for storing said one of said prefixes in said one of said mini-Bloom filters pointed to by said primary hash value, within said set.
38. The system according to claim 37, further comprising:
means for dispatching a given network address to all sets of mini-Bloom filters for distinct prefix lengths on a tri-state bus in said probing process.
US11/055,767 2004-02-09 2005-02-09 Method and system for performing longest prefix matching for network address lookup using bloom filters Active 2027-12-24 US7602785B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/055,767 US7602785B2 (en) 2004-02-09 2005-02-09 Method and system for performing longest prefix matching for network address lookup using bloom filters
US12/566,150 US20100098081A1 (en) 2004-02-09 2009-09-24 Longest prefix matching for network address lookups using bloom filters

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US54322204P 2004-02-09 2004-02-09
US11/055,767 US7602785B2 (en) 2004-02-09 2005-02-09 Method and system for performing longest prefix matching for network address lookup using bloom filters

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/566,150 Division US20100098081A1 (en) 2004-02-09 2009-09-24 Longest prefix matching for network address lookups using bloom filters

Publications (2)

Publication Number Publication Date
US20050195832A1 true US20050195832A1 (en) 2005-09-08
US7602785B2 US7602785B2 (en) 2009-10-13

Family

ID=34914852

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/055,767 Active 2027-12-24 US7602785B2 (en) 2004-02-09 2005-02-09 Method and system for performing longest prefix matching for network address lookup using bloom filters
US12/566,150 Abandoned US20100098081A1 (en) 2004-02-09 2009-09-24 Longest prefix matching for network address lookups using bloom filters

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/566,150 Abandoned US20100098081A1 (en) 2004-02-09 2009-09-24 Longest prefix matching for network address lookups using bloom filters

Country Status (1)

Country Link
US (2) US7602785B2 (en)



Family Cites Families (149)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3124844A (en) * 1960-06-10 1964-03-17 Means to process fibers in a tow or sheet-like material
US3601808A (en) 1968-07-18 1971-08-24 Bell Telephone Labor Inc Advanced keyword associative access memory system
US3611314A (en) 1969-09-09 1971-10-05 Texas Instruments Inc Dynamic associative data processing system
US3824375A (en) 1970-08-28 1974-07-16 Financial Security Syst Inc Memory system
US3729712A (en) 1971-02-26 1973-04-24 Eastman Kodak Co Information storage and retrieval system
US3848235A (en) 1973-10-24 1974-11-12 Ibm Scan and read control apparatus for a disk storage drive in a computer system
US3906455A (en) 1974-03-15 1975-09-16 Boeing Computer Services Inc Associative memory device
CA1056504A (en) 1975-04-02 1979-06-12 Visvaldis A. Vitols Keyword detection in continuous speech using continuous asynchronous correlation
US4298898A (en) 1979-04-19 1981-11-03 Compagnie Internationale Pour L'informatique Cii Honeywell Bull Method of and apparatus for reading data from reference zones of a memory
US4314356A (en) 1979-10-24 1982-02-02 Bunker Ramo Corporation High-speed term searcher
FR2481026B1 (en) 1980-04-21 1984-06-15 France Etat
US4464718A (en) 1982-07-30 1984-08-07 International Business Machines Corporation Associative file processing method and apparatus
US4550436A (en) 1983-07-26 1985-10-29 At&T Bell Laboratories Parallel text matching methods and apparatus
US5270922A (en) 1984-06-29 1993-12-14 Merrill Lynch & Company, Inc. System for distributing, processing and displaying financial information
US4941178A (en) 1986-04-01 1990-07-10 Gte Laboratories Incorporated Speech recognition using preclassification and spectral normalization
US4823306A (en) 1987-08-14 1989-04-18 International Business Machines Corporation Text search system
US5023910A (en) 1988-04-08 1991-06-11 At&T Bell Laboratories Vector quantization in a harmonic speech coding arrangement
US5179626A (en) 1988-04-08 1993-01-12 At&T Bell Laboratories Harmonic speech coding arrangement where a set of parameters for a continuous magnitude spectrum is determined by a speech analyzer and the parameters are used by a synthesizer to determine a spectrum which is used to determine sinusoids for synthesis
US5050075A (en) 1988-10-04 1991-09-17 Bell Communications Research, Inc. High performance VLSI data filter
US5249292A (en) 1989-03-31 1993-09-28 Chiappa J Noel Data packet switch using a primary processing unit to designate one of a plurality of data stream control circuits to selectively handle the header processing of incoming packets in one data packet stream
US5077665A (en) 1989-05-25 1991-12-31 Reuters Limited Distributed matching system
JPH0314075A (en) 1989-06-13 1991-01-22 Ricoh Co Ltd Document retrieval device
AU620994B2 (en) 1989-07-12 1992-02-27 Digital Equipment Corporation Compressed prefix matching database searching
US5126936A (en) 1989-09-01 1992-06-30 Champion Securities Goal-directed financial asset management system
US5163131A (en) 1989-09-08 1992-11-10 Auspex Systems, Inc. Parallel i/o network file server architecture
EP0565738A1 (en) 1990-01-05 1993-10-20 Symbol Technologies, Inc. System for encoding and decoding data in machine readable graphic form
US5319776A (en) 1990-04-19 1994-06-07 Hilgraeve Corporation In transit detection of computer virus with safeguard
US5497488A (en) 1990-06-12 1996-03-05 Hitachi, Ltd. System for parallel string search with a function-directed parallel collation of a first partition of each string followed by matching of second partitions
GB9016341D0 (en) 1990-07-25 1990-09-12 British Telecomm Speed estimation
US5255136A (en) 1990-08-17 1993-10-19 Quantum Corporation High capacity submicro-winchester fixed disk drive
US5404488A (en) * 1990-09-26 1995-04-04 Lotus Development Corporation Realtime data feed engine for updating an application with the most currently received data from multiple data feeds
US5101424A (en) 1990-09-28 1992-03-31 Northern Telecom Limited Method for generating a monitor program for monitoring text streams and executing actions when pre-defined patterns are matched using an English to AWK language translator
GB9023096D0 (en) 1990-10-24 1990-12-05 Int Computers Ltd Database search processor
US5339411A (en) 1990-12-21 1994-08-16 Pitney Bowes Inc. Method for managing allocation of memory space
US5404411A (en) * 1990-12-27 1995-04-04 Xerox Corporation Bitmap-image pattern matching apparatus for correcting bitmap errors in a printing system
FI921268A (en) 1991-04-15 1992-10-16 Hochiki Co Detection system for transmission errors for use in surveillance systems for the prevention of destruction
EP0510634B1 (en) 1991-04-25 1999-07-07 Nippon Steel Corporation Data base retrieval system
JP2641999B2 (en) 1991-05-10 1997-08-20 日本電気株式会社 Data format detection circuit
US5477451A (en) 1991-07-25 1995-12-19 International Business Machines Corp. Method and system for natural language translation
US5488725A (en) 1991-10-08 1996-01-30 West Publishing Company System of document representation retrieval by successive iterated probability sampling
US5265065A (en) 1991-10-08 1993-11-23 West Publishing Company Method and apparatus for information retrieval from a database by replacing domain specific stemmed phrases in a natural language to create a search query
US5826075A (en) 1991-10-16 1998-10-20 International Business Machines Corporation Automated programmable firmware store for a personal computer system
WO1993018505A1 (en) 1992-03-02 1993-09-16 The Walt Disney Company Voice transformation system
US5388259A (en) 1992-05-15 1995-02-07 Bell Communications Research, Inc. System for accessing a database with an iterated fuzzy query notified by retrieval response
US5524268A (en) 1992-06-26 1996-06-04 Cirrus Logic, Inc. Flexible processor-driven control of SCSI buses utilizing tags appended to data bytes to determine SCSI-protocol phases
GB9220404D0 (en) 1992-08-20 1992-11-11 Nat Security Agency Method of identifying, retrieving and sorting documents
US5721898A (en) 1992-09-02 1998-02-24 International Business Machines Corporation Method and system for data search in a data processing system
JP2575595B2 (en) * 1992-10-20 1997-01-29 インターナショナル・ビジネス・マシーンズ・コーポレイション Image frame compression method and data processing system
US6044407A (en) 1992-11-13 2000-03-28 British Telecommunications Public Limited Company Interface for translating an information message from one protocol to another
US5481735A (en) 1992-12-28 1996-01-02 Apple Computer, Inc. Method for modifying packets that meet a particular criteria as the packets pass between two layers in a network
US5440723A (en) 1993-01-19 1995-08-08 International Business Machines Corporation Automatic immune system for computers and computer networks
US5432822A (en) 1993-03-12 1995-07-11 Hughes Aircraft Company Error correcting decoder and decoding method employing reliability based erasure decision-making in cellular communication system
US5546462A (en) 1993-04-09 1996-08-13 Washington University Method and apparatus for fingerprinting and authenticating various magnetic media
US5544352A (en) 1993-06-14 1996-08-06 Libertech, Inc. Method and apparatus for indexing, searching and displaying data
EP0651321B1 (en) 1993-10-29 2001-11-14 Advanced Micro Devices, Inc. Superscalar microprocessors
US5813000A (en) 1994-02-15 1998-09-22 Sun Micro Systems B tree structure and method
EP0749663B1 (en) * 1994-03-08 1999-12-01 Excel Switching Corporation Telecommunications switch with improved redundancy
US5465353A (en) 1994-04-01 1995-11-07 Ricoh Company, Ltd. Image matching and retrieval by multi-access redundant hashing
US5461712A (en) 1994-04-18 1995-10-24 International Business Machines Corporation Quadrant-based two-dimensional memory manager
US5987432A (en) 1994-06-29 1999-11-16 Reuters, Ltd. Fault-tolerant central ticker plant system for distributing financial market data
JPH0822392A (en) 1994-07-11 1996-01-23 Hitachi Ltd Method and device for deciding will
US5623652A (en) 1994-07-25 1997-04-22 Apple Computer, Inc. Method and apparatus for searching for information in a network and for controlling the display of searchable information on display devices in the network
US5884286A (en) 1994-07-29 1999-03-16 Daughtery, III; Vergil L. Apparatus and process for executing an expirationless option transaction
JP2964879B2 (en) 1994-08-22 1999-10-18 日本電気株式会社 Post filter
US5629980A (en) 1994-11-23 1997-05-13 Xerox Corporation System for controlling the distribution and use of digital works
SE505156C2 (en) 1995-01-30 1997-07-07 Ericsson Telefon Ab L M Procedure for noise suppression by spectral subtraction
US7124302B2 (en) * 1995-02-13 2006-10-17 Intertrust Technologies Corp. Systems and methods for secure transaction management and electronic rights protection
US5710757A (en) 1995-03-27 1998-01-20 Hewlett Packard Company Electronic device for processing multiple rate wireless information
US5819290A (en) 1995-04-10 1998-10-06 Sony Corporation Data recording and management system and method for detecting data file division based on quantitative number of blocks
US5687297A (en) * 1995-06-29 1997-11-11 Xerox Corporation Multifunctional apparatus for appearance tuning and resolution reconstruction of digital images
US5886701A (en) * 1995-08-04 1999-03-23 Microsoft Corporation Graphics rendering device and method for operating same
US5943421A (en) 1995-09-11 1999-08-24 Norand Corporation Processor having compression and encryption circuitry
JPH0981574A (en) 1995-09-14 1997-03-28 Fujitsu Ltd Method and system for data base retrieval using retrieval set display picture
US6134551A (en) 1995-09-15 2000-10-17 Intel Corporation Method of caching digital certificate revocation lists
US5774839A (en) 1995-09-29 1998-06-30 Rockwell International Corporation Delayed decision switched prediction multi-stage LSF vector quantization
US5864738A (en) 1996-03-13 1999-01-26 Cray Research, Inc. Massively parallel processing system using two data paths: one connecting router circuit to the interconnect network and the other connecting router circuit to I/O controller
US5761431A (en) 1996-04-12 1998-06-02 Peak Audio, Inc. Order persistent timer for controlling events at multiple processing stations
US5781921A (en) 1996-05-06 1998-07-14 Ohmeda Inc. Method and apparatus to effect firmware upgrades using a removable memory device under software control
US5712942A (en) * 1996-05-13 1998-01-27 Lucent Technologies Inc. Optical communications system having distributed intelligence
GB2314433A (en) 1996-06-22 1997-12-24 Xerox Corp Finding and modifying strings of a regular language in a text
US6147976A (en) 1996-06-24 2000-11-14 Cabletron Systems, Inc. Fast network layer packet filter
US5995963A (en) 1996-06-27 1999-11-30 Fujitsu Limited Apparatus and method of multi-string matching based on sparse state transition list
US5974414A (en) 1996-07-03 1999-10-26 Open Port Technology, Inc. System and method for automated received message handling and distribution
US6061662A (en) 1997-08-15 2000-05-09 Options Technology Company, Inc. Simulation method and system for the valuation of derivative financial instruments
US6084584A (en) * 1996-10-01 2000-07-04 Diamond Multimedia Systems, Inc. Computer system supporting portable interactive graphics display tablet and communications systems
US5991881A (en) 1996-11-08 1999-11-23 Harris Corporation Network surveillance system
JP3231673B2 (en) 1996-11-21 2001-11-26 シャープ株式会社 Character and character string search method and recording medium used in the method
US6205148B1 (en) 1996-11-26 2001-03-20 Fujitsu Limited Apparatus and a method for selecting an access router's protocol of a plurality of the protocols for transferring a packet in a communication system
US6108782A (en) 1996-12-13 2000-08-22 3Com Corporation Distributed remote monitoring (dRMON) for networks
US6073160A (en) 1996-12-18 2000-06-06 Xerox Corporation Document communications controller
US6070172A (en) 1997-03-06 2000-05-30 Oracle Corporation On-line free space defragmentation of a contiguous-file file system
US5930753A (en) 1997-03-20 1999-07-27 At&T Corp Combining frequency warping and spectral shaping in HMM based speech recognition
US6115751A (en) 1997-04-10 2000-09-05 Cisco Technology, Inc. Technique for capturing information needed to implement transmission priority routing among heterogeneous nodes of a computer network
US6067569A (en) 1997-07-10 2000-05-23 Microsoft Corporation Fast-forwarding and filtering of network packets in a computer system
US6173276B1 (en) 1997-08-21 2001-01-09 Scicomp, Inc. System and method for financial instrument modeling and valuation
US6370592B1 (en) * 1997-11-04 2002-04-09 Hewlett-Packard Company Network interface device which allows peripherals to utilize network transport services
US6112181A (en) 1997-11-06 2000-08-29 Intertrust Technologies Corporation Systems and methods for matching, selecting, narrowcasting, and/or classifying based on rights management and/or other information
US6138176A (en) 1997-11-14 2000-10-24 3Ware Disk array controller with automated processor which routes I/O data according to addresses and commands received from disk drive controllers
US6321258B1 (en) * 1997-12-11 2001-11-20 Hewlett-Packard Company Administration of networked peripherals using particular file system
US7424552B2 (en) * 1997-12-17 2008-09-09 Src Computers, Inc. Switch/network adapter port incorporating shared memory resources selectively accessible by a direct execution logic element and one or more dense logic devices
US6339819B1 (en) * 1997-12-17 2002-01-15 Src Computers, Inc. Multiprocessor with each processor element accessing operands in loaded input buffer and forwarding results to FIFO output buffer
US6058391A (en) 1997-12-17 2000-05-02 Mci Communications Corporation Enhanced user view/update capability for managing data from relational tables
US6216173B1 (en) 1998-02-03 2001-04-10 Redbox Technologies Limited Method and apparatus for content processing and routing
US6105067A (en) 1998-06-05 2000-08-15 International Business Machines Corp. Connection pool management for backend servers using common interface
US6169969B1 (en) 1998-08-07 2001-01-02 The United States Of America As Represented By The Director Of The National Security Agency Device and method for full-text large-dictionary string matching using n-gram hashing
US6456632B1 (en) * 1998-08-27 2002-09-24 Robert T. Baum Protocol separation in packet communication
US6219786B1 (en) 1998-09-09 2001-04-17 Surfcontrol, Inc. Method and system for monitoring and controlling network access
US6226676B1 (en) 1998-10-07 2001-05-01 Nortel Networks Corporation Connection establishment and termination in a mixed protocol network
US6625150B1 (en) * 1998-12-17 2003-09-23 Watchguard Technologies, Inc. Policy engine architecture
US6581098B1 (en) * 1999-09-27 2003-06-17 Hewlett-Packard Development Company, L.P. Server providing access to a plurality of functions of a multifunction peripheral in a network
ATE364866T1 (en) * 2000-01-06 2007-07-15 Ibm METHOD AND CIRCUIT FOR QUICKLY FINDING THE MINIMUM/MAXIMUM VALUE IN A SET OF NUMBERS
US20030099254A1 (en) * 2000-03-03 2003-05-29 Richter Roger K. Systems and methods for interfacing asynchronous and non-asynchronous data media
US7363277B1 (en) * 2000-03-27 2008-04-22 International Business Machines Corporation Detecting copyright violation via streamed extraction and signature analysis in a method, system and program
US7353267B1 (en) * 2000-04-07 2008-04-01 Netzero, Inc. Targeted network video download interface
US6601094B1 (en) * 2000-04-27 2003-07-29 Hewlett-Packard Development Company, L.P. Method and system for recommending an available network protocol
US7128816B2 (en) * 2000-06-14 2006-10-31 Wisconsin Alumni Research Foundation Method and apparatus for producing colloidal nanoparticles in a dense medium plasma
US7328349B2 (en) * 2001-12-14 2008-02-05 Bbn Technologies Corp. Hash-based systems and methods for detecting, preventing, and tracing network worms and viruses
US20040064737A1 (en) * 2000-06-19 2004-04-01 Milliken Walter Clark Hash-based systems and methods for detecting and preventing transmission of polymorphic network worms and viruses
US8204082B2 (en) * 2000-06-23 2012-06-19 Cloudshield Technologies, Inc. Transparent provisioning of services over a network
US6820129B1 (en) * 2000-09-22 2004-11-16 Hewlett-Packard Development Company, L.P. System and method of managing network buffers
US7117280B2 (en) * 2000-12-27 2006-10-03 Intel Corporation Network based intra-system communications architecture
US6868265B2 (en) * 2001-01-29 2005-03-15 Accelerated Performance, Inc. Locator for physically locating an electronic device in a communication network
CA2372380A1 (en) * 2001-02-20 2002-08-20 Martin D. Levine Method for secure transmission and receipt of data over a computer network using biometrics
US6847645B1 (en) * 2001-02-22 2005-01-25 Cisco Technology, Inc. Method and apparatus for controlling packet header buffer wrap around in a forwarding engine of an intermediate network node
US20030097481A1 (en) * 2001-03-01 2003-05-22 Richter Roger K. Method and system for performing packet integrity operations using a data movement engine
US7065482B2 (en) * 2001-05-17 2006-06-20 International Business Machines Corporation Internet traffic analysis tool
US7207041B2 (en) * 2001-06-28 2007-04-17 Tranzeo Wireless Technologies, Inc. Open platform architecture for shared resource access management
US7587476B2 (en) * 2001-08-07 2009-09-08 Ricoh Company, Ltd. Peripheral device with a centralized management server, and system, computer program product and method for managing peripheral devices connected to a network
US7191233B2 (en) * 2001-09-17 2007-03-13 Telecommunication Systems, Inc. System for automated, mid-session, user-directed, device-to-device session transfer system
US20030149869A1 (en) * 2002-02-01 2003-08-07 Paul Gleichauf Method and system for securely storing and transmitting data by applying a one-time pad
US20030198345A1 (en) * 2002-04-15 2003-10-23 Van Buer Darrel J. Method and apparatus for high speed implementation of data encryption and decryption utilizing, e.g. Rijndael or its subset AES, or other encryption/decryption algorithms having similar key expansion data flow
US7478431B1 (en) * 2002-08-02 2009-01-13 Symantec Corporation Heuristic detection of computer viruses
US7420931B2 (en) * 2003-06-05 2008-09-02 Nvidia Corporation Using TCP/IP offload to accelerate packet filtering
US7257842B2 (en) * 2003-07-21 2007-08-14 Mcafee, Inc. Pre-approval of computer files during a malware detection
US7200837B2 (en) * 2003-08-21 2007-04-03 Qst Holdings, Llc System, method and software for static and dynamic programming and configuration of an adaptive computing architecture
US7454418B1 (en) * 2003-11-07 2008-11-18 Qiang Wang Fast signature scan
US7546327B2 (en) * 2003-12-22 2009-06-09 Wells Fargo Bank, N.A. Platform independent randomness accumulator for network applications
US7966658B2 (en) * 2004-04-08 2011-06-21 The Regents Of The University Of California Detecting public network attacks using signatures and fast content analysis
JP4394541B2 (en) * 2004-08-23 2010-01-06 日本電気株式会社 COMMUNICATION DEVICE, DATA COMMUNICATION METHOD, AND PROGRAM
WO2006031551A2 (en) * 2004-09-10 2006-03-23 Cavium Networks Selective replication of data structure
JP4506430B2 (en) * 2004-11-24 2010-07-21 日本電気株式会社 Application monitor device
US20060198375A1 (en) * 2004-12-07 2006-09-07 Baik Kwang H Method and apparatus for pattern matching based on packet reassembly
US20060129745A1 (en) * 2004-12-11 2006-06-15 Gunther Thiel Process and appliance for data processing and computer program product
US7101188B1 (en) * 2005-03-30 2006-09-05 Intel Corporation Electrical edge connector adaptor
US20070011687A1 (en) * 2005-07-08 2007-01-11 Microsoft Corporation Inter-process message passing
US7801910B2 (en) * 2005-11-09 2010-09-21 Ramp Holdings, Inc. Method and apparatus for timed tagging of media content
EP1868321B1 (en) * 2006-06-12 2016-01-20 Mitsubishi Denki Kabushiki Kaisha In-line content analysis of a TCP segment stream
US8179895B2 (en) * 2006-08-01 2012-05-15 Tekelec Methods, systems, and computer program products for monitoring tunneled internet protocol (IP) traffic on a high bandwidth IP network
US7924720B2 (en) * 2007-02-26 2011-04-12 Hewlett-Packard Development Company, L.P. Network traffic monitoring

Patent Citations (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5347634A (en) * 1990-03-15 1994-09-13 Hewlett-Packard Company System and method for directly executing user DMA instruction from user controlled process by employing processor privileged work buffer pointers
US5421028A (en) * 1991-03-15 1995-05-30 Hewlett-Packard Company Processing commands and data in a common pipeline path in a high-speed computer graphics system
US5371794A (en) * 1993-11-02 1994-12-06 Sun Microsystems, Inc. Method and apparatus for privacy and authentication in wireless networks
US5701464A (en) * 1995-09-15 1997-12-23 Intel Corporation Parameterized bloom filters
US6028939A (en) * 1997-01-03 2000-02-22 Redcreek Communications, Inc. Data security system and method
US6175874B1 (en) * 1997-07-03 2001-01-16 Fujitsu Limited Packet relay control method packet relay device and program memory medium
US6389532B1 (en) * 1998-04-20 2002-05-14 Sun Microsystems, Inc. Method and apparatus for using digital signatures to filter packets in a network
US20030002502A1 (en) * 1998-05-01 2003-01-02 Gibson William A. System for recovering lost information in a data stream by means of parity packets
US20010056547A1 (en) * 1998-06-09 2001-12-27 Placeware, Inc. Bi-directional process-to-process byte stream protocol
US6628652B1 (en) * 1998-09-18 2003-09-30 Lucent Technologies Inc. Flexible telecommunications switching network
US20050232180A1 (en) * 1999-02-02 2005-10-20 Toporek Jerome D Internet over satellite apparatus
US6704816B1 (en) * 1999-07-26 2004-03-09 Sun Microsystems, Inc. Method and apparatus for executing standard functions in a computer system using a field programmable gate array
US6870837B2 (en) * 1999-08-19 2005-03-22 Nokia Corporation Circuit emulation service over an internet protocol network
US6804667B1 (en) * 1999-11-30 2004-10-12 Ncr Corporation Filter for checking for duplicate entries in database
US20020031125A1 (en) * 1999-12-28 2002-03-14 Jun Sato Packet transfer communication apparatus, packet transfer communication method, and storage medium
US7127510B2 (en) * 2000-02-02 2006-10-24 International Business Machines Corporation Access chain tracing system, network system, and storage medium
US20010052038A1 (en) * 2000-02-03 2001-12-13 Realtime Data, Llc Data storewidth accelerator
US6381242B1 (en) * 2000-08-29 2002-04-30 Netrake Corporation Content processor
US20020069370A1 (en) * 2000-08-31 2002-06-06 Infoseer, Inc. System and method for tracking and preventing illegal distribution of proprietary material over computer networks
US20040133634A1 (en) * 2000-11-02 2004-07-08 Stanley Luke Switching system
US20020095512A1 (en) * 2000-11-30 2002-07-18 Rana Aswinkumar Vishanji Method for reordering and reassembling data packets in a network
US6728929B1 (en) * 2001-02-16 2004-04-27 Spirent Communications Of Calabasas, Inc. System and method to insert a TCP checksum in a protocol neutral manner
US20030014662A1 (en) * 2001-06-13 2003-01-16 Gupta Ramesh M. Protocol-parsing state machine and method of using same
US20030009693A1 (en) * 2001-07-09 2003-01-09 International Business Machines Corporation Dynamic intrusion detection for computer systems
US7046848B1 (en) * 2001-08-22 2006-05-16 Olcott Peter L Method and system for recognizing machine generated character glyphs and icons in graphic images
US20030051043A1 (en) * 2001-09-12 2003-03-13 Raqia Networks Inc. High speed data stream pattern recognition
US20030074582A1 (en) * 2001-10-12 2003-04-17 Motorola, Inc. Method and apparatus for providing node security in a router of a packet network
US20030221013A1 (en) * 2002-05-21 2003-11-27 John Lockwood Methods, systems, and devices using reprogrammable hardware for high-speed processing of streaming data to find a redefinable pattern and respond thereto
US7480253B1 (en) * 2002-05-30 2009-01-20 Nortel Networks Limited Ascertaining the availability of communications between devices
US20090019538A1 (en) * 2002-06-11 2009-01-15 Pandya Ashish A Distributed network security system and a hardware processor therefor
US20040054924A1 (en) * 2002-09-03 2004-03-18 Chuah Mooi Choo Methods and devices for providing distributed, adaptive IP filtering against distributed denial of service attacks
US20040100977A1 (en) * 2002-11-01 2004-05-27 Kazuyuki Suzuki Packet processing apparatus
US20040105458A1 (en) * 2002-11-29 2004-06-03 Kabushiki Kaisha Toshiba Communication control method, server apparatus, and client apparatus
US20040107361A1 (en) * 2002-11-29 2004-06-03 Redan Michael C. System for high speed network intrusion detection
US20060136570A1 (en) * 2003-06-10 2006-06-22 Pandya Ashish A Runtime adaptable search processor
US20050086520A1 (en) * 2003-08-14 2005-04-21 Sarang Dharmapurikar Method and apparatus for detecting predefined signatures in packet payload using bloom filters
US7444515B2 (en) * 2003-08-14 2008-10-28 Washington University Method and apparatus for detecting predefined signatures in packet payload using Bloom filters
US20080037420A1 (en) * 2003-10-08 2008-02-14 Bob Tang Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgentcp (square waveform) TCP friendly san
US7408932B2 (en) * 2003-10-20 2008-08-05 Intel Corporation Method and apparatus for two-stage packet classification using most specific filter matching and transport level sharing
US7386564B2 (en) * 2004-01-15 2008-06-10 International Business Machines Corporation Generating statistics on text pattern matching predicates for access planning
US7019674B2 (en) * 2004-02-05 2006-03-28 Nec Laboratories America, Inc. Content-based information retrieval architecture
US20050175010A1 (en) * 2004-02-09 2005-08-11 Alcatel Filter based longest prefix match algorithm
US7411957B2 (en) * 2004-03-26 2008-08-12 Cisco Technology, Inc. Hardware filtering support for denial-of-service attacks
US20060023384A1 (en) * 2004-07-28 2006-02-02 Udayan Mukherjee Systems, apparatus and methods capable of shelf management
US7457834B2 (en) * 2004-07-30 2008-11-25 Searete, Llc Aggregation and retrieval of network sensor data
US20060036693A1 (en) * 2004-08-12 2006-02-16 Microsoft Corporation Spam filtering with probabilistic secure hashes
US20060053295A1 (en) * 2004-08-24 2006-03-09 Bharath Madhusudan Methods and systems for content detection in a reconfigurable hardware
US20060075119A1 (en) * 2004-09-10 2006-04-06 Hussain Muhammad R TCP host
US7461064B2 (en) * 2004-09-24 2008-12-02 International Business Machines Corporation Method for searching documents for ranges of numeric values
US20060092943A1 (en) * 2004-11-04 2006-05-04 Cisco Technology, Inc. Method and apparatus for guaranteed in-order delivery for FICON over SONET/SDH transport
US20060164978A1 (en) * 2005-01-21 2006-07-27 At&T Corp. Methods, systems, and devices for determining COS level

Cited By (186)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8095508B2 (en) 2000-04-07 2012-01-10 Washington University Intelligent data storage and processing using FPGA devices
US8549024B2 (en) 2000-04-07 2013-10-01 Ip Reservoir, Llc Method and apparatus for adjustable data matching
US7680790B2 (en) 2000-04-07 2010-03-16 Washington University Method and apparatus for approximate matching of DNA sequences
US7949650B2 (en) 2000-04-07 2011-05-24 Washington University Associative database scanning and information retrieval
US7953743B2 (en) 2000-04-07 2011-05-31 Washington University Associative database scanning and information retrieval
US9020928B2 (en) 2000-04-07 2015-04-28 Ip Reservoir, Llc Method and apparatus for processing streaming data using programmable logic
US8131697B2 (en) 2000-04-07 2012-03-06 Washington University Method and apparatus for approximate matching where programmable logic is used to process data being written to a mass storage medium and process data being read from a mass storage medium
US7716330B2 (en) 2001-10-19 2010-05-11 Global Velocity, Inc. System and method for controlling transmission of data packets over an information network
US20030110229A1 (en) * 2001-10-19 2003-06-12 Kulig Matthew P. System and method for controlling transmission of data packets over an information network
US10909623B2 (en) 2002-05-21 2021-02-02 Ip Reservoir, Llc Method and apparatus for processing financial information at hardware speeds using FPGA devices
US8069102B2 (en) 2002-05-21 2011-11-29 Washington University Method and apparatus for processing financial information at hardware speeds using FPGA devices
US7711844B2 (en) 2002-08-15 2010-05-04 Washington University Of St. Louis TCP-splitter: reliable packet monitoring methods and apparatus for high speed networks
US10346181B2 (en) 2003-05-23 2019-07-09 Ip Reservoir, Llc Intelligent data storage and processing using FPGA devices
US9898312B2 (en) 2003-05-23 2018-02-20 Ip Reservoir, Llc Intelligent data storage and processing using FPGA devices
US8751452B2 (en) 2003-05-23 2014-06-10 Ip Reservoir, Llc Intelligent data storage and processing using FPGA devices
US11275594B2 (en) 2003-05-23 2022-03-15 Ip Reservoir, Llc Intelligent data storage and processing using FPGA devices
US10572824B2 (en) 2003-05-23 2020-02-25 Ip Reservoir, Llc System and method for low latency multi-functional pipeline with correlation logic and selectively activated/deactivated pipelined data processing engines
US10929152B2 (en) 2003-05-23 2021-02-23 Ip Reservoir, Llc Intelligent data storage and processing using FPGA devices
US9176775B2 (en) 2003-05-23 2015-11-03 Ip Reservoir, Llc Intelligent data storage and processing using FPGA devices
US10719334B2 (en) 2003-05-23 2020-07-21 Ip Reservoir, Llc Intelligent data storage and processing using FPGA devices
US8768888B2 (en) 2003-05-23 2014-07-01 Ip Reservoir, Llc Intelligent data storage and processing using FPGA devices
US8620881B2 (en) 2003-05-23 2013-12-31 Ip Reservoir, Llc Intelligent data storage and processing using FPGA devices
US7747599B1 (en) 2004-07-23 2010-06-29 Netlogic Microsystems, Inc. Integrated search engine devices that utilize hierarchical memories containing b-trees and span prefix masks to support longest prefix match search operations
US8886677B1 (en) 2004-07-23 2014-11-11 Netlogic Microsystems, Inc. Integrated search engine devices that support LPM search operations using span prefix masks that encode key prefix length
US8776206B1 (en) * 2004-10-18 2014-07-08 Gtb Technologies, Inc. Method, a system, and an apparatus for content security in computer networks
US8000324B2 (en) * 2004-11-30 2011-08-16 Broadcom Corporation Pipeline architecture of a network device
US20060114914A1 (en) * 2004-11-30 2006-06-01 Broadcom Corporation Pipeline architecture of a network device
US10957423B2 (en) 2005-03-03 2021-03-23 Washington University Method and apparatus for performing similarity searching
US9547680B2 (en) 2005-03-03 2017-01-17 Washington University Method and apparatus for performing similarity searching
US10580518B2 (en) 2005-03-03 2020-03-03 Washington University Method and apparatus for performing similarity searching
US7917299B2 (en) 2005-03-03 2011-03-29 Washington University Method and apparatus for performing similarity searching on a data stream with respect to a query string
US8515682B2 (en) 2005-03-03 2013-08-20 Washington University Method and apparatus for performing similarity searching
CN100385443C (en) * 2005-09-09 2008-04-30 湖南大学 Searching method based on classified file BloomFilter structure
CN100396057C (en) * 2005-10-21 2008-06-18 清华大学 High speed block detecting method based on stated filter engine
US7505960B2 (en) 2005-11-15 2009-03-17 Microsoft Corporation Scalable retrieval of data entries using an array index or a secondary key
US7702629B2 (en) 2005-12-02 2010-04-20 Exegy Incorporated Method and device for high performance regular expression pattern matching
US20070130140A1 (en) * 2005-12-02 2007-06-07 Cytron Ron K Method and device for high performance regular expression pattern matching
US7945528B2 (en) 2005-12-02 2011-05-17 Exegy Incorporated Method and device for high performance regular expression pattern matching
US7954114B2 (en) 2006-01-26 2011-05-31 Exegy Incorporated Firmware socket module for FPGA-based pipeline processing
US8737606B2 (en) 2006-03-23 2014-05-27 Ip Reservoir, Llc Method and system for high throughput blockwise independent encryption/decryption
US8983063B1 (en) 2006-03-23 2015-03-17 Ip Reservoir, Llc Method and system for high throughput blockwise independent encryption/decryption
US8379841B2 (en) 2006-03-23 2013-02-19 Exegy Incorporated Method and system for high throughput blockwise independent encryption/decryption
US8407122B2 (en) 2006-06-19 2013-03-26 Exegy Incorporated High speed processing of financial information using FPGA devices
US10467692B2 (en) 2006-06-19 2019-11-05 Ip Reservoir, Llc High speed processing of financial information using FPGA devices
US11182856B2 (en) 2006-06-19 2021-11-23 Exegy Incorporated System and method for routing of streaming data as between multiple compute resources
US10169814B2 (en) 2006-06-19 2019-01-01 Ip Reservoir, Llc High speed processing of financial information using FPGA devices
US9672565B2 (en) 2006-06-19 2017-06-06 Ip Reservoir, Llc High speed processing of financial information using FPGA devices
US8595104B2 (en) 2006-06-19 2013-11-26 Ip Reservoir, Llc High speed processing of financial information using FPGA devices
US8843408B2 (en) 2006-06-19 2014-09-23 Ip Reservoir, Llc Method and system for high speed options pricing
US7921046B2 (en) 2006-06-19 2011-04-05 Exegy Incorporated High speed processing of financial information using FPGA devices
US8478680B2 (en) 2006-06-19 2013-07-02 Exegy Incorporated High speed processing of financial information using FPGA devices
US7840482B2 (en) 2006-06-19 2010-11-23 Exegy Incorporated Method and system for high speed options pricing
US8655764B2 (en) 2006-06-19 2014-02-18 Ip Reservoir, Llc High speed processing of financial information using FPGA devices
US9916622B2 (en) 2006-06-19 2018-03-13 Ip Reservoir, Llc High speed processing of financial information using FPGA devices
US8626624B2 (en) 2006-06-19 2014-01-07 Ip Reservoir, Llc High speed processing of financial information using FPGA devices
US10360632B2 (en) 2006-06-19 2019-07-23 Ip Reservoir, Llc Fast track routing of streaming data using FPGA devices
US8458081B2 (en) 2006-06-19 2013-06-04 Exegy Incorporated High speed processing of financial information using FPGA devices
US10504184B2 (en) 2006-06-19 2019-12-10 Ip Reservoir, Llc Fast track routing of streaming data as between multiple compute resources
US9582831B2 (en) 2006-06-19 2017-02-28 Ip Reservoir, Llc High speed processing of financial information using FPGA devices
US8600856B2 (en) 2006-06-19 2013-12-03 Ip Reservoir, Llc High speed processing of financial information using FPGA devices
US10817945B2 (en) 2006-06-19 2020-10-27 Ip Reservoir, Llc System and method for routing of streaming data as between multiple compute resources
US20080080505A1 (en) * 2006-09-29 2008-04-03 Munoz Robert J Methods and Apparatus for Performing Packet Processing Operations in a Network
US20080112413A1 (en) * 2006-11-10 2008-05-15 Fong Pong Method and system for hash table based routing via table and prefix aggregation
US7852851B2 (en) * 2006-11-10 2010-12-14 Broadcom Corporation Method and system for hash table based routing via a prefix transformation
US20080112412A1 (en) * 2006-11-10 2008-05-15 Fong Pong Method and system for hash table based routing via a prefix transformation
US7885268B2 (en) * 2006-11-10 2011-02-08 Broadcom Corporation Method and system for hash table based routing via table and prefix aggregation
US8326819B2 (en) 2006-11-13 2012-12-04 Exegy Incorporated Method and system for high performance data metatagging and data indexing using coprocessors
US9396222B2 (en) 2006-11-13 2016-07-19 Ip Reservoir, Llc Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
US8880501B2 (en) 2006-11-13 2014-11-04 Ip Reservoir, Llc Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
US10191974B2 (en) 2006-11-13 2019-01-29 Ip Reservoir, Llc Method and system for high performance integration, processing and searching of structured and unstructured data
US20100094858A1 (en) * 2006-11-13 2010-04-15 Exegy Incorporated Method and System for High Performance Integration, Processing and Searching of Structured and Unstructured Data Using Coprocessors
US9323794B2 (en) 2006-11-13 2016-04-26 Ip Reservoir, Llc Method and system for high performance pattern indexing
US8156101B2 (en) 2006-11-13 2012-04-10 Exegy Incorporated Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
US11449538B2 (en) 2006-11-13 2022-09-20 Ip Reservoir, Llc Method and system for high performance integration, processing and searching of structured and unstructured data
US7660793B2 (en) 2006-11-13 2010-02-09 Exegy Incorporated Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
US8086641B1 (en) 2006-11-27 2011-12-27 Netlogic Microsystems, Inc. Integrated search engine devices that utilize SPM-linked bit maps to reduce handle memory duplication and methods of operating same
US7805427B1 (en) * 2006-11-27 2010-09-28 Netlogic Microsystems, Inc. Integrated search engine devices that support multi-way search trees having multi-column nodes
US7953721B1 (en) 2006-11-27 2011-05-31 Netlogic Microsystems, Inc. Integrated search engine devices that support database key dumping and methods of operating same
US7987205B1 (en) 2006-11-27 2011-07-26 Netlogic Microsystems, Inc. Integrated search engine devices having pipelined node maintenance sub-engines therein that support database flush operations
US7831626B1 (en) 2006-11-27 2010-11-09 Netlogic Microsystems, Inc. Integrated search engine devices having a plurality of multi-way trees of search keys therein that share a common root node
US20080147714A1 (en) * 2006-12-19 2008-06-19 Mauricio Breternitz Efficient bloom filter
US7620781B2 (en) * 2006-12-19 2009-11-17 Intel Corporation Efficient Bloom filter
US9363078B2 (en) 2007-03-22 2016-06-07 Ip Reservoir, Llc Method and apparatus for hardware-accelerated encryption/decryption
US8032529B2 (en) * 2007-04-12 2011-10-04 Cisco Technology, Inc. Enhanced bloom filters
US20080256094A1 (en) * 2007-04-12 2008-10-16 Cisco Technology, Inc. Enhanced bloom filters
US8224940B2 (en) 2007-05-31 2012-07-17 Microsoft Corporation Strategies for compressing information using bloom filters
US20080301218A1 (en) * 2007-05-31 2008-12-04 Microsoft Corporation Strategies for Compressing Information Using Bloom Filters
WO2008151673A1 (en) * 2007-06-14 2008-12-18 Telefonaktiebolaget Lm Ericsson (Publ) Routing in a network
US8879727B2 (en) 2007-08-31 2014-11-04 Ip Reservoir, Llc Method and apparatus for hardware-accelerated encryption/decryption
KR100931796B1 (en) 2007-11-28 2009-12-14 한양대학교 산학협력단 Packet collection system using bloom filter, a method of reducing the storage size of packets in the packet collection system, a packet retrieval system and a method of reducing the rate of false positives
US10229453B2 (en) 2008-01-11 2019-03-12 Ip Reservoir, Llc Method and system for low latency basket calculation
US20100229040A1 (en) * 2008-02-01 2010-09-09 Huawei Technologies Co., Ltd. Method and device for creating pattern matching state machine
US8583961B2 (en) * 2008-02-01 2013-11-12 Huawei Technologies Co., Ltd. Method and device for creating pattern matching state machine
US10158377B2 (en) 2008-05-15 2018-12-18 Ip Reservoir, Llc Method and system for accelerated stream processing
US11677417B2 (en) 2008-05-15 2023-06-13 Ip Reservoir, Llc Method and system for accelerated stream processing
US10411734B2 (en) 2008-05-15 2019-09-10 Ip Reservoir, Llc Method and system for accelerated stream processing
US9547824B2 (en) 2008-05-15 2017-01-17 Ip Reservoir, Llc Method and apparatus for accelerated data quality checking
US8374986B2 (en) 2008-05-15 2013-02-12 Exegy Incorporated Method and system for accelerated stream processing
US10965317B2 (en) 2008-05-15 2021-03-30 Ip Reservoir, Llc Method and system for accelerated stream processing
CN101309216B (en) * 2008-07-03 2011-05-04 中国科学院计算技术研究所 IP packet classification method and apparatus
US7990973B2 (en) * 2008-08-13 2011-08-02 Alcatel-Lucent Usa Inc. Hash functions for applications such as network address lookup
US8018940B2 (en) * 2008-08-13 2011-09-13 Alcatel Lucent Network address lookup based on bloom filters
US20100040067A1 (en) * 2008-08-13 2010-02-18 Lucent Technologies Inc. Hash functions for applications such as network address lookup
US20100040066A1 (en) * 2008-08-13 2010-02-18 Lucent Technologies Inc. Network address lookup based on bloom filters
CN101383034A (en) * 2008-09-18 2009-03-11 腾讯科技(深圳)有限公司 Method and system for advertisement statistic and delivery
US8804950B1 (en) * 2008-09-30 2014-08-12 Juniper Networks, Inc. Methods and apparatus for producing a hash value based on a hash function
US8571034B2 (en) 2008-09-30 2013-10-29 Juniper Networks, Inc. Methods and apparatus related to packet classification associated with a multi-stage switch
US8762249B2 (en) 2008-12-15 2014-06-24 Ip Reservoir, Llc Method and apparatus for high-speed processing of financial market depth data
US10929930B2 (en) 2008-12-15 2021-02-23 Ip Reservoir, Llc Method and apparatus for high-speed processing of financial market depth data
US10062115B2 (en) 2008-12-15 2018-08-28 Ip Reservoir, Llc Method and apparatus for high-speed processing of financial market depth data
US11676206B2 (en) 2008-12-15 2023-06-13 Exegy Incorporated Method and apparatus for high-speed processing of financial market depth data
US8768805B2 (en) 2008-12-15 2014-07-01 Ip Reservoir, Llc Method and apparatus for high-speed processing of financial market depth data
US20100228701A1 (en) * 2009-03-06 2010-09-09 Microsoft Corporation Updating bloom filters
US8675661B1 (en) * 2009-05-07 2014-03-18 Sprint Communications Company L.P. Allocating IP version fields to increase address space
KR101028470B1 (en) 2009-05-07 2011-04-14 이화여자대학교 산학협력단 Method and Apparatus for Searching IP Address
US20100325213A1 (en) * 2009-06-17 2010-12-23 Microsoft Corporation Multi-tier, multi-state lookup
US8271635B2 (en) 2009-06-17 2012-09-18 Microsoft Corporation Multi-tier, multi-state lookup
CN101930418A (en) * 2009-06-26 2010-12-29 英特尔公司 The multiple compress technique that is used for grouping information
US8111704B2 (en) * 2009-06-26 2012-02-07 Intel Corporation Multiple compression techniques for packetized information
US20100329255A1 (en) * 2009-06-26 2010-12-30 Abhishek Singhal Multiple Compression Techniques For Packetized Information
US20110069632A1 (en) * 2009-09-21 2011-03-24 Alcatel-Lucent Usa Inc. Tracking network-data flows
US8134934B2 (en) 2009-09-21 2012-03-13 Alcatel Lucent Tracking network-data flows
KR101068716B1 (en) 2009-12-28 2011-09-28 경희대학교 산학협력단 Method for tracebacking packet sensor network
US9191328B2 (en) * 2010-06-24 2015-11-17 Hewlett-Packard Development Company, L.P. Forwarding broadcast traffic to a host environment
US20110320630A1 (en) * 2010-06-24 2011-12-29 Jeffrey Mogul Forwarding broadcast traffic to a host environment
EP2793436A1 (en) * 2010-10-04 2014-10-22 Huawei Technologies Co., Ltd. Content router forwarding plane architecture
CN103141060A (en) * 2010-10-04 2013-06-05 华为技术有限公司 Content router forwarding plane architecture
WO2012045240A1 (en) * 2010-10-04 2012-04-12 Huawei Technologies Co., Ltd. Content router forwarding plane architecture
US8578049B2 (en) 2010-10-04 2013-11-05 Futurewei Technologies, Inc. Content router forwarding plane architecture
US10037568B2 (en) 2010-12-09 2018-07-31 Ip Reservoir, Llc Method and apparatus for managing orders in financial markets
US11397985B2 (en) 2010-12-09 2022-07-26 Exegy Incorporated Method and apparatus for managing orders in financial markets
US11803912B2 (en) 2010-12-09 2023-10-31 Exegy Incorporated Method and apparatus for managing orders in financial markets
US8630294B1 (en) 2011-05-11 2014-01-14 Juniper Networks, Inc. Dynamic bypass mechanism to alleviate bloom filter bank contention
CN102333036A (en) * 2011-10-17 2012-01-25 中兴通讯股份有限公司 Method and system for realizing high-speed routing lookup
US8898204B1 (en) * 2011-10-21 2014-11-25 Applied Micro Circuits Corporation System and method for controlling updates of a data structure
US9152661B1 (en) * 2011-10-21 2015-10-06 Applied Micro Circuits Corporation System and method for searching a data structure
US9253091B2 (en) 2011-11-22 2016-02-02 Orange Method for processing a request in an information-centric communication network
FR2982974A1 (en) * 2011-11-22 2013-05-24 France Telecom METHOD OF PROCESSING A QUERY IN A COMMUNICATION NETWORK CENTERED ON INFORMATION
WO2013076418A1 (en) * 2011-11-22 2013-05-30 France Telecom Method of processing a request in an information-centred communication network
US8886827B2 (en) 2012-02-13 2014-11-11 Juniper Networks, Inc. Flow cache mechanism for performing packet flow lookups in a network device
CN103312615A (en) * 2012-03-13 2013-09-18 丛林网络公司 Longest prefix match searches with variable numbers of prefixes
US20130246651A1 (en) * 2012-03-13 2013-09-19 Juniper Networks, Inc. Longest prefix match searches with variable numbers of prefixes
US8799507B2 (en) * 2012-03-13 2014-08-05 Juniper Networks, Inc. Longest prefix match searches with variable numbers of prefixes
EP2640021A1 (en) * 2012-03-13 2013-09-18 Juniper Networks, Inc. Longest prefix match searches with variable numbers of prefixes
US10872078B2 (en) 2012-03-27 2020-12-22 Ip Reservoir, Llc Intelligent feed switch
US10650452B2 (en) 2012-03-27 2020-05-12 Ip Reservoir, Llc Offload processing of data packets
US10121196B2 (en) 2012-03-27 2018-11-06 Ip Reservoir, Llc Offload processing of data packets containing financial market data
US10963962B2 (en) 2012-03-27 2021-03-30 Ip Reservoir, Llc Offload processing of data packets containing financial market data
US9990393B2 (en) 2012-03-27 2018-06-05 Ip Reservoir, Llc Intelligent feed switch
US11436672B2 (en) 2012-03-27 2022-09-06 Exegy Incorporated Intelligent switch for processing financial market data
US8805850B2 (en) * 2012-05-23 2014-08-12 International Business Machines Corporation Hardware-accelerated relational joins
US20130318067A1 (en) * 2012-05-23 2013-11-28 International Business Machines Corporation Hardware-accelerated relational joins
US20140081701A1 (en) * 2012-09-20 2014-03-20 Ebay Inc. Determining and using brand information in electronic commerce
US11392963B2 (en) * 2012-09-20 2022-07-19 Ebay Inc. Determining and using brand information in electronic commerce
US10657541B2 (en) 2012-09-20 2020-05-19 Ebay Inc. Determining and using brand information in electronic commerce
US10140621B2 (en) * 2012-09-20 2018-11-27 Ebay Inc. Determining and using brand information in electronic commerce
US10102260B2 (en) 2012-10-23 2018-10-16 Ip Reservoir, Llc Method and apparatus for accelerated data translation using record layout detection
US9633097B2 (en) 2012-10-23 2017-04-25 Ip Reservoir, Llc Method and apparatus for record pivoting to accelerate processing of data fields
US11789965B2 (en) 2012-10-23 2023-10-17 Ip Reservoir, Llc Method and apparatus for accelerated format translation of data in a delimited data format
US10621192B2 (en) 2012-10-23 2020-04-14 Ip Reservoir, Llc Method and apparatus for accelerated format translation of data in a delimited data format
US10133802B2 (en) 2012-10-23 2018-11-20 Ip Reservoir, Llc Method and apparatus for accelerated record layout detection
US10949442B2 (en) 2012-10-23 2021-03-16 Ip Reservoir, Llc Method and apparatus for accelerated format translation of data in a delimited data format
US9633093B2 (en) 2012-10-23 2017-04-25 Ip Reservoir, Llc Method and apparatus for accelerated format translation of data in a delimited data format
US10146845B2 (en) 2012-10-23 2018-12-04 Ip Reservoir, Llc Method and apparatus for accelerated format translation of data in a delimited data format
US9465826B2 (en) * 2012-11-27 2016-10-11 Hewlett Packard Enterprise Development Lp Estimating unique entry counts using a counting bloom filter
US20140149433A1 (en) * 2012-11-27 2014-05-29 Hewlett-Packard Development Company, L.P. Estimating Unique Entry Counts Using a Counting Bloom Filter
US20150372915A1 (en) * 2013-01-31 2015-12-24 Hewlett-Packard Development Company, L.P. Incremental update of a shape graph
US10021026B2 (en) * 2013-01-31 2018-07-10 Hewlett Packard Enterprise Development Lp Incremental update of a shape graph
US10169356B2 (en) * 2013-02-26 2019-01-01 Facebook, Inc. Intelligent data caching for typeahead search
US20150098470A1 (en) * 2013-10-04 2015-04-09 Broadcom Corporation Hierarchical hashing for longest prefix matching
US9647941B2 (en) * 2013-10-04 2017-05-09 Avago Technologies General Ip (Singapore) Pte. Ltd. Hierarchical hashing for longest prefix matching
WO2015081524A1 (en) * 2013-12-05 2015-06-11 Peking University Shenzhen Graduate School Method and apparatus for forwarding heterogeneous address routes
US10902013B2 (en) 2014-04-23 2021-01-26 Ip Reservoir, Llc Method and apparatus for accelerated record layout detection
US9608863B2 (en) * 2014-10-17 2017-03-28 Cisco Technology, Inc. Address autoconfiguration using bloom filter parameters for unique address computation
US20160112254A1 (en) * 2014-10-17 2016-04-21 Cisco Technology, Inc. Address autoconfiguration using bloom filter parameters for unique address computation
US9596181B1 (en) * 2014-10-20 2017-03-14 Juniper Networks, Inc. Two stage bloom filter for longest prefix match
US10158571B2 (en) 2014-10-20 2018-12-18 Juniper Networks, Inc. Two stage bloom filter for longest prefix match
US20160294625A1 (en) * 2015-03-31 2016-10-06 Telefonaktiebolaget L M Ericsson (Publ) Method for network monitoring using efficient group membership test based rule consolidation
US9860152B2 (en) 2015-09-21 2018-01-02 Telefonaktiebolaget L M Ericsson (Publ) Non-intrusive method for testing and profiling network service functions
US11526531B2 (en) 2015-10-29 2022-12-13 Ip Reservoir, Llc Dynamic field data translation to support high performance stream data processing
US10942943B2 (en) 2015-10-29 2021-03-09 Ip Reservoir, Llc Dynamic field data translation to support high performance stream data processing
US11416778B2 (en) 2016-12-22 2022-08-16 Ip Reservoir, Llc Method and apparatus for hardware-accelerated machine learning
US10846624B2 (en) 2016-12-22 2020-11-24 Ip Reservoir, Llc Method and apparatus for hardware-accelerated machine learning
US20190108277A1 (en) * 2017-10-11 2019-04-11 Adobe Inc. Method to identify and extract fragments among large collections of digital documents using repeatability and semantic information
US10872105B2 (en) * 2017-10-11 2020-12-22 Adobe Inc. Method to identify and extract fragments among large collections of digital documents using repeatability and semantic information
US11132400B2 (en) * 2018-07-23 2021-09-28 Microsoft Technology Licensing, Llc Data classification using probabilistic data structures

Also Published As

Publication number Publication date
US7602785B2 (en) 2009-10-13
US20100098081A1 (en) 2010-04-22

Similar Documents

Publication Publication Date Title
US7602785B2 (en) Method and system for performing longest prefix matching for network address lookup using bloom filters
Dharmapurikar et al. Longest prefix matching using bloom filters
US7415472B2 (en) Comparison tree data structures of particular use in performing lookup operations
US7613134B2 (en) Method and apparatus for storing tree data structures among and within multiple memory channels
Song et al. IPv6 lookups using distributed and load balanced bloom filters for 100Gbps core router line cards
US8089961B2 (en) Low power ternary content-addressable memory (TCAMs) for very large forwarding tables
US6434144B1 (en) Multi-level table lookup
US8780926B2 (en) Updating prefix-compressed tries for IP route lookup
Eatherton et al. Tree bitmap: hardware/software IP lookups with incremental updates
US7630373B2 (en) Packet transfer apparatus
US7653670B2 (en) Storage-efficient and collision-free hash-based packet processing architecture and method
US7415463B2 (en) Programming tree data structures and handling collisions while performing lookup operations
US7019674B2 (en) Content-based information retrieval architecture
Panigrahy et al. Reducing TCAM power consumption and increasing throughput
US7418505B2 (en) IP address lookup using either a hashing table or multiple hash functions
Warkhede et al. Multiway range trees: scalable IP lookup with fast updates
Hasan et al. Chisel: A storage-efficient, collision-free hash-based network processing architecture
Pao et al. Efficient hardware architecture for fast IP address lookup
US6574701B2 (en) Technique for updating a content addressable memory
US6532516B1 (en) Technique for updating a content addressable memory
US7558775B1 (en) Methods and apparatus for maintaining sets of ranges typically using an associative memory and for using these ranges to identify a matching range based on a query point or query range and to maintain sorted elements for use such as in providing priority queue operations
US7299317B1 (en) Assigning prefixes to associative memory classes based on a value of a last bit of each prefix and their use including but not limited to locating a prefix and for maintaining a Patricia tree data structure
Wang Scalable packet classification with controlled cross-producting
Sahni et al. IP router tables

Legal Events

Date Code Title Description
AS Assignment

Owner name: WASHINGTON UNIVERSITY, MISSOURI

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DHARMAPURIKAR, SARANG;KRISHNAMURTHY, PRAVEEN;TAYLOR, DAVID E.;REEL/FRAME:016276/0709

Effective date: 20040126

AS Assignment

Owner name: NATIONAL SCIENCE FOUNDATION, VIRGINIA

Free format text: CONFIRMATORY LICENSE;ASSIGNOR:WASHINGTON UNIVERSITY;REEL/FRAME:019897/0985

Effective date: 20070619

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

CC Certificate of correction
FEPP Fee payment procedure

Free format text: PATENT HOLDER CLAIMS MICRO ENTITY STATUS, ENTITY STATUS SET TO MICRO (ORIGINAL EVENT CODE: STOM); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

REMI Maintenance fee reminder mailed
FPAY Fee payment

Year of fee payment: 4

SULP Surcharge for late payment
FEPP Fee payment procedure

Free format text: PAT HLDR NO LONGER CLAIMS MICRO ENTITY STATE, ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: MTOS); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FPAY Fee payment

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2553); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 12