US20030084038A1 - Transactional memory manager - Google Patents

Transactional memory manager

Info

Publication number
US20030084038A1
US20030084038A1
Authority
US
United States
Prior art keywords
search
database
pointer
new
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/285,544
Inventor
Aristotle Balogh
William Haworth
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Verisign Inc
Original Assignee
Verisign Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (https://patents.darts-ip.com/?family=26987480&patent=US20030084038(A1)). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Verisign Inc
Priority to US10/285,544
Assigned to VERISIGN, INC. Assignors: BALOGH, ARISTOTLE; HAWORTH JR., WILLIAM F.
Publication of US20030084038A1
Status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/40 Data acquisition and logging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2471 Distributed queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G06F 16/2308 Concurrency control
    • G06F 16/2315 Optimistic concurrency control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G06F 16/2358 Change logging, detection, and notification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G06F 16/2365 Ensuring data consistency and integrity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5018 Thread allocation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 Data processing: database and file management or data structures
    • Y10S 707/953 Organization of data
    • Y10S 707/959 Network
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 Data processing: database and file management or data structures
    • Y10S 707/964 Database arrangement
    • Y10S 707/966 Distributed
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 Data processing: database and file management or data structures
    • Y10S 707/99931 Database or file accessing
    • Y10S 707/99938 Concurrency, e.g. lock management in shared database
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 Data processing: database and file management or data structures
    • Y10S 707/99941 Database schema or data structure
    • Y10S 707/99942 Manipulating data structure, e.g. compression, compaction, compilation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 Data processing: database and file management or data structures
    • Y10S 707/99941 Database schema or data structure
    • Y10S 707/99943 Generating database or data structure, e.g. via user interface
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 Data processing: database and file management or data structures
    • Y10S 707/99951 File or database maintenance
    • Y10S 707/99952 Coherency, e.g. same view to multiple users
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 Data processing: database and file management or data structures
    • Y10S 707/99951 File or database maintenance
    • Y10S 707/99952 Coherency, e.g. same view to multiple users
    • Y10S 707/99953 Recoverability

Definitions

  • This disclosure relates to computer systems. More specifically, this disclosure relates to a method and system for providing high-speed database searching with concurrent updating for large database systems.
  • The A root server (i.e., a.root-server.net) maintains and distributes the Internet namespace root zone file to the 12 secondary root servers geographically distributed around the world (i.e., b.root-server.net, c.root-server.net, etc.), while the corresponding gTLD servers (i.e., a.gtld-servers.net, b.gtld-servers.net, etc.) are similarly distributed and support the top level domains (e.g., *.com, *.net, *.org, etc.).
  • FIG. 1 is a system block diagram, according to an embodiment of the present invention.
  • FIG. 2 is a detailed block diagram that illustrates a message data structure, according to an embodiment of the present invention.
  • FIG. 3 is a detailed block diagram that illustrates a message latency data structure architecture, according to an embodiment of the present invention.
  • FIG. 4 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention.
  • FIG. 5 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention.
  • FIG. 6 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention.
  • FIG. 7 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention.
  • FIG. 8 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention.
  • FIG. 9 is a top level flow diagram that illustrates a method for searching and concurrently updating a database, according to an embodiment of the present invention.
  • FIG. 10 is a top level flow diagram that illustrates a method for searching and concurrently updating a database, according to an embodiment of the present invention.
  • Embodiments of the present invention provide a method and system for high-speed database searching with concurrent updating for large database systems. Specifically, a plurality of search queries may be received over a network, the database may be searched, and a plurality of search replies may be sent over the network. While searching the database, new information may be received over the network, a plurality of new database elements may be created based on the new information and a dirty bit may be set within each new database element. A pointer to each new database element may be written to the database using a single uninterruptible operation and the dirty bit within each new database element may be cleared.
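  • The update sequence above can be sketched directly in code. The following C11 fragment is a minimal, hypothetical illustration only: the record layout, the DIRTY flag encoding and the function names are assumptions, not taken from the patent. The commit point is the single aligned pointer store, so a concurrent searcher sees either the old bucket head or the fully built new element.

      #include <stdatomic.h>

      #define DIRTY 0x1u                  /* assumed dirty-bit encoding */

      struct record {
          unsigned flags;                 /* dirty bit is set while in-flight */
          struct record *next;            /* hash chain pointer */
          /* ... record data ... */
      };

      /* Insert a new element at the head of a hash bucket. */
      void publish(struct record *_Atomic *bucket, struct record *nrec)
      {
          nrec->flags |= DIRTY;               /* 1. mark the new element dirty */
          nrec->next = atomic_load(bucket);   /*    link it to the old head */
          atomic_store(bucket, nrec);         /* 2. single uninterruptible write */
          nrec->flags &= ~DIRTY;              /* 3. clear the dirty bit */
      }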
  • FIG. 1 is a block diagram that illustrates a system according to an embodiment of the present invention.
  • system 100 may host a large, memory-resident database, receive search requests and provide search responses over a network.
  • system 100 may be a symmetric multiprocessing (SMP) computer, such as, for example, an IBM RS/6000® M80 or S80 manufactured by International Business Machines Corporation of Armonk, N.Y., a Sun Enterprise™ 10000 manufactured by Sun Microsystems, Inc. of Santa Clara, Calif., etc.
  • System 100 may also be a multi-processor personal computer, such as, for example, a Compaq ProLiant™ ML530 (including two Intel Pentium® III 866 MHz processors) manufactured by Hewlett-Packard Company of Palo Alto, Calif.
  • System 100 may also include a multiprocessing operating system, such as, for example, IBM AIX® 4, Sun Solaris™ 8 Operating Environment, Red Hat Linux® 6.2, etc.
  • System 100 may receive periodic updates over network 124 , which may be concurrently incorporated into the database.
  • system 100 may include at least one processor 102 - 1 coupled to bus 101 .
  • Processor 102 - 1 may include an internal memory cache (e.g., an L1 cache, not shown for clarity).
  • a secondary memory cache 103 - 1 (e.g., an L2 cache, L2/L3 caches, etc.) may reside between processor 102 - 1 and bus 101 .
  • system 100 may include a plurality of processors 102-1 . . . 102-P coupled to bus 101.
  • a plurality of secondary memory caches 103-1 . . . 103-P may also reside between the plurality of processors 102-1 . . . 102-P and bus 101.
  • System 100 may include memory 104 , such as, for example, random access memory (RAM), etc., coupled to bus 101 , for storing information and instructions to be executed by plurality of processors 102 - 1 . . . 102 -P.
  • Memory 104 may store a large database, for example, for translating Internet domain names into Internet addresses, for translating names or phone numbers into network addresses, for providing and updating subscriber profile data, for providing and updating user presence data, etc.
  • both the size of the database and the number of translations per second may be very large.
  • memory 104 may include at least 64 GB of RAM and may host a 500M (i.e., 500×10^6) record domain name database, a 500M record subscriber database, a 450M record telephone number portability database, etc.
  • an 8-byte pointer value may be written to a memory address on an 8-byte boundary (i.e., a memory address divisible by eight, e.g., 8N) using a single, uninterruptible operation.
  • secondary memory cache 103 - 1 may simply delay the 8-byte pointer write to memory 104 .
  • secondary memory cache 103 - 1 may be a look-through cache operating in write-through mode, so that a single, 8-byte store instruction may move eight bytes of data from processor 102 - 1 to memory 104 , without interruption, and in as few as two system clock cycles.
  • secondary memory cache 103 - 1 may be a look-through cache operating in write-back mode, so that the 8-byte pointer may first be written to secondary memory cache 103 - 1 , which may then write the 8-byte pointer to memory 104 at a later time, such as, for example, when the cache line in which the 8-byte pointer is stored is written to memory 104 (i.e., e.g., when the particular cache line, or the entire secondary memory cache, is “flushed”).
  • In a single-processor embodiment, once the data are latched onto the output pins of processor 102-1, all eight bytes of data are written to memory 104 in one contiguous, uninterrupted transfer, which may be delayed by the effects of a secondary memory cache 103-1, if present.
  • In an embodiment with additional processors 102-2 . . . 102-P, once the data are latched onto the output pins of processor 102-1, all eight bytes of data are written to memory 104 in one contiguous, uninterrupted transfer, which is enforced by the cache coherency protocol across secondary memory caches 103-1 . . . 103-P; these caches may delay the write to memory 104 if present.
  • To write an 8-byte pointer value to a memory address that is not on an 8-byte boundary, processor 102-1 may issue two separate and distinct store instructions. For example, if the memory address begins four bytes before an 8-byte boundary (e.g., 8N−4), the first store instruction transfers the four most significant bytes to memory 104 (e.g., at 8N−4), while the second store instruction transfers the four least significant bytes to memory 104 (e.g., at 8N).
  • Between these two store instructions, processor 102-1 may be interrupted, or processor 102-1 may lose control of bus 101 to another system component (e.g., processor 102-P, etc.). Consequently, the pointer value residing in memory 104 will be invalid until processor 102-1 can complete the second store instruction. If another component begins a single, uninterruptible memory read to this memory location, an invalid value will be returned as a presumably valid one.
  • a new 4-byte pointer value may be written to a memory address divisible by four (e.g., 4N) using a single, uninterruptible operation.
  • a 4-byte pointer value may be written to the 8N−4 memory location using a single store instruction.
  • If, however, the 4-byte pointer value is written to a memory address that is not divisible by four, all four bytes of data cannot be transferred from processor 102-1 using a single store instruction, and the pointer value residing in memory 104 may be invalid for some period of time.
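  • A minimal sketch of the alignment constraint, assuming a 64-bit processor and a C compiler that emits one store instruction for an aligned 8-byte pointer write; the function name is illustrative. A slot at 8N−4 would instead require two 4-byte stores, opening the window described above in which a reader can observe a half-written pointer.

      #include <assert.h>
      #include <stdint.h>

      void store_pointer(void **slot, void *value)
      {
          /* The 8N requirement: the slot must sit on an 8-byte boundary. */
          assert(((uintptr_t)slot % 8) == 0);
          /* On typical 64-bit targets this compiles to a single store. */
          *(void * volatile *)slot = value;
      }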
  • System 100 may also include a read only memory (ROM) 106 , or other static storage device, coupled to bus 101 for storing static information and instructions for processor 102 - 1 .
  • a storage device 108 such as a magnetic or optical disk, may be coupled to bus 101 for storing information and instructions.
  • System 100 may also include display 110 (e.g., an LCD monitor) and input device 112 (e.g., keyboard, mouse, trackball, etc.), coupled to bus 101 .
  • System 100 may include a plurality of network interfaces 114-1 . . . 114-O, which may send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • network interface 114-1 may be coupled to bus 101 and local area network (LAN) 122
  • network interface 114-O may be coupled to bus 101 and wide area network (WAN) 124
  • Plurality of network interfaces 114-1 . . . 114-O may support various network protocols, including, for example, Gigabit Ethernet (e.g., IEEE Standard 802.3-2002, published 2002), Fibre Channel (e.g., ANSI Standard X3.230-1994, published 1994), etc.
  • Plurality of network computers 120 - 1 . . . 120 -N may be coupled to LAN 122 and WAN 124 .
  • LAN 122 and WAN 124 may be physically distinct networks, while in another embodiment, LAN 122 and WAN 124 may be connected via a network gateway or router (not shown for clarity). Alternatively, LAN 122 and WAN 124 may be the same network.
  • system 100 may provide DNS resolution services.
  • DNS resolution services may generally be divided between network transport and data look-up functions.
  • system 100 may be a backend look-up engine (LUE) optimized for data look-up on large data sets, while plurality of network computers 120 - 1 . . . 120 -N may be a plurality of front-end protocol engines (PEs) optimized for network processing and transport.
  • LUE may be a powerful multiprocessor server that stores the entire DNS record set in memory 104 to facilitate high-speed, high-throughput searching and updating.
  • DNS resolution services may be provided by a series of powerful multiprocessor servers, or LUEs, each storing a subset of the entire DNS record set in memory to facilitate high-speed, high-throughput searching and updating
  • the plurality of PEs may be generic, low profile, PC-based machines, running an efficient multitasking operating system (e.g., Red Hat Linux® 6.2), that minimize the network processing transport load on the LUE in order to maximize the available resources for DNS resolution.
  • the PEs may handle the nuances of wire-line DNS protocol, respond to invalid DNS queries and multiplex valid DNS queries to the LUE over LAN 122 .
  • the number of PEs for a single LUE may be determined, for example, by the number of DNS queries to be processed per second and the performance characteristics of the particular system. Other metrics may also be used to determine the appropriate mapping ratios and behaviors.
  • a central on-line transaction processing (OLTP) server 140 - 1 may be coupled to WAN 124 and receive additions, modifications and deletions (i.e., update traffic) to database 142 - 1 from various sources.
  • OLTP server 140 - 1 may send updates to system 100 , which includes a local copy of database 142 - 1 , over WAN 124 .
  • OLTP server 140-1 may be optimized for processing update traffic in various formats and protocols, including, for example, HyperText Transfer Protocol (HTTP), Registry Registrar Protocol (RRP), Extensible Provisioning Protocol (EPP), Service Management System/800 Mechanized Generic Interface (MGI), and other on-line provisioning protocols.
  • a constellation of read-only LUEs may be deployed in a hub and spoke architecture to provide high-speed search capability conjoined with high-volume, incremental updates from OLTP server 140 - 1 .
  • data may be distributed over multiple OLTP servers 140 - 1 . . . 140 -S, each of which may be coupled to WAN 124 .
  • OLTP servers 140 - 1 . . . 140 -S may receive additions, modifications, and deletions (i.e., update traffic) to their respective databases 142 - 1 . . . 142 -S (not shown for clarity) from various sources.
  • OLTP servers 140 - 1 . . . 140 -S may send updates to system 100 , which may include copies of databases 142 - 1 . . . 142 -S, other dynamically-created data, etc., over WAN 124 .
  • OLTP servers 140 - 1 . . . 140 -S may receive update traffic from groups of remote sensors.
  • plurality of network computers 120 - 1 . . . 120 -N may also receive additions, modifications, and deletions (i.e., update traffic) from various sources over WAN 124 or LAN 122 .
  • plurality of network computers 120 - 1 . . . 120 -N may send updates, as well as queries, to system 100 .
  • each PE may combine, or multiplex, several DNS query messages, received over a wide area network (e.g., WAN 124 ), into a single Request SuperPacket and send the Request SuperPacket to the LUE (e.g., system 100 ) over a local area network (e.g., LAN 122 ).
  • the LUE may combine, or multiplex, several DNS query message replies into a single Response SuperPacket and send the Response SuperPacket to the appropriate PE over the local area network.
  • the maximum size of a Request or Response SuperPacket may be limited by the maximum transmission unit (MTU) of the physical network layer (e.g., Gigabit Ethernet).
  • typical DNS query and reply message sizes of less than 100 bytes and 200 bytes, respectively, allow for over 30 queries to be multiplexed into a single Request SuperPacket, as well as over 15 replies to be multiplexed into a single Response SuperPacket.
  • alternatively, a smaller number of queries (e.g., 20 queries) may be multiplexed into each Request SuperPacket; if the MTU of the physical network layer is increased, the number of multiplexed queries and replies may be increased accordingly.
  • Each multitasking PE may include an inbound thread and an outbound thread to manage DNS queries and replies, respectively.
  • the inbound thread may un-marshal the DNS query components from the incoming DNS query packets received over a wide area network and multiplex several milliseconds of queries into a single Request SuperPacket.
  • the inbound thread may then send the Request SuperPacket to the LUE over a local area network.
  • the outbound thread may receive the Response SuperPacket from the LUE, de-multiplex the replies contained therein, and marshal the various fields into a valid DNS reply, which may then be transmitted over the wide area network.
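  • The inbound thread's multiplexing loop might look like the following sketch. Every name here (superpacket_*, recv_dns_query, send_to_lue, now_ms, next_sequence_number) is a hypothetical helper, and the 3 ms accumulation window stands in for the several milliseconds mentioned above.

      struct superpacket;
      struct dns_query { unsigned char buf[512]; unsigned len; };

      struct superpacket *superpacket_new(unsigned seq);
      int  superpacket_full(const struct superpacket *sp);
      int  superpacket_count(const struct superpacket *sp);
      void superpacket_add(struct superpacket *sp, const struct dns_query *q);
      int  recv_dns_query(struct dns_query *q, long timeout_ms);   /* from the WAN */
      void send_to_lue(struct superpacket *sp);                    /* over the LAN */
      long now_ms(void);
      unsigned next_sequence_number(void);

      void inbound_thread(void)
      {
          for (;;) {
              struct superpacket *sp = superpacket_new(next_sequence_number());
              long deadline = now_ms() + 3;          /* accumulate ~3 ms of queries */
              while (now_ms() < deadline && !superpacket_full(sp)) {
                  struct dns_query q;
                  if (recv_dns_query(&q, deadline - now_ms()))
                      superpacket_add(sp, &q);       /* multiplex into the payload */
              }
              if (superpacket_count(sp) > 0)
                  send_to_lue(sp);                   /* one LAN send, many queries */
          }
      }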
  • other large-volume, query-based embodiments may be supported.
  • the Request SuperPacket may also include state information associated with each DNS query, such as, for example, the source address, the protocol type, etc.
  • the LUE may include the state information, and associated DNS replies, within the Response SuperPacket.
  • Each PE may then construct and return valid DNS reply messages using the information transmitted from the LUE. Consequently, each PE may advantageously operate as a stateless machine, i.e., valid DNS replies may be formed from the information contained in the Response SuperPacket.
  • the LUE may return the Response SuperPacket to the PE from which the incoming SuperPacket originated; however, other variations may obviously be possible.
  • each PE may maintain the state information associated with each DNS query and include a reference, or handle, to the state information within the Request SuperPacket.
  • the LUE may include the state information references, and associated DNS replies, within the Response SuperPacket.
  • Each PE may then construct and return valid DNS reply messages using the state information references transmitted from the LUE, as well as the state information maintained thereon.
  • the LUE may return the Response SuperPacket to the PE from which the incoming SuperPacket originated.
  • FIG. 2 is a detailed block diagram that illustrates a message data structure, according to an embodiment of the present invention.
  • message 200 may include header 210, having a plurality of sequence numbers 211-1 . . . 211-S and a plurality of message counts 212-1 . . . 212-S, and data payload 215.
  • message 200 may be used for Request SuperPackets and Response SuperPackets.
  • Request SuperPacket 220 may include header 230, having a plurality of sequence numbers 231-1 . . . 231-S and a plurality of message counts 232-1 . . . 232-S, and data payload 235 having multiple DNS queries 236-1 . . . 236-Q, accumulated by a PE over a predetermined period of time, such as, for example, several milliseconds.
  • each DNS query 236 - 1 . . . 236 -Q may include state information
  • each DNS query 236 - 1 . . . 236 -Q may include a handle to state information.
  • Response SuperPacket 240 may include header 250, having a plurality of sequence numbers 251-1 . . . 251-S and a plurality of message counts 252-1 . . . 252-S, and data payload 255 having multiple DNS replies 256-1 . . . 256-R approximately corresponding to the multiple DNS queries contained within Request SuperPacket 220.
  • each DNS reply 256 - 1 . . . 256 -R may include state information associated with the corresponding DNS query, while in an alternative embodiment, each DNS reply 256 - 1 . . . 256 -R may include a handle to state information associated with the corresponding DNS query.
  • the total size of the corresponding DNS replies may exceed the size of data payload 255 of the Response SuperPacket 240 .
  • This overflow may be limited, for example, to a single reply, i.e., the reply associated with the last query contained within Request SuperPacket 220 .
  • the overflow reply may preferably be included in the next Response SuperPacket 240 corresponding to the next Request SuperPacket.
  • header 250 may include appropriate information to determine the extent of the overflow condition. Under peak processing conditions, more than one reply may overflow into the next Response SuperPacket.
  • header 250 may include at least two sequence numbers 251 - 1 and 251 - 2 and at least two message counts 252 - 1 and 252 - 2 , grouped as two pairs of complementary fields. While there may be “S” number of sequence number and message count pairs, typically, S is a small number, such as, e.g., 2, 3, 4, etc. Thus, header 250 may include sequence number 251 - 1 paired with message count 252 - 1 , sequence number 251 - 2 paired with message count 252 - 2 , etc. Generally, message count 252 - 1 may reflect the number of replies contained within data payload 255 that are associated with sequence number 251 - 1 . In an embodiment, sequence number 251 - 1 may be a two-byte field, while message count 252 - 1 may be a one-byte field.
  • data payload 235 of Request SuperPacket 220 may include seven DNS queries (as depicted in FIG. 2).
  • sequence number 231 - 1 may be set to a unique value (e.g., 1024) and message count 232 - 1 may be set to seven, while sequence number 231 - 2 and message count 232 - 2 may be set to zero.
  • header 230 may contain only one sequence number and one message count, e.g., sequence number 231 - 1 and message count 232 - 1 set to 1024 and seven, respectively.
  • Request SuperPacket 220 may contain all of the queries associated with a particular sequence number.
  • Data payload 255 of Response SuperPacket 240 may include seven corresponding DNS replies (as depicted in FIG. 2).
  • header 250 may include information similar to Request SuperPacket 220, i.e., sequence number 251-1 set to the same unique value (i.e., 1024), message count 252-1 set to seven, and both sequence number 251-2 and message count 252-2 set to zero.
  • data payload 255 of Response SuperPacket 240 may include only five corresponding DNS replies, and message count 252 - 1 may be set to five instead. The remaining two responses associated with sequence number 1024 may be included within the next Response SuperPacket 240 .
  • the next Request SuperPacket 220 may include a different sequence number (e.g., 1025) and at least one DNS query, so that the next Response SuperPacket 240 may include the two previous replies associated with the 1024 sequence number, as well as at least one reply associated with the 1025 sequence number.
  • header 250 of the next Response SuperPacket 240 may include sequence number 251 - 1 set to 1024, message count 252 - 1 set to two, sequence number 251 - 2 set to 1025 and message count 252 - 2 set to one.
  • Response SuperPacket 240 may include a total of three replies associated with three queries contained within two different Request SuperPackets.
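  • The header layout and the overflow example above can be captured in a small sketch. The struct below is an assumed C rendering of header 250 with S = 2 complementary field pairs; the initializer encodes the worked example just given: two leftover replies for sequence number 1024 plus one reply for 1025.

      #include <stdint.h>

      enum { S = 2 };   /* sequence number/message count pairs; typically 2, 3 or 4 */

      struct superpacket_header {
          uint16_t sequence[S];   /* two-byte sequence numbers, e.g. 251-1, 251-2 */
          uint8_t  count[S];      /* one-byte message counts, e.g. 252-1, 252-2 */
      };

      struct superpacket_header next_response = {
          .sequence = { 1024, 1025 },
          .count    = { 2, 1 },   /* three replies spanning two Requests */
      };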
  • FIG. 3 is a detailed block diagram that illustrates a message latency data structure architecture, according to an embodiment of the present invention.
  • Message latency data structure 300 may include information generally associated with the transmission and reception of message 200 .
  • message latency data structure 300 may include latency information about Request SuperPackets and Response SuperPackets; this latency information may be organized in a table format indexed according to sequence number value (e.g., index 301 ).
  • message latency data structure 300 may include a number of rows N equal to the total number of unique sequence numbers, as illustrated, generally, by table elements 310 , 320 and 330 .
  • SuperPacket header sequence numbers may be two bytes in length and define a range of unique sequence numbers from zero to 2^16−1 (i.e., 65,535). In this case, N may be equal to 65,536.
  • Latency information may include Request Timestamp 302 , Request Query Count 303 , Response Timestamp 304 , Response Reply Count 305 , and Response Message Count 306 . In an alternative embodiment, latency information may also include an Initial Response Timestamp (not shown).
  • table element 320 illustrates latency information for a Request SuperPacket 220 having a single sequence number 231 - 1 equal to 1024.
  • Request Timestamp 302 may indicate when this particular Request SuperPacket was sent to the LUE.
  • Request Query Count 303 may indicate how many queries were contained within this particular Request SuperPacket.
  • Response Timestamp 304 may indicate when a Response SuperPacket having a sequence number equal to 1024 was received at the PE (e.g., network computer 120 -N) and may be updated if more than one Response SuperPacket is received at the PE.
  • Response Reply Count 305 may indicate the total number of replies contained within all of the received Response SuperPackets associated with this sequence number (i.e., 1024).
  • Response Message Count 306 may indicate how many Response SuperPackets having this sequence number (i.e., 1024) arrived at the PE. Replies to the queries contained within this particular Request SuperPacket may be split over several Response SuperPackets, in which case, Response Timestamp 304 , Response Reply Count 305 , and Response Message Count 306 may be updated as each of the additional Response SuperPackets are received.
  • the Initial Response Timestamp may indicate when the first Response SuperPacket containing replies for this sequence number (i.e., 1024) was received at the PE.
  • Response Timestamp 304 may be updated when additional (i.e., second and subsequent) Response SuperPackets are received.
  • Various important latency metrics may be determined from the latency information contained within message latency data structure 300 .
  • simple cross-checking between Request Query Count 303 and Response Reply Count 305 for a given index 301 may indicate a number of missing replies. This difference may indicate the number of queries inexplicably dropped by the LUE.
  • Comparing Request Timestamp 302 and Response Timestamp 304 may indicate how well the particular PE/LUE combination may be performing under the current message load.
  • the difference between the current Request SuperPacket sequence number and the current Response SuperPacket sequence number may be associated with the response performance of the LUE; e.g., the larger the difference, the slower the performance.
  • the Response Message Count 306 may indicate how many Response SuperPackets are being used for each Request SuperPacket, and may be important in DNS resolution traffic analysis. As the latency of the queries and replies travelling between the PEs and LUE increases, the PEs may reduce the number of DNS query packets processed by the system.
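  • A minimal sketch of message latency data structure 300 and one derived metric, assuming the two-byte sequence number of FIG. 2 (hence 65,536 rows); the field types and names are illustrative renderings of the fields listed above.

      #include <stdint.h>
      #include <time.h>

      struct latency_row {
          struct timespec request_ts;        /* Request Timestamp 302 */
          uint32_t        request_queries;   /* Request Query Count 303 */
          struct timespec response_ts;       /* Response Timestamp 304 */
          uint32_t        response_replies;  /* Response Reply Count 305 */
          uint32_t        response_messages; /* Response Message Count 306 */
      };

      static struct latency_row latency[65536];   /* indexed by sequence number */

      /* Missing replies for one sequence number: queries sent minus replies seen. */
      uint32_t missing_replies(uint16_t seq)
      {
          return latency[seq].request_queries - latency[seq].response_replies;
      }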
  • the LUE may perform a multi-threaded look-up on the incoming, multiplexed Request SuperPackets, and may combine the replies into outgoing, multiplexed Response SuperPackets.
  • the LUE may spawn one search thread, or process, for each active PE and route all the incoming Request SuperPackets from that PE to that search thread.
  • the LUE may spawn a manager thread, or process, to control the association of PEs to search threads, as well as an update thread, or process, to update the database located in memory 104 .
  • Each search thread may extract the search queries from the incoming Request SuperPacket, execute the various searches, construct an outgoing Response SuperPacket containing the search replies and send the SuperPacket to the appropriate PE.
  • the update thread may receive updates to the database, from OLTP 140 - 1 , and incorporate the new data into the database.
  • plurality of network computers 120 - 1 . . . 120 -N may send updates to system 100 . These updates may be included, for example, within the incoming Request SuperPacket message stream.
  • the LUE may spend less than 15% of its processor capacity on network processing, thereby dramatically increasing search query throughput.
  • an IBM® 8-way M80 may sustain search rates of 180 k to 220 k queries per second (qps), while an IBM® 24-way S80 may sustain 400 k to 500 k qps.
  • a dual Pentium® III 866 MHz multi-processor personal computer operating Red Hat Linux® 6.2 may sustain update rates on the order of 100K/sec.
  • increases in hardware performance also increase search and update rates associated with embodiments of the present invention, and as manufacturers replace these multiprocessor computers with faster-performing machines, for example, the sustained search and update rates may increase commensurately.
  • system 100 is not limited to a client or server architecture, and embodiments of the present invention are not limited to any specific combination of hardware and/or software.
  • FIG. 4 is a block diagram that illustrates a general database architecture according to an embodiment of the present invention.
  • database 400 may include at least one table or group of database records 401 , and at least one corresponding search index 402 with pointers (indices, direct byte-offsets, etc.) to individual records within the group of database records 401 .
  • pointer 405 may reference database record 410 .
  • database 400 may include at least one hash table 403 as a search index with pointers (indices, direct byte-offsets, etc.) into the table or group of database records 401 .
  • a hash function may map a search key to an integer value which may then be used as an index into hash table 403 .
  • hash buckets may be created using a singly-linked list of hash chain pointers.
  • each entry within hash table 403 may contain a pointer to the first element of a hash bucket, and each element of the hash bucket may contain a hash chain pointer to the next element, or database record, in the linked-list.
  • a hash chain pointer may be required only for those elements, or database records, that reference a subsequent element in the hash bucket.
  • Hash table 403 may include an array of 8-byte pointers to individual database records 401 .
  • hash pointer 404 within hash table 403 may reference database record 420 as the first element within a hash bucket.
  • Database record 420 may contain a hash chain pointer 424 which may reference the next element, or database record, in the hash bucket.
  • Database record 420 may also include a data length 421 , and associated fixed or variable-length data 422 .
  • a null character 423 indicating the termination of data 422 , may be included.
  • database record 420 may include a data pointer 425 which may reference another database record, either within the group of database records 401 or within a different table or group of database records (not shown), in which additional data may be located.
  • System 100 may use various, well-known algorithms to search this data structure architecture for a given search term or key.
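  • One such well-known algorithm is a plain hash-chain lookup, sketched below against the FIG. 4 layout (hash table 403, chain pointer 424, data length 421, data 422). The hash() function is assumed and unspecified.

      #include <stddef.h>
      #include <string.h>

      unsigned long hash(const char *key, size_t len);   /* assumed hash function */

      struct db_record {
          unsigned short    len;     /* data length 421 */
          const char       *data;    /* data 422 */
          struct db_record *chain;   /* hash chain pointer 424 */
      };

      struct db_record *lookup(struct db_record *const *table, size_t nbuckets,
                               const char *key, size_t keylen)
      {
          struct db_record *r = table[hash(key, keylen) % nbuckets];
          for (; r != NULL; r = r->chain)              /* walk the hash bucket */
              if (r->len == keylen && memcmp(r->data, key, keylen) == 0)
                  return r;
          return NULL;                                 /* search miss */
      }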
  • database 400 may be searched by multiple search processes, or threads, executing on at least one of the plurality of processors 102 - 1 . . . 102 -P.
  • modifications to database 400 may not be integrally performed by an update thread (or threads) unless the search thread(s) are prevented from accessing database 400 for the period of time necessary to add, modify, or delete information within database 400 .
  • the group of database records 401 may be locked by an update thread to prevent the search threads from accessing database 400 while the update thread is modifying the information within database record 430 .
  • FIG. 5 is a block diagram that illustrates a general database architecture according to another embodiment of the present invention.
  • database 500 may include a highly-optimized, read-only, master snapshot file 510 and a growing, look-aside file 520 .
  • Master snapshot file 510 may include at least one table or group of database records 511 , and at least one corresponding search index 512 with pointers (indices, direct byte-offsets, etc.) to individual records within the group of database records 511 .
  • master snapshot file 510 may include at least one hash table 513 as a search index with pointers (indices, direct byte-offsets, etc.) into the table or group of database records 511 .
  • look-aside file 520 may include at least two tables or groups of database records, including database addition records 521 and database deletion records 531 .
  • Corresponding search indices 522 and 532 may be provided, with pointers (indices, direct byte-offsets, etc.) to individual records within the database addition records 521 and database deletion records 531 .
  • look-aside file 520 may include hash tables 523 and 533 as search indices, with pointers (indices, direct byte-offsets, etc.) into database addition records 521 and database deletion records 531 , respectively.
  • System 100 may use various, well-known algorithms to search this data structure architecture for a given search term or key.
  • look-aside file 520 may include all the recent changes to the data, and may be searched before read-only master snapshot file 510 . If the search key is found in look-aside file 520 , the response is returned without accessing snapshot file 510 , but if the key is not found, then snapshot file 510 may be searched.
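  • That two-tier search order reduces to three probes, sketched below with hypothetical helpers standing in for searches of hash tables 533, 523 and 513, respectively: a hit in the deletion records means the key has been logically forgotten, a hit in the addition records supersedes the snapshot, and only a miss in both falls through to the snapshot.

      #include <stddef.h>

      struct db_record;
      struct db_record *lookup_deletions(const char *key, size_t keylen);
      struct db_record *lookup_additions(const char *key, size_t keylen);
      struct db_record *lookup_snapshot(const char *key, size_t keylen);

      struct db_record *search(const char *key, size_t keylen)
      {
          struct db_record *r;
          if (lookup_deletions(key, keylen) != NULL)
              return NULL;                          /* logically deleted */
          if ((r = lookup_additions(key, keylen)) != NULL)
              return r;                             /* recent add or modification */
          return lookup_snapshot(key, keylen);      /* read-only master snapshot */
      }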
  • As look-aside file 520 grows, however, search query rates may drop dramatically, by a factor of 10 to 50, or more, for example. Consequently, to avoid or minimize any drop in search query rates, snapshot file 510 may be periodically updated, or recreated, by incorporating all of the additions, deletions and modifications contained within look-aside file 520.
  • Data within snapshot file 510 are not physically altered but logically added, modified or deleted.
  • data within snapshot file 510 may be deleted, or logically “forgotten,” by creating a corresponding delete record within database deletion records 531 and writing a pointer to the delete record to the appropriate location in hash table 533 .
  • Data within snapshot file 510 may be logically modified by copying a data record from snapshot file 510 to a new data record within database addition records 521 , modifying the data within the new entry, and then writing a pointer to the new entry to the appropriate hash table (e.g., hash table 522 ) or chain pointer within database addition records 521 .
  • data within snapshot file 510 may be logically added to snapshot file 510 by creating a new data record within database addition records 521 and then writing a pointer to the new entry to the appropriate hash table (e.g., hash table 522 ) or chain pointer within database addition records 521 .
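  • A logical modification therefore follows a copy, edit, publish pattern, sketched below with hypothetical helpers; the snapshot record is never touched, and the commit is the single aligned pointer store into the look-aside index (hash table 522, or a chain pointer within database addition records 521).

      #include <stdatomic.h>
      #include <stddef.h>

      struct db_record;
      struct db_record *copy_record(const struct db_record *r);      /* hypothetical */
      void set_data(struct db_record *r, const char *d, size_t n);   /* hypothetical */

      void modify(struct db_record *_Atomic *laside_slot,
                  const struct db_record *old_rec,
                  const char *new_data, size_t len)
      {
          struct db_record *copy = copy_record(old_rec);  /* new addition record */
          set_data(copy, new_data, len);                  /* edit the private copy */
          atomic_store(laside_slot, copy);                /* single uninterruptible write */
      }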
  • snapshot file 510 may include domain name data and name server data, organized as separate data tables, or blocks, with separate search indices (e.g., 511 - 1 , 511 - 2 , 512 - 1 , 512 - 2 , 513 - 1 , 513 - 2 , etc., not shown for clarity).
  • look-aside file 520 may include additions and modifications to both the domain name data and the name server data, as well as deletions to both the domain name data and the name server data (e.g., 521 - 1 , 521 - 2 , 522 - 1 , 522 - 2 , 523 - 1 , 523 - 2 , 531 - 1 , 531 - 2 , 532 - 1 , 532 - 2 , 533 - 1 , 533 - 2 , etc., not shown for clarity).
  • FIG. 6 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention.
  • database 600 may be organized into a single, searchable representation of the data. Data set updates may be continuously incorporated into database 600 , and deletes or modifications may be physically performed on the relevant database records to free space within memory 104 , for example, for subsequent additions or modifications.
  • the single, searchable representation scales extremely well to large data set sizes and high search and update rates, and obviates the need to periodically recreate, propagate and reload snapshot files among multiple search engine computers.
  • database 600 may include domain name data 610 and name server data 630.
  • Domain name data 610 and name server data 630 may include search indices with pointers (indices, direct byte-offsets, etc.) into blocks of variable length records.
  • a hash function may map a search key to an integer value which may then be used as an index into a hash table.
  • hash buckets may be created for each hash table index using a singly-linked list of hash chain pointers.
  • Domain name data 610 may include, for example, a hash table 612 as a search index and a block of variable-length domain name records 611 .
  • Hash table 612 may include an array of 8-byte pointers to individual domain name records 611 , such as, for example, pointer 613 referencing domain name record 620 .
  • Variable-length domain name record 620 may include, for example, a next record offset 621 , a name length 622 , a normalized name 623 , a chain pointer 624 (i.e., e.g., pointing to the next record in the hash chain), a number of name servers 625 , and a name server pointer 626 .
  • the size of both chain pointer 624 and name server pointer 626 may be optimized to reflect the required block size for each particular type of data, e.g., eight bytes for chain pointer 624 and four bytes for name server pointer 626 .
  • Name server data 630 may include, for example, a hash table 632 as a search index and a block of variable-length name server records 631 .
  • Hash table 632 may include an array of 4-byte pointers to individual name server records 631 , such as, for example, pointer 633 referencing name server record 640 .
  • Variable-length name server record 640 may include, for example, a next record offset 641 , a name length 642 , a normalized name 643 , a chain pointer 644 (i.e., e.g., pointing to the next record in the hash chain), a number of name server network addresses 645 , a name server address length 646 , and a name server network address 647 , which may be, for example, an Internet Protocol (IP) network address.
  • name server network addresses may be stored in ASCII (American Standard Code for Information Interchange, e.g., ISO-14962-1997, ANSI-X3.4-1997, etc.) or binary format; in this example, name server network address length 646 indicates that name server network address 647 is stored in binary format (i.e., four bytes).
  • the size of chain pointer 644 may also be optimized to reflect the required name server data block size, e.g., four bytes.
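  • An assumed C rendering of the two record layouts appears below. Because the records are variable length (the normalized name precedes the trailing fields), a real implementation would compute field offsets at run time; these fixed structs only illustrate the field order and the per-block pointer sizes described above (8-byte chain pointers in the domain name block, 4-byte pointers in the name server block).

      #include <stdint.h>

      struct domain_name_record {          /* cf. record 620 */
          uint32_t next_record_offset;     /* 621 */
          uint16_t name_length;            /* 622; normalized name 623 follows */
          uint64_t chain_ptr;              /* 624: 8-byte hash chain pointer */
          uint16_t name_server_count;      /* 625 */
          uint32_t name_server_ptr;        /* 626: 4-byte pointer into 631 */
      };

      struct name_server_record {          /* cf. record 640 */
          uint32_t next_record_offset;     /* 641 */
          uint16_t name_length;            /* 642; normalized name 643 follows */
          uint32_t chain_ptr;              /* 644: 4-byte hash chain pointer */
          uint16_t address_count;          /* 645 */
          uint16_t address_length;         /* 646: 4 => binary IPv4 address */
          uint8_t  address[4];             /* 647 */
      };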
  • both search indices such as hash tables, and variable-length data records may be structured so that 8-byte pointers are located on 8-byte boundaries in memory.
  • hash table 612 may contain a contiguous array of 8-byte pointers to domain name records 611 , and may be stored at a memory address divisible by eight (i.e., an 8-byte boundary, or 8N).
  • search indices such as hash tables and variable-length data records may be structured so that 4-byte pointers are located on 4-byte boundaries in memory.
  • hash table 632 may contain a contiguous array of 4-byte pointers to name server records 631 , and may be stored at a memory address divisible by four (i.e., a 4-byte boundary, or 4N). Consequently, modifications to database 600 may conclude by updating a pointer to an aligned address in memory using a single uninterruptible operation, including, for example writing a new pointer to the search index, such as a hash table or writing a new hash chain pointer to a variable-length data record.
  • FIG. 7 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention.
  • database 700 may also be organized into a single, searchable representation of the data. Data set updates may be continuously incorporated into database 700 , and deletes or modifications may be physically performed on the relevant database records to free space within memory 104 , for example, for subsequent additions or modifications.
  • the single, searchable representation scales extremely well to large data set sizes and high search and update rates, and obviates the need to periodically recreate, propagate and reload snapshot files among multiple search engine computers.
  • An exemplary organization may use, as an alternative to hash tables, a search index that provides ordered, sequential access to the data records, such as the ternary search tree (trie), or TST, which combines the features of binary search trees and digital search tries.
  • TSTs advantageously minimize the number of comparison operations required to be performed, particularly in the case of a search miss, and may yield search performance metrics exceeding search engine implementations with hashing.
  • TSTs may also provide advanced text search features, such as, e.g., wildcard searches, which may be useful in text search applications, such as, for example, whois, domain name resolution, Internet content search, etc.
  • a TST may contain a sequence of nodes linked together in a hierarchical relationship.
  • a root node may be located at the top of the tree, related child nodes and links may form branches, and leaf nodes may terminate the end of each branch.
  • Each leaf node may be associated with a particular search key, and each node on the path to the leaf node may contain a single, sequential element of the key.
  • Each node in the tree contains a comparison character, or split value, and three pointers to other successive, or “child,” nodes in the tree. These pointers reference child nodes whose split values are less than, equal to, or greater than the node's split value.
  • a leaf node may also contain a pointer to a key record, which may, in turn, contain at least one pointer to a terminal data record containing the record data associated with the key (e.g., an IP address).
  • the key record may contain the record data in its entirety. Record data may be stored in binary format, ASCII text format, etc.
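  • The classic TST search loop (after Bentley and Sedgewick) is sketched below; the node layout and names are assumed, and the character-position and key-length optimizations described later are omitted for brevity.

      struct key_record;

      struct tst_node {
          char split;                     /* comparison character (split value) */
          struct tst_node *lt, *eq, *gt;  /* less-than / equal / greater-than */
          struct key_record *key;         /* key record pointer, NULL if none */
      };

      struct key_record *tst_search(const struct tst_node *n, const char *key)
      {
          while (n != NULL) {
              if (*key < n->split)
                  n = n->lt;
              else if (*key > n->split)
                  n = n->gt;
              else if (*key == '\0')
                  return n->key;          /* whole key consumed: hit (or NULL) */
              else {
                  key++;                  /* consume the matched character */
                  n = n->eq;
              }
          }
          return NULL;                    /* search miss */
      }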
  • database 700 may be organized as a TST, including a plurality of fixed-length search nodes 701 , a plurality of variable-length key data records 702 and a plurality of variable-length terminal data records 703 .
  • Search nodes 701 may include various types of information as described above, including, for example, a comparison character (or value) and position, branch node pointers and a key pointer.
  • the size of the node pointers may generally be determined by the number of nodes, while the size of the key pointers may generally be determined by the size of the variable-length key data set.
  • Key data records 702 may contain key information and terminal data information, including, for example, pointers to terminal data records or embedded record data, while terminal data records 703 may contain record data.
  • each fixed-length search node may be 24 bytes in length.
  • Search node 710 may contain an eight-bit comparison character (or byte value) 711 , a 12-bit character (or byte) position 712 , and a 12-bit node type/status (not shown for clarity); these data may be encoded within the first four bytes of the node.
  • the comparison character 711 may be encoded within the first byte of the node as depicted in FIG. 7, or, alternatively, character position 712 may be encoded within the first 12 bits of the node in order to optimize access to character position 712 using a simple shift operation.
  • each search node may contain three 32-bit pointers, i.e., pointer 713 , pointer 714 and pointer 715 , representing “less than,” “equal to,” and “greater than” branch node pointers, respectively.
  • These pointers may contain a counter, or node index, rather than a byte-offset or memory address.
  • the byte-offset may be calculated from the counter, or index value, and the fixed-length, e.g., counter*length.
  • the final four bytes may contain a 40-bit key pointer 716 , which may be a null value indicating that a corresponding key data record does not exist (shown) or a pointer to an existing corresponding key data record (not shown), as well as other data, including, for example, a 12-bit key length and a 12-bit pointer type/status field.
  • Key pointer 716 may contain a byte offset to the appropriate key data record, while the key length may be used to optimize search and insertion when eliminating one-way branching within the TST.
  • the pointer type/status field may contain information used in validity checking and allocation data used in memory management.
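  • One possible packing of the 24-byte node is shown below. Bit-field layout is compiler-dependent, so this is an illustration of the field widths rather than a portable definition; the helper shows the counter*length conversion from a node index to a byte offset.

      #include <stdint.h>

      struct tst_node24 {
          uint32_t cmp_char    : 8;    /* comparison character 711 */
          uint32_t position    : 12;   /* character position 712 */
          uint32_t type_status : 12;   /* node type/status */
          uint32_t lt, eq, gt;         /* 32-bit branch node indices 713-715 */
          uint64_t key_ptr     : 40;   /* key pointer 716 (byte offset) */
          uint64_t key_length  : 12;
          uint64_t ptr_status  : 12;   /* pointer type/status */
      };                               /* 4 + 12 + 8 = 24 bytes if packed as shown */

      /* Branch pointers hold node indices, not addresses or byte offsets. */
      static inline uint64_t node_offset(uint32_t node_index)
      {
          return (uint64_t)node_index * sizeof(struct tst_node24);  /* counter*length */
      }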
  • key data record 750 may include, for example, a variable-length key 753 and at least one terminal data pointer. As depicted in FIG. 7, key data record 750 includes two terminal data pointers: terminal data pointer 757 and terminal data pointer 758 . Key data record 750 may be prefixed with a 12-bit key length 751 and a 12-bit terminal pointer count/status 752 , and may include padding (not shown for clarity) to align the terminal data pointer 757 and terminal data pointer 758 on an 8-byte boundary in memory 104 . Terminal data pointer 757 and terminal data pointer 758 may each contain various data, such as, for example, terminal data type, length, status or data useful in binary record searches.
  • Terminal data pointer 757 and terminal data pointer 758 may be sorted by terminal data type for quicker retrieval of specific resource records (e.g., terminal data record 760 and terminal data record 770 ).
  • key data record 740 may include embedded terminal data 746 rather than, or in addition to, terminal data record pointers.
  • key data record 740 may include a key length 741 , a terminal pointer count 742 , a variable-length key 743 , the number of embedded record elements 744 , followed by a record element length 745 (in bytes, for example) and embedded record data 746 (e.g., a string, a byte sequence, etc.) for each of the number of embedded record elements 744 .
  • terminal data record 760 may include a 12-bit length 761 , a 4-bit status, and a variable-length string 762 (e.g., an IP address).
  • variable length string 762 may be a byte sequence.
  • Terminal data record 760 may include padding to align each terminal data record to an 8-byte boundary in memory 104 .
  • terminal data record 760 may include padding to a 4-byte boundary, or, terminal data record 760 may not include any padding.
  • Memory management algorithms may determine, generally, whether terminal data records 760 are padded to 8-byte, 4-byte, or 0-byte boundaries.
  • terminal data record 770 may include a 12-bit length 771 , a 4-bit status, and a variable-length string 772 (e.g., an IP address).
  • both search indices such as TSTs, and data records may be structured so that 8-byte pointers are located on 8-byte boundaries in memory.
  • key pointer 726 may contain an 8-byte (or less) pointer to key data record 740 , and may be stored at a memory address divisible by eight (i.e., an 8-byte boundary, or 8N).
  • both search indices, such as TSTs, and data records may be structured so that 4-byte pointers are located on 4-byte boundaries in memory.
  • node branch pointer 724 may contain a 4-byte (or less) pointer to node 730 , and may be stored at a memory address divisible by four (i.e., a 4-byte boundary, or 4N). Consequently, modifications to database 700 may conclude by updating a pointer to an aligned address in memory using a single uninterruptible operation, including, for example writing a new pointer to the search index, such as a TST node, or writing a new pointer to a data record.
  • FIG. 8 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention.
  • database 800 may also be organized into a single, searchable representation of the data. Data set updates may be continuously incorporated into database 800 , and deletes or modifications may be physically performed on the relevant database records to free space within memory 104 , for example, for subsequent additions or modifications.
  • the single, searchable representation scales extremely well to large data set sizes and high search and update rates, and obviates the need to periodically recreate, propagate and reload snapshot files among multiple search engine computers.
  • database 800 may use an alternative ordered search index, organized as an ordered access key tree (i.e., “OAK tree”).
  • Database 800 may include, for example, a plurality of variable-length search nodes 801 , a plurality of variable-length key records 802 and a plurality of variable-length terminal data records 803 .
  • Search nodes 801 may include various types of information as described above, such as, for example, search keys, pointers to other search nodes, pointers to key records, etc.
  • plurality of search nodes 801 may include vertical and horizontal nodes containing fragments of search keys (e.g., strings), as well as pointers to other search nodes or key records.
  • Vertical nodes may include, for example, at least one search key, or character, pointers to horizontal nodes within the plurality of search nodes 801 , pointers to key records within the plurality of key records 802 , etc.
  • Horizontal nodes may include, for example, at least two search keys, or characters, pointers to vertical nodes within the plurality of search nodes 801 , pointers to horizontal nodes within the plurality of search nodes 801 , pointers to key records within the plurality of key records 802 , etc.
  • vertical nodes may include a sequence of keys (e.g., characters) representing a search key fragment (e.g., string), while horizontal nodes may include various keys (e.g., characters) that may exist at a particular position within the search key fragment (e.g., string).
  • plurality of search nodes 801 may include vertical node 810 , vertical node 820 and horizontal node 830 .
  • Vertical node 810 may include, for example, a 2-bit node type 811 (e.g., “10”), a 38-bit address 812 , an 8-bit length 813 (e.g., “8”), an 8-bit first character 814 (e.g., “I”) and an 8-bit second character 815 (e.g., “null”).
  • address 812 may point to the next node in the search tree, i.e., vertical node 820 .
  • 38-bit address 812 may include a 1-bit terminal/nodal indicator and a 37-bit offset address to reference one of the 8-byte words within a 1 Tbyte (approximately 10^12 bytes) address space of memory 104 .
  • vertical node 810 may be eight bytes (64 bits) in length, and, advantageously, may be located on an 8-byte word boundary within memory 104 .
  • each vertical node within plurality of search nodes 801 may be located on an 8-byte word boundary within memory 104 .
  • a vertical node may include a multi-character, search key fragment (e.g., string). Generally, search keys without associated key data records may be collapsed into a single vertical node to effectively reduce the number of vertical nodes required within plurality of search nodes 801 .
  • vertical node 810 may include eight bits for each additional character, above two characters, within the search key fragment, such as, for example, 8-bit characters 816 - 1 , 816 - 2 . . . 816 -N (shown in phantom outline).
  • vertical node 810 may be padded to a 64-bit boundary within memory 104 in accordance with the number of additional characters located within the string fragment.
  • For a nine-character search key fragment, for example, characters one and two may be assigned to first character 814 and second character 815 , respectively, and 56 bits of additional character information, corresponding to characters three through nine, may be appended to vertical node 810 .
  • An additional eight bits of padding may be included to align the additional character information on an 8-byte word boundary.
  • vertical node 820 may include, for example, a 2-bit node type 821 (e.g., “10”), a 38-bit address 822 , an 8-bit length 823 (e.g., “8”), an 8-bit first character 824 (e.g., “a”) and an 8-bit second character 825 (e.g., “null”).
  • address 822 may point to the next node in the search tree, i.e., horizontal node 830 .
  • vertical node 820 may be eight bytes in length, and, advantageously, may be located on an 8-byte word boundary within memory 104 .
  • additional information may also be included within vertical node 820 if required, as described above with reference to vertical node 810 .
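  • Since a basic vertical node packs a 2-bit type, a 38-bit address, an 8-bit length and two 8-bit characters into exactly 64 bits, it can be modeled as one aligned word. A minimal C sketch follows; the bit positions and helper names are illustrative assumptions, since the text fixes only the field widths.

```c
#include <stdint.h>

/* Pack a vertical node (cf. vertical node 810) into one 64-bit word:
 * type in bits 62-63, 38-bit address in bits 24-61, length in bits
 * 16-23, first and second characters in bits 8-15 and 0-7. */
#define V_TYPE_SHIFT 62
#define V_ADDR_SHIFT 24
#define V_LEN_SHIFT  16
#define V_C1_SHIFT    8
#define V_ADDR_MASK  ((UINT64_C(1) << 38) - 1)

static uint64_t make_vertical_node(uint64_t addr, uint8_t len,
                                   uint8_t c1, uint8_t c2) {
    return (UINT64_C(2) << V_TYPE_SHIFT)          /* node type "10" */
         | ((addr & V_ADDR_MASK) << V_ADDR_SHIFT)
         | ((uint64_t)len << V_LEN_SHIFT)
         | ((uint64_t)c1 << V_C1_SHIFT)
         | (uint64_t)c2;
}

/* The 38-bit field carries a 1-bit terminal/nodal indicator plus a
 * 37-bit offset selecting one of the 8-byte words in a 1-Tbyte space. */
static uint64_t node_address(uint64_t node) {
    return (node >> V_ADDR_SHIFT) & V_ADDR_MASK;
}
```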
  • Horizontal node 830 may include, for example, a 2-bit node type 831 (e.g., “01”), a 38-bit first address 832 , an 8-bit address count 833 (e.g., 2), an 8-bit first character 834 (e.g., “•”), an 8-bit last character 835 (e.g., “w”), a variable-length bitmap 836 and a 38-bit second address 837 .
  • First character 834 may include a single character, “•”, representing the search key fragment “la” defined by vertical nodes 810 and 820 , while last character 835 may include a single character, “w”, representing the search key fragment “law” defined by vertical nodes 810 and 820 and the last character 835 of horizontal node 830 .
  • First address 832 may point to key data record 840 , associated with the search key fragment “la,” while second address 837 may point to key data record 850 associated with the search key fragment “law.”
  • Bitmap 836 may advantageously indicate which keys (e.g., characters) are referenced by horizontal node 830 .
  • A “1” within a bit position in bitmap 836 may indicate that the key, or character, is referenced by horizontal node 830 , while a “0” may indicate that the key, or character, is not referenced by horizontal node 830 .
  • the length of bitmap 836 may depend upon the number of sequential keys, or characters, between first character 834 and last character 835 , inclusive of these boundary characters.
  • bitmap 836 may be 26 bits in length, where each bit corresponds to one of the characters between, and including, “a” through “z.”
  • additional 38-bit addresses would be appended to the end of horizontal node 830 , corresponding to each of the characters represented within bitmap 836 .
  • Each of these 38-bit addresses, as well as bitmap 836 , may be padded to align each quantity on an 8-byte word boundary within memory 104 .
  • The eight-bit ASCII character set may be used as the search key space, so that bitmap 836 may be as long as 256 bits (i.e., 2^8 bits, or 32 bytes).
  • In the example depicted, bitmap 836 may be two bits in length and may include a “1” in each bit position, corresponding to first character 834 and last character 835 .
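  • One plausible reading of how bitmap 836 maps a character to its packed address slot is sketched below: test the character's bit, then count the set bits below it to index the address array. The rank/popcount step is an assumption introduced here; the text says only that each bit marks a referenced character between first character 834 and last character 835.

```c
#include <stdint.h>

static int popcount64(uint64_t x) {
    int n = 0;
    while (x) { x &= x - 1; n++; }   /* clear lowest set bit */
    return n;
}

/* bitmap spans characters first..last inclusive; addrs[] holds one
 * node or key record address per set bit, in character order.
 * Returns the address for c, or 0 if c is not referenced. */
static uint64_t horizontal_lookup(const uint64_t *bitmap,
                                  uint8_t first, uint8_t last,
                                  const uint64_t *addrs, uint8_t c) {
    if (c < first || c > last)
        return 0;
    unsigned pos = c - first;
    uint64_t word = bitmap[pos / 64];
    if (!((word >> (pos % 64)) & 1))
        return 0;                          /* character not present */
    int rank = 0;                          /* set bits below pos    */
    for (unsigned w = 0; w < pos / 64; w++)
        rank += popcount64(bitmap[w]);
    rank += popcount64(word & ((UINT64_C(1) << (pos % 64)) - 1));
    return addrs[rank];
}
```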
  • key data record 850 may include, for example, a variable-length key 853 and at least one terminal data pointer. As depicted in FIG. 8, key data record 850 includes two terminal data pointers, terminal data pointer 857 and terminal data pointer 858 . Key data record 850 may be prefixed with a 12-bit key length 851 and a 12-bit terminal pointer count/status 852 , and may include padding (not shown for clarity) to align the terminal data pointer 857 and terminal data pointer 858 on an 8-byte boundary in memory 104 .
  • Terminal data pointer 857 and terminal data pointer 858 may each contain a 10-bit terminal data type and other data, such as, for example, length, status or data useful in binary record searches. Terminal data pointer 857 and terminal data pointer 858 may be sorted by terminal data type for quicker retrieval of specific resource records (e.g., terminal data record 860 and terminal data record 870 ).
  • key data record 840 may include embedded terminal data 846 rather than a terminal data record pointer.
  • key data record 840 may include a key length 841 , a terminal pointer count 842 , a variable-length key 843 , the number of embedded record elements 844 , followed by a record element length 845 (in bytes, for example) and embedded record data 846 (e.g., a string, a byte sequence, etc.) for each of the number of embedded record elements 844 .
  • terminal data record 860 may include a 12-bit length 861 , a 4-bit status, and a variable-length string 862 (e.g., an IP address).
  • variable length string 862 may be a byte sequence.
  • Terminal data record 860 may include padding (not shown for clarity) to align each terminal data record to an 8-byte boundary in memory 104 .
  • terminal data record 860 may include padding (not shown for clarity) to a 4-byte boundary, or, terminal data record 860 may not include any padding.
  • Memory management algorithms may determine, generally, whether terminal data records 860 are padded to 8-byte, 4-byte, or 0-byte boundaries.
  • terminal data record 870 may include a 12-bit length 871 , a 4-bit status, and a variable-length string 872 (e.g., an IP address).
  • both search indices such as OAK trees, and data records may be structured so that 8-byte pointers are located on 8-byte boundaries in memory.
  • vertical node 810 may contain an 8-byte (or less) pointer to vertical node 820 , and may be stored at a memory address divisible by eight (i.e., an 8-byte boundary, or 8N).
  • both search indices, such as OAK trees, and data records may be structured so that 4-byte pointers are located on 4-byte boundaries in memory.
  • Modifications to database 800 may conclude by updating a pointer to an aligned address in memory using a single uninterruptible operation, including, for example, writing a new pointer to the search index, such as an OAK tree node, or writing a new pointer to a data record.
  • an OAK tree data structure is extremely space efficient and 8-bit clean.
  • Regular expression searches may be used to search vertical nodes containing multi-character string fragments, since the 8-bit first character (e.g., first character 814 ), the 8-bit second character (e.g., second character 815 ) and any additional 8-bit characters (e.g., additional characters 816 - 1 . . . 816 -N) may be contiguously located within the vertical node (e.g., vertical node 810 ). Search misses may be discovered quickly, and no more than N nodes may need to be traversed to search for an N-character search string.
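  • Because the characters of a collapsed vertical node are contiguous, the comparison against a search key can be a single memcmp per node, which is why a miss is discovered quickly and at most N nodes are visited for an N-character key. The flattened node view below is a hypothetical simplification.

```c
#include <stdint.h>
#include <string.h>

/* A hypothetical flattened view of one vertical node. */
typedef struct {
    uint8_t        length;     /* characters in this node's fragment */
    const uint8_t *chars;      /* contiguous within the node         */
    uint64_t       next_addr;  /* 38-bit address of the next node    */
} vertical_view;

/* Compare the next portion of the search key against one vertical
 * node. Returns the number of characters consumed, or -1 on a miss;
 * each node costs one comparison, bounding an N-byte key's search
 * to at most N node visits. */
static int match_vertical(const vertical_view *v,
                          const uint8_t *key, size_t key_len) {
    if (key_len < v->length || memcmp(key, v->chars, v->length) != 0)
        return -1;
    return (int)v->length;
}
```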
  • FIG. 9 is a top level flow diagram that illustrates a method for searching and concurrently updating a database without the use of operating system or database table locks, according to embodiments of the present invention.
  • An update thread and a plurality of search threads may be created ( 900 ).
  • system 100 may spawn a single update thread to incorporate updates to the local database received, for example, from OLTP server 140 - 1 over WAN 124 .
  • system 100 may receive updates from OLTP servers 140 - 1 . . . 140 -S over WAN 124 , and from plurality of network computers 120 - 1 . . . 120 -N over WAN 124 or LAN 122 .
  • System 100 may also spawn a search thread in response to each session request received from the plurality of network computers 120 - 1 . . . 120 -N.
  • a manager thread may poll one or more control ports, associated with one or more network interfaces 114 - 1 . . . 114 - 0 , for session requests transmitted from the plurality of network computers 120 - 1 . . . 120 -N.
  • Upon receiving a session request, the manager thread may spawn a search thread and associate the search thread with that particular network computer (e.g., a PE).
  • system 100 may spawn a number of search threads without polling for session requests from the plurality of network computers 120 - 1 . . . 120 -N.
  • the search threads may not be associated with particular network computers and may be distributed evenly among the plurality of processors 102 - 1 . . . 102 -P.
  • the search threads may execute on a subset of the plurality of processors 102 - 1 . . . 102 -P.
  • the number of search threads may not necessarily match the number of network computers (e.g., N).
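  • A rough sketch of this thread layout ( 900 ), assuming POSIX threads: one update thread plus a pool of search threads whose count need not match the number of network computers. The thread bodies here are stubs standing in for the real update and search loops.

```c
#include <pthread.h>

static void *update_main(void *arg) { (void)arg; return NULL; }
static void *search_main(void *arg) { (void)arg; return NULL; }

#define NUM_SEARCH_THREADS 8   /* need not equal the number of PEs */

int spawn_threads(void) {
    pthread_t updater, searchers[NUM_SEARCH_THREADS];

    /* A single writer... */
    if (pthread_create(&updater, NULL, update_main, NULL) != 0)
        return -1;
    /* ...and many concurrent readers, with no table locks between
     * them; the readers may be distributed across processors. */
    for (long i = 0; i < NUM_SEARCH_THREADS; i++)
        if (pthread_create(&searchers[i], NULL, search_main,
                           (void *)i) != 0)
            return -1;
    return 0;
}
```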
  • a plurality of search queries may be received ( 910 ) over the network.
  • plurality of network computers 120 - 1 . . . 120 -N may send the plurality of search queries to system 100 over LAN 122 , or, alternatively, WAN 124 .
  • the plurality of search queries may contain, for example, a search term or key, as well as state information that may be associated with each query (e.g., query source address, protocol type, etc.). State information may be explicitly maintained by system 100 , or, alternatively, a state information handle may be provided.
  • each of the plurality of network computers 120 - 1 . . . 120 -N may multiplex a predetermined number of search queries into a single network packet for transmission to system 100 (e.g., a Request SuperPacket 220 as depicted in FIG. 2).
  • a plurality of search queries and the new information may be received ( 910 , 960 ) concurrently over the network.
  • plurality of network computers 120 - 1 . . . 120 -N may send the plurality of search queries and the new information to system 100 over LAN 122 , or, alternatively, WAN 124 .
  • the plurality of search queries may contain, for example, a search term or key, as well as state information that may be associated with each query (e.g., query source address, protocol type, etc.).
  • the new information may include, for example, additions, modifications or deletions to database, and may be grouped together as a transaction with an associated identifier.
  • each of the plurality of network computers 120 - 1 . . . 120 -N may multiplex a predetermined number of search queries and new information into a single network packet for transmission to system 100 , such as, for example, a single Request SuperPacket 220 (new information not depicted for clarity).
  • the state information associated with those queries may include the transaction identifier, and, typically, may be maintained by system 100 .
  • search queries that depend upon the transaction will pend until the update thread successfully completes and commits the transaction.
  • Each search query may be assigned ( 920 ) to one of the search threads for processing.
  • each search thread may be associated with one of the plurality of network computers 120 - 1 . . . 120 -N and all of the search queries received from that particular network computer may be assigned ( 920 ) to the search thread.
  • one search thread may process all of the search queries arriving from a single network computer (e.g., a single PE).
  • each search thread may extract individual search queries from a single, multiplexed network packet (e.g., Request SuperPacket 220 as depicted in FIG. 2), or, alternatively, the extraction may be performed by a different process or thread.
  • the search queries received from each of the plurality of network computers 120 - 1 . . . 120 -N may be assigned ( 920 ) to different search threads.
  • the multi-thread assignment may be based on an optimal distribution function which may incorporate various system parameters including, for example, processor loading.
  • the assignment of search queries to search threads may change over time, based upon various system parameters, including processor availability, system component performance, etc.
  • Various mechanisms may be used to convey search queries to assigned search threads within system 100 , such as, for example, shared memory, inter-process messages, tokens, semaphores, etc.
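  • Of the conveyance mechanisms listed above, a shared-memory queue is the simplest to sketch. The ring below is illustrative only; note that its lock guards just the hand-off of queries to a search thread, not the database, which remains lock-free for searching.

```c
#include <pthread.h>

#define QCAP 256
typedef struct { const char *key; void *state; } query;

typedef struct {
    query           q[QCAP];
    int             head, tail;
    pthread_mutex_t mu;
    pthread_cond_t  nonempty;
} query_ring;

/* Example instance; one ring per search thread. Overflow handling
 * is omitted for brevity. */
static query_ring ring0 = { .mu = PTHREAD_MUTEX_INITIALIZER,
                            .nonempty = PTHREAD_COND_INITIALIZER };

void ring_push(query_ring *r, query item) {
    pthread_mutex_lock(&r->mu);
    r->q[r->tail++ % QCAP] = item;
    pthread_cond_signal(&r->nonempty);
    pthread_mutex_unlock(&r->mu);
}

query ring_pop(query_ring *r) {
    pthread_mutex_lock(&r->mu);
    while (r->head == r->tail)
        pthread_cond_wait(&r->nonempty, &r->mu);
    query item = r->q[r->head++ % QCAP];
    pthread_mutex_unlock(&r->mu);
    return item;
}
```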
  • Each search thread may search ( 930 ) the database based on the assigned search queries.
  • each search thread may extract individual search queries from a single, multiplexed network packet (e.g., Request SuperPacket 220 as depicted in FIG. 2), or, alternatively, the extraction may be performed by a different process or thread.
  • searching the database may depend upon the underlying structure of the database. In an embodiment, searching the database may depend upon the modifications contained within a particular transaction for those search queries dependent upon the transaction.
  • database 400 may be searched ( 930 ) for the search key.
  • the data record (e.g., database record 420 ) corresponding to the search key may then be determined.
  • look-aside file 520 may first be searched ( 930 ) for the search key, and, if a match is not determined, then snapshot file 510 may be searched ( 930 ). The data record corresponding to the search key may then be determined.
  • domain name data 610 may first be searched ( 930 ) for the search key, and then the resource data within name server data 630 , corresponding to the search key, may then be determined. For example, for the “la.com” search key, a match may be determined with domain name record 620 in domain name data 610 . The appropriate information may be extracted, including, for example, name server pointer 626 . Then, the appropriate name server record 640 may be indexed using name server pointer 626 , and name server network address 647 may be extracted.
  • the TST may be searched ( 930 ) for the search key, from which the resource data may be determined.
  • search nodes 701 may be searched ( 930 ), and a match determined with node 730 .
  • Key pointer 736 may be extracted, from which the key data record 750 may be determined.
  • the number of terminal data pointers 752 may then be identified and each terminal data pointer may be extracted.
  • terminal data pointer 757 may reference terminal data record 760 and terminal data pointer 758 may reference terminal data record 770 .
  • The variable-length resource data (e.g., name server network address 762 and name server network address 772 ) may then be extracted from each terminal data record using lengths 761 and 771 , respectively.
  • the OAK tree may be searched ( 930 ) for the search key, from which the resource data may be determined.
  • search nodes 801 may be searched ( 930 ), and a match determined with node 830 .
  • Second address 837 may be extracted, from which the key data record 850 may be determined.
  • the number of terminal data pointers 852 may then be identified and each terminal data pointer may be extracted.
  • terminal data pointer 857 may reference terminal data record 860 and terminal data pointer 858 may reference terminal data record 870 .
  • The variable-length resource data (e.g., name server network address 862 and name server network address 872 ) may then be extracted from each terminal data record using lengths 861 and 871 , respectively.
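  • The extraction steps just described (key data record, then terminal data pointers, then variable-length strings) might look as follows in C. The structures are deliberate simplifications of the FIG. 7 and FIG. 8 records, with hypothetical field names.

```c
#include <stdint.h>
#include <string.h>

typedef struct {
    uint16_t length;       /* 12-bit length plus status bits        */
    char     data[];       /* variable-length string, e.g. an IP    */
} terminal_record;

typedef struct {
    uint16_t         key_len;
    uint16_t         tp_count;  /* number of terminal data pointers */
    terminal_record *tp[];      /* sorted by terminal data type     */
} key_record;

/* Copy each terminal string out of the record, using the 12-bit
 * length prefix; returns how many strings were produced. */
static int extract_terminals(const key_record *kr,
                             char out[][64], int max) {
    int n = kr->tp_count < max ? kr->tp_count : max;
    for (int i = 0; i < n; i++) {
        uint16_t len = kr->tp[i]->length & 0x0fff;  /* low 12 bits */
        if (len > 63)
            len = 63;                               /* clamp       */
        memcpy(out[i], kr->tp[i]->data, len);
        out[i][len] = '\0';
    }
    return n;
}
```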
  • Each search thread may create ( 940 ) a plurality of search replies corresponding to the assigned search queries. If a match is not found for a particular search key, the reply may include an appropriate indication, such as, for example, the null character. Referring to FIGS. 6 - 8 , for example, a search key might be “law.com” and the corresponding resource data might be “180.1.1.1”. More than one name server network address may be associated with a search key, in which case more than one name server network address may be determined.
  • the replies may be sent ( 950 ) over the network.
  • each search thread may multiplex the appropriate replies into a single network packet (e.g., Response SuperPacket 240 ) corresponding to the single network packet containing the original queries (e.g., Request SuperPacket 220 ).
  • a different process or thread may multiplex the appropriate replies into the single network packet.
  • the response network packet may then be sent ( 950 ) to the appropriate network computer within the plurality of network computers 120 - 1 . . . 120 -N via LAN 122 , or alternatively, WAN 124 .
  • the response packets may be sent to the same network computer from which the request packets originated, while in another embodiment, the response packets may be sent to a different network computer.
  • the update thread may receive ( 960 ) new information over the network.
  • new information may be sent, for example, from the OLTP server 140 - 1 to system 100 over WAN 124 .
  • system 100 may receive updates from OLTP servers 140 - 1 . . . 140 -S over WAN 124 , and from plurality of network computers 120 - 1 . . . 120 -N over WAN 124 or LAN 122 .
  • plurality of network computers 120 - 1 . . . 120 -N may send the plurality of search queries and the new information to system 100 over LAN 122 , or, alternatively, WAN 124 . Consequently, in this embodiment, the plurality of search queries and the new information may be received ( 910 , 960 ) concurrently over the network.
  • the new information may include new domain name data, new name server data, a new name server for an existing domain name, etc.
  • the new information may indicate that a domain name record, name server record, etc., may be deleted from the database.
  • any information contained within the database may be added, modified or deleted, as appropriate.
  • several modifications to the database may be grouped together as a transaction and applied to the database as a consistent modification set.
  • a transaction may include various combinations of database record additions, modifications or deletions.
  • An indicator field (e.g., a “dirty bit”) may be set within each new database element affected by the transaction, and, once the entire transaction has been incorporated within the database, the dirty bits may be cleared for all the new database elements affected by the transaction. In some sense, the new information may then be considered to be “committed.”
  • the database may be transformed from one valid state to another valid state without restricting search access to the database.
  • no operating system or database table locks are required to prevent search queries from accessing the database during these update periods.
  • a slight performance penalty is incurred, because a search query may need to be repeated if the dirty bit is determined to be set for any particular database record.
  • the dirty bit may be located within the most significant word of the database record, so that the bit may be inspected as soon as this word is transferred from memory 104 to processor 102 - 1 , for example. Additional memory transfers associated with the remaining portion of the database record may thus be avoided if the dirty bit is determined to be set.
  • the query-retry period may be on the order of nanoseconds for the exemplary system embodiments discussed with reference to FIG. 1. Typically, the dirty bit may be cleared before the query-retry accesses the particular database record again.
  • the point-in-time consistent query result may be reconstructed from the contents of the redo log, or log manager, for example, as is common practice in transactional database systems.
  • repeating the query may usually incur a lesser performance penalty than reconstructing the query result from the log manager.
  • reconstructing the query result from the log manager may be preferred, so that the query result may not be unduly delayed.
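  • A reader-side sketch of the dirty-bit check and query-retry, assuming (as described above) that the dirty bit occupies the most significant word of the record so it can be tested on the first word transferred from memory:

```c
#include <stdint.h>

#define DIRTY_BIT (UINT64_C(1) << 63)   /* assumed bit position */

typedef struct {
    volatile uint64_t header;  /* most significant word of the record;
                                  holds the dirty bit, read first     */
    /* ... remainder of the database record ... */
} db_record;

/* Returns 0 and fills *out when the record is stable; returns -1
 * when a transaction is in flight, in which case the caller simply
 * re-runs the search (typically after only nanoseconds). */
static int try_read(const db_record *rec, uint64_t *out) {
    uint64_t h = rec->header;          /* first word from memory */
    if (h & DIRTY_BIT)
        return -1;                     /* dirty: retry the query */
    *out = h;
    return 0;
}
```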
  • a transaction may include modifying database records 410 and 420 , modifying database record 420 and adding a new database record (e.g., database record 430 ), modifying database record 420 and deleting a database record (e.g., database record 410 ), etc.
  • a transaction may include modifying domain name record 620 and name server record 640 , deleting domain name record 620 and adding domain name record 615 , etc.
  • a transaction may include modifying key data record 750 and terminal data record 760 and deleting terminal data record 770 , adding key data record 780 and deleting key data record 740 , etc.
  • a transaction may include modifying key data record 850 and terminal data record 860 and deleting terminal data record 870 , adding key data record 880 and deleting key data record 840 , etc.
  • the update thread may create ( 970 ) a plurality of new elements based on the new information.
  • modifications to the information contained within an existing element of the database may be incorporated by creating a new element based on the existing element and then modifying the new element to include the new information.
  • the new element may not be visible to the search threads or processes currently executing on system 100 until a pointer to the new element has been written to the database.
  • additions to the database may be accomplished in a similar fashion, without necessarily using information contained within an existing element.
  • the deletion of an existing element from the database may be accomplished by adding a new, explicit “delete” element to the database.
  • the deletion of an existing element from the database may be accomplished by overwriting a pointer to the existing element with an appropriate indicator (e.g., a null pointer, etc.).
  • In this case, the update thread does not create a new element containing the new information in the database.
  • memory space for a new data record may be allocated from a memory pool associated with database records 401 .
  • New information may be copied to data 432 of data record 430 , and other information may be calculated and added to data record 430 , such as, for example, chain pointer 434 , data pointer 435 , etc.
  • a dirty bit 408 may also be included within new data record 430 .
  • the new information may include new domain names and/or domain name servers to be added to the database.
  • memory space for a new domain name record 615 may be allocated from a memory pool associated with the domain name records 611 , or, alternatively, from a general memory pool associated with domain name data 610 .
  • the new domain name may be normalized and copied to the new domain name record 615 , a pointer to an existing name server (e.g., name server record 655 ) may be determined and copied to the new domain name record 615 .
  • a dirty bit 618 may be included within new domain name record 615 .
  • Other information may be calculated and added to new domain name record 615 , such as, for example, a number of name servers, a chain pointer, etc. In more complicated examples, the new information may include a new search key with corresponding resource data.
  • a new search node 705 may be created.
  • the new search node 705 may include a comparison character (“m”), in the first position, that is greater than the comparison character (“l”), in the first position, of existing search node 710 . Consequently, search node 705 may be inserted in the TST at the same “level” (i.e., 1 st character position) as search node 710 .
  • the 4-byte “greater than” pointer 715 of search node 710 may contain a “null” pointer.
  • Search node 705 may also include a 4-byte key pointer 706 which may contain a 40-bit pointer to the new key data record 780 .
  • Key data record 780 may include a key length 781 (e.g., “5”) and type 782 (e.g., indicating embedded resource data), a variable length key 783 (e.g., “m.com”), a number of embedded resources 784 (e.g., “1”), a resource length 785 (e.g., “9”), a variable-length resource string 786 or byte sequence (e.g., “180.1.1.1”) and dirty bit 707 .
  • Memory space may be allocated for search node 705 from a memory pool associated with TST nodes 701 , while memory space may be allocated for the new key data record 780 from a memory pool associated with plurality of key data records 702 .
  • a new search node 890 may be created.
  • the new search node 890 may be a horizontal node including, for example, a two-bit node type 891 (e.g., “01”), a 38-bit first address 892 , an eight-bit address count 893 (e.g., 2), an eight-bit first character 894 (e.g., “l”), an eight-bit last character 895 (e.g., “m”), a variable-length bitmap 896 and a 38-bit second address 897 .
  • First address 892 may point to vertical node 820 , the next vertical node in the “la” search key fragment.
  • Key data record 880 may include a key length 881 (e.g., “5”) and type 882 (e.g., indicating embedded resource data), a variable length key 883 (e.g., “m.com”), a number of embedded resources 884 (e.g., “1”), a resource length 885 (e.g., “9”), a variable-length resource string 886 or byte sequence (e.g., “180.1.1.1”) and dirty bit 807 .
  • Memory space may be allocated for search node 890 from a memory pool associated with plurality of search nodes 801 , while memory space may be allocated for key data record 880 from a memory pool associated with plurality of key data records 802 .
  • the new information may also include several modifications to existing records within the database.
  • the new information may include modifications to data record 410 .
  • new data record 420 may be created and the information from data record 410 copied thereto.
  • memory space for data record 420 may be allocated from a memory pool associated with database records 401 .
  • the modifications may then be applied to data 422 .
  • Data records 410 and 420 may also include dirty bits 406 and 407 , respectively.
  • the new information may include modifications to name server record 640 , such as, for example, a new IP address (e.g., “180.2.1.2”).
  • new name server record 660 may be created and the information from old name server record 640 copied thereto.
  • memory space for name server record 660 may be allocated from a memory pool associated with the name server records 631 , or, alternatively, from a general memory pool associated with name server data 630 .
  • the new name server IP address may then be copied to the appropriate field within name server record 660 (e.g., name server IP address 667 ).
  • a dirty bit 668 may be included within new name server record 660 . Similar modifications to the various elements within the database embodiments described with reference to FIGS. 7 and 8 are also contemplated.
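  • The modify-by-copy step above, reduced to a sketch: the replacement record is built off to the side, marked dirty, and remains unreachable by any search thread until a pointer to it is written in step ( 980 ). The pool_alloc helper stands in for the per-record-type memory pools mentioned above.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define DIRTY_BIT (UINT64_C(1) << 63)

typedef struct {
    uint64_t header;    /* dirty bit in the most significant word */
    char     data[56];  /* payload, e.g. a name server address    */
} db_record;

/* Stands in for a per-record-type memory pool. */
static void *pool_alloc(size_t n) { return malloc(n); }

db_record *prepare_modified(const db_record *old,
                            const char *new_data, size_t len) {
    db_record *rec = pool_alloc(sizeof(*rec));
    if (rec == NULL || len > sizeof(rec->data))
        return NULL;
    memcpy(rec, old, sizeof(*rec));     /* copy the existing record */
    memcpy(rec->data, new_data, len);   /* apply the modification   */
    rec->header |= DIRTY_BIT;           /* step (975): mark dirty   */
    return rec;                         /* not yet visible          */
}
```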
  • the new information may also include the deletion of at least one existing element within the database.
  • no new element may be created, but the dirty bit of the element to be deleted may be set by the update thread.
  • a new, explicit “delete” element may be created, with the dirty bit set, indicating that the former element has been removed from the database.
  • the new information may include the deletion of data record 410 , which may include dirty bit 407 .
  • the new information may include the deletion of domain name record 670 , which may include dirty bit 678 . Similar deletions to the various elements within the database embodiments described with reference to FIGS. 7 and 8 are also contemplated.
  • the update thread may set ( 975 ) a dirty bit within each of the plurality of new elements.
  • the dirty bit may notify the search threads that the particular database record is associated with a current transaction, and that a subsequent query-retry of the database should be performed.
  • each of the database records affected by a transaction may be identified.
  • the update thread may set a dirty bit within each of the database records affected by the transaction. Dirty bit 408 may be set to “1” for new data record 430 and dirty bits 407 and 406 may be set to “1” for modified data records 410 and 420 , respectively.
  • Dirty bit 618 may be set to “1” for new domain name record 615 and dirty bits 606 and 668 may be set to “1” for modified name server records 640 and 660 , respectively.
  • Dirty bits 707 and 807 may be set to “1” for new key data records 780 and 880 , respectively.
  • the update thread may also set ( 1075 ) a dirty bit within the appropriate database records.
  • dirty bit 407 may be set to “1” for deleted data record 410
  • dirty bit 678 may be set to “1” for deleted domain name record 670 .
  • Data records 420 and 430 , domain name record 615 , name server record 660 and key data records 780 and 880 may be considered to be “new” elements within the database, while modified data record 410 , modified name server record 640 , deleted data record 410 and deleted domain name record 670 may be considered to be “old” elements within the database.
  • data record 410 is used as both a “modified” data record and as a “deleted” data record.
  • the update thread may write ( 980 ) a pointer to the database using a single uninterruptible operation.
  • A new element may be committed to the database (i.e., become instantaneously visible to the search threads, or processes) by writing a pointer to the new element to the appropriate location within the database.
  • this appropriate location may be aligned in memory, so that the single operation includes a single store instruction of an appropriate length.
  • the “set” dirty bit notifies the search threads that each new database element may be part of a current transaction, and that a subsequent query-retry, or reconstruction from the redo log, may be necessary.
  • one index may contain pointers to “old” elements while another index may contain pointers to “new” elements. Consequently, in the DNS resolution embodiment, for example, two domain name records with the same domain name, or primary key, may exist within the search space simultaneously, but only during a transaction involving that record for a unique index.
  • an 8-byte pointer corresponding to new data record 430 may be written to hash table 403 .
  • an 8-byte pointer corresponding to new domain name record 615 may be written to hash table 612 .
  • these hash table entries may be aligned on 8-byte boundaries in memory 104 to ensure that a single, 8-byte store instruction is used to update this value.
  • a 4-byte pointer corresponding to the new search node 705 may be written to the 4-byte “greater-than” node pointer 715 within search node 710 .
  • the node pointer 715 may be aligned on a 4-byte boundary in memory 104 to ensure that a single, 4-byte store instruction may be used to update this value.
  • plurality of search nodes 801 may also include a top-of-tree address 899 , which may be aligned on an 8-byte word boundary in memory 104 and may reference the first node within plurality of search nodes 801 (e.g., vertical node 810 ).
  • An 8-byte pointer corresponding to the new search node 890 may be written to the top-of-tree address 899 using a single store instruction.
  • Just before the single store instruction executes, the new data are not visible to the search threads, while just after the store instruction, the new data are visible to the search threads.
  • the new data may be committed to the database without the use of operating system or database table locks.
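  • The commit itself, as a minimal modern-C sketch (C11 atomics postdate this description; the release store below stands in for the single aligned store instruction the text describes): because the slot is 8-byte aligned, a concurrent searcher observes either the old pointer or the new one, never a torn mixture.

```c
#include <stdint.h>
#include <stdatomic.h>
#include <assert.h>

typedef struct db_record db_record;   /* opaque record type */

/* The commit point: one uninterruptible 8-byte store. Release
 * ordering also keeps the stores that built the new record ahead
 * of the publishing store (the concern the XOR-dependency
 * discussion below addresses on older toolchains). */
void commit(_Atomic(db_record *) *slot, db_record *new_rec) {
    assert(((uintptr_t)slot & 7u) == 0);  /* 8-byte aligned slot */
    atomic_store_explicit(slot, new_rec, memory_order_release);
}
```

  • Writing a null pointer through the same routine expresses the deletion case ( 1080 ) described next.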
  • a pointer, or pointers, to the existing record may be overwritten ( 1080 ) with a null pointer using a single uninterruptible operation.
  • the null pointer may de-reference the existing record and indicate that the existing record has been deleted from the database.
  • data record 410 may be deleted from database 400 by overwriting the appropriate entry within hash table 403 with an 8-byte null pointer.
  • domain name record 670 may be deleted from database 600 by overwriting the appropriate entry within hash table 612 with an 8-byte null pointer.
  • an 8-byte pointer to a new, “explicit” delete record, corresponding to a “deleted” domain name record 670 may be written to hash table 613 .
  • modifications, additions and deletions to the database may be accomplished similarly.
  • the update thread may clear ( 985 ) the dirty bit within each of the plurality of new elements.
  • the dirty bit may be cleared from each new element by setting the dirty bit to “0.”
  • dirty bits 406 and 408 may be set to “0” for data records 420 and 430 , respectively.
  • Dirty bit 618 may be set to “0” for domain name record 615 , and dirty bits 606 and 668 may be set to “0” for name server records 640 and 660 , respectively.
  • Dirty bits 707 and 807 may be set to “0” for key data records 780 and 880 , respectively.
  • the dirty bit may be set to “0” for each of the new elements in any order. After the dirty bits within each of the new elements have been cleared ( 985 ), the “old,” or existing, database elements are no longer active, i.e., referenced within the database. In an embodiment, the dirty bits within these elements may then be cleared by setting the dirty bit to “0,” while in an alternative embodiment, the dirty bits may not be cleared at all.
  • the update thread may physically delete ( 990 ) existing database elements that have been modified after the dirty bits are cleared ( 985 ) from each of the new elements.
  • the physical deletion of these modified elements from memory 104 may be delayed to preserve consistency of in-progress searches. For example, after an existing element has been modified and the corresponding new element committed to the database, the physical deletion of the existing element from memory 104 may be delayed so that existing search threads that have a result, acquired just before the new element was committed to the database, may continue to use the previous state of the data.
  • the update thread may physically delete ( 990 ) the existing element after all the search threads that began before the existing element was modified have finished.
  • the physical deletion of the existing element from memory 104 may be delayed so that existing search threads that have a result, acquired just before the existing element was deleted from the database, may continue to use the previous state of the data.
  • the update thread may physically delete ( 1090 ) the existing element after all the search threads that began before the existing element was deleted have finished.
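  • The deferred physical deletion of ( 990 )/( 1090 ) resembles what is now called epoch-based reclamation. In the sketch below (all names illustrative), each searcher records the epoch in which it started, and the update thread frees a superseded record only when no searcher from an older epoch remains.

```c
#include <stdatomic.h>
#include <stdint.h>

#define MAX_THREADS 64

static atomic_uint_fast64_t global_epoch = 1;
static atomic_uint_fast64_t thread_epoch[MAX_THREADS]; /* 0 = idle */

void search_begin(int tid) {
    atomic_store(&thread_epoch[tid], atomic_load(&global_epoch));
}
void search_end(int tid) {
    atomic_store(&thread_epoch[tid], 0);
}

/* Called by the update thread right after the committing pointer
 * write; the returned epoch tags the superseded record. */
uint64_t retire(void) {
    return atomic_fetch_add(&global_epoch, 1);
}

/* The record may be physically deleted once every search that began
 * before it was retired has finished. */
int safe_to_free(uint64_t retire_epoch) {
    for (int i = 0; i < MAX_THREADS; i++) {
        uint64_t e = atomic_load(&thread_epoch[i]);
        if (e != 0 && e <= retire_epoch)
            return 0;   /* an older in-progress search remains */
    }
    return 1;
}
```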
  • the processor on which the update thread is executing may include hardware to support out-of-order instruction execution.
  • system 100 may include an optimizing compiler which may produce a sequence of instructions, associated with embodiments of the present invention, that have been optimally rearranged to exploit the parallelism of the processor's internal architecture (e.g., processor 102 - 1 , 102 - 2 , etc.).
  • Data hazards arising from out-of-order instruction execution may be eliminated, for example, by creating dependencies between the creation ( 970 ) of the new element and the pointer write ( 980 ) to the database.
  • these dependencies may be established by inserting additional arithmetic operations, such as, for example, an exclusive OR (XOR) instruction, into the sequence of instructions executed by processor 102 - 1 , to force the instructions associated with the creation ( 970 ) of the new element to issue, or complete, before the pointer write ( 980 ) to the database is executed.
  • the contents of the location in memory 104 corresponding to the new element, and containing the dirty bit, may be XOR'ed with the contents of the location in memory 104 corresponding to the pointer to the new element.
  • the address of the new element may then be written ( 980 ) to memory 104 to commit the new element to the database. Numerous methods to overcome these complications may be readily discernible to one skilled in the art.
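  • A sketch of the XOR-dependency device, alongside the explicit release fence a modern toolchain would use instead. The first form only illustrates the mechanism described above; as the comments note, an optimizing compiler may cancel probe ^ probe and break the dependency, which is why current code prefers the fence.

```c
#include <stdint.h>
#include <stdatomic.h>

/* XOR-style data dependency: reading the new element's first word
 * (the one holding the dirty bit) and folding the result into the
 * publishing store ties the store to the completed creation of the
 * element. Caution: a modern compiler may fold probe ^ probe to
 * zero and discard the dependency; shown for illustration only. */
void publish_xor(volatile uint64_t *element,
                 uint64_t *volatile *slot) {
    uint64_t probe = element[0];
    *slot = (uint64_t *)((uintptr_t)element | (probe ^ probe));
}

/* The portable modern equivalent: a release fence (or a release
 * store, as in the commit() sketch earlier) orders the creation
 * (970) of the element before the pointer write (980). */
void publish_fence(uint64_t *element, _Atomic(uint64_t *) *slot) {
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(slot, element, memory_order_relaxed);
}
```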

Abstract

Embodiments of the present invention provide a method and system for high-speed database searching with concurrent, transaction-based updating for large database systems. Specifically, a plurality of search queries may be received over a network, the database may be searched, and a plurality of search replies may be sent over the network. While searching the database, new information may be received over the network, a plurality of new database elements may be created based on the new information, a dirty bit may be set within each new database element, a pointer to each new database element may be written to the database using a single uninterruptible operation, and the dirty bit within each new database element may be cleared.

Description

    CLAIM FOR PRIORITY/CROSS REFERENCE TO RELATED APPLICATIONS
  • This non-provisional application claims the benefit of U.S. Provisional Patent Application Serial No. 60/330,842, filed Nov. 1, 2001, which is incorporated by reference in its entirety, and U.S. Provisional Patent Application Serial No. 60/365,169, filed Mar. 19, 2002, which is incorporated by reference in its entirety. This application is related to U.S. Non-Provisional Patent Application Serial Nos. [Att'y Dkt 12307/100178], [Att'y Dkt 12307/100179], [Att'y Dkt 12307/100181] and [Att'y Dkt 12307/100182].[0001]
  • TECHNICAL FIELD
  • This disclosure relates to computer systems. More specifically, this disclosure relates to a method and system for providing high-speed database searching with concurrent updating for large database systems. [0002]
  • BACKGROUND OF THE INVENTION
  • As the Internet continues its meteoric growth, scaling domain name service (DNS) resolution for root and generic top level domain (gTLD) servers at reasonable price points is becoming increasingly difficult. The A root server (i.e., a.root-server.net) maintains and distributes the Internet namespace root zone file to the 12 secondary root servers geographically distributed around the world (i.e., b.root-server.net, c.root-server.net, etc.), while the corresponding gTLD servers (i.e., a.gtld-servers.net, b.gtld-servers.net, etc.) are similarly distributed and support the top level domains (e.g., *.com, *.net, *.org, etc.). The ever-increasing volume of data coupled with the unrelenting growth in query rates is forcing a complete rethinking of the hardware and software infrastructure needed for root and gTLD DNS service over the next several years. The typical single server installation of the standard “bind” software distribution is already insufficient for the demands of the A root and will soon be unable to meet even gTLD needs. With the convergence of the public switched telephone network (PSTN) and the Internet, there are opportunities for a general purpose, high performance search mechanism to provide features normally associated with Service Control Points (SCPs) on the PSTN's SS7 signaling network as new, advanced services are offered that span the PSTN and the Internet, including Advanced Intelligent Network (AIN), Voice Over Internet Protocol (VoIP) services, geolocation services, etc.[0003]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a system block diagram, according to an embodiment of the present invention. [0004]
  • FIG. 2 is a detailed block diagram that illustrates a message data structure, according to an embodiment of the present invention. [0005]
  • FIG. 3 is a detailed block diagram that illustrates a message latency data structure architecture, according to an embodiment of the present invention. [0006]
  • FIG. 4 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention. [0007]
  • FIG. 5 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention. [0008]
  • FIG. 6 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention. [0009]
  • FIG. 7 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention. [0010]
  • FIG. 8 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention. [0011]
  • FIG. 9 is a top level flow diagram that illustrates a method for searching and concurrently updating a database, according to an embodiment of the present invention. [0012]
  • FIG. 10 is a top level flow diagram that illustrates a method for searching and concurrently updating a database, according to an embodiment of the present invention.[0013]
  • DETAILED DESCRIPTION
  • Embodiments of the present invention provide a method and system for high-speed database searching with concurrent updating for large database systems. Specifically, a plurality of search queries may be received over a network, the database may be searched, and a plurality of search replies may be sent over the network. While searching the database, new information may be received over the network, a plurality of new database elements may be created based on the new information and a dirty bit may be set within each new database element. A pointer to each new database element may be written to the database using a single uninterruptible operation and the dirty bit within each new database element may be cleared. [0014]
  • FIG. 1 is a block diagram that illustrates a system according to an embodiment of the present invention. Generally, system 100 may host a large, memory-resident database, receive search requests and provide search responses over a network. For example, system 100 may be a symmetric multiprocessing (SMP) computer, such as, for example, an IBM RS/6000® M80 or S80 manufactured by International Business Machines Corporation of Armonk, N.Y., a Sun Enterprise™ 10000 manufactured by Sun Microsystems, Inc. of Santa Clara, Calif., etc. System 100 may also be a multi-processor personal computer, such as, for example, a Compaq ProLiant™ ML530 (including two Intel Pentium® III 866 MHz processors) manufactured by Hewlett-Packard Company of Palo Alto, Calif. System 100 may also include a multiprocessing operating system, such as, for example, IBM AIX® 4, Sun Solaris™ 8 Operating Environment, Red Hat Linux® 6.2, etc. System 100 may receive periodic updates over network 124, which may be concurrently incorporated into the database. [0015]
  • In an embodiment, system 100 may include at least one processor 102-1 coupled to bus 101. Processor 102-1 may include an internal memory cache (e.g., an L1 cache, not shown for clarity). A secondary memory cache 103-1 (e.g., an L2 cache, L2/L3 caches, etc.) may reside between processor 102-1 and bus 101. In a preferred embodiment, system 100 may include a plurality of processors 102-1 . . . 102-P coupled to bus 101. A plurality of secondary memory caches 103-1 . . . 103-P may also reside between plurality of processors 102-1 . . . 102-P and bus 101 (e.g., a look-through architecture), or, alternatively, at least one secondary memory cache 103-1 may be coupled to bus 101 (e.g., a look-aside architecture). System 100 may include memory 104, such as, for example, random access memory (RAM), etc., coupled to bus 101, for storing information and instructions to be executed by plurality of processors 102-1 . . . 102-P. [0016]
  • Memory 104 may store a large database, for example, for translating Internet domain names into Internet addresses, for translating names or phone numbers into network addresses, for providing and updating subscriber profile data, for providing and updating user presence data, etc. Advantageously, both the size of the database and the number of translations per second may be very large. For example, memory 104 may include at least 64 GB of RAM and may host a 500M (i.e., 500×10^6) record domain name database, a 500M record subscriber database, a 450M record telephone number portability database, etc. [0017]
  • On an exemplary 64-bit system architecture, such as, for example, a system including at least one 64-bit big-endian processor 102-1 coupled to at least a 64-bit bus 101 and a 64-bit memory 104, an 8-byte pointer value may be written to a memory address on an 8-byte boundary (i.e., a memory address divisible by eight, or, e.g., 8N) using a single, uninterruptible operation. Generally, the presence of secondary memory cache 103-1 may simply delay the 8-byte pointer write to memory 104. For example, in one embodiment, secondary memory cache 103-1 may be a look-through cache operating in write-through mode, so that a single, 8-byte store instruction may move eight bytes of data from processor 102-1 to memory 104, without interruption, and in as few as two system clock cycles. In another embodiment, secondary memory cache 103-1 may be a look-through cache operating in write-back mode, so that the 8-byte pointer may first be written to secondary memory cache 103-1, which may then write the 8-byte pointer to memory 104 at a later time, such as, for example, when the cache line in which the 8-byte pointer is stored is written to memory 104 (e.g., when the particular cache line, or the entire secondary memory cache, is “flushed”). [0018]
  • Ultimately, from the perspective of processor 102-1, once the data are latched onto the output pins of processor 102-1, all eight bytes of data are written to memory 104 in one contiguous, uninterrupted transfer, which may be delayed by the effects of a secondary memory cache 103-1, if present. From the perspective of processors 102-2 . . . 102-P, once the data are latched onto the output pins of processor 102-1, all eight bytes of data are written to memory 104 in one contiguous, uninterrupted transfer, which is enforced by the cache coherency protocol across secondary memory caches 103-1 . . . 103-P, which may delay the write to memory 104, if present. [0019]
  • However, if an 8-byte pointer value is written to a misaligned location in memory 104, such as a memory address that crosses an 8-byte boundary, all eight bytes of data cannot be transferred from processor 102-1 using a single, 8-byte store instruction. Instead, processor 102-1 may issue two separate and distinct store instructions. For example, if the memory address begins four bytes before an 8-byte boundary (e.g., 8N−4), the first store instruction transfers the four most significant bytes to memory 104 (e.g., 8N−4), while the second store instruction transfers the four least significant bytes to memory 104 (e.g., 8N). Importantly, between these two separate store instructions, processor 102-1 may be interrupted, or processor 102-1 may lose control of bus 101 to another system component (e.g., processor 102-P, etc.). Consequently, the pointer value residing in memory 104 will be invalid until processor 102-1 can complete the second store instruction. If another component begins a single, uninterruptible memory read to this memory location, an invalid value will be returned as a presumably valid one. [0020]
  • Similarly, a new 4-byte pointer value may be written to a memory address divisible by four (e.g., 4N) using a single, uninterruptible operation. Note that in the example discussed above, a 4-byte pointer value may be written to the 8N−4 memory location using a single store instruction. Of course, if a 4-byte pointer value is written to a location that crosses a 4-byte boundary, e.g., 4N−2, all four bytes of data cannot be transferred from processor 102-1 using a single store instruction, and the pointer value residing in memory 104 may be invalid for some period of time. [0021]
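  • The alignment rule in the preceding paragraphs reduces to simple arithmetic: a w-byte store completes as one transfer exactly when the target address is divisible by w. A brief hedged sketch:

```c
#include <stdint.h>
#include <stdbool.h>

/* True when a w-byte store to address a completes as one transfer
 * (w = 4 or 8 in the discussion above). An 8-byte pointer at 8N-4
 * fails the test and splits into two stores, opening a window in
 * which a reader may observe a torn, invalid pointer. */
static bool single_transfer(uintptr_t a, unsigned w) {
    return (a % w) == 0;
}

/* Example:  single_transfer(8 * 100 - 4, 8) == false   (torn)
 *           single_transfer(8 * 100 - 4, 4) == true    (atomic) */
```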
  • System 100 may also include a read only memory (ROM) 106, or other static storage device, coupled to bus 101 for storing static information and instructions for processor 102-1. A storage device 108, such as a magnetic or optical disk, may be coupled to bus 101 for storing information and instructions. System 100 may also include display 110 (e.g., an LCD monitor) and input device 112 (e.g., keyboard, mouse, trackball, etc.), coupled to bus 101. System 100 may include a plurality of network interfaces 114-1 . . . 114-0, which may send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. In an embodiment, network interface 114-1 may be coupled to bus 101 and local area network (LAN) 122, while network interface 114-0 may be coupled to bus 101 and wide area network (WAN) 124. Plurality of network interfaces 114-1 . . . 114-0 may support various network protocols, including, for example, Gigabit Ethernet (e.g., IEEE Standard 802.3-2002, published 2002), Fiber Channel (e.g., ANSI Standard X.3230-1994, published 1994), etc. Plurality of network computers 120-1 . . . 120-N may be coupled to LAN 122 and WAN 124. In one embodiment, LAN 122 and WAN 124 may be physically distinct networks, while in another embodiment, LAN 122 and WAN 124 may be connected via a network gateway or router (not shown for clarity). Alternatively, LAN 122 and WAN 124 may be the same network. [0022]
  • As noted above, system 100 may provide DNS resolution services. In a DNS resolution embodiment, DNS resolution services may generally be divided between network transport and data look-up functions. For example, system 100 may be a backend look-up engine (LUE) optimized for data look-up on large data sets, while plurality of network computers 120-1 . . . 120-N may be a plurality of front-end protocol engines (PEs) optimized for network processing and transport. The LUE may be a powerful multiprocessor server that stores the entire DNS record set in memory 104 to facilitate high-speed, high-throughput searching and updating. In an alternative embodiment, DNS resolution services may be provided by a series of powerful multiprocessor servers, or LUEs, each storing a subset of the entire DNS record set in memory to facilitate high-speed, high-throughput searching and updating. [0023]
  • Conversely, the plurality of PEs may be generic, low profile, PC-based machines, running an efficient multitasking operating system (e.g., Red Hat Linux® 6.2), that minimize the network processing transport load on the LUE in order to maximize the available resources for DNS resolution. The PEs may handle the nuances of wire-line DNS protocol, respond to invalid DNS queries and multiplex valid DNS queries to the LUE over LAN 122. The number of PEs for a single LUE may be determined, for example, by the number of DNS queries to be processed per second and the performance characteristics of the particular system. Other metrics may also be used to determine the appropriate mapping ratios and behaviors. [0024]
  • Generally, other large-volume, query-based embodiments may be supported, including, for example, telephone number resolution, SS7 signaling processing, geolocation determination, telephone number-to-subscriber mapping, subscriber location and presence determination, etc. [0025]
  • In an embodiment, a central on-line transaction processing (OLTP) server 140-1 may be coupled to WAN 124 and receive additions, modifications and deletions (i.e., update traffic) to database 142-1 from various sources. OLTP server 140-1 may send updates to system 100, which includes a local copy of database 142-1, over WAN 124. OLTP server 140-1 may be optimized for processing update traffic in various formats and protocols, including, for example, HyperText Transmission Protocol (HTTP), Registry Registrar Protocol (RRP), Extensible Provisioning Protocol (EPP), Service Management System/800 Mechanized Generic Interface (MGI), and other on-line provisioning protocols. A constellation of read-only LUEs may be deployed in a hub and spoke architecture to provide high-speed search capability conjoined with high-volume, incremental updates from OLTP server 140-1. [0026]
  • In an alternative embodiment, data may be distributed over multiple OLTP servers 140-1 . . . 140-S, each of which may be coupled to WAN 124. OLTP servers 140-1 . . . 140-S may receive additions, modifications, and deletions (i.e., update traffic) to their respective databases 142-1 . . . 142-S (not shown for clarity) from various sources. OLTP servers 140-1 . . . 140-S may send updates to system 100, which may include copies of databases 142-1 . . . 142-S, other dynamically-created data, etc., over WAN 124. For example, in a geolocation embodiment, OLTP servers 140-1 . . . 140-S may receive update traffic from groups of remote sensors. In another alternative embodiment, plurality of network computers 120-1 . . . 120-N may also receive additions, modifications, and deletions (i.e., update traffic) from various sources over WAN 124 or LAN 122. In this embodiment, plurality of network computers 120-1 . . . 120-N may send updates, as well as queries, to system 100. [0027]
  • In the DNS resolution embodiment, each PE (e.g., each of the plurality of network computers 120-1 . . . 120-N) may combine, or multiplex, several DNS query messages, received over a wide area network (e.g., WAN 124), into a single Request SuperPacket and send the Request SuperPacket to the LUE (e.g., system 100) over a local area network (e.g., LAN 122). The LUE may combine, or multiplex, several DNS query message replies into a single Response SuperPacket and send the Response SuperPacket to the appropriate PE over the local area network. Generally, the maximum size of a Request or Response SuperPacket may be limited by the maximum transmission unit (MTU) of the physical network layer (e.g., Gigabit Ethernet). For example, typical DNS query and reply message sizes of less than 100 bytes and 200 bytes, respectively, allow for over 30 queries to be multiplexed into a single Request SuperPacket, as well as over 15 replies to be multiplexed into a single Response SuperPacket. However, a smaller number of queries (e.g., 20 queries) may be included in a single Request SuperPacket in order to avoid MTU overflow on the response (e.g., 10 replies). For larger MTU sizes, the number of multiplexed queries and replies may be increased accordingly. [0028]
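  • The multiplexing arithmetic in the preceding paragraph can be sketched directly: given an MTU, a header allowance and typical query/reply sizes, the PE picks a batch size that also keeps the response under the MTU. The helper below is illustrative; the 100-byte and 200-byte figures are the examples from the text.

```c
/* How many queries fit in one Request SuperPacket without risking
 * MTU overflow in the corresponding Response SuperPacket. */
static int batch_size(int mtu, int hdr, int query_sz, int reply_sz) {
    int by_request  = (mtu - hdr) / query_sz;
    int by_response = (mtu - hdr) / reply_sz;   /* the binding limit */
    return by_request < by_response ? by_request : by_response;
}

/* batch_size(1500, 32, 100, 200) == 7 on a standard Ethernet MTU;
 * the figures quoted in the text imply a larger (e.g., jumbo-frame)
 * MTU, which raises the batch accordingly. */
```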
  • Each multitasking PE may include an inbound thread and an outbound thread to manage DNS queries and replies, respectively. For example, the inbound thread may un-marshal the DNS query components from the incoming DNS query packets received over a wide area network and multiplex several milliseconds of queries into a single Request SuperPacket. The inbound thread may then send the Request SuperPacket to the LUE over a local area network. Conversely, the outbound thread may receive the Response SuperPacket from the LUE, de-multiplex the replies contained therein, and marshal the various fields into a valid DNS reply, which may then be transmitted over the wide area network. Generally, as noted above, other large-volume, query-based embodiments may be supported. [0029]
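To make the multiplexing step concrete, the following minimal C sketch shows one way an inbound thread might accumulate queries into a single payload and flush when the MTU budget would be exceeded. The buffer size, names, and the printf stand-in for the actual network send are illustrative assumptions, not part of the specification; in practice a timer would also flush after several milliseconds.

```c
/*
 * Minimal sketch of a PE inbound thread's multiplexing logic (assumed
 * API and sizes). Queries are appended to one payload buffer; when the
 * next query would exceed the MTU budget, the accumulated SuperPacket
 * is flushed. A real implementation would also flush on a timer.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_PAYLOAD 1400   /* budget under a 1500-byte Ethernet MTU */

static uint8_t payload[MAX_PAYLOAD];
static size_t  used;
static int     count;

static void flush(void) {
    if (count > 0)   /* stand-in for sending the SuperPacket to the LUE */
        printf("send Request SuperPacket: %d queries, %zu bytes\n", count, used);
    used = 0;
    count = 0;
}

static void add_query(const uint8_t *query, size_t len) {
    if (used + len > MAX_PAYLOAD)
        flush();                      /* full: send and start a new packet */
    memcpy(payload + used, query, len);
    used += len;
    count++;
}

int main(void) {
    uint8_t query[100] = {0};         /* typical DNS query: under 100 bytes */
    for (int i = 0; i < 40; i++)
        add_query(query, sizeof query);
    flush();                          /* flush the final, partial packet */
    return 0;
}
```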
  • In an embodiment, the Request SuperPacket may also include state information associated with each DNS query, such as, for example, the source address, the protocol type, etc. The LUE may include the state information, and associated DNS replies, within the Response SuperPacket. Each PE may then construct and return valid DNS reply messages using the information transmitted from the LUE. Consequently, each PE may advantageously operate as a stateless machine, i.e., valid DNS replies may be formed from the information contained in the Response SuperPacket. Generally, the LUE may return the Response SuperPacket to the PE from which the incoming SuperPacket originated; however, other variations may obviously be possible. [0030]
  • In an alternative embodiment, each PE may maintain the state information associated with each DNS query and include a reference, or handle, to the state information within the Request SuperPacket. The LUE may include the state information references, and associated DNS replies, within the Response SuperPacket. Each PE may then construct and return valid DNS reply messages using the state information references transmitted from the LUE, as well as the state information maintained thereon. In this embodiment, the LUE may return the Response SuperPacket to the PE from which the incoming SuperPacket originated. [0031]
• [0032] FIG. 2 is a detailed block diagram that illustrates a message data structure, according to an embodiment of the present invention. Generally, message 200 may include header 210, having a plurality of sequence numbers 211-1 . . . 211-S and a plurality of message counts 212-1 . . . 212-S, and data payload 215.
• [0033] In the DNS resolution embodiment, message 200 may be used for Request SuperPackets and Response SuperPackets. For example, Request SuperPacket 220 may include header 230, having a plurality of sequence numbers 231-1 . . . 231-S and a plurality of message counts 232-1 . . . 232-S, and data payload 235 having multiple DNS queries 236-1 . . . 236-Q, accumulated by a PE over a predetermined period of time, such as, for example, several milliseconds. In one embodiment, each DNS query 236-1 . . . 236-Q may include state information, while in an alternative embodiment, each DNS query 236-1 . . . 236-Q may include a handle to state information.
• [0034] Similarly, Response SuperPacket 240 may include header 250, having a plurality of sequence numbers 251-1 . . . 251-S and a plurality of message counts 252-1 . . . 252-S, and data payload 255 having multiple DNS replies 256-1 . . . 256-R approximately corresponding to the multiple DNS queries contained within Request SuperPacket 220. In one embodiment, each DNS reply 256-1 . . . 256-R may include state information associated with the corresponding DNS query, while in an alternative embodiment, each DNS reply 256-1 . . . 256-R may include a handle to state information associated with the corresponding DNS query. Occasionally, the total size of the corresponding DNS replies may exceed the size of data payload 255 of the Response SuperPacket 240. This overflow may be limited, for example, to a single reply, i.e., the reply associated with the last query contained within Request SuperPacket 220. Rather than sending an additional Response SuperPacket 240 containing only the single reply, the overflow reply may preferably be included in the next Response SuperPacket 240 corresponding to the next Request SuperPacket. Advantageously, header 250 may include appropriate information to determine the extent of the overflow condition. Under peak processing conditions, more than one reply may overflow into the next Response SuperPacket.
• [0035] For example, in Response SuperPacket 240, header 250 may include at least two sequence numbers 251-1 and 251-2 and at least two message counts 252-1 and 252-2, grouped as two pairs of complementary fields. While there may be “S” sequence number and message count pairs, S is typically a small number, such as, e.g., 2, 3, 4, etc. Thus, header 250 may include sequence number 251-1 paired with message count 252-1, sequence number 251-2 paired with message count 252-2, etc. Generally, message count 252-1 may reflect the number of replies contained within data payload 255 that are associated with sequence number 251-1. In an embodiment, sequence number 251-1 may be a two-byte field, while message count 252-1 may be a one-byte field.
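For illustration, the (sequence number, message count) pairs described above might be packed as in the following C sketch; only the two-byte and one-byte field widths come from the text, and the struct and field names are assumptions.

```c
/*
 * Illustrative packing of a SuperPacket header with S = 2
 * (sequence number, message count) pairs. Only the field widths
 * (two bytes and one byte) come from the text; names are assumed.
 */
#include <stdint.h>

#define SEQ_PAIRS 2

#pragma pack(push, 1)
struct superpacket_header {
    struct {
        uint16_t sequence;   /* e.g., 251-1, 251-2 */
        uint8_t  count;      /* e.g., 252-1, 252-2 */
    } pair[SEQ_PAIRS];
};
#pragma pack(pop)
```

Under the overflow example that follows, such a header could carry the pairs (1024, 2) and (1025, 1) in a single Response SuperPacket.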
• [0036] In a more specific example, data payload 235 of Request SuperPacket 220 may include seven DNS queries (as depicted in FIG. 2). In one embodiment, sequence number 231-1 may be set to a unique value (e.g., 1024) and message count 232-1 may be set to seven, while sequence number 231-2 and message count 232-2 may be set to zero. In another embodiment, header 230 may contain only one sequence number and one message count, e.g., sequence number 231-1 and message count 232-1 set to 1024 and seven, respectively. Typically, Request SuperPacket 220 may contain all of the queries associated with a particular sequence number.
• [0037] Data payload 255 of Response SuperPacket 240 may include seven corresponding DNS replies (as depicted in FIG. 2). In this example, header 250 may include information similar to Request SuperPacket 220, i.e., sequence number 251-1 set to the same unique value (i.e., 1024), message count 252-1 set to seven, and both sequence number 251-2 and message count 252-2 set to zero. However, in another example, data payload 255 of Response SuperPacket 240 may include only five corresponding DNS replies, and message count 252-1 may be set to five instead. The remaining two responses associated with sequence number 1024 may be included within the next Response SuperPacket 240.
• [0038] The next Request SuperPacket 220 may include a different sequence number (e.g., 1025) and at least one DNS query, so that the next Response SuperPacket 240 may include the two previous replies associated with the 1024 sequence number, as well as at least one reply associated with the 1025 sequence number. In this example, header 250 of the next Response SuperPacket 240 may include sequence number 251-1 set to 1024, message count 252-1 set to two, sequence number 251-2 set to 1025 and message count 252-2 set to one. Thus, Response SuperPacket 240 may include a total of three replies associated with three queries contained within two different Request SuperPackets.
• [0039] FIG. 3 is a detailed block diagram that illustrates a message latency data structure architecture, according to an embodiment of the present invention. Message latency data structure 300 may include information generally associated with the transmission and reception of message 200. In the DNS resolution embodiment, message latency data structure 300 may include latency information about Request SuperPackets and Response SuperPackets; this latency information may be organized in a table format indexed according to sequence number value (e.g., index 301). For example, message latency data structure 300 may include a number of rows N equal to the total number of unique sequence numbers, as illustrated, generally, by table elements 310, 320 and 330. In an embodiment, SuperPacket header sequence numbers may be two bytes in length and define a range of unique sequence numbers from zero to 2^16−1 (i.e., 65,535). In this case, N may be equal to 65,536. Latency information may include Request Timestamp 302, Request Query Count 303, Response Timestamp 304, Response Reply Count 305, and Response Message Count 306. In an alternative embodiment, latency information may also include an Initial Response Timestamp (not shown).
• [0040] In an example, table element 320 illustrates latency information for a Request SuperPacket 220 having a single sequence number 231-1 equal to 1024. Request Timestamp 302 may indicate when this particular Request SuperPacket was sent to the LUE. Request Query Count 303 may indicate how many queries were contained within this particular Request SuperPacket. Response Timestamp 304 may indicate when a Response SuperPacket having a sequence number equal to 1024 was received at the PE (e.g., network computer 120-N) and may be updated if more than one Response SuperPacket is received at the PE. Response Reply Count 305 may indicate the total number of replies contained within all of the received Response SuperPackets associated with this sequence number (i.e., 1024). Response Message Count 306 may indicate how many Response SuperPackets having this sequence number (i.e., 1024) arrived at the PE. Replies to the queries contained within this particular Request SuperPacket may be split over several Response SuperPackets, in which case, Response Timestamp 304, Response Reply Count 305, and Response Message Count 306 may be updated as each of the additional Response SuperPackets is received. In an alternative embodiment, the Initial Response Timestamp may indicate when the first Response SuperPacket containing replies for this sequence number (i.e., 1024) was received at the PE. In this embodiment, Response Timestamp 304 may be updated when additional (i.e., second and subsequent) Response SuperPackets are received.
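Assuming a 16-bit sequence number and the columns described above, one row of the latency table might be declared as in this illustrative C sketch; the struct, field names, and timestamp type are assumptions.

```c
/*
 * Sketch of one row of message latency data structure 300, indexed by
 * the 16-bit SuperPacket sequence number (65,536 rows). Field names
 * mirror items 302-306; the struct and type choices are assumptions.
 */
#include <stdint.h>
#include <time.h>

struct latency_row {
    struct timespec request_ts;       /* 302: Request Timestamp      */
    uint32_t        request_queries;  /* 303: Request Query Count    */
    struct timespec response_ts;      /* 304: Response Timestamp     */
    uint32_t        response_replies; /* 305: Response Reply Count   */
    uint32_t        response_packets; /* 306: Response Message Count */
};

static struct latency_row latency_table[65536]; /* index 301 = sequence number */
```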
• [0041] Various important latency metrics may be determined from the latency information contained within message latency data structure 300. For example, simple cross-checking between Request Query Count 303 and Response Reply Count 305 for a given index 301 (i.e., sequence number) may indicate a number of missing replies. This difference may indicate the number of queries inexplicably dropped by the LUE. Comparing Request Timestamp 302 and Response Timestamp 304 may indicate how well the particular PE/LUE combination may be performing under the current message load. The difference between the current Request SuperPacket sequence number and the current Response SuperPacket sequence number may be associated with the response performance of the LUE; e.g., the larger the difference, the slower the performance. The Response Message Count 306 may indicate how many Response SuperPackets are being used for each Request SuperPacket, and may be important in DNS resolution traffic analysis. As the latency of the queries and replies traveling between the PEs and LUE increases, the PEs may reduce the number of DNS query packets processed by the system.
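Continuing the row sketch above, the cross-checks just described reduce to simple arithmetic; these helper functions are illustrative only.

```c
/* Missing (dropped) replies for one sequence number: 303 minus 305. */
static uint32_t missing_replies(const struct latency_row *r) {
    return r->request_queries - r->response_replies;
}

/* Round-trip latency in milliseconds: 304 minus 302. */
static double round_trip_ms(const struct latency_row *r) {
    return (r->response_ts.tv_sec  - r->request_ts.tv_sec)  * 1e3 +
           (r->response_ts.tv_nsec - r->request_ts.tv_nsec) / 1e6;
}
```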
• [0042] Generally, the LUE may perform a multi-threaded look-up on the incoming, multiplexed Request SuperPackets, and may combine the replies into outgoing, multiplexed Response SuperPackets. For example, the LUE may spawn one search thread, or process, for each active PE and route all the incoming Request SuperPackets from that PE to that search thread. The LUE may spawn a manager thread, or process, to control the association of PEs to search threads, as well as an update thread, or process, to update the database located in memory 104. Each search thread may extract the search queries from the incoming Request SuperPacket, execute the various searches, construct an outgoing Response SuperPacket containing the search replies and send the SuperPacket to the appropriate PE. The update thread may receive updates to the database, from OLTP 140-1, and incorporate the new data into the database. In an alternative embodiment, plurality of network computers 120-1 . . . 120-N may send updates to system 100. These updates may be included, for example, within the incoming Request SuperPacket message stream.
• [0043] Accordingly, by virtue of the SuperPacket protocol, the LUE may spend less than 15% of its processor capacity on network processing, thereby dramatically increasing search query throughput. In an embodiment, an IBM® 8-way M80 may sustain search rates of 180 k to 220 k queries per second (qps), while an IBM® 24-way S80 may sustain 400 k to 500 k qps. Doubling the search rates, i.e., to 500 k and 1M qps, respectively, simply requires twice as much hardware, e.g., two LUEs with their attendant PEs. In another embodiment, a dual Pentium® III 866 MHz multi-processor personal computer operating Red Hat Linux® 6.2 may sustain update rates on the order of 100K/sec. Of course, increases in hardware performance also increase search and update rates associated with embodiments of the present invention, and as manufacturers replace these multiprocessor computers with faster-performing machines, for example, the sustained search and update rates may increase commensurately. Generally, system 100 is not limited to a client or server architecture, and embodiments of the present invention are not limited to any specific combination of hardware and/or software.
• [0044] FIG. 4 is a block diagram that illustrates a general database architecture according to an embodiment of the present invention. In this embodiment, database 400 may include at least one table or group of database records 401, and at least one corresponding search index 402 with pointers (indices, direct byte-offsets, etc.) to individual records within the group of database records 401. For example, pointer 405 may reference database record 410.
• [0045] In one embodiment, database 400 may include at least one hash table 403 as a search index with pointers (indices, direct byte-offsets, etc.) into the table or group of database records 401. A hash function may map a search key to an integer value which may then be used as an index into hash table 403. Because more than one search key may map to a single integer value, hash buckets may be created using a singly-linked list of hash chain pointers. For example, each entry within hash table 403 may contain a pointer to the first element of a hash bucket, and each element of the hash bucket may contain a hash chain pointer to the next element, or database record, in the linked-list. Advantageously, a hash chain pointer may be required only for those elements, or database records, that reference a subsequent element in the hash bucket.
• [0046] Hash table 403 may include an array of 8-byte pointers to individual database records 401. For example, hash pointer 404 within hash table 403 may reference database record 420 as the first element within a hash bucket. Database record 420 may contain a hash chain pointer 424 which may reference the next element, or database record, in the hash bucket. Database record 420 may also include a data length 421, and associated fixed or variable-length data 422. In an embodiment, a null character 423, indicating the termination of data 422, may be included. Additionally, database record 420 may include a data pointer 425 which may reference another database record, either within the group of database records 401 or within a different table or group of database records (not shown), in which additional data may be located.
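The bucket walk described in the preceding two paragraphs might look like the following C sketch; the hash function, table size, and the convention that a record's data begins with its null-terminated key text are illustrative assumptions.

```c
/*
 * Sketch of the hash table 403 lookup: hash the key to a bucket, then
 * walk the singly linked hash chain. Layout and hash function are
 * illustrative; the data (422) is assumed to hold the key text,
 * terminated by a null character (423).
 */
#include <stdint.h>
#include <string.h>
#include <stddef.h>

struct record {
    struct record *chain;     /* hash chain pointer (424), NULL at end */
    uint32_t       data_len;  /* data length (421)                     */
    char           data[];    /* variable-length data (422) + null     */
};

#define BUCKETS 4096
static struct record *hash_table[BUCKETS];  /* 8-byte pointers (403) */

static size_t hash_key(const char *key) {
    size_t h = 5381;                        /* djb2-style, for illustration */
    while (*key) h = h * 33 + (unsigned char)*key++;
    return h % BUCKETS;
}

static struct record *lookup(const char *key) {
    for (struct record *r = hash_table[hash_key(key)]; r; r = r->chain)
        if (strcmp(r->data, key) == 0)      /* match within the bucket */
            return r;
    return NULL;                            /* search miss             */
}
```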
• [0047] System 100 may use various, well-known algorithms to search this data structure architecture for a given search term or key. Generally, database 400 may be searched by multiple search processes, or threads, executing on at least one of the plurality of processors 102-1 . . . 102-P. However, modifications to database 400 may not be performed with integrity by an update thread (or threads) unless the search thread(s) are prevented from accessing database 400 for the period of time necessary to add, modify, or delete information within database 400. For example, in order to modify database record 430 within database 400, the group of database records 401 may be locked by an update thread to prevent the search threads from accessing database 400 while the update thread is modifying the information within database record 430. There are many well-known mechanisms for locking database 400 to prevent search access, including the use of spin-locks, semaphores, mutexes, etc. Additionally, various off-the-shelf commercial databases provide specific commands to lock all or parts of database 400, e.g., the lock table command in the Oracle 8 Database, manufactured by Oracle Corporation of Redwood Shores, Calif., etc.
• [0048] FIG. 5 is a block diagram that illustrates a general database architecture according to another embodiment of the present invention. In this embodiment, database 500 may include a highly-optimized, read-only, master snapshot file 510 and a growing, look-aside file 520. Master snapshot file 510 may include at least one table or group of database records 511, and at least one corresponding search index 512 with pointers (indices, direct byte-offsets, etc.) to individual records within the group of database records 511. Alternatively, master snapshot file 510 may include at least one hash table 513 as a search index with pointers (indices, direct byte-offsets, etc.) into the table or group of database records 511. Similarly, look-aside file 520 may include at least two tables or groups of database records, including database addition records 521 and database deletion records 531. Corresponding search indices 522 and 532 may be provided, with pointers (indices, direct byte-offsets, etc.) to individual records within the database addition records 521 and database deletion records 531. Alternatively, look-aside file 520 may include hash tables 523 and 533 as search indices, with pointers (indices, direct byte-offsets, etc.) into database addition records 521 and database deletion records 531, respectively.
• [0049] System 100 may use various, well-known algorithms to search this data structure architecture for a given search term or key. In a typical example, look-aside file 520 may include all the recent changes to the data, and may be searched before read-only master snapshot file 510. If the search key is found in look-aside file 520, the response is returned without accessing snapshot file 510, but if the key is not found, then snapshot file 510 may be searched. However, when look-aside file 520 no longer fits in memory 104 with snapshot file 510, search query rates drop dramatically, by a factor of 10 to 50, or more, for example. Consequently, to avoid or minimize any drop in search query rates, snapshot file 510 may be periodically updated, or recreated, by incorporating all of the additions, deletions and modifications contained within look-aside file 520.
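As a sketch of this two-stage search order, the following C fragment consults the look-aside indices before falling back to the snapshot; the three lookup helpers are assumed stubs, and the deletion check reflects the logical deletes described in the next paragraph.

```c
/*
 * Sketch of the look-aside-first search order. The three helpers are
 * assumed stubs standing in for searches of the addition index
 * (e.g., 523), the deletion index (e.g., 533), and the snapshot
 * index (e.g., 513).
 */
#include <stddef.h>

struct record;

static struct record *lookup_additions(const char *key) { (void)key; return NULL; }
static int            key_deleted(const char *key)      { (void)key; return 0; }
static struct record *lookup_snapshot(const char *key)  { (void)key; return NULL; }

struct record *search(const char *key) {
    struct record *r = lookup_additions(key);  /* recent adds/changes   */
    if (r != NULL)
        return r;                              /* hit: snapshot skipped */
    if (key_deleted(key))
        return NULL;                           /* logically forgotten   */
    return lookup_snapshot(key);               /* fall back to snapshot */
}
```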
• [0050] Data within snapshot file 510 are not physically altered but logically added, modified or deleted. For example, data within snapshot file 510 may be deleted, or logically “forgotten,” by creating a corresponding delete record within database deletion records 531 and writing a pointer to the delete record to the appropriate location in hash table 533. Data within snapshot file 510 may be logically modified by copying a data record from snapshot file 510 to a new data record within database addition records 521, modifying the data within the new entry, and then writing a pointer to the new entry to the appropriate hash table (e.g., hash table 523) or chain pointer within database addition records 521. Similarly, data within snapshot file 510 may be logically added to snapshot file 510 by creating a new data record within database addition records 521 and then writing a pointer to the new entry to the appropriate hash table (e.g., hash table 523) or chain pointer within database addition records 521.
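The logical modification just described amounts to copy, change, then one pointer write; this C sketch assumes a malloc-based stand-in for allocating within database addition records 521 and a pointer-sized slot in the appropriate hash table.

```c
/*
 * Sketch of logically modifying a snapshot record: copy it into
 * database addition records 521, alter the copy, then publish it by
 * writing one pointer into the appropriate hash table slot. The
 * allocator and record layout are assumptions.
 */
#include <stdlib.h>
#include <string.h>

struct record {
    struct record *chain;    /* hash chain pointer   */
    size_t         len;      /* data length          */
    char           data[];   /* variable-length data */
};

/* Stand-in for allocating a new entry within addition records 521. */
static struct record *alloc_addition_record(size_t len) {
    return malloc(sizeof(struct record) + len);
}

void logically_modify(struct record **slot, const struct record *old,
                      const void *new_data, size_t new_len) {
    struct record *copy = alloc_addition_record(new_len);
    memcpy(copy->data, new_data, new_len);  /* modified contents       */
    copy->len = new_len;
    copy->chain = old->chain;               /* preserve the hash chain */
    *slot = copy;                           /* single pointer write    */
}
```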
• [0051] In the DNS resolution embodiment, for example, snapshot file 510 may include domain name data and name server data, organized as separate data tables, or blocks, with separate search indices (e.g., 511-1, 511-2, 512-1, 512-2, 513-1, 513-2, etc., not shown for clarity). Similarly, look-aside file 520 may include additions and modifications to both the domain name data and the name server data, as well as deletions to both the domain name data and the name server data (e.g., 521-1, 521-2, 522-1, 522-2, 523-1, 523-2, 531-1, 531-2, 532-1, 532-2, 533-1, 533-2, etc., not shown for clarity).
• [0052] FIG. 6 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention. Generally, database 600 may be organized into a single, searchable representation of the data. Data set updates may be continuously incorporated into database 600, and deletes or modifications may be physically performed on the relevant database records to free space within memory 104, for example, for subsequent additions or modifications. The single, searchable representation scales extremely well to large data set sizes and high search and update rates, and obviates the need to periodically recreate, propagate and reload snapshot files among multiple search engine computers.
• [0053] In a DNS resolution embodiment, for example, database 600 may include domain name data 610 and name server data 630. Domain name data 610 and name server data 630 may include search indices with pointers (indices, direct byte-offsets, etc.) into blocks of variable length records. As discussed above, a hash function may map a search key to an integer value which may then be used as an index into a hash table. Similarly, hash buckets may be created for each hash table index using a singly-linked list of hash chain pointers. Domain name data 610 may include, for example, a hash table 612 as a search index and a block of variable-length domain name records 611. Hash table 612 may include an array of 8-byte pointers to individual domain name records 611, such as, for example, pointer 613 referencing domain name record 620. Variable-length domain name record 620 may include, for example, a next record offset 621, a name length 622, a normalized name 623, a chain pointer 624 (e.g., pointing to the next record in the hash chain), a number of name servers 625, and a name server pointer 626. The size of both chain pointer 624 and name server pointer 626 may be optimized to reflect the required block size for each particular type of data, e.g., eight bytes for chain pointer 624 and four bytes for name server pointer 626.
• [0054] Name server data 630 may include, for example, a hash table 632 as a search index and a block of variable-length name server records 631. Hash table 632 may include an array of 4-byte pointers to individual name server records 631, such as, for example, pointer 633 referencing name server record 640. Variable-length name server record 640 may include, for example, a next record offset 641, a name length 642, a normalized name 643, a chain pointer 644 (e.g., pointing to the next record in the hash chain), a number of name server network addresses 645, a name server address length 646, and a name server network address 647, which may be, for example, an Internet Protocol (IP) network address. Generally, name server network addresses may be stored in ASCII (American Standard Code for Information Interchange, e.g., ISO-14962-1997, ANSI-X3.4-1997, etc.) or binary format; in this example, name server network address length 646 indicates that name server network address 647 is stored in binary format (i.e., four bytes). The size of chain pointer 644 may also be optimized to reflect the required name server data block size, e.g., four bytes.
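For concreteness, the field order of these two record types might be declared as below. Because the normalized name is variable-length, real records embed it where marked, so these fixed C structs document only the field order and the 8-byte versus 4-byte pointer widths given in the text.

```c
/*
 * Illustrative field order for the FIG. 6 records. In the actual
 * variable-length records the normalized name (623/643) is embedded
 * where marked; these fixed structs only document widths and order.
 */
#include <stdint.h>

struct domain_name_record {      /* e.g., record 620 */
    uint32_t next_record_offset; /* 621 */
    uint16_t name_length;        /* 622 */
    /* normalized name 623 (variable length) goes here */
    uint64_t chain_ptr;          /* 624: 8-byte hash chain pointer    */
    uint16_t name_server_count;  /* 625 */
    uint32_t name_server_ptr;    /* 626: 4-byte pointer into NS block */
};

struct name_server_record {      /* e.g., record 640 */
    uint32_t next_record_offset; /* 641 */
    uint16_t name_length;        /* 642 */
    /* normalized name 643 (variable length) goes here */
    uint32_t chain_ptr;          /* 644: 4-byte hash chain pointer */
    uint16_t address_count;      /* 645 */
    uint16_t address_length;     /* 646: 4 => binary IPv4          */
    uint8_t  address[4];         /* 647: binary network address    */
};
```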
• [0055] Generally, both search indices, such as hash tables, and variable-length data records may be structured so that 8-byte pointers are located on 8-byte boundaries in memory. For example, hash table 612 may contain a contiguous array of 8-byte pointers to domain name records 611, and may be stored at a memory address divisible by eight (i.e., an 8-byte boundary, or 8N). Similarly, both search indices, such as hash tables, and variable-length data records may be structured so that 4-byte pointers are located on 4-byte boundaries in memory. For example, hash table 632 may contain a contiguous array of 4-byte pointers to name server records 631, and may be stored at a memory address divisible by four (i.e., a 4-byte boundary, or 4N). Consequently, modifications to database 600 may conclude by updating a pointer to an aligned address in memory using a single uninterruptible operation, including, for example, writing a new pointer to the search index, such as a hash table, or writing a new hash chain pointer to a variable-length data record.
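The commit step relies on an aligned pointer store being a single uninterruptible operation. Expressed with C11 atomics (a modern idiom used here purely for illustration, not the patent's mechanism), it might look like this:

```c
/*
 * Sketch of the lock-free commit: build the new record fully, then
 * publish it with one aligned pointer store. Concurrent searchers see
 * either the old pointer or the new one, never a torn value. C11
 * atomics are used only to make the single-store intent explicit.
 */
#include <stdatomic.h>

struct record;

/* One aligned slot in a search index: a hash table entry or a hash
 * chain pointer inside a record. */
static _Atomic(struct record *) slot;

void commit(struct record *new_rec) {
    /* new_rec is fully constructed before this single aligned store */
    atomic_store_explicit(&slot, new_rec, memory_order_release);
}
```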
• [0056] FIG. 7 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention. Generally, database 700 may also be organized into a single, searchable representation of the data. Data set updates may be continuously incorporated into database 700, and deletes or modifications may be physically performed on the relevant database records to free space within memory 104, for example, for subsequent additions or modifications. The single, searchable representation scales extremely well to large data set sizes and high search and update rates, and obviates the need to periodically recreate, propagate and reload snapshot files among multiple search engine computers.
• [0057] Many different physical data structure organizations are possible. An exemplary organization may use an alternative search index to hash tables for ordered, sequential access to the data records, such as the ternary search tree (trie), or TST, which combines the features of binary search trees and digital search tries. In text-based applications, such as, for example, whois, domain name resolution using DNS Secure Extensions (Internet Engineering Taskforce Request for Comments: 2535), etc., TSTs advantageously minimize the number of comparison operations required to be performed, particularly in the case of a search miss, and may yield search performance metrics exceeding search engine implementations with hashing. Additionally, TSTs may also provide advanced text search features, such as, e.g., wildcard searches, which may be useful in text search applications, such as, for example, whois, domain name resolution, Internet content search, etc.
  • In an embodiment, a TST may contain a sequence of nodes linked together in a hierarchical relationship. A root node may be located at the top of the tree, related child nodes and links may form branches, and leaf nodes may terminate the end of each branch. Each leaf node may be associated with a particular search key, and each node on the path to the leaf node may contain a single, sequential element of the key. Each node in the tree contains a comparison character, or split value, and three pointers to other successive, or “child,” nodes in the tree. These pointers reference child nodes whose split values are less than, equal to, or greater than the node's split value. Searching the TST for a particular key, therefore, involves traversing the tree from the root node to a final leaf node, sequentially comparing each element, or character position, of the key with the split values of the nodes along the path. Additionally, a leaf node may also contain a pointer to a key record, which may, in turn, contain at least one pointer to a terminal data record containing the record data associated with the key (e.g., an IP address). Alternatively, the key record may contain the record data in its entirety. Record data may be stored in binary format, ASCII text format, etc. [0058]
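A generic TST search, following the traversal just described, might look like the following C sketch; the plain-pointer node layout is a simplification of the packed 24-byte nodes discussed next, and keys are assumed to be non-empty, null-terminated strings.

```c
/*
 * Generic TST search: at each node, compare the current key character
 * with the split value and follow the lo/eq/hi branch; the eq branch
 * advances to the next character. Node layout is a plain-pointer
 * simplification of the packed nodes described below.
 */
#include <stddef.h>

struct tst_node {
    char split;                       /* comparison character   */
    struct tst_node *lo, *eq, *hi;    /* less / equal / greater */
    void *key_record;                 /* non-NULL at a key's end */
};

void *tst_search(const struct tst_node *n, const char *key) {
    while (n != NULL) {
        if (*key < n->split)          n = n->lo;
        else if (*key > n->split)     n = n->hi;
        else {                        /* matched this character */
            if (key[1] == '\0')
                return n->key_record; /* end of key: hit or miss */
            key++;
            n = n->eq;
        }
    }
    return NULL;                      /* search miss */
}
```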
• [0059] In an embodiment, database 700 may be organized as a TST, including a plurality of fixed-length search nodes 701, a plurality of variable-length key data records 702 and a plurality of variable-length terminal data records 703. Search nodes 701 may include various types of information as described above, including, for example, a comparison character (or value) and position, branch node pointers and a key pointer. The size of the node pointers may generally be determined by the number of nodes, while the size of the key pointers may generally be determined by the size of the variable-length key data set. Key data records 702 may contain key information and terminal data information, including, for example, pointers to terminal data records or embedded record data, while terminal data records 703 may contain record data.
• [0060] In an embodiment, each fixed-length search node may be 24 bytes in length. Search node 710, for example, may contain an eight-bit comparison character (or byte value) 711, a 12-bit character (or byte) position 712, and a 12-bit node type/status (not shown for clarity); these data may be encoded within the first four bytes of the node. The comparison character 711 may be encoded within the first byte of the node as depicted in FIG. 7, or, alternatively, character position 712 may be encoded within the first 12 bits of the node in order to optimize access to character position 712 using a simple shift operation. The next 12 bytes of each search node may contain three 32-bit pointers, i.e., pointer 713, pointer 714 and pointer 715, representing “less than,” “equal to,” and “greater than” branch node pointers, respectively. These pointers may contain a counter, or node index, rather than a byte-offset or memory address. For fixed-length search nodes, the byte-offset may be calculated from the counter, or index value, and the fixed length, e.g., counter*length. The final eight bytes may contain a 40-bit key pointer 716, which may be a null value indicating that a corresponding key data record does not exist (shown) or a pointer to an existing corresponding key data record (not shown), as well as other data, including, for example, a 12-bit key length and a 12-bit pointer type/status field. Key pointer 716 may contain a byte offset to the appropriate key data record, while the key length may be used to optimize search and insertion when eliminating one-way branching within the TST. The pointer type/status field may contain information used in validity checking and allocation data used in memory management.
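One possible C packing of this 24-byte node is sketched below; the bit positions within each word are assumptions, and only the field widths come from the text.

```c
/*
 * One possible packing of the 24-byte search node: an 8-bit comparison
 * character, 12-bit position and 12-bit type/status in the first word;
 * three 32-bit node indices; then a 40-bit key offset with a 12-bit
 * key length and 12-bit type/status. Bit order is an assumption.
 */
#include <stdint.h>

struct tst_packed_node {
    uint32_t cmp_pos_status;  /* 8-bit char | 12-bit position | 12-bit status  */
    uint32_t lo, eq, hi;      /* node indices, not byte offsets                */
    uint64_t key_word;        /* 40-bit key offset | 12-bit len | 12-bit status */
};                            /* total: 4 + 12 + 8 = 24 bytes                  */

/* Field accessors for the packed words (bit layout is illustrative). */
static inline uint8_t  node_char(const struct tst_packed_node *n)
    { return (uint8_t)(n->cmp_pos_status & 0xff); }
static inline uint16_t node_pos(const struct tst_packed_node *n)
    { return (uint16_t)((n->cmp_pos_status >> 8) & 0xfff); }
static inline uint64_t node_key_offset(const struct tst_packed_node *n)
    { return n->key_word & ((1ULL << 40) - 1); }

/* Node index -> byte offset, as the text describes: counter * length. */
static inline uint64_t node_offset(uint32_t index) { return (uint64_t)index * 24; }
```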
• [0061] In an embodiment, key data record 750 may include, for example, a variable-length key 753 and at least one terminal data pointer. As depicted in FIG. 7, key data record 750 includes two terminal data pointers: terminal data pointer 757 and terminal data pointer 758. Key data record 750 may be prefixed with a 12-bit key length 751 and a 12-bit terminal pointer count/status 752, and may include padding (not shown for clarity) to align the terminal data pointer 757 and terminal data pointer 758 on an 8-byte boundary in memory 104. Terminal data pointer 757 and terminal data pointer 758 may each contain various data, such as, for example, terminal data type, length, status or data useful in binary record searches. Terminal data pointer 757 and terminal data pointer 758 may be sorted by terminal data type for quicker retrieval of specific resource records (e.g., terminal data record 760 and terminal data record 770). In another embodiment, key data record 740 may include embedded terminal data 746 rather than, or in addition to, terminal data record pointers. For example, key data record 740 may include a key length 741, a terminal pointer count 742, a variable-length key 743, the number of embedded record elements 744, followed by a record element length 745 (in bytes, for example) and embedded record data 746 (e.g., a string, a byte sequence, etc.) for each of the number of embedded record elements 744.
• [0062] In an embodiment, terminal data record 760, for example, may include a 12-bit length 761, a 4-bit status, and a variable-length string 762 (e.g., an IP address). Alternatively, variable length string 762 may be a byte sequence. Terminal data record 760 may include padding to align each terminal data record to an 8-byte boundary in memory 104. Alternatively, terminal data record 760 may include padding to a 4-byte boundary, or, terminal data record 760 may not include any padding. Memory management algorithms may determine, generally, whether terminal data records 760 are padded to 8-byte, 4-byte, or 0-byte boundaries. Similarly, terminal data record 770 may include a 12-bit length 771, a 4-bit status, and a variable-length string 772 (e.g., an IP address).
• [0063] Generally, both search indices, such as TSTs, and data records may be structured so that 8-byte pointers are located on 8-byte boundaries in memory. For example, key pointer 726 may contain an 8-byte (or less) pointer to key data record 740, and may be stored at a memory address divisible by eight (i.e., an 8-byte boundary, or 8N). Similarly, both search indices, such as TSTs, and data records may be structured so that 4-byte pointers are located on 4-byte boundaries in memory. For example, node branch pointer 724 may contain a 4-byte (or less) pointer to node 730, and may be stored at a memory address divisible by four (i.e., a 4-byte boundary, or 4N). Consequently, modifications to database 700 may conclude by updating a pointer to an aligned address in memory using a single uninterruptible operation, including, for example, writing a new pointer to the search index, such as a TST node, or writing a new pointer to a data record.
• [0064] FIG. 8 is a detailed block diagram that illustrates a general database architecture, according to an embodiment of the present invention. As above, database 800 may also be organized into a single, searchable representation of the data. Data set updates may be continuously incorporated into database 800, and deletes or modifications may be physically performed on the relevant database records to free space within memory 104, for example, for subsequent additions or modifications. The single, searchable representation scales extremely well to large data set sizes and high search and update rates, and obviates the need to periodically recreate, propagate and reload snapshot files among multiple search engine computers.
• [0065] Other search index structures are possible for accessing record data. In an embodiment, database 800 may use an alternative ordered search index, organized as an ordered access key tree (i.e., “OAK tree”). Database 800 may include, for example, a plurality of variable-length search nodes 801, a plurality of variable-length key records 802 and a plurality of variable-length terminal data records 803. Search nodes 801 may include various types of information as described above, such as, for example, search keys, pointers to other search nodes, pointers to key records, etc. In an embodiment, plurality of search nodes 801 may include vertical and horizontal nodes containing fragments of search keys (e.g., strings), as well as pointers to other search nodes or key records. Vertical nodes may include, for example, at least one search key, or character, pointers to horizontal nodes within the plurality of search nodes 801, pointers to key records within the plurality of key records 802, etc. Horizontal nodes may include, for example, at least two search keys, or characters, pointers to vertical nodes within the plurality of search nodes 801, pointers to horizontal nodes within the plurality of search nodes 801, pointers to key records within the plurality of key records 802, etc. Generally, vertical nodes may include a sequence of keys (e.g., characters) representing a search key fragment (e.g., string), while horizontal nodes may include various keys (e.g., characters) that may exist at a particular position within the search key fragment (e.g., string).
• [0066] In an embodiment, plurality of search nodes 801 may include vertical node 810, vertical node 820 and horizontal node 830. Vertical node 810 may include, for example, a 2-bit node type 811 (e.g., “10”), a 38-bit address 812, an 8-bit length 813 (e.g., “8”), an 8-bit first character 814 (e.g., “l”) and an 8-bit second character 815 (e.g., “null”). In this example, address 812 may point to the next node in the search tree, i.e., vertical node 820. In an embodiment, 38-bit address 812 may include a 1-bit terminal/nodal indicator and a 37-bit offset address to reference one of the 8-byte words within a 1 Tbyte (~10^12 byte) address space of memory 104. Accordingly, vertical node 810 may be eight bytes (64 bits) in length, and, advantageously, may be located on an 8-byte word boundary within memory 104. Generally, each vertical node within plurality of search nodes 801 may be located on an 8-byte word boundary within memory 104.
• [0067] A vertical node may include a multi-character search key fragment (e.g., string). Generally, search keys without associated key data records may be collapsed into a single vertical node to effectively reduce the number of vertical nodes required within plurality of search nodes 801. In an embodiment, vertical node 810 may include eight bits for each additional character, above two characters, within the search key fragment, such as, for example, 8-bit characters 816-1, 816-2 . . . 816-N (shown in phantom outline). Advantageously, vertical node 810 may be padded to a 64-bit boundary within memory 104 in accordance with the number of additional characters located within the string fragment. For example, if nine characters are to be included within vertical node 810, then characters one and two may be assigned to first character 814 and second character 815, respectively, and 56 bits of additional character information, corresponding to characters three through nine, may be appended to vertical node 810. An additional eight bits of padding may be included to align the additional character information on an 8-byte word boundary.
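As an illustration of this packing, the following C helper assembles an 8-byte vertical node from its fields; the bit positions are assumed, and only the widths (2 + 38 + 8 + 8 + 8 = 64 bits) come from the text.

```c
/*
 * Sketch of an 8-byte OAK-tree vertical node: 2-bit type ("10"),
 * 38-bit address, 8-bit length, and two 8-bit characters packed into
 * one 64-bit word on an 8-byte boundary. Bit positions are an
 * assumption; only the field widths come from the description.
 */
#include <stdint.h>

static inline uint64_t make_vertical(uint64_t addr38, uint8_t len,
                                     uint8_t c1, uint8_t c2) {
    return ((uint64_t)0x2 << 62)                   /* node type "10"  */
         | ((addr38 & ((1ULL << 38) - 1)) << 24)   /* 38-bit address  */
         | ((uint64_t)len << 16)                   /* 8-bit length    */
         | ((uint64_t)c1  << 8)                    /* first character */
         | (uint64_t)c2;                           /* second character */
}
```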
• [0068] Similarly, vertical node 820 may include, for example, a 2-bit node type 821 (e.g., “10”), a 38-bit address 822, an 8-bit length 823 (e.g., “8”), an 8-bit first character 824 (e.g., “a”) and an 8-bit second character 825 (e.g., “null”). In this example, address 822 may point to the next node in the search tree, i.e., horizontal node 830. Accordingly, vertical node 820 may be eight bytes in length, and, advantageously, may be located on an 8-byte word boundary within memory 104. Of course, additional information may also be included within vertical node 820 if required, as described above with reference to vertical node 810.
• [0069] Horizontal node 830 may include, for example, a 2-bit node type 831 (e.g., “01”), a 38-bit first address 832, an 8-bit address count 833 (e.g., 2), an 8-bit first character 834 (e.g., “•”), an 8-bit last character 835 (e.g., “w”), a variable-length bitmap 836 and a 38-bit second address 837. In this example, first character 834 may include a single character, “•”, representing the search key fragment “la” defined by vertical nodes 810 and 820, while last character 835 may include a single character, “w,” representing the search key fragment “law” defined by vertical nodes 810 and 820, and the last character 835 of horizontal node 830. First address 832 may point to key data record 840, associated with the search key fragment “la,” while second address 837 may point to key data record 850 associated with the search key fragment “law.”
• [0070] Bitmap 836 may advantageously indicate which keys (e.g., characters) are referenced by horizontal node 830. A “1” within a bit position in bitmap 836 indicates that the key, or character, is referenced by horizontal node 830, while a “0” within a bit position in bitmap 836 may indicate that the key, or character, is not referenced by horizontal node 830. Generally, the length of bitmap 836 may depend upon the number of sequential keys, or characters, between first character 834 and last character 835, inclusive of these boundary characters. For example, if first character 834 is “a” and last character 835 is “z,” then bitmap 836 may be 26 bits in length, where each bit corresponds to one of the characters between, and including, “a” through “z.” In this example, additional 38-bit addresses would be appended to the end of horizontal node 830, corresponding to each of the characters represented within bitmap 836. Each of these 38-bit addresses, as well as bitmap 836, may be padded to align each quantity on an 8-byte word boundary within memory 104. In an embodiment, the eight-bit ASCII character set may be used as the search key space so that bitmap 836 may be as long as 256 bits (i.e., 2^8 bits, or 32 bytes). In the example depicted in FIG. 8, due to the special reference character “•” and address count 833 of “2,” bitmap 836 may be two bits in length and may include a “1” in each bit position corresponding to last character 835.
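The bitmap test described here might be implemented as in the following C sketch; the bit ordering and the convention that the Nth set bit selects the Nth appended address are assumptions made for illustration.

```c
/*
 * Sketch of the horizontal-node bitmap check: the bitmap covers the
 * characters between first and last, inclusive, and a set bit means
 * the node carries an appended address for that character.
 */
#include <stdint.h>

static int bitmap_has(const uint8_t *bitmap, uint8_t first, uint8_t last,
                      uint8_t c) {
    if (c < first || c > last)
        return 0;                          /* outside the covered range */
    unsigned bit = c - first;              /* position within the range */
    return (bitmap[bit / 8] >> (bit % 8)) & 1;
}

/* Assumed convention: the address for character c is the Nth appended
 * address, where N counts the set bits before c's position. */
static unsigned addr_index(const uint8_t *bitmap, uint8_t first, uint8_t c) {
    unsigned idx = 0;
    for (unsigned bit = 0; bit < (unsigned)(c - first); bit++)
        idx += (bitmap[bit / 8] >> (bit % 8)) & 1;
    return idx;
}
```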
• [0071] In an embodiment, and as discussed with reference to key data record 750 (FIG. 7), key data record 850 may include, for example, a variable-length key 853 and at least one terminal data pointer. As depicted in FIG. 8, key data record 850 includes two terminal data pointers, terminal data pointer 857 and terminal data pointer 858. Key data record 850 may be prefixed with a 12-bit key length 851 and a 12-bit terminal pointer count/status 852, and may include padding (not shown for clarity) to align the terminal data pointer 857 and terminal data pointer 858 on an 8-byte boundary in memory 104. Terminal data pointer 857 and terminal data pointer 858 may each contain a 10-bit terminal data type and other data, such as, for example, length, status or data useful in binary record searches. Terminal data pointer 857 and terminal data pointer 858 may be sorted by terminal data type for quicker retrieval of specific resource records (e.g., terminal data record 860 and terminal data record 870).
• [0072] In another embodiment, and as discussed with reference to key data record 740 (FIG. 7), key data record 840 may include embedded terminal data 846 rather than a terminal data record pointer. For example, key data record 840 may include a key length 841, a terminal pointer count 842, a variable-length key 843, the number of embedded record elements 844, followed by a record element length 845 (in bytes, for example) and embedded record data 846 (e.g., a string, a byte sequence, etc.) for each of the number of embedded record elements 844.
• [0073] In another embodiment, and as discussed with reference to terminal data record 760 (FIG. 7), terminal data record 860, for example, may include a 12-bit length 861, a 4-bit status, and a variable-length string 862 (e.g., an IP address). Alternatively, variable length string 862 may be a byte sequence. Terminal data record 860 may include padding (not shown for clarity) to align each terminal data record to an 8-byte boundary in memory 104. Alternatively, terminal data record 860 may include padding (not shown for clarity) to a 4-byte boundary, or, terminal data record 860 may not include any padding. Memory management algorithms may determine, generally, whether terminal data records 860 are padded to 8-byte, 4-byte, or 0-byte boundaries. Similarly, terminal data record 870 may include a 12-bit length 871, a 4-bit status, and a variable-length string 872 (e.g., an IP address).
• [0074] Generally, both search indices, such as OAK trees, and data records may be structured so that 8-byte pointers are located on 8-byte boundaries in memory. For example, vertical node 810 may contain an 8-byte (or less) pointer to vertical node 820, and may be stored at a memory address divisible by eight (i.e., an 8-byte boundary, or 8N). Similarly, both search indices, such as OAK trees, and data records may be structured so that 4-byte pointers are located on 4-byte boundaries in memory. Consequently, modifications to database 800 may conclude by updating a pointer to an aligned address in memory using a single uninterruptible operation, including, for example, writing a new pointer to the search index, such as an OAK tree node, or writing a new pointer to a data record.
• [0075] The various embodiments discussed above with reference to FIG. 8 present many advantages. For example, an OAK tree data structure is extremely space efficient and 8-bit clean. Regular expression searches may be used to search vertical nodes containing multi-character string fragments, since the 8-bit first character (e.g., first character 814), the 8-bit second character (e.g., second character 815) and any additional 8-bit characters (e.g., additional characters 816-1 . . . 816-N) may be contiguously located within the vertical node (e.g., vertical node 810). Search misses may be discovered quickly, and no more than N nodes may need to be traversed to search for an N-character length search string.
  • FIG. 9 is a top level flow diagram that illustrates a method for searching and concurrently updating a database without the use of operating system or database table locks, according to embodiments of the present invention. [0076]
• [0077] An update thread and a plurality of search threads may be created (900). In an embodiment, system 100 may spawn a single update thread to incorporate updates to the local database received, for example, from OLTP server 140-1 over WAN 124. In other embodiments, system 100 may receive updates from OLTP servers 140-1 . . . 140-S over WAN 124, and from plurality of network computers 120-1 . . . 120-N over WAN 124 or LAN 122. System 100 may also spawn a search thread in response to each session request received from the plurality of network computers 120-1 . . . 120-N. For example, a manager thread may poll one or more control ports, associated with one or more network interfaces 114-1 . . . 114-0, for session requests transmitted from the plurality of network computers 120-1 . . . 120-N. Once a session request from a particular network computer 120-1 . . . 120-N is received, the manager thread may spawn a search thread and associate the search thread with that particular network computer (e.g., PE).
• [0078] In an alternative embodiment, system 100 may spawn a number of search threads without polling for session requests from the plurality of network computers 120-1 . . . 120-N. In this embodiment, the search threads may not be associated with particular network computers and may be distributed evenly among the plurality of processors 102-1 . . . 102-P. Alternatively, the search threads may execute on a subset of the plurality of processors 102-1 . . . 102-P. The number of search threads may not necessarily match the number of network computers (e.g., N).
• [0079] A plurality of search queries may be received (910) over the network. In an embodiment, plurality of network computers 120-1 . . . 120-N may send the plurality of search queries to system 100 over LAN 122, or, alternatively, WAN 124. The plurality of search queries may contain, for example, a search term or key, as well as state information that may be associated with each query (e.g., query source address, protocol type, etc.). State information may be explicitly maintained by system 100, or, alternatively, a state information handle may be provided. In a preferred embodiment, each of the plurality of network computers 120-1 . . . 120-N may multiplex a predetermined number of search queries into a single network packet for transmission to system 100 (e.g., a Request SuperPacket 220 as depicted in FIG. 2).
• [0080] In an alternative embodiment, a plurality of search queries and the new information may be received (910, 960) concurrently over the network. For example, plurality of network computers 120-1 . . . 120-N may send the plurality of search queries and the new information to system 100 over LAN 122, or, alternatively, WAN 124. The plurality of search queries may contain, for example, a search term or key, as well as state information that may be associated with each query (e.g., query source address, protocol type, etc.). The new information may include, for example, additions, modifications or deletions to the database, and may be grouped together as a transaction with an associated identifier. For example, in an embodiment, each of the plurality of network computers 120-1 . . . 120-N may multiplex a predetermined number of search queries and new information into a single network packet for transmission to system 100, such as, for example, a single Request SuperPacket 220 (new information not depicted for clarity). For those queries that depend upon new information within the transaction, the state information associated with those queries may include the transaction identifier, and, typically, may be maintained by system 100. When the update thread applies the transaction to the database (e.g., an ongoing transaction), search queries that depend upon the transaction will pend until the update thread successfully completes and commits the transaction.
• [0081] Each search query may be assigned (920) to one of the search threads for processing. In an embodiment, each search thread may be associated with one of the plurality of network computers 120-1 . . . 120-N and all of the search queries received from that particular network computer may be assigned (920) to the search thread. In other words, one search thread may process all of the search queries arriving from a single network computer (e.g., a single PE). In an embodiment, each search thread may extract individual search queries from a single, multiplexed network packet (e.g., Request SuperPacket 220 as depicted in FIG. 2), or, alternatively, the extraction may be performed by a different process or thread.
• [0082] In another embodiment, the search queries received from each of the plurality of network computers 120-1 . . . 120-N may be assigned (920) to different search threads. In this embodiment, the multi-thread assignment may be based on an optimal distribution function which may incorporate various system parameters including, for example, processor loading. Of course, the assignment of search queries to search threads may change over time, based upon various system parameters, including processor availability, system component performance, etc. Various mechanisms may be used to convey search queries to assigned search threads within system 100, such as, for example, shared memory, inter-process messages, tokens, semaphores, etc.
• [0083] Each search thread may search (930) the database based on the assigned search queries. In an embodiment, each search thread may extract individual search queries from a single, multiplexed network packet (e.g., Request SuperPacket 220 as depicted in FIG. 2), or, alternatively, the extraction may be performed by a different process or thread. Clearly, searching the database may depend upon the underlying structure of the database. In an embodiment, searching the database may depend upon the modifications contained within a particular transaction for those search queries dependent upon the transaction.
• [0084] Referring to the database embodiment illustrated in FIG. 4, database 400 may be searched (930) for the search key. The data record (e.g., database record 420) corresponding to the search key may then be determined. Referring to the database embodiment illustrated in FIG. 5, look-aside file 520 may first be searched (930) for the search key, and, if a match is not determined, then snapshot file 510 may be searched (930). The data record corresponding to the search key may then be determined.
• [0085] Referring to the database embodiment illustrated in FIG. 6, domain name data 610 may first be searched (930) for the search key, and the resource data within name server data 630, corresponding to the search key, may then be determined. For example, for the “la.com” search key, a match may be determined with domain name record 620 in domain name data 610. The appropriate information may be extracted, including, for example, name server pointer 626. Then, the appropriate name server record 640 may be indexed using name server pointer 626, and name server network address 647 may be extracted.
• [0086] Referring to the database embodiment illustrated in FIG. 7, the TST may be searched (930) for the search key, from which the resource data may be determined. For example, for the “law.com” search key, search nodes 701 may be searched (930), and a match determined with node 730. Key pointer 736 may be extracted, from which the key data record 750 may be determined. The number of terminal data pointers 752 may then be identified and each terminal data pointer may be extracted. For example, terminal data pointer 757 may reference terminal data record 760 and terminal data pointer 758 may reference terminal data record 770. The variable-length resource data, e.g., name server network address 762 and name server network address 772, may then be extracted from each terminal data record using the length 761 and 771, respectively.
• [0087] Referring to the database embodiment illustrated in FIG. 8, the OAK tree may be searched (930) for the search key, from which the resource data may be determined. For example, for the “law.com” search key, search nodes 801 may be searched (930), and a match determined with node 830. Second address 837 may be extracted, from which the key data record 850 may be determined. The number of terminal data pointers 852 may then be identified and each terminal data pointer may be extracted. For example, terminal data pointer 857 may reference terminal data record 860 and terminal data pointer 858 may reference terminal data record 870. The variable-length resource data, e.g., name server network address 862 and name server network address 872, may then be extracted from each terminal data record using the length 861 and 871, respectively.
• [0088] Each search thread may create (940) a plurality of search replies corresponding to the assigned search queries. If a match is not found for a particular search key, the reply may include an appropriate indication, such as, for example, the null character. Referring to FIGS. 6-8, for example, a search key might be “law.com” and the corresponding resource data might be “180.1.1.1”. More than one name server network address may be associated with a search key, in which case, more than one name server network address may be determined.
• [0089] The replies may be sent (950) over the network. In an embodiment, each search thread may multiplex the appropriate replies into a single network packet (e.g., Response SuperPacket 240) corresponding to the single network packet containing the original queries (e.g., Request SuperPacket 220). Alternatively, a different process or thread may multiplex the appropriate replies into the single network packet. The response network packet may then be sent (950) to the appropriate network computer within the plurality of network computers 120-1 . . . 120-N via LAN 122, or alternatively, WAN 124. In one embodiment, the response packets may be sent to the same network computer from which the request packets originated, while in another embodiment, the response packets may be sent to a different network computer.
• [0090] The update thread may receive (960) new information over the network. In an embodiment, new information may be sent, for example, from the OLTP server 140-1 to system 100 over WAN 124. In other embodiments, system 100 may receive updates from OLTP servers 140-1 . . . 140-S over WAN 124, and from plurality of network computers 120-1 . . . 120-N over WAN 124 or LAN 122. As discussed above, in an embodiment, plurality of network computers 120-1 . . . 120-N may send the plurality of search queries and the new information to system 100 over LAN 122, or, alternatively, WAN 124. Consequently, in this embodiment, the plurality of search queries and the new information may be received (910, 960) concurrently over the network.
  • In the DNS resolution embodiment, for example, the new information may include new domain name data, new name server data, a new name server for an existing domain name, etc. Alternatively, the new information may indicate that a domain name record, name server record, etc., may be deleted from the database. Generally, any information contained within the database may be added, modified or deleted, as appropriate. In an embodiment, several modifications to the database may be grouped together as a transaction and applied to the database as a consistent modification set. [0091]
  • [0092] For example, a transaction may include various combinations of database record additions, modifications or deletions. Because search access to the database is not restricted, an indicator field (e.g., a “dirty bit”) may be provided within each database record to notify the search threads that, when the dirty bit is set for a particular database record, database modifications associated with a transaction are in progress and a subsequent query-retry of that particular database record is required. Once the transaction has been applied and the modifications are complete, the dirty bits may be cleared for all the new database elements affected by the transaction. In some sense, the new information may be considered to be “committed.” Thus, the database may be transformed from one valid state to another valid state without restricting search access to the database.
  • [0093] Advantageously, no operating system or database table locks are required to prevent search queries from accessing the database during these update periods. A slight performance penalty is incurred, because a search query may need to be repeated if the dirty bit is determined to be set for any particular database record. The dirty bit may be located within the most significant word of the database record, so that the bit may be inspected as soon as this word is transferred from memory 104 to processor 102-1, for example. Additional memory transfers associated with the remaining portion of the database record may thus be avoided if the dirty bit is determined to be set. The query-retry period may be on the order of nanoseconds for the exemplary system embodiments discussed with reference to FIG. 1. Typically, the dirty bit may be cleared before the query-retry accesses the particular database record again.
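The query-retry read may be illustrated with a short C sketch. This is an editorial illustration rather than the specification's own code: the record layout, the 56-byte payload, and the names record_t, DIRTY_BIT and read_record are all assumptions.

    #include <stdint.h>
    #include <string.h>

    /* The dirty bit occupies the most significant word of the record,
     * so a reader can test it as soon as that word arrives from memory. */
    typedef struct {
        volatile uint64_t header;   /* bit 63 = dirty bit */
        char data[56];              /* rest of the record */
    } record_t;
    #define DIRTY_BIT (1ULL << 63)

    /* Query-retry read: if the dirty bit is set, a transaction is in
     * progress, so spin and retry; typically the bit clears within
     * nanoseconds. The trailing re-check guards against an update that
     * set the bit while the payload was being copied. */
    static void read_record(const record_t *rec, char out[56])
    {
        do {
            while (rec->header & DIRTY_BIT)
                ;                                 /* wait out the transaction */
            memcpy(out, (const char *)rec->data, sizeof rec->data);
        } while (rec->header & DIRTY_BIT);        /* dirtied mid-copy? retry  */
    }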
  • [0094] Alternatively, or when a dirty bit is set during an ongoing transaction, the point-in-time consistent query result may be reconstructed from the contents of the redo log, or log manager, for example, as is common practice in transactional database systems. For search queries that may encounter a dirty bit due to a single in-progress modification that is not part of an ongoing transaction, repeating the query may usually incur a lesser performance penalty than reconstructing the query result from the log manager. Where the dirty bit is due to an ongoing transaction with an extended set of modifications received over an extended period of time, reconstructing the query result from the log manager may be preferred, so that the query result may not be unduly delayed.
  • [0095] While the number of database record modifications within a single transaction is generally unlimited, typically, a transaction includes sufficient information to maintain the atomicity, consistency, isolation and durability of the database. Many different transactions may be envisioned for each database embodiment depicted within FIGS. 4 and 6-8. Referring to FIG. 4, for example, a transaction may include modifying database records 410 and 420, modifying database record 420 and adding a new database record (e.g., database record 430), modifying database record 420 and deleting a database record (e.g., database record 410), etc. Referring to FIG. 6, for example, a transaction may include modifying domain name record 620 and name server record 640, deleting domain name record 620 and adding domain name record 615, etc. Referring to FIG. 7, for example, a transaction may include modifying key data record 750 and terminal data record 760 and deleting terminal data record 770, adding key data record 780 and deleting key data record 740, etc. Similarly, referring to FIG. 8, for example, a transaction may include modifying key data record 850 and terminal data record 860 and deleting terminal data record 870, adding key data record 880 and deleting key data record 840, etc.
  • [0096] The update thread may create (970) a plurality of new elements based on the new information. Typically, modifications to the information contained within an existing element of the database may be incorporated by creating a new element based on the existing element and then modifying the new element to include the new information. During this process, the new element may not be visible to the search threads or processes currently executing on system 100 until a pointer to the new element has been written to the database. Generally, additions to the database may be accomplished in a similar fashion, without necessarily using information contained within an existing element. In one embodiment, the deletion of an existing element from the database may be accomplished by adding a new, explicit “delete” element to the database. In another embodiment, the deletion of an existing element from the database may be accomplished by overwriting a pointer to the existing element with an appropriate indicator (e.g., a null pointer, etc.). In this embodiment, the update thread does not create a new element in the database containing new information.
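A copy-on-write modification of this kind might be sketched in C as follows; pool_alloc, create_modified, and the record layout are illustrative assumptions rather than the specification's API.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* record_t: dirty bit in the most significant word, as sketched above. */
    typedef struct {
        volatile uint64_t header;              /* bit 63 = dirty bit */
        char data[56];                         /* record payload */
    } record_t;
    #define DIRTY_BIT (1ULL << 63)

    extern record_t *pool_alloc(void);         /* hypothetical memory-pool allocator */

    /* Copy-on-write modification (step 970): build the new element from
     * the existing one, apply the change, and mark it dirty. Until a
     * pointer to it is stored into the database (step 980), no search
     * thread can reach it. */
    static record_t *create_modified(const record_t *old,
                                     const char *new_data, size_t len)
    {
        record_t *fresh = pool_alloc();
        memcpy(fresh, old, sizeof *fresh);     /* start from the existing element */
        memcpy(fresh->data, new_data, len);    /* apply the new information */
        fresh->header |= DIRTY_BIT;            /* transaction in progress */
        return fresh;                          /* invisible until committed */
    }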
  • [0097] Referring to FIG. 4, for example, memory space for a new data record (e.g., data record 430) may be allocated from a memory pool associated with database records 401. New information may be copied to data 432 of data record 430, and other information may be calculated and added to data record 430, such as, for example, chain pointer 434, data pointer 435, etc. A dirty bit 408 may also be included within new data record 430. Referring to the database embodiments depicted in FIGS. 6-8, for example, the new information may include new domain names and/or domain name servers to be added to the database.
  • [0098] Referring to FIG. 6, for example, memory space for a new domain name record 615 may be allocated from a memory pool associated with the domain name records 611, or, alternatively, from a general memory pool associated with domain name data 610. The new domain name may be normalized and copied to the new domain name record 615, and a pointer to an existing name server (e.g., name server record 655) may be determined and copied to the new domain name record 615. A dirty bit 618 may be included within new domain name record 615. Other information may be calculated and added to new domain name record 615, such as, for example, a number of name servers, a chain pointer, etc. In more complicated examples, the new information may include a new search key with corresponding resource data.
  • [0099] Referring to FIG. 7, in a more complicated example, a new search node 705, as well as a new key data record 780, may be created. In this example, the new search node 705 may include a comparison character (“m”), in the first position, that is greater than the comparison character (“l”), in the first position, of existing search node 710. Consequently, search node 705 may be inserted in the TST at the same “level” (i.e., 1st character position) as search node 710. Before search node 705 is committed to the database, the 4-byte “greater than” pointer 715 of search node 710 may contain a “null” pointer. Search node 705 may also include a 4-byte key pointer 706 which may contain a 40-bit pointer to the new key data record 780. Key data record 780 may include a key length 781 (e.g., “5”) and type 782 (e.g., indicating embedded resource data), a variable-length key 783 (e.g., “m.com”), a number of embedded resources 784 (e.g., “1”), a resource length 785 (e.g., “9”), a variable-length resource string 786 or byte sequence (e.g., “180.1.1.1”) and dirty bit 707. Memory space may be allocated for search node 705 from a memory pool associated with TST nodes 701, while memory space may be allocated for key data record 780 from a memory pool associated with plurality of key data records 702.
  • [0100] Referring to FIG. 8, for example, a new search node 890, as well as a new key data record 880, may be created. In this example, the new search node 890 may be a horizontal node including, for example, a two-bit node type 891 (e.g., “01”), a 38-bit first address 892, an eight-bit address count 893 (e.g., 2), an eight-bit first character 894 (e.g., “l”), an eight-bit last character 895 (e.g., “m”), a variable-length bitmap 896 and a 38-bit second address 897. First address 892 may point to vertical node 820, the next vertical node in the “l . . .” search string path, while second address 897 may point to key data record 880 associated with the search key fragment “m.” Key data record 880 may include a key length 881 (e.g., “5”) and type 882 (e.g., indicating embedded resource data), a variable-length key 883 (e.g., “m.com”), a number of embedded resources 884 (e.g., “1”), a resource length 885 (e.g., “9”), a variable-length resource string 886 or byte sequence (e.g., “180.1.1.1”) and dirty bit 807. Memory space may be allocated for search node 890 from a memory pool associated with plurality of search nodes 801, while memory space may be allocated for key data record 880 from a memory pool associated with plurality of key data records 802.
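The fixed-width fields of such a horizontal node total exactly 64 bits (2 + 38 + 8 + 8 + 8), which can be visualized with a C bit-field. This packing is purely illustrative: bit-field layout is implementation-defined, real node layouts are serialized bytes, and the variable-length bitmap and 38-bit second address are left here as an unstructured tail.

    #include <stdint.h>

    /* Illustrative packing of the FIG. 8 horizontal node; the five
     * fixed fields fill one 64-bit word. Bit-field ordering is
     * compiler-specific, so this is a sketch, not a wire format. */
    struct horizontal_node {
        uint64_t node_type  : 2;   /* "01" = horizontal node */
        uint64_t first_addr : 38;  /* -> next vertical node in the path */
        uint64_t addr_count : 8;   /* number of addresses, e.g., 2 */
        uint64_t first_char : 8;   /* e.g., 'l' */
        uint64_t last_char  : 8;   /* e.g., 'm' */
        uint8_t  tail[];           /* variable-length bitmap, then the
                                      38-bit second address */
    };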
  • [0101] The new information may also include several modifications to existing records within the database. Referring to FIG. 4, the new information may include modifications to data record 410. In this example, new data record 420 may be created and the information from data record 410 copied thereto. As above, memory space for data record 420 may be allocated from a memory pool associated with database records 401. The modifications may then be applied to data 422. Data records 410 and 420 may also include dirty bits 406 and 407, respectively.
  • [0102] Referring to FIG. 6, the new information may include modifications to name server record 640, such as, for example, a new IP address (e.g., “180.2.1.2”). In this example, new name server record 660 may be created and the information from old name server record 640 copied thereto. As above, memory space for name server record 660 may be allocated from a memory pool associated with the name server records 631, or, alternatively, from a general memory pool associated with name server data 630. The new name server IP address may then be copied to the appropriate field within name server record 660, i.e., name server IP address 667. A dirty bit 668 may be included within new name server record 660. Similar modifications to the various elements within the database embodiments described with reference to FIGS. 7 and 8 are also contemplated.
  • [0103] The new information may also include the deletion of at least one existing element within the database. In one embodiment, no new element may be created, but the dirty bit of the element to be deleted may be set by the update thread. In another embodiment, a new, explicit “delete” element may be created, with the dirty bit set, indicating that the former element has been removed from the database. Referring to FIG. 4, for example, the new information may include the deletion of data record 410, which may include dirty bit 407. Referring to FIG. 6, for example, the new information may include the deletion of domain name record 670, which may include dirty bit 678. Similar deletions to the various elements within the database embodiments described with reference to FIGS. 7 and 8 are also contemplated.
  • [0104] The update thread may set (975) a dirty bit within each of the plurality of new elements. As noted above, the dirty bit may notify the search threads that the particular database record is associated with a current transaction, and that a subsequent query-retry of the database should be performed. Thus, each of the database records affected by a transaction may be identified. Referring to FIGS. 4 and 6-8, for example, the update thread may set a dirty bit within each of the database records affected by the transaction. Dirty bit 408 may be set to “1” for new data record 430 and dirty bits 407 and 406 may be set to “1” for modified data records 410 and 420, respectively. Dirty bit 618 may be set to “1” for new domain name record 615 and dirty bits 606 and 668 may be set to “1” for modified name server records 640 and 660, respectively. Dirty bits 707 and 807 may be set to “1” for new key data records 780 and 880, respectively.
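Steps 975 and 985 may be sketched as a pair of loops over the transaction's element set. This is a minimal illustration assuming the record_t/DIRTY_BIT layout sketched earlier; the function names are hypothetical.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        volatile uint64_t header;   /* bit 63 = dirty bit */
    } record_t;
    #define DIRTY_BIT (1ULL << 63)

    /* Step 975: before any element of the transaction becomes reachable,
     * mark every affected record so searches know to retry. */
    static void mark_transaction(record_t **elements, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            elements[i]->header |= DIRTY_BIT;
    }

    /* Step 985 is the mirror image: once all commit pointers have been
     * written, clear the bit on each new element, in any order. */
    static void commit_transaction(record_t **elements, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            elements[i]->header &= ~DIRTY_BIT;
    }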
  • [0105] For clarity, the top level flow diagram illustrated in FIG. 9 is extended to FIG. 10 through flow diagram connection symbol “A.” Referring to FIG. 10, for database records to be deleted, the update thread may also set (1075) a dirty bit within the appropriate database records. For example, dirty bit 407 may be set to “1” for deleted data record 410 and dirty bit 678 may be set to “1” for deleted domain name record 670. Data records 420 and 430, domain name record 615, name server record 660 and key data records 780 and 880 may be considered to be “new” elements within the database, while modified data record 410, modified name server record 640, deleted data record 410 and deleted domain name record 670 may be considered to be “old” elements within the database. In these examples, data record 410 is used as both a “modified” data record and as a “deleted” data record.
  • [0106] The update thread may write (980) a pointer to the database using a single uninterruptible operation. Generally, a new element may be committed to the database (i.e., become instantaneously visible to the search threads, or processes) by writing a pointer to the new element to the appropriate location within the database. As discussed above, this appropriate location may be aligned in memory, so that the single operation includes a single store instruction of an appropriate length. Even though the new elements may be visible to the search threads after the pointer write, the “set” dirty bit notifies the search threads that each new database element may be part of a current transaction, and that a subsequent query-retry, or reconstruction from the redo log, may be necessary. For database embodiments containing multiple indices, it may be possible for one index to contain pointers to “old” elements while another index contains pointers to “new” elements. Consequently, in the DNS resolution embodiment, for example, two domain name records with the same domain name, or primary key, may exist within the search space simultaneously, but only during a transaction involving that record for a unique index.
  • [0107] Referring to FIG. 4, an 8-byte pointer corresponding to new data record 430 may be written to hash table 403. Referring to FIG. 6, an 8-byte pointer corresponding to new domain name record 615 may be written to hash table 612. Importantly, these hash table entries may be aligned on 8-byte boundaries in memory 104 to ensure that a single, 8-byte store instruction is used to update this value. Referring to FIG. 7, a 4-byte pointer corresponding to the new search node 705 may be written to the 4-byte “greater-than” node pointer 715 within search node 710. Importantly, the node pointer 715 may be aligned on a 4-byte boundary in memory 104 to ensure that a single, 4-byte store instruction may be used to update this value. Referring to FIG. 8, plurality of search nodes 801 may also include a top-of-tree address 899, which may be aligned on an 8-byte word boundary in memory 104 and may reference the first node within plurality of search nodes 801 (e.g., vertical node 810). An 8-byte pointer corresponding to the new search node 890 may be written to the top-of-tree address 899 using a single store instruction. In each of these embodiments, just prior to the store instruction, the new data are not visible to the search threads, while just after the store instruction, the new data are visible to the search threads. Thus, with a single, uninterruptible operation, the new data may be committed to the database without the use of operating system or database table locks.
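Expressed in C11 atomics (the specification predates C11 and relies directly on an aligned store instruction), the commit of step 980 might be sketched as follows; the slot type and function name are illustrative.

    #include <stdatomic.h>

    typedef struct record record_t;   /* opaque: only the pointer matters here */

    /* Step 980: provided the slot sits on a pointer-width boundary, this
     * compiles to a single store, so concurrent readers observe either
     * the old pointer or the new one, never a torn value. Before this
     * store the new element is invisible to searches; after it, the
     * element is instantly visible (with its dirty bit still set). */
    static void commit_pointer(_Atomic(record_t *) *slot, record_t *fresh)
    {
        atomic_store_explicit(slot, fresh, memory_order_release);
    }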
  • [0108] Referring to FIG. 10, for database records to be deleted from the database, in an embodiment, a pointer, or pointers, to the existing record may be overwritten (1080) with a null pointer using a single uninterruptible operation. The null pointer may de-reference the existing record and indicate that the existing record has been deleted from the database. Referring to FIG. 4, for example, data record 410 may be deleted from database 400 by overwriting the appropriate entry within hash table 403 with an 8-byte null pointer. Referring to FIG. 6, for example, domain name record 670 may be deleted from database 600 by overwriting the appropriate entry within hash table 612 with an 8-byte null pointer. In an alternative embodiment, an 8-byte pointer to a new, “explicit” delete record, corresponding to a “deleted” domain name record 670, may be written to hash table 613. In this embodiment, modifications, additions and deletions to the database may be accomplished similarly.
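The deletion store of step 1080 is the same single-store pattern with a null value; again an illustrative sketch rather than the specification's code.

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct record record_t;

    /* Step 1080: deletion is just another single aligned store, writing
     * NULL over the slot that referenced the record. The record's memory
     * itself is reclaimed later (step 1090), once in-flight searches
     * that may still hold the old pointer have finished. */
    static void delete_pointer(_Atomic(record_t *) *slot)
    {
        atomic_store_explicit(slot, NULL, memory_order_release);
    }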
  • [0109] The update thread may clear (985) the dirty bit within each of the plurality of new elements. In an embodiment, the dirty bit may be cleared from each new element by setting the dirty bit to “0.” For example, and as discussed with reference to FIGS. 4 and 6-8, dirty bits 406 and 408 may be set to “0” for data records 420 and 430, respectively. Dirty bit 618 may be set to “0” for domain name record 615, and dirty bits 606 and 668 may be set to “0” for name server records 640 and 660, respectively. Dirty bits 707 and 807 may be set to “0” for key data records 780 and 880, respectively. In an embodiment, the dirty bit may be set to “0” for each of the new elements in any order. After the dirty bits within each of the new elements have been cleared (985), the “old,” or existing, database elements are no longer active, i.e., referenced within the database. In an embodiment, the dirty bits within these elements may then be cleared by setting the dirty bit to “0,” while in an alternative embodiment, the dirty bits may not be cleared at all.
  • [0110] In an embodiment, the update thread may physically delete (990) existing database elements that have been modified after the dirty bits are cleared (985) from each of the new elements. Advantageously, the physical deletion of these modified elements from memory 104 may be delayed to preserve consistency of in-progress searches. For example, after an existing element has been modified and the corresponding new element committed to the database, the physical deletion of the existing element from memory 104 may be delayed so that existing search threads that have a result, acquired just before the new element was committed to the database, may continue to use the previous state of the data. The update thread may physically delete (990) the existing element after all the search threads that began before the existing element was modified have finished.
  • [0111] Similarly, after an existing element has been deleted from the database, the physical deletion of the existing element from memory 104 may be delayed so that existing search threads that have a result, acquired just before the existing element was deleted from the database, may continue to use the previous state of the data. Referring to FIG. 10, the update thread may physically delete (1090) the existing element after all the search threads that began before the existing element was deleted have finished.
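One way to realize this deferred reclamation is an epoch scheme. The specification does not prescribe the bookkeeping, so the following C sketch, with its retired_t list and oldest_active_search_epoch helper, is purely an assumed implementation.

    #include <stdint.h>
    #include <stdlib.h>

    /* A retired element: superseded or deleted, but possibly still
     * referenced by searches that started before it was retired. */
    typedef struct retired {
        struct retired *next;
        void           *element;    /* the old database element */
        uint64_t        epoch;      /* update epoch when it was retired */
    } retired_t;

    extern uint64_t oldest_active_search_epoch(void); /* min over search threads */
    extern void pool_free(void *element);             /* hypothetical pool free */

    /* Steps 990/1090: free a retired element only after every search
     * that began before it was retired has finished, so in-progress
     * searches may keep using the previous state of the data. */
    static void reclaim(retired_t **list)
    {
        uint64_t safe = oldest_active_search_epoch();
        while (*list && (*list)->epoch < safe) {
            retired_t *r = *list;
            *list = r->next;
            pool_free(r->element);  /* return the element to its memory pool */
            free(r);                /* free the bookkeeping node itself */
        }
    }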
  • [0112] Potential complications may arise from the interaction of methods associated with embodiments of the present invention and various architectural characteristics of system 100. For example, the processor on which the update thread is executing (e.g., processor 102-1, 102-2, etc.) may include hardware to support out-of-order instruction execution. In another example, system 100 may include an optimizing compiler which may produce a sequence of instructions, associated with embodiments of the present invention, that have been optimally rearranged to exploit the parallelism of the processor's internal architecture (e.g., processor 102-1, 102-2, etc.). Many other complications may readily be admitted by one skilled in the art. Data hazards arising from out-of-order instruction execution may be eliminated, for example, by creating dependencies between the creation (970) of the new element and the pointer write (980) to the database.
  • [0113] In one embodiment, these dependencies may be established by inserting additional arithmetic operations, such as, for example, an exclusive OR (XOR) instruction, into the sequence of instructions executed by processor 102-1 to force the execution of the instructions associated with the creation (970) of the new element to issue, or complete, before the execution of the pointer write (980) to the database. For example, the contents of the location in memory 104 corresponding to the new element, and containing the dirty bit, may be XOR'ed with the contents of the location in memory 104 corresponding to the pointer to the new element. Subsequently, the address of the new element may be written (980) to memory 104 to commit the new element to the database. Numerous methods to overcome these complications may be readily discernible to one skilled in the art.
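The XOR technique might be sketched as follows. This reflects the patent-era reliance on instruction-level data dependencies; the helper name, the volatile load, and the layout are assumptions. Note that a strongly optimizing modern compiler could still fold the cancelling XORs away, which is why such sequences were typically verified at the instruction level or written in assembly, and why modern code would use a release fence instead.

    #include <stdint.h>

    typedef struct {
        volatile uint64_t header;   /* most significant word, holds dirty bit */
    } record_t;

    /* Fold a value read from the new element into the pointer and cancel
     * it again: the pointer value is unchanged, but the commit store now
     * carries a data dependency on the element's contents, so it cannot
     * be issued ahead of them by an out-of-order core. */
    static void commit_with_dependency(record_t *volatile *slot, record_t *fresh)
    {
        uint64_t h = *(volatile uint64_t *)&fresh->header; /* forced load   */
        uintptr_t p = (uintptr_t)fresh;
        p ^= (uintptr_t)h;     /* pointer now depends on the header value...  */
        p ^= (uintptr_t)h;     /* ...and is restored to its exact prior value */
        *slot = (record_t *)p; /* commit store, ordered after the load        */
    }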
  • [0114] Several embodiments of the present invention are specifically illustrated and described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.

Claims (54)

What is claimed is:
1. A multi-threaded network database system, comprising:
at least one processor coupled to a network; and
a memory coupled to the processor, the memory including a database and instructions adapted to be executed by the processor to:
create an update thread and a plurality of search threads;
assign each of a plurality of search queries, received over the network, to one of the plurality of search threads;
for each search thread:
search the database according to the assigned search queries,
create a plurality of search replies corresponding to the assigned search queries, and
send the plurality of search replies over the network; and
for the update thread:
create a plurality of new elements according to new information received over the network,
set a dirty bit within each of the plurality of new elements,
without restricting access to the database for the plurality of search threads, write a pointer to each of the plurality of new elements to the database using a single uninterruptible operation, and
clear the dirty bit within each of the plurality of new elements.
2. The system of claim 1, wherein the instructions further include:
for the update thread:
set a dirty bit within at least one existing element to be deleted from the database, and
without restricting access to the database for the plurality of search threads, de-reference the existing element to be deleted using a single uninterruptible operation.
3. The system of claim 1, wherein the instructions further include:
for the update thread:
set a dirty bit within at least one existing element to be modified in the database before the pointer is written to the corresponding new element, and
clear the dirty bit within the existing element after the pointer is written to the corresponding new element.
4. The system of claim 1, wherein the single uninterruptible operation is a store instruction.
5. The system of claim 4, wherein the store instruction writes four bytes to a memory address located on a four byte boundary.
6. The system of claim 4, wherein the store instruction writes eight bytes to a memory address located on an eight byte boundary.
7. The system of claim 4, wherein the processor has a word size of at least n-bytes, the memory has a width of at least n-bytes and the store instruction writes n-bytes to a memory address located on an n-byte boundary.
8. The system of claim 1, wherein the plurality of search queries are received within a single network packet.
9. The system of claim 1, wherein the plurality of search replies are sent within a single network packet.
10. The system of claim 1, wherein said restricting access includes database locking.
11. The system of claim 1, wherein said restricting access includes spin locking.
12. The system of claim 11, wherein said spin locking includes the use of at least one semaphore.
13. The system of claim 12, wherein said semaphore is a mutex semaphore.
14. The system of claim 1, further comprising a plurality of processors and a symmetric multi-processing operating system.
15. The system of claim 14, wherein the plurality of search threads perform at least 100,000 searches per second.
16. The system of claim 15, wherein the update thread performs at least 10,000 updates per second.
17. The system of claim 16, wherein the update thread performs between 50,000 and 130,000 updates per second.
18. The system of claim 1, wherein the pointer to the new element is written to a search index.
19. The system of claim 18, wherein the search index is a TST.
20. The system of claim 1, wherein the pointer to the new element is written to a data record within the database.
21. A method for searching and concurrently updating a database, comprising:
creating an update thread and a plurality of search threads;
assigning each of a plurality of search queries, received over the network, to one of the plurality of search threads;
for each search thread:
searching the database according to the assigned search queries,
creating a plurality of search replies corresponding to the assigned search queries, and
sending the plurality of search replies over the network; and
for the update thread:
creating a plurality of new elements according to new information received over the network,
setting a dirty bit within each of the plurality of new elements,
without restricting access to the database for the plurality of search threads, writing a pointer to each of the plurality of new elements to the database using a single uninterruptible operation, and
clearing the dirty bit within each of the plurality of new elements.
22. The method of claim 21, further comprising:
for the update thread:
setting a dirty bit within at least one existing element to be deleted from the database, and
without restricting access to the database for the plurality of search threads, de-referencing the existing element to be deleted using a single uninterruptible operation.
23. The method of claim 21, further comprising:
for the update thread:
setting a dirty bit within at least one existing element to be modified in the database before the pointer is written to the corresponding new element, and
clearing the dirty bit within the existing element after the pointer is written to the corresponding new element.
24. The method of claim 21, wherein the single uninterruptible operation is a store instruction.
25. The method of claim 24, wherein the store instruction writes four bytes to a memory address located on a four byte boundary.
26. The method of claim 24, wherein the store instruction writes eight bytes to a memory address located on an eight byte boundary.
27. The method of claim 21, wherein the plurality of search queries are received within a single network packet.
28. The method of claim 21, wherein the plurality of search replies are sent within a single network packet.
29. The method of claim 21, wherein said restricting access includes database locking.
30. The method of claim 21, wherein said restricting access includes spin locking.
31. The method of claim 30, wherein said spin locking includes the use of at least one semaphore.
32. The method of claim 31, wherein said semaphore is a mutex semaphore.
33. The method of claim 21, wherein the plurality of search threads perform at least 100,000 searches per second.
34. The method of claim 21, wherein the update thread performs at least 10,000 updates per second.
35. The method of claim 34, wherein the update thread performs between 50,000 and 130,000 updates per second.
36. The method of claim 21, wherein the pointer to the new element is written to a search index.
37. The method of claim 21, wherein the pointer to the new element is written to a data record within the database.
38. A computer readable medium including instructions adapted to be executed by at least one processor to implement a method for searching and concurrently updating a database, the method comprising:
creating an update thread and a plurality of search threads;
assigning each of a plurality of search queries, received over the network, to one of the plurality of search threads;
for each search thread:
searching a database according to the assigned search queries,
creating a plurality of search replies corresponding to the assigned search queries, and
sending the plurality of search replies over the network; and
for the update thread:
creating a plurality of new elements according to new information received over the network,
setting a dirty bit within each of the plurality of new elements,
without restricting access to the database for the plurality of search threads, writing a pointer to each of the plurality of new elements to the database using a single uninterruptible operation, and
clearing the dirty bit within each of the plurality of new elements.
39. The computer readable medium of claim 38, wherein the method further includes:
for the update thread:
setting a dirty bit within at least one element to be deleted from the database, and
without restricting access to the database for the plurality of search threads, de-referencing the element to be deleted using a single uninterruptible operation.
40. The computer readable medium of claim 38, wherein the method further includes:
for the update thread:
setting a dirty bit within at least one existing element to be modified in the database before the pointer is written to the corresponding new element, and
clearing the dirty bit within the existing element after the pointer is written to the corresponding new element.
41. The computer readable medium of claim 38, wherein the single uninterruptible operation is a store instruction.
42. The computer readable medium of claim 41, wherein the store instruction writes four bytes to a memory address located on a four byte boundary.
43. The computer readable medium of claim 41, wherein the store instruction writes eight bytes to a memory address located on an eight byte boundary.
44. The computer readable medium of claim 38, wherein the plurality of search queries are received within a single network packet.
45. The computer readable medium of claim 38, wherein the plurality of search replies are sent within a single network packet.
46. The computer readable medium of claim 38, wherein said restricting access includes database locking.
47. The computer readable medium of claim 38, wherein said restricting access includes spin locking.
48. The computer readable medium of claim 47, wherein said spin locking includes the use of at least one semaphore.
49. The computer readable medium of claim 48, wherein said semaphore is a mutex semaphore.
50. The computer readable medium of claim 38, wherein the pointer to the new element is written to a search index.
51. The computer readable medium of claim 38, wherein the pointer to the new element is written to a data record within the database.
52. The system of claim 8, wherein the new information is received within the single network packet.
53. The method of claim 27, wherein the new information is received within the single network packet.
54. The computer readable medium of claim 44, wherein the new information is received within the single network packet.
US10/285,544 2001-11-01 2002-11-01 Transactional memory manager Abandoned US20030084038A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/285,544 US20030084038A1 (en) 2001-11-01 2002-11-01 Transactional memory manager

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US33084201P 2001-11-01 2001-11-01
US36516902P 2002-03-19 2002-03-19
US10/285,544 US20030084038A1 (en) 2001-11-01 2002-11-01 Transactional memory manager

Publications (1)

Publication Number Publication Date
US20030084038A1 true US20030084038A1 (en) 2003-05-01

Family

ID=26987480

Family Applications (10)

Application Number Title Priority Date Filing Date
US10/285,547 Expired - Lifetime US7047258B2 (en) 2001-11-01 2002-11-01 Method and system for validating remote database updates
US10/285,549 Expired - Lifetime US7167877B2 (en) 2001-11-01 2002-11-01 Method and system for updating a remote database
US10/285,575 Expired - Lifetime US6681228B2 (en) 2001-11-01 2002-11-01 Method and system for processing query messages over a network
US10/285,544 Abandoned US20030084038A1 (en) 2001-11-01 2002-11-01 Transactional memory manager
US10/285,618 Expired - Lifetime US7203682B2 (en) 2001-11-01 2002-11-01 High speed non-concurrency controlled database
US10/674,820 Active 2025-07-31 US8171019B2 (en) 2001-11-01 2003-10-01 Method and system for processing query messages over a network
US11/641,054 Abandoned US20070100808A1 (en) 2001-11-01 2006-12-19 High speed non-concurrency controlled database
US12/331,498 Expired - Lifetime US8630988B2 (en) 2001-11-01 2008-12-10 System and method for processing DNS queries
US13/292,833 Expired - Lifetime US8682856B2 (en) 2001-11-01 2011-11-09 Method and system for processing query messages over a network
US14/136,831 Abandoned US20140108452A1 (en) 2001-11-01 2013-12-20 System and method for processing dns queries

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US10/285,547 Expired - Lifetime US7047258B2 (en) 2001-11-01 2002-11-01 Method and system for validating remote database updates
US10/285,549 Expired - Lifetime US7167877B2 (en) 2001-11-01 2002-11-01 Method and system for updating a remote database
US10/285,575 Expired - Lifetime US6681228B2 (en) 2001-11-01 2002-11-01 Method and system for processing query messages over a network

Family Applications After (6)

Application Number Title Priority Date Filing Date
US10/285,618 Expired - Lifetime US7203682B2 (en) 2001-11-01 2002-11-01 High speed non-concurrency controlled database
US10/674,820 Active 2025-07-31 US8171019B2 (en) 2001-11-01 2003-10-01 Method and system for processing query messages over a network
US11/641,054 Abandoned US20070100808A1 (en) 2001-11-01 2006-12-19 High speed non-concurrency controlled database
US12/331,498 Expired - Lifetime US8630988B2 (en) 2001-11-01 2008-12-10 System and method for processing DNS queries
US13/292,833 Expired - Lifetime US8682856B2 (en) 2001-11-01 2011-11-09 Method and system for processing query messages over a network
US14/136,831 Abandoned US20140108452A1 (en) 2001-11-01 2013-12-20 System and method for processing dns queries

Country Status (16)

Country Link
US (10) US7047258B2 (en)
EP (10) EP2503476A1 (en)
JP (4) JP4420324B2 (en)
KR (4) KR100977161B1 (en)
CN (4) CN1610902B (en)
AU (5) AU2002350106B2 (en)
BR (4) BR0213863A (en)
CA (4) CA2466110C (en)
EA (4) EA006038B1 (en)
HK (1) HK1075308A1 (en)
IL (8) IL161723A0 (en)
MX (4) MXPA04004201A (en)
NO (4) NO20042259L (en)
NZ (4) NZ532773A (en)
WO (5) WO2003038683A1 (en)
ZA (4) ZA200403597B (en)

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050044089A1 (en) * 2003-08-21 2005-02-24 Microsoft Corporation Systems and methods for interfacing application programs with an item-based storage platform
US20060195456A1 (en) * 2005-02-28 2006-08-31 Microsoft Corporation Change notification query multiplexing
US20070088724A1 (en) * 2003-08-21 2007-04-19 Microsoft Corporation Systems and methods for extensions and inheritance for units of information manageable by a hardware/software interface system
US20070136290A1 (en) * 2005-12-07 2007-06-14 Microsoft Corporation Removal of unnecessary read-to-update upgrades in software transactional memory
US20070169030A1 (en) * 2005-12-07 2007-07-19 Microsoft Corporation Compiler support for optimizing decomposed software transactional memory operations
US20080222219A1 (en) * 2007-03-05 2008-09-11 Appassure Software, Inc. Method and apparatus for efficiently merging, storing and retrieving incremental data
US20080288727A1 (en) * 2007-05-14 2008-11-20 International Business Machines Corporation Computing System with Optimized Support for Transactional Memory
US20080288730A1 (en) * 2007-05-14 2008-11-20 International Business Machines Corporation Transactional Memory System Which Employs Thread Assists Using Address History Tables
US20080288726A1 (en) * 2007-05-14 2008-11-20 International Business Machines Corporation Transactional Memory System with Fast Processing of Common Conflicts
US20090113443A1 (en) * 2007-05-14 2009-04-30 International Business Machines Corporation Transactional Memory Computing System with Support for Chained Transactions
US7555634B1 (en) 2004-04-22 2009-06-30 Sun Microsystems, Inc. Multiple data hazards detection and resolution unit
US20110055483A1 (en) * 2009-08-31 2011-03-03 International Business Machines Corporation Transactional memory system with efficient cache support
US20110178984A1 (en) * 2010-01-18 2011-07-21 Microsoft Corporation Replication protocol for database systems
US20110191299A1 (en) * 2010-02-01 2011-08-04 Microsoft Corporation Logical data backup and rollback using incremental capture in a distributed database
US8166101B2 (en) 2003-08-21 2012-04-24 Microsoft Corporation Systems and methods for the implementation of a synchronization schemas for units of information manageable by a hardware/software interface system
US8238696B2 (en) 2003-08-21 2012-08-07 Microsoft Corporation Systems and methods for the implementation of a digital images schema for organizing units of information manageable by a hardware/software interface system
US20130031050A1 (en) * 2008-08-07 2013-01-31 Armanta, Inc. System, Method, and Computer Program Product for Accessing Manipulating Remote Datasets
US8688920B2 (en) 2007-05-14 2014-04-01 International Business Machines Corporation Computing system with guest code support of transactional memory
US8965850B2 (en) 2011-11-18 2015-02-24 Dell Software Inc. Method of and system for merging, storing and retrieving incremental backup data
US9009452B2 (en) 2007-05-14 2015-04-14 International Business Machines Corporation Computing system with transactional memory using millicode assists
CN104572881A (en) * 2014-12-23 2015-04-29 国家电网公司 Method for importing distribution network graph model based on multi-task concurrency
US9244846B2 (en) 2012-07-06 2016-01-26 International Business Machines Corporation Ensuring causality of transactional storage accesses interacting with non-transactional storage accesses
US20160140149A1 (en) * 2014-11-19 2016-05-19 Unisys Corporation Dynamic modification of database schema
WO2017063048A1 (en) * 2015-10-15 2017-04-20 Big Ip Pty Ltd A system, method, computer program and data signal for the provision of a database of information for lead generating purposes
WO2017063049A1 (en) * 2015-10-15 2017-04-20 Big Ip Pty Ltd A system, method, computer program and data signal for conducting an electronic search of a database
US20170213023A1 (en) * 2013-08-20 2017-07-27 White Cloud Security, L.L.C. Application Trust Listing Service
US20180062998A1 (en) * 2016-08-31 2018-03-01 Viavi Solutions Inc. Packet filtering using binary search trees
US9925492B2 (en) 2014-03-24 2018-03-27 Mellanox Technologies, Ltd. Remote transactional memory
US9971987B1 (en) 2014-03-25 2018-05-15 Amazon Technologies, Inc. Out of order data management
US10089339B2 (en) * 2016-07-18 2018-10-02 Arm Limited Datagram reassembly
US10095800B1 (en) 2013-12-16 2018-10-09 Amazon Technologies, Inc. Multi-tenant data store management
US10530758B2 (en) * 2015-12-18 2020-01-07 F5 Networks, Inc. Methods of collaborative hardware and software DNS acceleration and DDOS protection
US10552367B2 (en) 2017-07-26 2020-02-04 Mellanox Technologies, Ltd. Network data transactions using posted and non-posted operations
US10642780B2 (en) 2016-03-07 2020-05-05 Mellanox Technologies, Ltd. Atomic access to object pool over RDMA transport network
US11126621B1 (en) * 2017-12-31 2021-09-21 Allscripts Software, Llc Database methodology for searching encrypted data records
US11269836B2 (en) * 2019-12-17 2022-03-08 Cerner Innovation, Inc. System and method for generating multi-category searchable ternary tree data structure
US11347771B2 (en) * 2007-11-28 2022-05-31 International Business Machines Corporation Content engine asynchronous upgrade framework
US11468407B2 (en) * 2005-11-04 2022-10-11 Blackberry Limited Method and system for updating message threads
US20220335049A1 (en) * 2021-04-14 2022-10-20 Google Llc Powering Scalable Data Warehousing with Robust Query Performance
US11500849B2 (en) * 2019-12-02 2022-11-15 International Business Machines Corporation Universal streaming change data capture

Families Citing this family (225)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7272604B1 (en) * 1999-09-03 2007-09-18 Atle Hedloy Method, system and computer readable medium for addressing handling from an operating system
US6745248B1 (en) * 2000-08-02 2004-06-01 Register.Com, Inc. Method and apparatus for analyzing domain name registrations
WO2002019127A1 (en) * 2000-08-25 2002-03-07 Integrated Business Systems And Services, Inc. Transaction-based enterprise application integration (eai) and development system
US20030182447A1 (en) * 2001-05-31 2003-09-25 Schilling Frank T. Generic top-level domain re-routing system
EP2503476A1 (en) * 2001-11-01 2012-09-26 Verisign, Inc. Method and system for updating a remote database
US20040005892A1 (en) * 2002-04-18 2004-01-08 Arnaldo Mayer System and method for managing parameter exchange between telecommunications operators
CA2384185A1 (en) * 2002-04-29 2003-10-29 Ibm Canada Limited-Ibm Canada Limitee Resizable cache sensitive hash table
JP3971984B2 (en) * 2002-10-15 2007-09-05 松下電器産業株式会社 Communication apparatus and communication method
US8255361B2 (en) * 2003-01-31 2012-08-28 Oracle America, Inc. Method and system for validating differential computer system update
US7162495B2 (en) * 2003-03-31 2007-01-09 Qwest Communications Inc. Systems and methods for clearing telephone number porting assignments EN masse
US7689569B2 (en) * 2003-03-31 2010-03-30 Qwest Communications International Inc. Systems and methods for managing large data environments
US7395276B2 (en) * 2003-03-31 2008-07-01 Qwest Communications International Inc. Systems and methods for resolving telephone number discrepancies en masse
US20040193509A1 (en) * 2003-03-31 2004-09-30 Qwest Communications International Inc. Systems and methods for managing telephone number inventory
US20040193604A1 (en) * 2003-03-31 2004-09-30 Qwest Communications International Inc. Systems and methods for restricting a telephone number's availability for assignment
US7624112B2 (en) * 2003-04-03 2009-11-24 Oracle International Corporation Asynchronously storing transaction information from memory to a persistent storage
US7212817B2 (en) * 2003-04-30 2007-05-01 Hewlett-Packard Development Company, L.P. Partitioning a database keyed with variable length keys
US20040220941A1 (en) * 2003-04-30 2004-11-04 Nielson Mark R. Sorting variable length keys in a database
JP2005309550A (en) 2004-04-19 2005-11-04 Hitachi Ltd Remote copying method and system
JP4374953B2 (en) * 2003-09-09 2009-12-02 株式会社日立製作所 Data processing system
US7130975B2 (en) * 2003-06-27 2006-10-31 Hitachi, Ltd. Data processing system
JP4124348B2 (en) 2003-06-27 2008-07-23 株式会社日立製作所 Storage system
TW591441B (en) * 2003-07-28 2004-06-11 Accton Technology Corp Database system and data access method thereof
US8949304B2 (en) * 2003-08-20 2015-02-03 Apple Inc. Method and apparatus for accelerating the expiration of resource records in a local cache
US7606788B2 (en) * 2003-08-22 2009-10-20 Oracle International Corporation Method and apparatus for protecting private information within a database
CN100337236C (en) * 2003-08-26 2007-09-12 华为技术有限公司 Method for making data in front and rear databases uniform
US20050066290A1 (en) * 2003-09-16 2005-03-24 Chebolu Anil Kumar Pop-up capture
US7577995B2 (en) 2003-09-16 2009-08-18 At&T Intellectual Property I, L.P. Controlling user-access to computer applications
US7219201B2 (en) * 2003-09-17 2007-05-15 Hitachi, Ltd. Remote storage disk control device and method for controlling the same
US7702628B1 (en) * 2003-09-29 2010-04-20 Sun Microsystems, Inc. Implementing a fully dynamic lock-free hash table without dummy nodes
US7158976B1 (en) * 2003-09-30 2007-01-02 Emc Corporation Spatial domain mechanism
US20060008256A1 (en) 2003-10-01 2006-01-12 Khedouri Robert K Audio visual player apparatus and system and method of content distribution using the same
US20130097302A9 (en) * 2003-10-01 2013-04-18 Robert Khedouri Audio visual player apparatus and system and method of content distribution using the same
US7127587B2 (en) * 2003-12-11 2006-10-24 International Business Machines Corporation Intent seizes in a multi-processor environment
JP4412989B2 (en) 2003-12-15 2010-02-10 株式会社日立製作所 Data processing system having a plurality of storage systems
AU2003295304A1 (en) * 2003-12-30 2005-07-21 Telefonaktiebolaget Lm Ericsson (Publ) Method device for transmitting data packets belong to different users in a common transmittal protocol packet
JP4477370B2 (en) * 2004-01-30 2010-06-09 株式会社日立製作所 Data processing system
US7895199B2 (en) * 2004-04-20 2011-02-22 Honda Motor Co., Ltd. Method and system for modifying orders
CA2465558A1 (en) * 2004-04-22 2005-10-22 Ibm Canada Limited - Ibm Canada Limitee Framework for retrieval and display of large result sets
ATE343303T1 (en) * 2004-05-11 2006-11-15 Cit Alcatel NETWORK ELEMENT AND METHOD FOR REPRESENTING ADDRESS INFORMATION
US7483426B2 (en) * 2004-05-13 2009-01-27 Micrel, Inc. Look-up table expansion method
US8943050B2 (en) * 2004-05-21 2015-01-27 Ca, Inc. Method and apparatus for optimizing directory performance
US20060036720A1 (en) * 2004-06-14 2006-02-16 Faulk Robert L Jr Rate limiting of events
DE602004007903T2 (en) * 2004-06-22 2008-04-17 Sap Ag Data processing device of online transaction data
US7774298B2 (en) * 2004-06-30 2010-08-10 Sap Ag Method and system for data extraction from a transaction system to an analytics system
JP4519563B2 (en) 2004-08-04 2010-08-04 株式会社日立製作所 Storage system and data processing system
US7359923B2 (en) * 2004-08-20 2008-04-15 International Business Machines Corporation Online incremental deferred integrity processing and maintenance of rolled in and rolled out data
US7788282B2 (en) * 2004-09-16 2010-08-31 International Business Machines Corporation Methods and computer programs for database structure comparison
JP2006127028A (en) * 2004-10-27 2006-05-18 Hitachi Ltd Memory system and storage controller
US8356127B2 (en) * 2004-12-09 2013-01-15 Rambus Inc. Memory interface with workload adaptive encode/decode
US20060218176A1 (en) * 2005-03-24 2006-09-28 International Business Machines Corporation System, method, and service for organizing data for fast retrieval
US9547780B2 (en) * 2005-03-28 2017-01-17 Absolute Software Corporation Method for determining identification of an electronic device
US7693082B2 (en) * 2005-04-12 2010-04-06 Azimuth Systems, Inc. Latency measurement apparatus and method
CN1878164A (en) * 2005-06-08 2006-12-13 华为技术有限公司 E.164 number domain name storing and searching method
CN100395996C (en) * 2005-06-23 2008-06-18 华为技术有限公司 Information synchronizing method for network management system
US7743028B1 (en) * 2005-07-13 2010-06-22 Symantec Corporation Incremental backup of partial volumes
US8015222B2 (en) 2005-10-24 2011-09-06 Emc Corporation Virtual repository management
US8819048B1 (en) * 2005-10-24 2014-08-26 Emc Corporation Virtual repository management to provide retention management services
US20070100783A1 (en) * 2005-10-29 2007-05-03 International Business Machines Corporation Method, system, and program for determining discrepancies between database management systems
EP1974522B1 (en) * 2005-12-27 2012-10-17 France Telecom Server, client and method for managing DNSSEC requests
US20070192374A1 (en) * 2006-02-16 2007-08-16 Emc Corporation Virtual repository management to provide functionality
US8990153B2 (en) * 2006-02-07 2015-03-24 Dot Hill Systems Corporation Pull data replication model
US7761293B2 (en) * 2006-03-06 2010-07-20 Tran Bao Q Spoken mobile engine
US20070208564A1 (en) * 2006-03-06 2007-09-06 Available For Licensing Telephone based search system
US20070226264A1 (en) * 2006-03-22 2007-09-27 Gang Luo System and method for real-time materialized view maintenance
US7783850B2 (en) * 2006-03-28 2010-08-24 Dot Hill Systems Corporation Method and apparatus for master volume access during volume copy
KR100728983B1 (en) * 2006-04-14 2007-06-15 주식회사 하이닉스반도체 Phase change ram device and method of manufacturing the same
KR100728982B1 (en) * 2006-04-14 2007-06-15 주식회사 하이닉스반도체 Phase change ram device and method of manufacturing the same
US7636829B2 (en) * 2006-05-02 2009-12-22 Intel Corporation System and method for allocating and deallocating memory within transactional code
TW200743000A (en) * 2006-05-11 2007-11-16 Ming-Ta Hsu Report retrieval and presentation methods and systems
US8606926B2 (en) 2006-06-14 2013-12-10 Opendns, Inc. Recursive DNS nameserver
US8713188B2 (en) 2007-12-13 2014-04-29 Opendns, Inc. Per-request control of DNS behavior
US7575163B2 (en) 2006-07-18 2009-08-18 At&T Intellectual Property I, L.P. Interactive management of storefront purchases
US8400947B2 (en) * 2006-07-20 2013-03-19 Tekelec, Inc. Methods, systems, and computer program products for specifying a particular ENUM service type in a communications network that utilizes a plurality of different ENUM service types
US20080034053A1 (en) * 2006-08-04 2008-02-07 Apple Computer, Inc. Mail Server Clustering
US20080052270A1 (en) * 2006-08-23 2008-02-28 Telefonaktiebolaget Lm Ericsson (Publ) Hash table structure and search method
US7921075B2 (en) * 2006-09-29 2011-04-05 International Business Machines Corporation Generic sequencing service for business integration
US9274857B2 (en) * 2006-10-13 2016-03-01 International Business Machines Corporation Method and system for detecting work completion in loosely coupled components
US9514201B2 (en) * 2006-10-13 2016-12-06 International Business Machines Corporation Method and system for non-intrusive event sequencing
US7680956B2 (en) * 2006-10-24 2010-03-16 Cisco Technology, Inc. Communicating additional information in a DNS update response by requesting deletion of a specific record
US9824107B2 (en) * 2006-10-25 2017-11-21 Entit Software Llc Tracking changing state data to assist in computer network security
KR100898995B1 (en) * 2006-10-25 2009-05-21 노키아 코포레이션 Remote electronic transactions
US7593973B2 (en) * 2006-11-15 2009-09-22 Dot Hill Systems Corp. Method and apparatus for transferring snapshot data
US20080254436A1 (en) * 2006-11-16 2008-10-16 Morgia Michael A Selection Of A Consensus From A Plurality Of Ideas
US8515912B2 (en) 2010-07-15 2013-08-20 Palantir Technologies, Inc. Sharing and deconflicting data changes in a multimaster database system
US8688749B1 (en) 2011-03-31 2014-04-01 Palantir Technologies, Inc. Cross-ontology multi-master replication
US8181187B2 (en) * 2006-12-01 2012-05-15 Portico Systems Gateways having localized in-memory databases and business logic execution
US8615635B2 (en) * 2007-01-05 2013-12-24 Sony Corporation Database management methodology
US7831565B2 (en) * 2007-01-18 2010-11-09 Dot Hill Systems Corporation Deletion of rollback snapshot partition
US8751467B2 (en) * 2007-01-18 2014-06-10 Dot Hill Systems Corporation Method and apparatus for quickly accessing backing store metadata
DE102007008293B4 (en) * 2007-02-16 2010-02-25 Continental Automotive Gmbh Method and device for secure storage and secure reading of user data
JP2008226167A (en) * 2007-03-15 2008-09-25 Toshiba Corp Data distribution system and data distribution program
US7716183B2 (en) * 2007-04-11 2010-05-11 Dot Hill Systems Corporation Snapshot preserved data cloning
US7975115B2 (en) * 2007-04-11 2011-07-05 Dot Hill Systems Corporation Method and apparatus for separating snapshot preserved and write data
US8768898B1 (en) * 2007-04-26 2014-07-01 Netapp, Inc. Performing direct data manipulation on a storage device
US20090182718A1 (en) * 2007-05-08 2009-07-16 Digital River, Inc. Remote Segmentation System and Method Applied To A Segmentation Data Mart
US8856094B2 (en) * 2007-05-08 2014-10-07 Digital River, Inc. Remote segmentation system and method
US7783603B2 (en) * 2007-05-10 2010-08-24 Dot Hill Systems Corporation Backing store re-initialization method and apparatus
US8001345B2 (en) * 2007-05-10 2011-08-16 Dot Hill Systems Corporation Automatic triggering of backing store re-initialization
US8175099B2 (en) * 2007-05-14 2012-05-08 Microsoft Corporation Embedded system development platform
US7882337B2 (en) * 2007-05-19 2011-02-01 International Business Machines Corporation Method and system for efficient tentative tracing of software in multiprocessors
US8204858B2 (en) 2007-06-25 2012-06-19 Dot Hill Systems Corporation Snapshot reset method and apparatus
US8140961B2 (en) * 2007-11-21 2012-03-20 Hewlett-Packard Development Company, L.P. Automated re-ordering of columns for alignment trap reduction
US8412700B2 (en) 2008-01-11 2013-04-02 International Business Machines Corporation Database query optimization using index carryover to subset an index
US7912867B2 (en) * 2008-02-25 2011-03-22 United Parcel Services Of America, Inc. Systems and methods of profiling data for integration
US8015191B2 (en) * 2008-03-27 2011-09-06 International Business Machines Corporation Implementing dynamic processor allocation based upon data density
US8170988B2 (en) * 2008-04-17 2012-05-01 The Boeing Company System and method for synchronizing databases
US8768349B1 (en) * 2008-04-24 2014-07-01 Sprint Communications Company L.P. Real-time subscriber profile consolidation system
US9094140B2 (en) * 2008-04-28 2015-07-28 Time Warner Cable Enterprises Llc Methods and apparatus for audience research in a content-based network
DE102008022415A1 (en) * 2008-05-06 2009-11-12 TÜV Rheinland Industrie Service GmbH Absinkverhinderungsvorrichtung
US8275761B2 (en) 2008-05-15 2012-09-25 International Business Machines Corporation Determining a density of a key value referenced in a database query over a range of rows
US8140520B2 (en) * 2008-05-15 2012-03-20 International Business Machines Corporation Embedding densities in a data structure
EP2134122A1 (en) * 2008-06-13 2009-12-16 Hewlett-Packard Development Company, L.P. Controlling access to a communication network using a local device database and a shared device database
US8312033B1 (en) 2008-06-26 2012-11-13 Experian Marketing Solutions, Inc. Systems and methods for providing an integrated identifier
CN101309177B (en) * 2008-07-11 2012-01-11 中国移动通信集团云南有限公司 Network resource data management method and system
US9418005B2 (en) 2008-07-15 2016-08-16 International Business Machines Corporation Managing garbage collection in a data processing system
CN101639950B (en) * 2008-07-29 2011-07-13 中兴通讯股份有限公司 Method and device for synchronizing data in lane toll system
US8751441B2 (en) * 2008-07-31 2014-06-10 Sybase, Inc. System, method, and computer program product for determining SQL replication process
US8768933B2 (en) * 2008-08-08 2014-07-01 Kabushiki Kaisha Toshiba System and method for type-ahead address lookup employing historically weighted address placement
CN101727383B (en) * 2008-10-16 2012-07-04 上海市医疗保险信息中心 Simulation test method and system of database
US9292612B2 (en) 2009-04-22 2016-03-22 Verisign, Inc. Internet profile service
US8676989B2 (en) 2009-04-23 2014-03-18 Opendns, Inc. Robust domain name resolution
US8527945B2 (en) 2009-05-07 2013-09-03 Verisign, Inc. Method and system for integrating multiple scripts
US8037076B2 (en) * 2009-05-11 2011-10-11 Red Hat, Inc. Federated indexing from hashed primary key slices
US8510263B2 (en) * 2009-06-15 2013-08-13 Verisign, Inc. Method and system for auditing transaction data from database operations
US8739125B2 (en) * 2009-06-16 2014-05-27 Red Hat, Inc. Automated and unattended process for testing software applications
US20100333071A1 (en) * 2009-06-30 2010-12-30 International Business Machines Corporation Time Based Context Sampling of Trace Data with Support for Multiple Virtual Machines
US8977705B2 (en) * 2009-07-27 2015-03-10 Verisign, Inc. Method and system for data logging and analysis
US8327019B2 (en) 2009-08-18 2012-12-04 Verisign, Inc. Method and system for intelligent routing of requests over EPP
US8856344B2 (en) 2009-08-18 2014-10-07 Verisign, Inc. Method and system for intelligent many-to-many service routing over EPP
US8874694B2 (en) * 2009-08-18 2014-10-28 Facebook, Inc. Adaptive packaging of network resources
US20110044320A1 (en) * 2009-08-21 2011-02-24 Avaya Inc. Mechanism for fast evaluation of policies in work assignment
US8175098B2 (en) 2009-08-27 2012-05-08 Verisign, Inc. Method for optimizing a route cache
US8982882B2 (en) 2009-11-09 2015-03-17 Verisign, Inc. Method and system for application level load balancing in a publish/subscribe message architecture
US9047589B2 (en) 2009-10-30 2015-06-02 Verisign, Inc. Hierarchical publish and subscribe system
US9235829B2 (en) 2009-10-30 2016-01-12 Verisign, Inc. Hierarchical publish/subscribe system
US9762405B2 (en) 2009-10-30 2017-09-12 Verisign, Inc. Hierarchical publish/subscribe system
US9269080B2 (en) 2009-10-30 2016-02-23 Verisign, Inc. Hierarchical publish/subscribe system
US9569753B2 (en) 2009-10-30 2017-02-14 Verisign, Inc. Hierarchical publish/subscribe system performed by multiple central relays
CN102096676B (en) * 2009-12-11 2014-04-09 中国移动通信集团公司 Data updating and query control method and system
US9176783B2 (en) 2010-05-24 2015-11-03 International Business Machines Corporation Idle transitions sampling with execution context
US8843684B2 (en) 2010-06-11 2014-09-23 International Business Machines Corporation Performing call stack sampling by setting affinity of target thread to a current process to prevent target thread migration
US8799872B2 (en) 2010-06-27 2014-08-05 International Business Machines Corporation Sampling with sample pacing
FR2964213B1 (en) * 2010-09-01 2013-04-26 Evidian IDENTITY DIRECTORY AND METHOD FOR UPDATING AN IDENTITY DIRECTORY
US8489724B2 (en) * 2010-09-14 2013-07-16 Cdnetworks Co., Ltd. CNAME-based round-trip time measurement in a content delivery network
US20120089646A1 (en) * 2010-10-08 2012-04-12 Jain Rohit N Processing change data
US8332433B2 (en) 2010-10-18 2012-12-11 Verisign, Inc. Database synchronization and validation
US8799904B2 (en) 2011-01-21 2014-08-05 International Business Machines Corporation Scalable system call stack sampling
JP5652281B2 (en) * 2011-03-18 2015-01-14 富士通株式会社 Business processing server, business processing method, and business processing program
RU2480819C2 (en) * 2011-06-28 2013-04-27 Закрытое акционерное общество "Лаборатория Касперского" Method of optimising work with linked lists
US8549579B2 (en) * 2011-07-06 2013-10-01 International Business Machines Corporation Dynamic data-protection policies within a request-reply message queuing environment
CN103765423B (en) * 2011-08-03 2017-02-15 亚马逊技术有限公司 Gathering transaction data associated with locally stored data files
US8782352B2 (en) * 2011-09-29 2014-07-15 Oracle International Corporation System and method for supporting a self-tuning locking mechanism in a transactional middleware machine environment
IL216056B (en) * 2011-10-31 2018-04-30 Verint Systems Ltd Combined database system and method
US9679009B2 (en) * 2011-11-17 2017-06-13 Sap Se Component independent process integration message search
US8782004B2 (en) 2012-01-23 2014-07-15 Palantir Technologies, Inc. Cross-ACL multi-master replication
KR101375794B1 (en) 2012-01-27 2014-03-18 네이버비즈니스플랫폼 주식회사 Method and device for improving performance of database
JP2013182588A (en) * 2012-03-05 2013-09-12 Oki Electric Ind Co Ltd Synchronization method for back-up data in back-up system
US9065855B2 (en) * 2012-06-29 2015-06-23 Verisign, Inc. Systems and methods for automatically providing Whois service to top level domains
US9369395B2 (en) * 2012-08-31 2016-06-14 At&T Intellectual Property I, L.P. Methods and apparatus to negotiate flow control for a communication session
US20140101150A1 (en) * 2012-10-05 2014-04-10 Axis Semiconductor, Inc. Efficient high performance scalable pipelined searching method using variable stride multibit tries
US9081975B2 (en) 2012-10-22 2015-07-14 Palantir Technologies, Inc. Sharing information between nexuses that use different classification schemes for information access control
US9501761B2 (en) 2012-11-05 2016-11-22 Palantir Technologies, Inc. System and method for sharing investigation results
US9654541B1 (en) 2012-11-12 2017-05-16 Consumerinfo.Com, Inc. Aggregating user web browsing data
US9613165B2 (en) 2012-11-13 2017-04-04 Oracle International Corporation Autocomplete searching with security filtering and ranking
US9916621B1 (en) 2012-11-30 2018-03-13 Consumerinfo.Com, Inc. Presentation of credit score factors
CN103929763A (en) * 2013-01-11 2014-07-16 阿尔卡特朗讯 Method for comparison and reconstruction of geographic redundancy database
US10102570B1 (en) 2013-03-14 2018-10-16 Consumerinfo.Com, Inc. Account vulnerability alerts
WO2014195804A2 (en) * 2013-06-04 2014-12-11 Marvell World Trade Ltd. Internal search engine architecture
US8886601B1 (en) * 2013-06-20 2014-11-11 Palantir Technologies, Inc. System and method for incrementally replicating investigative analysis data
GB2517932B (en) * 2013-09-04 2021-05-05 1Spatial Group Ltd Modification and validation of spatial data
US9922043B1 (en) * 2013-10-28 2018-03-20 Pivotal Software, Inc. Data management platform
US9569070B1 (en) 2013-11-11 2017-02-14 Palantir Technologies, Inc. Assisting in deconflicting concurrency conflicts
US9477737B1 (en) * 2013-11-20 2016-10-25 Consumerinfo.Com, Inc. Systems and user interfaces for dynamic access of multiple remote databases and synchronization of data based on user rules
US9009827B1 (en) 2014-02-20 2015-04-14 Palantir Technologies Inc. Security sharing system
US9405655B2 (en) * 2014-03-19 2016-08-02 Dell Products, Lp System and method for running a validation process for an information handling system during a factory process
US9910883B2 (en) 2014-04-07 2018-03-06 International Business Machines Corporation Enhanced batch updates on records and related records system and method
WO2015162705A1 (en) * 2014-04-22 2015-10-29 株式会社日立製作所 Shared resource update device and shared resource update method
CN106471486B (en) 2014-04-30 2019-05-17 甲骨文国际公司 System and method for supporting adaptive self-adjusting locking mechanism in transaction middleware machine environment
US9778949B2 (en) * 2014-05-05 2017-10-03 Google Inc. Thread waiting in a multithreaded processor architecture
US9021260B1 (en) 2014-07-03 2015-04-28 Palantir Technologies Inc. Malware data item analysis
US10572496B1 (en) 2014-07-03 2020-02-25 Palantir Technologies Inc. Distributed workflow system and database with access controls for city resiliency
US9785773B2 (en) 2014-07-03 2017-10-10 Palantir Technologies Inc. Malware data item analysis
US9699023B2 (en) * 2014-07-18 2017-07-04 Fujitsu Limited Initializing a network interface based on stored data
US10204134B2 (en) 2014-08-14 2019-02-12 International Business Machines Corporation Automatic detection of problems in a large-scale multi-record update system and method
US9734016B2 (en) * 2015-02-24 2017-08-15 Red Hat Israel, Ltd. Secure live virtual machine guest based snapshot recovery
US20160378824A1 (en) * 2015-06-24 2016-12-29 Futurewei Technologies, Inc. Systems and Methods for Parallelizing Hash-based Operators in SMP Databases
US20160378812A1 (en) * 2015-06-25 2016-12-29 International Business Machines Corporation Reduction of bind breaks
CN104965923B (en) * 2015-07-08 2018-09-28 安徽兆尹信息科技股份有限公司 Cloud computing application platform construction method for generating cash flow statements
IL242218B (en) 2015-10-22 2020-11-30 Verint Systems Ltd System and method for maintaining a dynamic dictionary
IL242219B (en) * 2015-10-22 2020-11-30 Verint Systems Ltd System and method for keyword searching using both static and dynamic dictionaries
US20170177656A1 (en) * 2015-12-18 2017-06-22 Wal-Mart Stores, Inc. Systems and methods for resolving data discrepancy
CN105574407B (en) * 2015-12-28 2018-09-25 无锡天脉聚源传媒科技有限公司 Shared processing method and apparatus
US10621198B1 (en) 2015-12-30 2020-04-14 Palantir Technologies Inc. System and method for secure database replication
RU2623882C1 (en) * 2016-02-18 2017-06-29 Акционерное общество "Лаборатория Касперского" Method for searching an input string in a search tree with indexing of search tree nodes
US10353888B1 (en) * 2016-03-03 2019-07-16 Amdocs Development Limited Event processing system, method, and computer program
WO2017191495A1 (en) * 2016-05-05 2017-11-09 Askarov Bauyrzhan New domain name system and usage thereof
CN106250487B (en) * 2016-07-29 2020-07-03 新华三技术有限公司 Database concurrency control method and device
US10382562B2 (en) * 2016-11-04 2019-08-13 A10 Networks, Inc. Verification of server certificates using hash codes
US10262053B2 (en) 2016-12-22 2019-04-16 Palantir Technologies Inc. Systems and methods for data replication synchronization
TWI643146B (en) * 2016-12-22 2018-12-01 經貿聯網科技股份有限公司 Method for dynamically updating financial data and processing system using the same, and method for dynamically adjusting power configuration and processing system using the same
CN106790544A (en) * 2016-12-22 2017-05-31 郑州云海信息技术有限公司 Method and device for reducing the amount of communication data between a remote client and a data center
CN111107175B (en) * 2017-03-31 2023-08-08 贵州白山云科技股份有限公司 Method and device for constructing DNS response message
GB2561176A (en) * 2017-04-03 2018-10-10 Edinburgh Napier Univ System and method for management of confidential data
US10068002B1 (en) 2017-04-25 2018-09-04 Palantir Technologies Inc. Systems and methods for adaptive data replication
US10430062B2 (en) 2017-05-30 2019-10-01 Palantir Technologies Inc. Systems and methods for geo-fenced dynamic dissemination
US11030494B1 (en) 2017-06-15 2021-06-08 Palantir Technologies Inc. Systems and methods for managing data spills
US10380196B2 (en) 2017-12-08 2019-08-13 Palantir Technologies Inc. Systems and methods for using linked documents
US10915542B1 (en) 2017-12-19 2021-02-09 Palantir Technologies Inc. Contextual modification of data sharing constraints in a distributed database system that uses a multi-master replication scheme
US20190213271A1 (en) * 2018-01-09 2019-07-11 Unisys Corporation Method and system for data exchange critical event notification
KR102034679B1 (en) 2018-01-17 2019-10-23 (주)비아이매트릭스 A data input/output system using grid interface
CN110083596A (en) * 2018-05-16 2019-08-02 陈刚 Method for data history tracking and data change history tracking
CN108876143A (en) * 2018-06-13 2018-11-23 亳州市药通信息咨询有限公司 Chinese medicine price index system
CN110798332B (en) 2018-08-03 2022-09-06 Emc Ip控股有限公司 Method and system for searching directory access groups
US20200074541A1 (en) 2018-09-05 2020-03-05 Consumerinfo.Com, Inc. Generation of data structures based on categories of matched data items
US11238656B1 (en) 2019-02-22 2022-02-01 Consumerinfo.Com, Inc. System and method for an augmented reality experience via an artificial intelligence bot
CN111831639B (en) * 2019-04-19 2024-01-30 北京车和家信息技术有限公司 Globally unique ID generation method and device, and vehicle management system
US11303606B1 (en) 2019-06-03 2022-04-12 Amazon Technologies, Inc. Hashing name resolution requests according to an identified routing policy
CA3148975C (en) * 2019-07-30 2023-04-25 Falkonry Inc. Fluid and resolution-friendly view of large volumes of time series data
CN110990377B (en) * 2019-11-21 2023-08-22 上海达梦数据库有限公司 Data loading method, device, server and storage medium
CN111240762B (en) * 2020-01-10 2021-11-23 珠海格力电器股份有限公司 Thread management method, storage medium and electronic device
CN113966591B (en) * 2020-02-24 2023-09-19 森斯通株式会社 User setting information authentication method, recording medium, and apparatus using virtual code
WO2021172875A1 (en) * 2020-02-24 2021-09-02 주식회사 센스톤 Method, program, and device for authenticating user setting information by using virtual code
WO2022173423A1 (en) * 2021-02-09 2022-08-18 Micro Focus Llc System for retrieval of large datasets in cloud environments
CN113806342A (en) * 2021-07-21 2021-12-17 厦门莲隐科技有限公司 System for extracting underlying data from the Ethereum blockchain
US20240045753A1 (en) * 2022-08-02 2024-02-08 Nxp B.V. Dynamic Configuration Of Reaction Policies In Virtualized Fault Management System

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4412285A (en) * 1981-04-01 1983-10-25 Teradata Corporation Multiprocessor intercommunication system and method
US4947366A (en) * 1987-10-02 1990-08-07 Advanced Micro Devices, Inc. Input/output controller incorporating address mapped input/output windows and read ahead/write behind capabilities
US5089952A (en) * 1988-10-07 1992-02-18 International Business Machines Corporation Method for allowing weak searchers to access pointer-connected data structures without locking
US5260942A (en) * 1992-03-06 1993-11-09 International Business Machines Corporation Method and apparatus for batching the receipt of data packets
US5283894A (en) * 1986-04-11 1994-02-01 Deran Roger L Lockless concurrent B-tree index meta access method for cached nodes
US5287496A (en) * 1991-02-25 1994-02-15 International Business Machines Corporation Dynamic, finite versioning for concurrent transaction and query processing
US5301287A (en) * 1990-03-12 1994-04-05 Hewlett-Packard Company User scheduled direct memory access using virtual addresses
US5410682A (en) * 1990-06-29 1995-04-25 Digital Equipment Corporation In-register data manipulation for unaligned byte write using data shift in reduced instruction set processor
US5920886A (en) * 1997-03-14 1999-07-06 Music Semiconductor Corporation Accelerated hierarchical address filtering and translation using binary and ternary CAMs
US5924098A (en) * 1997-06-30 1999-07-13 Sun Microsystems, Inc. Method and apparatus for managing a linked-list data structure
US6029170A (en) * 1997-11-25 2000-02-22 International Business Machines Corporation Hybrid tree array data structure and method
US6044448A (en) * 1997-12-16 2000-03-28 S3 Incorporated Processor having multiple datapath instances
US6047323A (en) * 1995-10-19 2000-04-04 Hewlett-Packard Company Creation and migration of distributed streams in clusters of networked computers
US6188428B1 (en) * 1992-02-11 2001-02-13 Mark Koz Transcoding video file server and methods for its use
US6237019B1 (en) * 1998-03-18 2001-05-22 International Business Machines Corporation Method and apparatus for performing a semaphore operation
US6256256B1 (en) * 1998-01-30 2001-07-03 Silicon Aquarius, Inc. Dual port random access memories and systems using the same
US20010025320A1 (en) * 1999-02-26 2001-09-27 Seng Ching Hong Multi-language domain name service
US6304259B1 (en) * 1998-02-09 2001-10-16 International Business Machines Corporation Computer system, method and user interface components for abstracting and accessing a body of knowledge
US6330568B1 (en) * 1996-11-13 2001-12-11 Pumatech, Inc. Synchronization of databases
US6360220B1 (en) * 1998-08-04 2002-03-19 Microsoft Corporation Lock-free methods and systems for accessing and storing information in an indexed computer data structure having modifiable entries
US6449657B2 (en) * 1999-08-06 2002-09-10 Namezero.Com, Inc. Internet hosting system
US6484185B1 (en) * 1999-04-05 2002-11-19 Microsoft Corporation Atomic operations on data structures
US6868414B2 (en) * 2001-01-03 2005-03-15 International Business Machines Corporation Technique for serializing data structure updates and retrievals without requiring searchers to use locks

Family Cites Families (117)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB8704882D0 (en) * 1987-03-03 1987-04-08 Hewlett Packard Co Secure messaging systems
US5175849A (en) * 1988-07-28 1992-12-29 Amdahl Corporation Capturing data of a database system
US5161223A (en) 1989-10-23 1992-11-03 International Business Machines Corporation Resumeable batch query for processing time consuming queries in an object oriented database management system
US5893117A (en) * 1990-08-17 1999-04-06 Texas Instruments Incorporated Time-stamped database transaction and version management system
US5369757A (en) * 1991-06-18 1994-11-29 Digital Equipment Corporation Recovery logging in the presence of snapshot files by ordering of buffer pool flushing
US5749079A (en) * 1992-03-04 1998-05-05 Singapore Computer Systems Limited End user query facility including a query connectivity driver
EP0594196B1 (en) * 1992-10-22 1999-03-31 Cabletron Systems, Inc. Address lookup in packet data communications link, using hashing and content-addressable memory
US5684990A (en) * 1995-01-11 1997-11-04 Puma Technology, Inc. Synchronization of disparate databases
US5729735A (en) * 1995-02-08 1998-03-17 Meyering; Samuel C. Remote database file synchronizer
US5615337A (en) * 1995-04-06 1997-03-25 International Business Machines Corporation System and method for efficiently processing diverse result sets returned by a stored procedure
US5974409A (en) * 1995-08-23 1999-10-26 Microsoft Corporation System and method for locating information in an on-line network
US5758150A (en) * 1995-10-06 1998-05-26 Tele-Communications, Inc. System and method for database synchronization
US5875443A (en) * 1996-01-30 1999-02-23 Sun Microsystems, Inc. Internet-based spelling checker dictionary system with automatic updating
US5852715A (en) * 1996-03-19 1998-12-22 Emc Corporation System for currently updating database by one host and reading the database by different host for the purpose of implementing decision support functions
US5765028A (en) * 1996-05-07 1998-06-09 Ncr Corporation Method and apparatus for providing neural intelligence to a mail query agent in an online analytical processing system
US5787452A (en) * 1996-05-21 1998-07-28 Sybase, Inc. Client/server database system with methods for multi-threaded data processing in a heterogeneous language environment
US6154777A (en) * 1996-07-01 2000-11-28 Sun Microsystems, Inc. System for context-dependent name resolution
US5995980A (en) * 1996-07-23 1999-11-30 Olson; Jack E. System and method for database update replication
US5926816A (en) * 1996-10-09 1999-07-20 Oracle Corporation Database Synchronizer
US6044381A (en) * 1997-09-11 2000-03-28 Puma Technology, Inc. Using distributed history files in synchronizing databases
WO1998038583A1 (en) * 1997-02-26 1998-09-03 Siebel Systems, Inc. Method of determining visibility to a remote database client of a plurality of database transactions having variable visibility strengths
US5937414A (en) * 1997-02-28 1999-08-10 Oracle Corporation Method and apparatus for providing database system replication in a mixed propagation environment
US6862602B2 (en) * 1997-03-07 2005-03-01 Apple Computer, Inc. System and method for rapidly identifying the existence and location of an item in a file
KR19990001093A (en) * 1997-06-12 1999-01-15 윤종용 Operating program and database installation method of exchange system
US6098108A (en) * 1997-07-02 2000-08-01 Sitara Networks, Inc. Distributed directory for enhanced network communication
US6148070A (en) * 1997-07-02 2000-11-14 Ameritech Corporation Method, system, and database for providing a telecommunication service
US5924096A (en) * 1997-10-15 1999-07-13 Novell, Inc. Distributed database using indices into tags to track events according to type, update cache, create virtual update log on demand
US6058389A (en) * 1997-10-31 2000-05-02 Oracle Corporation Apparatus and method for message queuing in a database system
US6061678A (en) * 1997-10-31 2000-05-09 Oracle Corporation Approach for managing access to large objects in database systems using large object indexes
US6304881B1 (en) * 1998-03-03 2001-10-16 Pumatech, Inc. Remote data access and synchronization
US6185567B1 (en) 1998-05-29 2001-02-06 The Trustees Of The University Of Pennsylvania Authenticated access to internet based research and data services
US6131122A (en) * 1998-06-01 2000-10-10 Nortel Networks Corporation Programmable internet automation
WO1999063441A1 (en) * 1998-06-05 1999-12-09 Mylex Corporation Snapshot backup strategy
US6434144B1 (en) * 1998-07-06 2002-08-13 Aleksey Romanov Multi-level table lookup
US6092178A (en) * 1998-09-03 2000-07-18 Sun Microsystems, Inc. System for responding to a resource request
US6411966B1 (en) * 1998-09-21 2002-06-25 Microsoft Corporation Method and computer readable medium for DNS dynamic update to minimize client-server and incremental zone transfer traffic
US6243715B1 (en) * 1998-11-09 2001-06-05 Lucent Technologies Inc. Replicated database synchronization method whereby primary database is selected, queries to secondary databases are referred to primary database, primary database is updated, then secondary databases are updated
EP1142227A2 (en) * 1998-12-23 2001-10-10 Nokia Wireless Routers, Inc. A unified routing scheme for ad-hoc internetworking
US6516327B1 (en) * 1998-12-24 2003-02-04 International Business Machines Corporation System and method for synchronizing data in multiple databases
US6304924B1 (en) * 1999-02-02 2001-10-16 International Business Machines Corporation Two lock-free, constant-space, multiple-(impure)-reader, single-writer structures
US6553392B1 (en) * 1999-02-04 2003-04-22 Hewlett-Packard Development Company, L.P. System and method for purging database update image files after completion of associated transactions
FI106493B (en) * 1999-02-09 2001-02-15 Nokia Mobile Phones Ltd A method and system for reliably transmitting packet data
US6721334B1 (en) * 1999-02-18 2004-04-13 3Com Corporation Method and apparatus for packet aggregation in packet-based network
EP1157524B1 (en) * 1999-03-03 2007-12-19 Ultradns, Inc. Scalable and efficient domain name resolution
US6745177B2 (en) * 1999-04-09 2004-06-01 Metro One Telecommunications, Inc. Method and system for retrieving data from multiple data sources using a search routing database
US6938057B2 (en) * 1999-05-21 2005-08-30 International Business Machines Corporation Method and apparatus for networked backup storage
US6529504B1 (en) * 1999-06-02 2003-03-04 Sprint Communications Company, L.P. Telecommunications service control point interface
WO2001011443A2 (en) * 1999-08-06 2001-02-15 Namezero.Com, Inc. Internet hosting system
US6785704B1 (en) * 1999-12-20 2004-08-31 Fastforward Networks Content distribution system for operation over an internetwork including content peering arrangements
US6792458B1 (en) * 1999-10-04 2004-09-14 Urchin Software Corporation System and method for monitoring and analyzing internet traffic
US6560614B1 (en) * 1999-11-12 2003-05-06 Xosoft Inc. Nonintrusive update of files
KR100751622B1 (en) 1999-11-26 2007-08-22 네테카 인코포레이티드 Network address server
US6980990B2 (en) * 1999-12-01 2005-12-27 Barry Fellman Internet domain name registration system
US6434681B1 (en) * 1999-12-02 2002-08-13 Emc Corporation Snapshot copy facility for a data storage system permitting continued host read/write access
US6625621B2 (en) * 2000-01-04 2003-09-23 Starfish Software, Inc. System and methods for a fast and scalable synchronization server
US6677964B1 (en) * 2000-02-18 2004-01-13 Xsides Corporation Method and system for controlling a complementary user interface on a display surface
US6789073B1 (en) * 2000-02-22 2004-09-07 Harvey Lunenfeld Client-server multitasking
JP2001236257A (en) 2000-02-24 2001-08-31 Fujitsu Ltd Information storage device and method for updating subscriber's data and mobile communication system
US6615223B1 (en) * 2000-02-29 2003-09-02 Oracle International Corporation Method and system for data replication
US6643669B1 (en) * 2000-03-14 2003-11-04 Telefonaktiebolaget Lm Ericsson (Publ) Method for optimization of synchronization between a client's database and a server database
JP2001290689A (en) 2000-04-07 2001-10-19 Hitachi Ltd Data verification method for replication among multiple databases
US6976090B2 (en) * 2000-04-20 2005-12-13 Actona Technologies Ltd. Differentiated content and application delivery via internet
US6725218B1 (en) * 2000-04-28 2004-04-20 Cisco Technology, Inc. Computerized database system and method
US7734815B2 (en) 2006-09-18 2010-06-08 Akamai Technologies, Inc. Global load balancing across mirrored data centers
US7165116B2 (en) * 2000-07-10 2007-01-16 Netli, Inc. Method for network discovery using name servers
US7725602B2 (en) 2000-07-19 2010-05-25 Akamai Technologies, Inc. Domain name resolution using a distributed DNS network
US20020029226A1 (en) * 2000-09-05 2002-03-07 Gang Li Method for combining data with maps
FR2813986B1 (en) * 2000-09-08 2002-11-29 Eric Vincenot SOUND WAVE GUIDE DEVICE
JP2002108836A (en) * 2000-09-29 2002-04-12 Hitachi Ltd Processor system
US6785675B1 (en) * 2000-11-13 2004-08-31 Convey Development, Inc. Aggregation of resource requests from multiple individual requestors
US6636854B2 (en) * 2000-12-07 2003-10-21 International Business Machines Corporation Method and system for augmenting web-indexed search engine results with peer-to-peer search results
US6728736B2 (en) * 2001-03-14 2004-04-27 Storage Technology Corporation System and method for synchronizing a data copy using an accumulation remote copy trio
US6691124B2 (en) * 2001-04-04 2004-02-10 Cypress Semiconductor Corp. Compact data structures for pipelined message forwarding lookups
GB2374951B (en) * 2001-04-24 2005-06-15 Discreet Logic Inc Asynchronous database updates
US7171415B2 (en) * 2001-05-04 2007-01-30 Sun Microsystems, Inc. Distributed information discovery through searching selected registered information providers
US20030182447A1 (en) * 2001-05-31 2003-09-25 Schilling Frank T. Generic top-level domain re-routing system
US6744652B2 (en) * 2001-08-22 2004-06-01 Netlogic Microsystems, Inc. Concurrent searching of different tables within a content addressable memory
EP2503476A1 (en) 2001-11-01 2012-09-26 Verisign, Inc. Method and system for updating a remote database
US20030208511A1 (en) * 2002-05-02 2003-11-06 Earl Leroy D. Database replication system
US20050105513A1 (en) 2002-10-27 2005-05-19 Alan Sullivan Systems and methods for direction of communication traffic
US20050027882A1 (en) 2003-05-05 2005-02-03 Sullivan Alan T. Systems and methods for direction of communication traffic
US7310686B2 (en) 2002-10-27 2007-12-18 Paxfire, Inc. Apparatus and method for transparent selection of an Internet server based on geographic location of a user
US7761570B1 (en) 2003-06-26 2010-07-20 Nominum, Inc. Extensible domain name service
US7769826B2 (en) 2003-06-26 2010-08-03 Nominum, Inc. Systems and methods of providing DNS services using separate answer and referral caches
US7761678B1 (en) 2004-09-29 2010-07-20 Verisign, Inc. Method and apparatus for an improved file repository
US7685270B1 (en) 2005-03-31 2010-03-23 Amazon Technologies, Inc. Method and apparatus for measuring latency in web services
AU2006251563A1 (en) 2005-05-24 2006-11-30 Paxfire, Inc. Enhanced features for direction of communication traffic
US7546368B2 (en) 2005-06-01 2009-06-09 Neustar, Inc. Systems and methods for isolating local performance variation in website monitoring
US7477575B2 (en) 2005-09-23 2009-01-13 Verisign, Inc. Redundant timer system and method
CA2637413A1 (en) 2006-01-20 2007-07-26 Paxfire, Inc. Systems and methods for discerning and controlling communication traffic
US8713188B2 (en) 2007-12-13 2014-04-29 Opendns, Inc. Per-request control of DNS behavior
US8606926B2 (en) 2006-06-14 2013-12-10 Opendns, Inc. Recursive DNS nameserver
US20080059152A1 (en) 2006-08-17 2008-03-06 Neustar, Inc. System and method for handling jargon in communication systems
EP2054830A2 (en) 2006-08-17 2009-05-06 Neustar, Inc. System and method for managing domain policy for interconnected communication networks
US8234379B2 (en) 2006-09-14 2012-07-31 Afilias Limited System and method for facilitating distribution of limited resources
US20100030897A1 (en) 2006-12-20 2010-02-04 Rob Stradling Method and System for Installing a Root Certificate on a Computer With a Root Update Mechanism
US7694016B2 (en) 2007-02-07 2010-04-06 Nominum, Inc. Composite DNS zones
EP2201457A2 (en) 2007-10-18 2010-06-30 Neustar, Inc. System and method for sharing web performance monitoring data
US20090235359A1 (en) 2008-03-12 2009-09-17 Comodo Ca Limited Method and system for performing security and vulnerability scans on devices behind a network security device
US7925782B2 (en) 2008-06-30 2011-04-12 Amazon Technologies, Inc. Request routing using network computing components
US7991737B2 (en) * 2008-09-04 2011-08-02 Microsoft Corporation Synchronization of records of a table using bookmarks
US20090282038A1 (en) 2008-09-23 2009-11-12 Michael Subotin Probabilistic Association Based Method and System for Determining Topical Relatedness of Domain Names
US9172713B2 (en) 2008-09-24 2015-10-27 Neustar, Inc. Secure domain name system
US7930393B1 (en) 2008-09-29 2011-04-19 Amazon Technologies, Inc. Monitoring domain allocation performance
US8521908B2 (en) 2009-04-07 2013-08-27 Verisign, Inc. Existent domain name DNS traffic capture and analysis
US9292612B2 (en) 2009-04-22 2016-03-22 Verisign, Inc. Internet profile service
US8676989B2 (en) 2009-04-23 2014-03-18 Opendns, Inc. Robust domain name resolution
US8527945B2 (en) 2009-05-07 2013-09-03 Verisign, Inc. Method and system for integrating multiple scripts
US8510263B2 (en) 2009-06-15 2013-08-13 Verisign, Inc. Method and system for auditing transaction data from database operations
US8977705B2 (en) 2009-07-27 2015-03-10 Verisign, Inc. Method and system for data logging and analysis
US8380870B2 (en) 2009-08-05 2013-02-19 Verisign, Inc. Method and system for filtering of network traffic
US20110035497A1 (en) 2009-08-05 2011-02-10 Dynamic Network Services, Inc. System and method for providing global server load balancing
US8327019B2 (en) 2009-08-18 2012-12-04 Verisign, Inc. Method and system for intelligent routing of requests over EPP
US8175098B2 (en) 2009-08-27 2012-05-08 Verisign, Inc. Method for optimizing a route cache
US9047589B2 (en) 2009-10-30 2015-06-02 Verisign, Inc. Hierarchical publish and subscribe system
US8982882B2 (en) 2009-11-09 2015-03-17 Verisign, Inc. Method and system for application level load balancing in a publish/subscribe message architecture
US9286369B2 (en) 2009-12-30 2016-03-15 Symantec Corporation Data replication across enterprise boundaries

Cited By (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070088724A1 (en) * 2003-08-21 2007-04-19 Microsoft Corporation Systems and methods for extensions and inheritance for units of information manageable by a hardware/software interface system
US20050044089A1 (en) * 2003-08-21 2005-02-24 Microsoft Corporation Systems and methods for interfacing application programs with an item-based storage platform
US8238696B2 (en) 2003-08-21 2012-08-07 Microsoft Corporation Systems and methods for the implementation of a digital images schema for organizing units of information manageable by a hardware/software interface system
US8166101B2 (en) 2003-08-21 2012-04-24 Microsoft Corporation Systems and methods for the implementation of synchronization schemas for units of information manageable by a hardware/software interface system
US8131739B2 (en) 2003-08-21 2012-03-06 Microsoft Corporation Systems and methods for interfacing application programs with an item-based storage platform
US7917534B2 (en) 2003-08-21 2011-03-29 Microsoft Corporation Systems and methods for extensions and inheritance for units of information manageable by a hardware/software interface system
US7555634B1 (en) 2004-04-22 2009-06-30 Sun Microsystems, Inc. Multiple data hazards detection and resolution unit
US20060195456A1 (en) * 2005-02-28 2006-08-31 Microsoft Corporation Change notification query multiplexing
US7805422B2 (en) * 2005-02-28 2010-09-28 Microsoft Corporation Change notification query multiplexing
US20230034472A1 (en) * 2005-11-04 2023-02-02 Blackberry Limited Method and system for updating message threads
US11468407B2 (en) * 2005-11-04 2022-10-11 Blackberry Limited Method and system for updating message threads
US8099726B2 (en) 2005-12-07 2012-01-17 Microsoft Corporation Implementing strong atomicity in software transactional memory
US20070169030A1 (en) * 2005-12-07 2007-07-19 Microsoft Corporation Compiler support for optimizing decomposed software transactional memory operations
US20070136290A1 (en) * 2005-12-07 2007-06-14 Microsoft Corporation Removal of unnecessary read-to-update upgrades in software transactional memory
US7810085B2 (en) 2005-12-07 2010-10-05 Microsoft Corporation Removal of unnecessary read-to-update upgrades in software transactional memory
US7861237B2 (en) 2005-12-07 2010-12-28 Microsoft Corporation Reducing unnecessary software transactional memory operations on newly-allocated data
US8799882B2 (en) 2005-12-07 2014-08-05 Microsoft Corporation Compiler support for optimizing decomposed software transactional memory operations
US8266609B2 (en) * 2005-12-07 2012-09-11 Microsoft Corporation Efficient placement of software transactional memory operations around procedure calls
US20070143276A1 (en) * 2005-12-07 2007-06-21 Microsoft Corporation Implementing strong atomicity in software transactional memory
US20070169031A1 (en) * 2005-12-07 2007-07-19 Microsoft Corporation Efficient placement of software transactional memory operations around procedure calls
US20080222219A1 (en) * 2007-03-05 2008-09-11 Appassure Software, Inc. Method and apparatus for efficiently merging, storing and retrieving incremental data
US9690790B2 (en) 2007-03-05 2017-06-27 Dell Software Inc. Method and apparatus for efficiently merging, storing and retrieving incremental data
US20080288730A1 (en) * 2007-05-14 2008-11-20 International Business Machines Corporation Transactional Memory System Which Employs Thread Assists Using Address History Tables
US8095750B2 (en) 2007-05-14 2012-01-10 International Business Machines Corporation Transactional memory system with fast processing of common conflicts
US8117403B2 (en) 2007-05-14 2012-02-14 International Business Machines Corporation Transactional memory system which employs thread assists using address history tables
US20090113443A1 (en) * 2007-05-14 2009-04-30 International Business Machines Corporation Transactional Memory Computing System with Support for Chained Transactions
US20080288727A1 (en) * 2007-05-14 2008-11-20 International Business Machines Corporation Computing System with Optimized Support for Transactional Memory
US8321637B2 (en) 2007-05-14 2012-11-27 International Business Machines Corporation Computing system with optimized support for transactional memory
US8688920B2 (en) 2007-05-14 2014-04-01 International Business Machines Corporation Computing system with guest code support of transactional memory
US9104427B2 (en) 2007-05-14 2015-08-11 International Business Machines Corporation Computing system with transactional memory using millicode assists
US8095741B2 (en) 2007-05-14 2012-01-10 International Business Machines Corporation Transactional memory computing system with support for chained transactions
US9009452B2 (en) 2007-05-14 2015-04-14 International Business Machines Corporation Computing system with transactional memory using millicode assists
US20080288726A1 (en) * 2007-05-14 2008-11-20 International Business Machines Corporation Transactional Memory System with Fast Processing of Common Conflicts
US11347771B2 (en) * 2007-11-28 2022-05-31 International Business Machines Corporation Content engine asynchronous upgrade framework
US8655920B2 (en) * 2008-08-07 2014-02-18 Armanta, Inc. Report updating based on a restructured report slice
US20130031050A1 (en) * 2008-08-07 2013-01-31 Armanta, Inc. System, Method, and Computer Program Product for Accessing and Manipulating Remote Datasets
US8738862B2 (en) 2009-08-31 2014-05-27 International Business Machines Corporation Transactional memory system with efficient cache support
US8667231B2 (en) 2009-08-31 2014-03-04 International Business Machines Corporation Transactional memory system with efficient cache support
US20110055483A1 (en) * 2009-08-31 2011-03-03 International Business Machines Corporation Transactional memory system with efficient cache support
US8566524B2 (en) 2009-08-31 2013-10-22 International Business Machines Corporation Transactional memory system with efficient cache support
US20110178984A1 (en) * 2010-01-18 2011-07-21 Microsoft Corporation Replication protocol for database systems
US20110191299A1 (en) * 2010-02-01 2011-08-04 Microsoft Corporation Logical data backup and rollback using incremental capture in a distributed database
US8825601B2 (en) 2010-02-01 2014-09-02 Microsoft Corporation Logical data backup and rollback using incremental capture in a distributed database
US8965850B2 (en) 2011-11-18 2015-02-24 Dell Software Inc. Method of and system for merging, storing and retrieving incremental backup data
US9244846B2 (en) 2012-07-06 2016-01-26 International Business Machines Corporation Ensuring causality of transactional storage accesses interacting with non-transactional storage accesses
US20170213023A1 (en) * 2013-08-20 2017-07-27 White Cloud Security, L.L.C. Application Trust Listing Service
US10095800B1 (en) 2013-12-16 2018-10-09 Amazon Technologies, Inc. Multi-tenant data store management
US9925492B2 (en) 2014-03-24 2018-03-27 Mellanox Technologies, Ltd. Remote transactional memory
US9971987B1 (en) 2014-03-25 2018-05-15 Amazon Technologies, Inc. Out of order data management
US20160140149A1 (en) * 2014-11-19 2016-05-19 Unisys Corporation Dynamic modification of database schema
US11176106B2 (en) * 2014-11-19 2021-11-16 Unisys Corporation Dynamic modification of database schema
CN104572881A (en) * 2014-12-23 2015-04-29 国家电网公司 Method for importing distribution network graph model based on multi-task concurrency
WO2017063049A1 (en) * 2015-10-15 2017-04-20 Big Ip Pty Ltd A system, method, computer program and data signal for conducting an electronic search of a database
WO2017063048A1 (en) * 2015-10-15 2017-04-20 Big Ip Pty Ltd A system, method, computer program and data signal for the provision of a database of information for lead generating purposes
US10530758B2 (en) * 2015-12-18 2020-01-07 F5 Networks, Inc. Methods of collaborative hardware and software DNS acceleration and DDOS protection
US10642780B2 (en) 2016-03-07 2020-05-05 Mellanox Technologies, Ltd. Atomic access to object pool over RDMA transport network
US10089339B2 (en) * 2016-07-18 2018-10-02 Arm Limited Datagram reassembly
US20180062998A1 (en) * 2016-08-31 2018-03-01 Viavi Solutions Inc. Packet filtering using binary search trees
US11005977B2 (en) * 2016-08-31 2021-05-11 Viavi Solutions Inc. Packet filtering using binary search trees
US11770463B2 (en) 2016-08-31 2023-09-26 Viavi Solutions Inc. Packet filtering using binary search trees
US10552367B2 (en) 2017-07-26 2020-02-04 Mellanox Technologies, Ltd. Network data transactions using posted and non-posted operations
US11126621B1 (en) * 2017-12-31 2021-09-21 Allscripts Software, Llc Database methodology for searching encrypted data records
US11500849B2 (en) * 2019-12-02 2022-11-15 International Business Machines Corporation Universal streaming change data capture
US11748325B2 (en) 2019-12-17 2023-09-05 Cerner Innovation, Inc. System and method for generating multicategory searchable ternary tree data structure
US11269836B2 (en) * 2019-12-17 2022-03-08 Cerner Innovation, Inc. System and method for generating multi-category searchable ternary tree data structure
US20220335049A1 (en) * 2021-04-14 2022-10-20 Google Llc Powering Scalable Data Warehousing with Robust Query Performance

Also Published As

Publication number Publication date
CN1610902B (en) 2010-05-05
NO331574B1 (en) 2012-01-30
CN1610906B (en) 2012-05-09
KR100941350B1 (en) 2010-02-11
BR0213862A (en) 2004-12-21
JP4420325B2 (en) 2010-02-24
NZ532773A (en) 2005-11-25
US7047258B2 (en) 2006-05-16
EP2450812A1 (en) 2012-05-09
EP2495671A1 (en) 2012-09-05
WO2003038565A3 (en) 2004-02-26
IL161712A0 (en) 2004-09-27
KR100977161B1 (en) 2010-08-20
US8630988B2 (en) 2014-01-14
NO20042260L (en) 2004-08-02
US6681228B2 (en) 2004-01-20
EP2562661A3 (en) 2016-05-25
MXPA04004203A (en) 2005-05-16
CN1610901A (en) 2005-04-27
CN1610877B (en) 2010-06-02
EP1449062A1 (en) 2004-08-25
EP1461723A4 (en) 2009-08-05
US20030084074A1 (en) 2003-05-01
US20030084057A1 (en) 2003-05-01
US8682856B2 (en) 2014-03-25
EA006223B1 (en) 2005-10-27
IL161722A0 (en) 2005-11-20
JP2005508042A (en) 2005-03-24
KR20040053268A (en) 2004-06-23
KR20040053266A (en) 2004-06-23
CN100557595C (en) 2009-11-04
EA200400614A1 (en) 2004-10-28
WO2003038654A1 (en) 2003-05-08
CA2466117A1 (en) 2003-05-08
EP1451714B1 (en) 2018-05-02
EP1449049A4 (en) 2009-10-28
EA200400613A1 (en) 2004-10-28
IL161723A (en) 2010-06-30
EP1451728A4 (en) 2009-08-05
ZA200404268B (en) 2005-10-26
NZ532772A (en) 2005-12-23
US20040254926A1 (en) 2004-12-16
US7203682B2 (en) 2007-04-10
EP1451714A1 (en) 2004-09-01
JP4420324B2 (en) 2010-02-24
AU2002356885B2 (en) 2008-10-02
EA005646B1 (en) 2005-04-28
IL161723A0 (en) 2005-11-20
EP2503476A1 (en) 2012-09-26
EA200400618A1 (en) 2004-10-28
BR0213807A (en) 2004-12-07
US7167877B2 (en) 2007-01-23
CN1610877A (en) 2005-04-27
EP1449049A2 (en) 2004-08-25
AU2002350104B2 (en) 2008-12-04
IL161721A0 (en) 2005-11-20
MXPA04004169A (en) 2004-07-08
US20070100808A1 (en) 2007-05-03
WO2003038565A2 (en) 2003-05-08
CA2472014A1 (en) 2003-05-08
KR100953137B1 (en) 2010-04-16
NZ532771A (en) 2005-12-23
MXPA04004201A (en) 2005-01-25
AU2002356884B2 (en) 2008-12-04
EP1451728A1 (en) 2004-09-01
NO20042258L (en) 2004-08-02
US20120102016A1 (en) 2012-04-26
US8171019B2 (en) 2012-05-01
EP1449062B1 (en) 2018-05-16
IL161712A (en) 2011-02-28
BR0213863A (en) 2004-12-21
IL161721A (en) 2011-08-31
JP4897196B2 (en) 2012-03-14
ZA200403597B (en) 2005-10-26
KR20040053254A (en) 2004-06-23
CA2466110A1 (en) 2003-05-08
JP2005508050A (en) 2005-03-24
NO20042259L (en) 2004-08-02
JP2005510782A (en) 2005-04-21
US20030084039A1 (en) 2003-05-01
IL161722A (en) 2009-07-20
US20140108452A1 (en) 2014-04-17
HK1075308A1 (en) 2005-12-09
CA2466117C (en) 2013-12-31
CA2472014C (en) 2012-07-10
EP1461723A1 (en) 2004-09-29
JP4399552B2 (en) 2010-01-20
CN1610902A (en) 2005-04-27
CA2466107A1 (en) 2003-05-08
MXPA04004202A (en) 2005-05-16
EP2477126A3 (en) 2013-09-11
EP2477126A2 (en) 2012-07-18
EP2562661A2 (en) 2013-02-27
ZA200404267B (en) 2005-08-31
NO20042261L (en) 2004-08-02
EA006045B1 (en) 2005-08-25
ZA200404266B (en) 2005-10-26
KR100970122B1 (en) 2010-07-13
JP2005508051A (en) 2005-03-24
CN1610906A (en) 2005-04-27
EA200400612A1 (en) 2004-12-30
WO2003038683A1 (en) 2003-05-08
AU2002350106B2 (en) 2008-09-11
KR20040053255A (en) 2004-06-23
CA2466110C (en) 2011-04-19
BR0213864A (en) 2004-12-21
AU2002356886A1 (en) 2003-05-12
NZ533166A (en) 2005-12-23
US20090106211A1 (en) 2009-04-23
EA006038B1 (en) 2005-08-25
US20030084075A1 (en) 2003-05-01
WO2003038596A1 (en) 2003-05-08
WO2003038653A1 (en) 2003-05-08
EP1449062A4 (en) 2009-08-05
CA2466107C (en) 2013-01-08
EP1451714A4 (en) 2009-08-05

Similar Documents

Publication Publication Date Title
US7203682B2 (en) High speed non-concurrency controlled database
AU2002356884A1 (en) Transactional memory manager
AU2002350106A1 (en) High speed non-concurrency controlled database

Legal Events

Date Code Title Description
AS Assignment

Owner name: VERISIGN, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BALOGH, ARISTOTLE;HAWORTH JR., WILLIAM F.;REEL/FRAME:013454/0558

Effective date: 20021101

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION