WO2003069510A1 - Similarity search engine for use with relational databases - Google Patents

Similarity search engine for use with relational databases

Info

Publication number
WO2003069510A1
WO2003069510A1 PCT/US2003/004685
Authority
WO
WIPO (PCT)
Prior art keywords
document
similarity
search
command
schema
Prior art date
Application number
PCT/US2003/004685
Other languages
French (fr)
Inventor
John R. Ripley
Original Assignee
Infoglide Software Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infoglide Software Corporation filed Critical Infoglide Software Corporation
Priority to AU2003219777A priority Critical patent/AU2003219777A1/en
Priority to EP03716051A priority patent/EP1476826A4/en
Priority to CA002475962A priority patent/CA2475962A1/en
Publication of WO2003069510A1 publication Critical patent/WO2003069510A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2462Approximate or statistical queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2468Fuzzy queries
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99931Database or file accessing
    • Y10S707/99933Query processing, i.e. searching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99931Database or file accessing
    • Y10S707/99933Query processing, i.e. searching
    • Y10S707/99935Query augmenting and refining, e.g. inexact access
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99931Database or file accessing
    • Y10S707/99933Query processing, i.e. searching
    • Y10S707/99936Pattern matching access
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99941Database schema or data structure
    • Y10S707/99944Object-oriented database structure
    • Y10S707/99945Object-oriented database structure processing

Definitions

  • the invention relates generally to the field of search engines for use with large enterprise databases. More particularly, the present invention enables similarity search engines that, when combined with standard relational database products, give users a powerful set of standard database tools as well as a rich collection of proprietary similarity measurement processes that enable similarity determinations between an anchor record and target database records.
  • Information resources that are available contain large amounts of information that may be useful only if there exists the capability to segment the information into manageable and meaningful packets.
  • Database technology provides adequate means for identifying and exactly matching disparate data records to provide a binary output indicative of a match.
  • users wish to determine a quantitative measure of similarity between an anchor record and target database records based on broadly defined search criteria. This is particularly true where the target records may be incomplete, contain errors, or be inaccurate. It is also sometimes useful to be able to reduce the number of irrelevant matches reported by database searching programs.
  • Traditional search methods that make use of exact, partial and range retrieval paradigms do not satisfy the content-based retrieval requirements of many users. This has led to the development of similarity search engines.
  • Similarity search engines have been developed to satisfy the requirement for a content-based search capability that is able to provide a quantitative assessment of the similarity between an anchor record and multiple target records.
  • the basis for many of these similarity search engines is a comparison of an anchor record band or string of data with target record bands or strings of data that are compared serially and in a sequential fashion. For example, an anchor record band may be compared with target record band #1, then target record band #2, etc., until a complete set of target record bands have been searched and a similarity score computed.
  • the anchor record bands and each target record band contain attributes of a complete record band of a particular matter, such as an individual.
  • each record band may contain attributes comprising a named individual, address, social security number, driver's license number, and other information related to the named individual.
  • the attributes within each record band are serially compared, such as name-name, address-address, number- number, etc.
  • a complete set of target record bands are compared to an anchor record band to determine similarity with the anchor record band by computing similarity scores for each attribute within a record band and for each record band.
  • What is needed is a similarity search engine that provides a system and method for determining a quantitative measure of similarity in a single pass between an anchor record and a set of multiple target records that have multiple relationship characteristics. It should be capable of operating under various operating systems in a multi-processing environment. It should have the capability to perform similarity searches against large enterprise databases without the requirement to start over again when an error is encountered.
  • the present invention of a Similarity Search Engine (SSE) for use with relational databases is a system and method for determining a quantitative assessment of the similarity between an anchor record or document and a set of one or more target records or documents. It makes a similarity assessment in a single pass through the target records having multiple relationship characteristics. It is capable of running under various operating systems in a multi-processing environment and operates in an error-tolerant fashion with large enterprise databases.
  • the present invention comprises a set of robust, multi-threaded components that provide a system and method for scoring and ranking the similarity of documents that may be represented as Extensible Markup Language (XML) documents.
  • This search engine uses a unique command syntax known as the XML Command Language (XCL).
  • attribute similarity is quantified as a score having a value of between 0.00 and 1.00 that results from the comparison of an anchor value attribute (search criterion) vs. a target value attribute (database field) using a distance function that identifies an attribute similarity measurement.
  • similarity is also quantified at the document or record level, which comprises a "roll-up" or aggregation of one or more attribute similarity scores determined by a parent computing or choice algorithm
  • document or record similarity is a value normalized to a score value of between 0.00 and 1.00 for the document or record.
  • a single anchor document containing multiple attributes is compared to multiple target documents also containing multiple attributes.
  • Table 1 illustrates the interrelationships between attributes, anchor attribute values, target attribute values, distance functions and attribute similarity scores.
  • the distance functions represent measurement algorithms to be executed to determine an attribute similarity score.
  • This Similarity Search Engine (SSE) architecture is a server configuration comprising a Gateway, a Virtual Document Manager (VDM), a Search Manager (SM) and an SQL/Relational Database Management System (RDMS).
  • the SSE server may serve one or more clients.
  • the Gateway provides command and response routing as well as user management functions. It accepts commands from clients and routes those commands to either the VDM or the SM.
  • the purpose of the VDM is XML document generation, particularly schema generation.
  • the purpose of the SM is XML document scoring, or aggregation.
  • the VDM and the SM each receive commands from the Gateway and in turn make calls to the RDMS.
  • the RDMS provides token attribute similarity scoring in addition to data persistence, data retrieval and access to User Defined Functions (UDFs).
  • the UDFs include measurement algorithms for computing attribute similarity scores.
  • the Gateway, VDM and SM are specializations of a unique generic architecture referred to as the XML Command Framework (XCF), which handles the details of threading, distribution, communication, resource management and general command handling.
  • a Datasource object is a logical connection to a data store, such as a relational database, and it manages the physical connection to the data store.
  • a Schema object, central to SSE operation, is a structural definition of a document with additional markup to provide database mapping and similarity definitions.
  • a Query object is a command that dictates which elements of a database underlying a Schema object should be searched, their search criteria, the similarity measures to be used and which results should be considered in the final output.
  • a Measure object is a function that operates on two strings and returns a similarity score indicative of the degree of similarity between the two strings.
  • a method having features of the present invention for performing similarity searching comprises the steps of receiving a request instruction from a client for initiating a similarity search, generating one or more query commands from the request instruction, each query command designating an anchor document and at least one search document, executing each query command, including computing a normalized document similarity score having a value of between 0.00 and 1.00 for each search document in each query command for indicating a degree of similarity between the anchor document and each search document, and creating a result dataset containing the computed normalized document similarity scores for each search document, and sending a response including the result dataset to the client.
  • the step of generating one or more query commands may further comprise identifying a schema document for defining structure of search terms, mapping of datasets providing target search values to relational database locations, and designating measures, choices and weight to be used in a similarity search.
  • the step of computing a normalized document similarity score may comprise computing attribute token similarity scores having values of between 0.00 and 1.00 for the corresponding leaf nodes of the anchor document and a search document using designated measure algorithms, multiplying each token similarity score by a designated weighting factor, aggregating the token similarity scores using designated choice algorithms for determining a document similarity score having a value of between 0.00 and 1.00 for the search document.
  • the step of computing attribute token similarity scores may further comprise computing attribute token similarity scores in a relational database management system, the step of multiplying each token similarity score may further comprise multiplying each token similarity score in a similarity search engine, and the step of aggregating the token similarity scores may further comprise aggregating the token similarity scores in the similarity search engine.
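  • As an illustration of the weighting and aggregation ("roll-up") described in the preceding steps, the following Java sketch weights hypothetical token similarity scores and aggregates them with a simple weighted average standing in for a choice algorithm. The attribute names, weights and scores are invented for illustration and are not taken from the patent.

      import java.util.LinkedHashMap;
      import java.util.Map;

      // Sketch only: per-attribute token scores (as measures in the RDMS would return them)
      // are weighted and then aggregated into a single document score in [0.00, 1.00].
      public class DocumentScoreSketch {

          // Hypothetical choice algorithm: weighted average of token scores.
          static double aggregate(Map<String, Double> tokenScores, Map<String, Double> weights) {
              double weightedSum = 0.0, totalWeight = 0.0;
              for (Map.Entry<String, Double> e : tokenScores.entrySet()) {
                  double w = weights.getOrDefault(e.getKey(), 1.0);
                  weightedSum += w * e.getValue();
                  totalWeight += w;
              }
              return totalWeight == 0.0 ? 0.0 : weightedSum / totalWeight;
          }

          public static void main(String[] args) {
              // Hypothetical token similarity scores for one search document.
              Map<String, Double> tokenScores = new LinkedHashMap<>();
              tokenScores.put("name", 0.90);
              tokenScores.put("address", 0.75);
              tokenScores.put("ssn", 1.00);

              // Equal weighting of all attributes.
              Map<String, Double> weights = Map.of("name", 1.0, "address", 1.0, "ssn", 1.0);

              System.out.printf("document similarity score = %.2f%n",
                      aggregate(tokenScores, weights));   // prints 0.88
          }
      }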
  • the step of generating one or more query commands may comprise populating an anchor document with search criteria values, identifying documents to be searched, defining semantics for overriding parameters specified in an associated schema document, defining a structure to be used by the result dataset, and imposing restrictions on the result dataset.
  • the step of defining semantics may comprise designating overriding measures for determining attribute token similarity scores, designating overriding choice algorithms for aggregating token similarity scores into document similarity scores, and designating overriding weights to be applied to token similarity scores.
  • the step of imposing restrictions may be selected from the group consisting of defining a range of similarity indicia scores to be selected and defining percentiles of similarity indicia scores to be selected.
  • the step of computing a normalized document similarity score may further comprise computing a normalized document similarity score having a value of between 0.00 and 1.00, whereby a normalized similarity indicia value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching.
  • the step of computing attribute token similarity scores having values of between 0.00 and 1.00 may further comprise computing attribute token similarity scores having values of between 0.00 and 1.00, whereby an attribute token similarity value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching.
  • the step of generating one or more query commands may further comprise generating one or more query commands whereby each query command includes attributes of command operation, name identification, and associated schema document identification.
  • the method may further comprise receiving a schema instruction from a client, generating a schema command document comprising the steps of defining a structure of target search terms in one or more search documents, creating a mapping of database record locations to the target search terms, listing semantic elements for defining measures, weights and choices to be used in similarity searches, and storing the schema command document into a database management system.
  • the method may further comprise the step of representing documents and commands as hierarchical XML documents.
  • the step of sending a response to the client may further comprise sending a response including an error message and a warning message to the client.
  • the step of sending a response to the client may further comprise sending a response to the client containing the result datasets, whereby each result dataset includes at least one normalized document similarity score, at least one search document name, a path to the search documents having a returned score, and at least one designated schema.
  • the method may further comprise receiving a statistics instruction from a client, generating a statistics command from the statistics instruction, which may comprise the steps of identifying a statistics definition to be used for generating statistics, populating an anchor document with search criteria values, identifying documents to be searched, delineating semantics for overriding measures, parsers and choices defined in a semantics clause in an associated schema document, defining a structure to be used by a result dataset, imposing restrictions to be applied to the result dataset, identifying a schema to be used for the basis of generating statistics, designating a name for the target statistics table for storing results, executing the statistics command for generating a statistics schema with statistics table, mappings and measures, and storing the statistics schema in a database management system.
  • the method may further comprise the step of executing a batch command comprising executing a plurality of commands in sequence for collecting results of several related operations.
  • the method may further comprise selecting measure algorithms from the group consisting of name equivalents, foreign name equivalents, textual, sound coding, string difference, numeric, numbered difference, ranges, numeric combinations, range combinations, fuzzy, date oriented, date to range, date difference, and date combination.
  • the method may further comprise selecting choice algorithms from the group consisting of single best, greedy sum, overall sum, greedy minimum, overall minimum, and overall maximum.
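  • The patent does not define these choice algorithms in this passage; as a rough illustration only, the following Java sketch shows how selectable choice algorithms might be represented as interchangeable aggregation functions, using an overall maximum and a plain average as two illustrative (assumed) interpretations.

      import java.util.List;
      import java.util.Map;
      import java.util.function.ToDoubleFunction;

      // Sketch only: choice algorithms as interchangeable aggregation functions over token scores.
      // The other algorithms named in the patent (single best, greedy sum, etc.) are not defined
      // here and would need their own implementations.
      public class ChoiceSketch {

          static final Map<String, ToDoubleFunction<List<Double>>> CHOICES = Map.of(
                  "overall-maximum", scores -> scores.stream().mapToDouble(Double::doubleValue).max().orElse(0.0),
                  "average",         scores -> scores.stream().mapToDouble(Double::doubleValue).average().orElse(0.0));

          public static void main(String[] args) {
              List<Double> tokenScores = List.of(0.40, 0.95, 0.70);   // hypothetical token scores
              System.out.println(CHOICES.get("overall-maximum").applyAsDouble(tokenScores)); // 0.95
              System.out.println(CHOICES.get("average").applyAsDouble(tokenScores));         // ~0.68
          }
      }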
  • Another embodiment of the present invention is a computer-readable medium containing instructions for controlling a computer system to implement the method above.
  • a system for performing similarity searching comprises a gateway for receiving a request instruction from a client for initiating a similarity search, the gateway for generating one or more query commands from the request instruction, each query command designating an anchor document and at least one search document, a search manager for executing each query command, including means for computing a normalized document similarity score having a value of between 0.00 and 1.00 for each search document in each query command for indicating a degree of similarity between the anchor document and each search document, means for creating a result dataset containing the computed normalized document similarity scores for each search document, and the gateway for sending a response including the result dataset to the client.
  • the means for computing a normalized similarity score may comprise a relational database management system for computing attribute token similarity scores having values of between 0.00 and 1.00 for the corresponding leaf nodes of the anchor document and a search document using designated measure algorithms, and the search manager for multiplying each token similarity score by a designated weighting factor and aggregating the token similarity scores using designated choice algorithms for determining a document similarity score having a value of between 0.00 and 1.00 for the search document.
  • Each one or more query commands may further comprise a measure designation, and the database management system further comprises designated measure algorithms for computing a token similarity score.
  • Each query command may comprise an anchor document populated with search criteria values, at least one search document, designated measure algorithms for determining token similarity scores, designated choice algorithms for aggregating token similarity scores into document similarity scores, designated weights for weighting token similarity scores, restrictions to be applied to a result dataset document, and a structure to be used by the result dataset.
  • the computed document similarity scores may have a value of between 0.00 and 1.00, whereby a normalized similarity indicia value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching.
  • the relational database management system may include means for computing an attribute token similarity score having a value of between 0.00 and 1.00, whereby a token similarity indicia value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching.
  • Each query command may include attributes of command operation, name identification, and associated schema document identification for providing a mapping of search documents to database management system locations.
  • the system may further comprise the gateway for receiving a schema instruction from a client, a virtual document manager for generating a schema command document, the schema command document comprising a structure of target search terms in one or more search documents, a mapping of database record locations to the target search terms, and semantic elements for defining measures, weights and choices to be used in similarity searches.
  • each result dataset may include at least one normalized document similarity score, at least one search document name, a path to the search documents having a returned score and at least one designated schema.
  • Each result dataset may include an error message and a warning message to the client.
  • the system may further comprise the gateway for receiving a statistics instruction from a client and for generating a statistics command from the statistics instruction, the search manager for identifying a statistics definition to be used for generating statistics, populating an anchor document with search criteria values, identifying documents to be searched, delineating semantics for overriding measures, weights and choices defined in a semantics clause in an associated schema document, defining a structure to be used by a result dataset, imposing restrictions to be applied to the result dataset, identifying a schema to be used for the basis of generating statistics, designating a name for the target statistics table for storing results, and a statistics processing module for executing the statistics command for generating a statistics schema with statistics table, mappings and measures, and storing the statistics schema in a database management system.
  • the system may further comprise the gateway for receiving a batch command from a client for executing a plurality of commands in sequence for collecting results of several related operations.
  • the system may further comprise measure algorithms selected from the group consisting of name equivalents, foreign name equivalents, textual, sound coding, string difference, numeric, numbered difference, ranges, numeric combinations, range combinations, fuzzy, date oriented, date to range, date difference, and date combination.
  • the system may further comprise choice algorithms selected from the group consisting of single best, greedy sum, overall sum, greedy minimum, overall minimum, and overall maximum.
  • a system for performing similarity searching comprises a gateway for handling all communication between a client, a virtual document manager and a search manager, the virtual document manager connected between the gateway and a relational database management system for providing document management, the search manager connected between the gateway and the relational database management system for searching and scoring documents, and the relational database management system for providing relational data management, document and measure persistence, and similarity measure execution.
  • the virtual document manager may include a relational database driver for mapping XML documents to relational database tables.
  • the virtual document manager may include a statistics processing module for generating statistics based on similarity search results.
  • the relational database management system may include means for storing and executing user defined functions.
  • the user defined functions include measurement algorithms for determining attribute token similarity scores.
  • Another embodiment of the present invention is a method for performing similarity searching that comprises the steps of creating a search schema document by a virtual document manager, generating one or more query commands by a gateway, executing one or more query commands in a search manager and relational database management system for computing document similarity scores, and assembling a result document.
  • the step of creating a schema document may comprise designating a structure of search documents, datasets for mapping search document attributes to relational database locations, and semantics identifying measures for computing token attribute similarity search scores between search documents and an anchor document, weights for modulating token attribute similarity search scores, choices for aggregating token attribute similarity search scores into document similarity search scores, and paths to the search document structure attributes.
  • the step of generating one or more query commands may comprise designating an anchor document, search or schema documents, restrictions on result sets, structure of result sets, and semantics for overriding schema document semantics including measures, weights, choices and paths.
  • the step of executing one or more query commands may comprise computing token attribute similarity search scores having values of between 0.00 and 1.00 for each search document and an anchor document in a relational database management system using measures, and modulating the token attribute similarity search scores using weights and aggregating the token attribute similarity scores into document similarity scores having values of between 0.00 and 1.00 in the search manager using choices.
  • the step of assembling a result document may comprise identifying associated query commands and schema documents, document structure, paths to search terms, and similarity scores by the search manager.
  • the search schema, the query commands, the search documents, the anchor document and the result document may be represented by hierarchical XML documents.
  • the method may further comprise selecting measure algorithms from the group consisting of name equivalents, foreign name equivalents, textual, sound coding, string difference, numeric, numbered difference, ranges, numeric combinations, range combinations, fuzzy, date oriented, date to range, date difference, and date combination.
  • the method may further comprise selecting choice algorithms from the group consisting of single best, greedy sum, overall sum, greedy minimum, overall minimum, and overall maximum.
  • Another embodiment of the present invention is a computer-readable medium containing instructions for controlling a computer system to implement the method above.
  • Figure 2 depicts an example of mapping an XML document into database tables
  • Figure 3 depicts an example of an XML document resulting from a single READ
  • Figure 4 depicts an example of a RESULT from a QUERY
  • Figure 5A depicts a process for handling a statistics command in a Search Manager (SM);
  • SM Search Manager
  • Figure 5B depicts a dataflow of a statistics command process in a Search Manager (SM);
  • SM Search Manager
  • Figure 6 describes the Measures implemented as UDFs
  • Figure 7 depicts an architecture of the XML Command Framework (XCF)
  • Figure 8 depicts the format of a RESPONSE generated by a CommandHandler
  • Figure 9A depicts a process for handling an XCL command in a CommandServer
  • Figure 9B depicts a dataflow of an XCL command process in a CommandServer;
  • Figure 10 depicts a general XCL command format
  • Figure 11 depicts an example of multiple tables mapped onto a search document
  • Figure 12 depicts the format of a SCHEMA command
  • Figure 13 depicts an example of a RESPONSE from a list of SCHEMA commands
  • Figure 14 depicts the format for a STRUCTURE clause
  • Figure 15 depicts an example of a STRUCTURE clause for a hierarchical search
  • Figure 16 depicts the format of the MAPPING clause
  • Figure 17 depicts an example of a MAPPING clause
  • Figure 18 depicts the format of the SEMANTICS clause
  • Figure 19 depicts the structure of a SCHEMA command and its related clauses
  • Figure 20 depicts the format of the QUERY command
  • Figure 21 depicts an example of the WHERE clause
  • Figures 22 A and 22B depict examples of a FROM clause;
  • Figure 23 depicts the format of the RESTRICT clause;
  • Figure 24 depicts an example of the RESTRICT clause
  • Figure 25 depicts an example of the SELECT clause
  • Figures 26A, 26B and 26C depict formats of a RESPONSE structure
  • Figure 27 depicts an example of a RESPONSE with results of a similarity search
  • Figure 28 depicts the format of a DOCUMENT command
  • Figure 29 depicts a search document example for the layout depicted in Figure 11;
  • Figure 30 depicts a format of a statistics definition template
  • Figure 31 depicts an example of a simple statistics definition
  • Figure 32 depicts a RESPONSE to a statistics generation command
  • Figure 33 depicts the format of a BATCH command
  • Figure 34 depicts the process of setting up a schema
  • Figure 35 depicts an example of a SCHEMA command
  • Figure 36 depicts the process of executing an SSE search
  • Figure 37 depicts an example of a QUERY command
  • Figure 38 depicts an example of a data and similarity results of a QUERY command
  • Figure 39 depicts an example RESPONSE resulting from a QUERY command.
  • SSE Similarity Search Engine
  • the SSE employs a command language based on XML, the Extensible Markup Language.
  • SSE commands are issued as XML documents and search results are returned as XML documents.
  • XML Extensible Markup Language
  • the specification for Extensible Markup Language (XML) 1.0 (Second Edition), W3C Recommendation 6 October 2000 is incorporated herein by reference.
  • the syntax of the SSE Command Language XCL consists of XML elements, their values and attributes that control the behavior of the SSE.
  • a client program can define and execute searches employing the SSE.
  • XML tags are enclosed in angled brackets. Indentations are used to demarcate parent-child relationships. Tags that have special meaning for the SSE Command Language are shown in capital letters. Specific values are shown as-is, while variables are shown in italic type. The following briefly defines XML notation:
  • the SSE relies primarily on several system objects for its operation. Although there are other system objects, the primary four system objects include a Datasource object, a Schema object, a Query object and a Measure object.
  • a Datasource object describes a logical connection to a data store, such as a relational database.
  • the Datasource object manages the physical connection to the data store.
  • Although the SSE may support many different types of datasources, the preferred datasource used in the SSE is an SQL database, implemented by the vdm.RelationalDatasource class.
  • a relational Datasource object is made up of attributes comprising Name, Driver, URL, Username and Password, as described in Table 2.
  • a Schema object is at the heart of everything the SSE does.
  • a Schema object is a structural definition of a document along with additional markup to provide SQL database mapping and similarity definitions.
  • the definition of a Schema object comprises Name, Structure, Mapping and Semantics, as described in Table 3.
  • a Query object is an XCL command that dictates which elements of a Schema object (actually the underlying database) should be searched, their search criteria, the similarity measures to be used and which results should be considered in the final output.
  • the Query object format is sometimes referred to as Query By Example (QBE) because an "example" of what we are looking for is provided in the Query.
  • Attributes of a Query object comprise a Where clause, Semantics, and Restrict, as described in Table 4.
  • a Measure object is a function that takes in two strings and returns a score (between 0.000 and 1.000) of how similar the two strings are.
  • These Measure objects are implemented as User Defined Functions (UDFs) and are compiled into a native library in an SQL Database.
  • Measure objects are made up of attributes comprising Name, Function and Flags, as described in Table 5.
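  • As an illustration of the Measure contract described above (two strings in, a score between 0.000 and 1.000 out), the following Java sketch uses a normalized edit distance. This is purely illustrative; the patent's actual Measures are UDFs compiled into a native library in the SQL database, and their specific functions are not reproduced here.

      // Sketch only: a Measure is any function over two strings that returns a similarity
      // score between 0.0 and 1.0. A normalized edit distance is used as an illustration.
      public class MeasureSketch {

          static double similarity(String a, String b) {
              if (a.isEmpty() && b.isEmpty()) return 1.0;
              int[][] d = new int[a.length() + 1][b.length() + 1];
              for (int i = 0; i <= a.length(); i++) d[i][0] = i;
              for (int j = 0; j <= b.length(); j++) d[0][j] = j;
              for (int i = 1; i <= a.length(); i++) {
                  for (int j = 1; j <= b.length(); j++) {
                      int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                      d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1), d[i - 1][j - 1] + cost);
                  }
              }
              int distance = d[a.length()][b.length()];
              return 1.0 - (double) distance / Math.max(a.length(), b.length());
          }

          public static void main(String[] args) {
              System.out.printf("%.2f%n", similarity("JOHN SMITH", "JON SMITH"));   // high score (0.90)
              System.out.printf("%.2f%n", similarity("JOHN SMITH", "MARY JONES"));  // low score
          }
      }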
  • FIG. 1 depicts a high level architecture 100 of the Similarity Search Engine (SSE).
  • the SSE architecture 100 includes an SSE Server 190 that comprises a Gateway 110, a Virtual Document Manager (VDM) 120, a Search Manager (SM) 130 and a Relational Database Management System (RDMS) 140.
  • the Gateway 110 provides routing and user management.
  • the VDM 120 enables XML document generation.
  • the SM 130 performs XML document searching and scoring.
  • the RDMS 140 (generally an SQL Database) provides token attribute scoring as well as data persistence and retrieval, and stores User Defined Functions (UDFs) 145.
  • the SSE Server 190 is a similarity search server that may connect to one or more Clients 150 via a Client Network 160.
  • the SSE Server also connects to a RDMS 140.
  • the Gateway 110 serves as a central point of contact for all client communication by responding to commands sent by one or more clients 150.
  • the Gateway 110 supports a plurality of communication protocols with a user interface, including sockets, HTTP, and JMS.
  • the Gateway 110 is implemented as a gateway.Server class, a direct descendent of the xcf.BaseCommandServer class available in the unique generic architecture referred to as the XML Command Framework (XCF), which handles the details of threading, distribution, communication, resource management and general command handling.
  • XCF XML Command Framework
  • the Gateway 110 inherits all the default command handling and communication functions available in all XCF Command Servers.
  • the Gateway 110 relies on several types of command handlers for user definition, user login and logout, and command routing.
  • the Gateway 110 makes use of a user class to encapsulate what a "user" is and implements a component class interface, which is inherited from the generic XCF architecture. Instances of XCF Component command handlers used by the Gateway 110 to add, remove or read a user definition are shown in Table 6.
  • the Gateway 110 includes several instances of command handlers inherited from the generic XCF architecture to properly route incoming XML Command Language (XCL) commands to an appropriate target, whether it is the VDM 120, the SM 130, or both. These command handlers used by the Gateway 110 for routing are shown in Table 8.
  • XCL XML Command Language
  • Table 9 shows the routing of command types processed by the Gateway 110, and which command handler shown in Table 8 is relied upon for the command execution.
  • the communication between the Gateway 110 and the VDM 120, and between the Gateway 110 and the SM 130 is via the XML Command Language (XCL).
  • XCL XML Command Language
  • the VDM 120 is responsible for XML document management, and connects between the Gateway 110 and the RDMS 140.
  • the VDM 120 is implemented by the vdm.Server class, which is a direct descendent of the xcf.BaseCommandServer class available in the unique generic architecture referred to as the XML Command Framework (XCF), which handles the details of threading, distribution, communication, resource management and general command handling.
  • XCF XML Command Framework
  • the XCF is discussed below in more detail. Therefore, the VDM 120 inherits all the default command handling and communication functions available in all XCF Command Servers.
  • VDM 120 uses existing relational tables and fields to provide dynamic XML generation capabilities without storing the XML documents.
  • the VDM 120 provides its document management capabilities through Document Providers.
  • the most visible function to a Client 150 of the VDM 120 is the creation and retrieval of XML documents.
  • a Document Provider is defined by the vdm.DocProvider interface and is responsible for generating and storing XML documents based on a schema definition.
  • Although the SSE Server 190 only implements one DocProvider, which is an SQL-based document provider, if a DocProvider implements the interface, the document provider can be any source that generates an XML document.
  • document providers may be file systems, web sites, proprietary file formats, or XML databases. For a user to retrieve relational data, the user must know where the data resides and how it is accessed.
  • a Datasource object encapsulates all the connection information.
  • There are several command handlers required by the VDM 120 in order to satisfactorily execute XCL commands. These include the document related command handlers shown in Table 10.
  • the VDM 120 communicates with the RDMS 140 via the Java Database Connectivity (JDBC) application programming interface.
  • JDBC Java Database Connectivity
  • the VDM 120 includes a Relational Database Driver (RDD) 125 for providing a link between XML documents and the RDMS 140.
  • RDD Relational Database Driver
  • the RDD 125 implements the DocProvider interface, supporting standard functions defined in that class, including reading, writing and deleting XML documents.
  • the RDD 125 is initialized by calling the initialize(String map) function, where this map is an XML document describing the relationships between the XML documents to be dealt with and the relational database. For instance, consider an example XML document 210 that follows the form shown in Figure 2.
  • Datasets 220 can specify that the data in claim/claimant/name should come from the Claimants table 240 of the RDMS 230, while /claim/witness/name should come from the Witnesses table 250. Conversely, when writing an existing XML document 210 of this form out to the RDMS 230, the Datasets 220 will tell the RDD 125 that it should write any data found at /claim/claimant/name out to the "name" field of the Claimants table 240, and write the data found at /claim/witness/name out to the "name" field of the Witnesses table 250. Through describing these relationships, the Datasets 220 allows the RDD 125 to read, write, and delete XML documents for the VDM 120.
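  • A minimal sketch of such a mapping, expressed as a simple Java lookup table keyed by XML context, follows. Only the Claimants and Witnesses examples from the text are used; the class and record names are invented for illustration and are not the RDD's actual data model.

      import java.util.Map;

      // Sketch only: for each XML context, the Datasets say which table and column in the
      // RDMS holds the corresponding value, driving both reads and writes.
      public class DatasetMappingSketch {

          record Column(String table, String column) {}

          static final Map<String, Column> MAPPING = Map.of(
                  "/claim/claimant/name", new Column("Claimants", "name"),
                  "/claim/witness/name",  new Column("Witnesses", "name"));

          public static void main(String[] args) {
              // Reading: the driver looks up where the data for a context comes from.
              Column source = MAPPING.get("/claim/witness/name");
              System.out.println("read from " + source.table() + "." + source.column());

              // Writing: the same mapping tells the driver where to store a leaf value.
              Column target = MAPPING.get("/claim/claimant/name");
              System.out.println("write to " + target.table() + "." + target.column());
          }
      }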
  • these Datasets 220 define relationships that are stored in a Java model.
  • the XML map 210 is parsed and used to build a hierarchy of Datasets 220, one level of hierarchy for each database table referenced in the Datasets 220. This encapsulation of the XML parsing into this one area minimizes the impact of syntax changes in the XML map 210.
  • These Datasets 220 have an XML form and may describe a document based on a relational table or a document based on a SQL statement. If based on a relational table, then initializing the RDD 125 with these Datasets 220 will allow full read/write functionality.
  • the Datasets 220 also define which data from the RDMS 230 should be stored in the XML document 210, and vice versa when writing XML document data to the RDMS.
  • a Dataset <EXPRESSION> tag indicates whether the Dataset describes a document based on a relational table or a document based on an SQL statement.
  • the VDM 120 relies on three functions to provide the functionality of building XML documents from underlying RDMS. Each of these three functions returns the resultant document(s) as a String.
  • the functions are singleRead, multipleRead and expressionRead.
  • singleRead singleRead(String primaryKey, boolean createRoot, String contentFilter).
  • primaryKey is a String that represents the primary key of the document being produced.
  • the boolean createRoot indicates whether or not the user wants the function to wrap the resultant XML document in a root-level ⁇ RESULT> tag.
  • the String contentFilter is an XML structure represented as a String that describes the structure that the result must be formatted in. This structure is always a cut-down version of the full document. For instance, if we initialize the RDD 125 with an example, and then call singleRead("1", true, "<claim><witness><city/></witness></claim>"), the resulting XML document may look like that shown in Figure 3.
  • expressionRead expressionRead(String expression, int start, int blockSize, boolean createRoot, String contentFilter).
  • multipleRead multipleRead(Set primaryKeys, boolean createRoot, String contentFilter).
  • the boolean createRoot and String ContentFilter behave just as they did in singleRead.
  • the only parameter that is different is that instead of a single primaryKey String, multipleRead takes a set of primaryKeys.
  • the other two read functions, singleRead and expressionRead may be considered to be special cases of the multipleRead method.
  • a singleRead may be considered as a multipleRead called on a primaryKey set of one.
  • in the case of expressionRead, the results of the expression may be fed into a set that is then sent to a multipleRead, as sketched below.
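  • The delegation described above can be sketched as follows, with singleRead and expressionRead expressed as thin wrappers over multipleRead. The interface and helper method names are hypothetical and only approximate the DocProvider contract.

      import java.util.LinkedHashSet;
      import java.util.List;
      import java.util.Set;

      // Sketch only: the two other read functions as special cases of multipleRead.
      interface ReadProviderSketch {

          String multipleRead(Set<String> primaryKeys, boolean createRoot, String contentFilter);

          // Hypothetical helper: evaluate an SQL expression and return the primary keys it selects.
          List<String> primaryKeysFor(String expression, int start, int blockSize);

          // A singleRead is a multipleRead called on a primary-key set of one.
          default String singleRead(String primaryKey, boolean createRoot, String contentFilter) {
              return multipleRead(Set.of(primaryKey), createRoot, contentFilter);
          }

          // An expressionRead resolves the expression to a set of keys and feeds them to multipleRead.
          default String expressionRead(String expression, int start, int blockSize,
                                        boolean createRoot, String contentFilter) {
              return multipleRead(new LinkedHashSet<>(primaryKeysFor(expression, start, blockSize)),
                      createRoot, contentFilter);
          }
      }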
  • Composition of documents follows a basic algorithm. A row is taken from the topmost array of arrays, the one representing the master table of the document. The portion of the XML document that takes information from that row is built. Next, if there is a master-detail relationship, the detail table is dealt with. All rows associated with the master row are selected, and XML structures built from their information. In this manner, iterating through all of the table arrays, the document is built. Then, the master array advances to the next row, and the process begins again. When it finishes, all of the documents will have been built, and they are returned in String form.
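  • A minimal Java sketch of this master-detail composition loop, using the claim/claimant/witness example from Figure 2 and hypothetical column names, is shown below; it illustrates the iteration order only and is not the driver's actual code.

      import java.util.ArrayList;
      import java.util.List;
      import java.util.Map;

      // Sketch only: each master row yields one document; detail rows whose foreign key
      // matches the master key are nested under it.
      public class ComposeSketch {

          static List<String> compose(List<Map<String, String>> masterRows,
                                      List<Map<String, String>> detailRows) {
              List<String> documents = new ArrayList<>();
              for (Map<String, String> master : masterRows) {
                  StringBuilder doc = new StringBuilder("<claim>");
                  doc.append("<claimant><name>").append(master.get("name")).append("</name></claimant>");
                  // Select all detail rows associated with this master row.
                  for (Map<String, String> detail : detailRows) {
                      if (detail.get("claim_id").equals(master.get("id"))) {
                          doc.append("<witness><name>").append(detail.get("name")).append("</name></witness>");
                      }
                  }
                  doc.append("</claim>");
                  documents.add(doc.toString());
              }
              return documents;
          }

          public static void main(String[] args) {
              List<Map<String, String>> masters = List.of(Map.of("id", "1", "name", "John Doe"));
              List<Map<String, String>> details = List.of(Map.of("claim_id", "1", "name", "Jane Roe"));
              compose(masters, details).forEach(System.out::println);
          }
      }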
  • the VDM relies on two functions for writing XML documents out to an underlying RDMS. These functions are singleWrite and multipleWrite.
  • singleWrite singleWrite(String primaryKey, String document).
  • the parameter primaryKey is the document number to be written out.
  • the parameter document is an XML document in String form, which will be parsed and written out to the RDMS.
  • the driver has created a series of PreparedStatements to handle the data insertion. The driver iterates through the document, matching each leaf's context to a context in a Dataset. When a context match is made, the relevant Insert statement has another piece plugged into it. When all of the necessary data has been plugged into the prepared Insert statement, it is executed and the data is written to the RDMS.
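  • The insertion step might be sketched with JDBC as follows; the table, column and context names are hypothetical, only one mapped table is shown, and error handling is omitted.

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;
      import java.util.Map;

      // Sketch only: plugging leaf values into a prepared INSERT as leaf contexts are matched.
      public class SingleWriteSketch {

          static void writeClaimant(Connection conn, String primaryKey,
                                    Map<String, String> leaves) throws SQLException {
              // One prepared statement per mapped table (only Claimants is shown).
              String sql = "INSERT INTO Claimants (claim_id, name) VALUES (?, ?)";
              try (PreparedStatement insert = conn.prepareStatement(sql)) {
                  insert.setString(1, primaryKey);
                  // When a leaf context matches a Dataset context, its value is plugged in.
                  insert.setString(2, leaves.get("/claim/claimant/name"));
                  insert.executeUpdate();   // executed once all required values are bound
              }
          }
      }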
  • multipleWrite multipleWrite(Map documents).
  • the multipleWrite function takes as a parameter a Map which holds pairs of primary keys and documents.
  • the multipleWrite function iterates through this Map, calling the singleWrite function with each of the pairs.
  • the VDM relies on three methods for deleting the data represented in an XML document from the underlying RDMS.
  • the functions are singleDelete, multipleDelete and expressionDelete.
  • singleDelete singleDelete(String primaryKey, String document).
  • the singleDelete method takes in a String primaryKey, which identifies the document to be deleted. While the DocProvider interface requires a second parameter, the String document, the relational driver does nothing with this information and is able to function with only the document's primary key.
  • In order to delete a given document, the driver first iterates through the Dataset structure, executing selects for relevant columns in each table. This is required to properly map the master-detail relationship. For instance, there is no guarantee that the master table's primary key will be the same key used in the detail table. Running the Dataset's selects as if a read command had been called allows the driver access to the necessary information, and ensures that all components of the document are deleted.
  • the expressionDelete method takes as its sole parameter a SQL expression which describes the set of primary keys of documents which the user wishes to delete.
  • the expression is executed, with the assumption that the first column of the resulting rows will be the primary key. These primary keys are iterated through, each being loaded into a call to singleDelete.
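  • A JDBC sketch of this expressionDelete behavior, assuming a singleDelete method exists elsewhere in the driver, might look like:

      import java.sql.Connection;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.sql.Statement;

      // Sketch only: run the caller's SQL expression, treat the first column of each row
      // as a primary key, and delete each document in turn.
      public class ExpressionDeleteSketch {

          interface Deleter { void singleDelete(String primaryKey) throws SQLException; }

          static void expressionDelete(Connection conn, String expression, Deleter driver) throws SQLException {
              try (Statement stmt = conn.createStatement();
                   ResultSet keys = stmt.executeQuery(expression)) {
                  while (keys.next()) {
                      driver.singleDelete(keys.getString(1));   // first column holds the primary key
                  }
              }
          }
      }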
  • the SM 130 is responsible for XML document and SQL searching and scoring, and connects between the Gateway 110 and the RDMS 140.
  • the SM 130 is implemented as a search.Server class, which is a direct descendent of the xcf.BaseCommandServer class available in the unique generic architecture referred to as the XML Command Framework (XCF), which handles the details of threading, distribution, communication, resource management and general command handling.
  • XCF XML Command Framework
  • the SM 130 inherits all the default command handling and communication functions available in all XCF Command Servers.
  • the SM 130 does not maintain any of its own indexes, but uses a combination of relational indexes and User Defined Functions (UDFs) 145 to provide similarity-scoring methods in addition to traditional search techniques.
  • the SM 130 sends commands to the RDMS 140 to cause the RDMS 140 to execute token attribute similarity scoring based on selected UDFs.
  • the SM 130 also performs aggregation of token attribute scores from the RDMS 140 to determine document or record similarity scores using selected choice algorithms. SQL commands sent by the SM 130 to the RDMS 140 are used to execute functions within the RDMS 140 and to register UDFs 145 with the RDMS 140.
  • There are several command handlers required by the SM 130 in order to satisfactorily execute XCL commands. These include schema, datasource, measure (UDF), and choice related command handlers.
  • the schema related command handlers are shown in Table 13.
  • the SM 130 communicates with the RDMS 140 via the Java Database Connectivity (JDBC) application programming interface.
  • JDBC Java Database Connectivity
  • a similarity search is generally initiated when the Gateway 110 receives a QUERY command containing a search request from a Client 150, and the Gateway 110 routes the QUERY command to the SM 130.
  • the SM 130 generally executes the QUERY command by accessing a SCHEMA previously defined by a Client 150 and specified in the QUERY command, and parsing the QUERY command into a string of SQL statements. These SQL statements are sent to the RDMS 140 where they are executed to perform a similarity search of token attributes and scoring of the attributes of the target documents specified in the SCHEMA and stored in the RDMS 140.
  • the attribute similarity scores are then returned to the SM 130 from the RDMS 140 where weighting factors specified in the SCHEMA are applied to each score and Choice algorithms specified in the SCHEMA aggregate or "roll-up" the attribute scores to obtain an overall similarity score for each target document or record specified in the SCHEMA or QUERY command.
  • the scores are then returned to the Gateway 110 by the SM 130 in a RESULT document, which is then returned to the Client 150.
  • As an example of attribute scoring by the SM 130, consider the following SQL statement sent by the SM 130 to the RDMS 140:
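  • The statement itself is not reproduced in this text. Purely as an illustration of the idea (each column scored against an anchor value by a measure UDF), a hypothetical statement with invented table, column and UDF names might be issued over JDBC as follows:

      import java.sql.Connection;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.sql.Statement;

      // Sketch only: the table, column and UDF names (CLAIMANTS, NAMEDIFF, NUMDIFF) are
      // invented; the point is that each column is scored by a measure UDF against an anchor value.
      public class AttributeScoringSketch {

          static void scoreAttributes(Connection conn) throws SQLException {
              String sql =
                  "SELECT id, " +
                  "       NAMEDIFF(name, 'JOHN SMITH') AS name_score, " +
                  "       NUMDIFF(ssn, '123-45-6789')  AS ssn_score " +
                  "FROM   CLAIMANTS";
              try (Statement stmt = conn.createStatement();
                   ResultSet rs = stmt.executeQuery(sql)) {
                  while (rs.next()) {
                      System.out.printf("%s name=%.2f ssn=%.2f%n",
                              rs.getString("id"), rs.getDouble("name_score"), rs.getDouble("ssn_score"));
                  }
              }
          }
      }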
  • the SM 130 has caused the RDMS 140 to score a series of attributes independently and the RDMS 140 has returned a set of scores shown in Table 18 to the SM 130.
  • the SM 130 determines the overall score of a record (or document) by aggregation through use of a Choice algorithm specified in the associated SCHEMA.
  • An example of aggregation may be simply averaging the attribute scores after first multiplying them by relative weight factors, as specified in a QUERY command. In the example case, all fields are weighted evenly (1.00), and therefore the score is a simple average.
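  • For instance, with hypothetical attribute scores of 0.85, 0.70 and 1.00 and equal weights of 1.00, the aggregated document score would be (0.85 + 0.70 + 1.00) / 3 = 0.85; these numbers are illustrative only and are not taken from Table 18 or Table 19.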
  • Figure 4 depicts an example of a RESULT document from the example of Table 19.
  • the Statistics Processing Module (SPM) 135 enables the acquisition of statistical information about the data stored in search tables in the RDMS 140, using the built-in functions available in the RDMS 140. This enables the definition of statistics after search data has been stored in the RDMS 140.
  • the Statistics Processing Module (SPM) 135 gives the user the ability to specify the fields upon which they wish to obtain statistics. The list of fields selected will act as a combination when computing occurrences. For example, the most frequently occurring first, middle, and last name combination. In addition to the fields, the user will be able to provide count restriction (e.g., only those with 4 or more occurrences) along with data restriction (e.g., only those records in Texas).
  • Figure 5A depicts a process 500 for handling a STATISTICS command in a Search Manager (SM) 130 when the SSE Server 190 receives a STATISTICS command and a CommandHandler is invoked to handle the process.
  • SM 130 receives a STATISTICS command
  • the Statistics Definition to be used in the generation process is identified 510.
  • the SCHEMA (search table) from which these statistics are based is then identified 520.
  • an SQL statement is issued to extract the necessary statistical information from the SCHEMA 530. If the results of a QUERY command are not already present, a new statistics table is created to store the results of a QUERY command 540.
  • the statistics table is then populated with the results of the QUERY command 550.
  • a statistics SCHEMA (with mapping and measures) is generated 560. And lastly, the newly created statistics SCHEMA is added to the SM 130 so that the statistics table becomes a new search table and is exposed to the client as a searchable database 570.
  • Figure 5B depicts the dataflow 502 in the statistics command process of Figure 5A.
  • Statistic Definitions are considered Components that fit into the ComponentManager architecture with their persistence directory being "statistics", as described below with regard to the CommandServer of the XCF.
  • the management commands are handled by the ComponentAdd, ComponentRemove and ComponentRead CommandHandlers available in the CommandServer's registered CommandHandlers.
  • the RDMS 140 is generally considered to be an SQL database, although it is not limited to this type of database.
  • the RDMS 140 may comprise a DB2 Relational Database Management System by IBM Corporation.
  • the SM 130 communicates with the RDMS 140 by sending commands and receiving data across a JDBC application programming interface (API).
  • the SM 130 is able to cause the RDMS 140 to execute conventional RDBMS commands as well as commands to execute the User Defined Functions (UDFs) 145 contained in a library in the RDMS 140 for providing similarity-scoring methods in addition to traditional search techniques.
  • the VDM 120 also communicates with the RDMS 140 via a JDBC application programming interface (API).
  • API JDBC application programming interface
  • UDFs 145 provide an extension to a Relational Database Management System (RDMS) suite of built-in functions.
  • the built-in functions include a series of math, string, and date functions. However, none of these built-in functions generally provide any similarity or distance functional capability needed for similarity searching.
  • the UDFs 145 may be downloaded into the RDMS 140 by the SSE Server 190 to provide the functions required for similarity searching.
  • UDFs 145 may be written in C, C++, Java, or a database-specific procedure language. The implementations of these UDFs 145 are known as Measures. The Measures compare two strings of document attributes and generate a score that is normalized to a value between 0.00 and 1.00.
  • Figure 6 describes the Measures implemented as UDFs 145 in an embodiment of the SSE.
  • the term "tokenized Compare” is used in the Measure descriptions of Figure 6.
  • it means to use domain-specific (and thus domain-limited) knowledge to break the input strings into their constituent parts.
  • For example, an address may be broken into tokens comprising Number, Street Name, Street Type and, optionally, Apartment, as sketched below. This may improve the quality of scoring by allowing different weights for different tokens, and by allowing different, more specific measures to be used on each token.
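  • A naive tokenized-compare sketch for addresses follows; the regular expression, token names and class are illustrative assumptions, not the patent's parser.

      import java.util.LinkedHashMap;
      import java.util.Map;
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      // Sketch only: break a street address into Number, Street Name and Street Type so that
      // each token can be weighted and measured separately.
      public class AddressTokenizerSketch {

          private static final Pattern ADDRESS =
                  Pattern.compile("(?<number>\\d+)\\s+(?<name>.+)\\s+(?<type>St|Ave|Blvd|Rd)\\.?",
                          Pattern.CASE_INSENSITIVE);

          static Map<String, String> tokenize(String address) {
              Map<String, String> tokens = new LinkedHashMap<>();
              Matcher m = ADDRESS.matcher(address.trim());
              if (m.matches()) {
                  tokens.put("Number", m.group("number"));
                  tokens.put("Street Name", m.group("name"));
                  tokens.put("Street Type", m.group("type"));
              }
              return tokens;
          }

          public static void main(String[] args) {
              System.out.println(tokenize("123 Main St"));
              // {Number=123, Street Name=Main, Street Type=St}
          }
      }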
  • Figure 7 depicts an architecture of the XML Command Framework (XCF) 700.
  • the Gateway, VDM and SM described above each rely on the flexible design of the XCF 700 for core processing capability.
  • the XCF 700 functions as XML in and XML out, that is, it generates an XML response to an XML command. It is based upon a unique XML command language XCL that strongly focuses on the needs of search applications. The details of XCL are described below.
  • the architecture of the XCF 700 comprises the following major entities: CommandServer 710 for configuration, overall flow, and central point of contact; CommandExecutor 720 for executing XML commands and providing XML result; CommandResponse 730 for receiving XML results; CommandHandlerFactory 740 for registration and identification of CommandHandlers 742; Component Manager 750 for management of Components 752, Acceptors 756 and Connectors 754, Interceptors 758, and LifetimeManagers 760; and CommandDispatcher 770 containing a Queue 772 for CommandHandler 742 thread management. CommandHandlers 742 process individual XML commands.
  • Components 752 are pluggable units of functionality.
  • Connectors 754 and Acceptors 756 provide for communication into and out of the CommandServer 710.
  • Interceptors 758 hook to intercept incoming commands.
  • LifetimeManagers 760 manage lifetime of CommandHandler 742 execution.
  • Each of these entities is defined as an interface that allows for multiple implementations of an entity.
  • Each interface is an object that has at least one base implementation defined in XCF.
  • Each interface is limited to the contract imposed by the interface.
  • the CommandServer 710 is the central point-of-contact for all things in the XCF. It is responsible for overall execution flow and provides central access to services and components of the system. Most objects that reference the XCF services are passed a CommandServer reference in a constructor or in a setter method. Central access to synchronization objects can be placed here, as all supporting objects will have access. Being the central point of most things, it is also responsible for bootstrapping, initialization and configuration.
  • the interface to the CommandExecutor 720 is defined as: void execute(String command, CommandResponse response), where command is the XML command to be executed by a CommandHandler 742, and CommandResponse is any object that implements the interface to the CommandResponse 730. It is this object that will be called asynchronously when the command has been completed.
  • the CommandResponse interface 730 must be implemented when calling a CommandExecutor 720 execute method.
  • the contract is: void setValue(String value). Once a command has been completed, it will call this method. It is here where a particular CommandResponse 730 implementation will get a response value and process it accordingly.
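  • The asynchronous contract described above might be sketched as follows; the interfaces mirror the names used in the text, but the in-line executor is a trivial stand-in rather than the XCF implementation.

      // Sketch only: a caller supplies a CommandResponse; the executor calls setValue when done.
      public class CommandResponseSketch {

          interface CommandResponse { void setValue(String value); }

          interface CommandExecutor { void execute(String command, CommandResponse response); }

          public static void main(String[] args) {
              // A toy executor that "runs" the command on another thread and calls back when done.
              CommandExecutor executor = (command, response) ->
                      new Thread(() -> response.setValue("<RESPONSE>handled: " + command + "</RESPONSE>")).start();

              // The caller's CommandResponse implementation receives the result asynchronously.
              executor.execute("<QUERY NAME=\"example\"/>",
                      value -> System.out.println("got response: " + value));
          }
      }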
  • the CommandHandlers 742 provide means for interpreting and executing XML commands for solving a particular problem. For each problem that needs a solution, there is an assigned CommandHandler 742. First, consider a standard XML command:
  • a CommandHandler 742 is uniquely identified in the system by the three attributes shown in Table 20. These attributes are known as the signature of the CommandHandler 742.
  • the CommandHandlers 742 provide template methods for the following functions: 1) ensure proper initialization of the CommandHandler 742;
  • Figure 8 depicts the format of a RESPONSE generated by a CommandHandler 742.
  • several CommandHandlers 742 are responsible for overall management of the CommandServer 710. These are shown in Table 21, along with the CommandHandler PassThrough, which is not automatically registered.
  • the CommandHandlerFactory 740 serves as a factory for CommandHandlers 742. There is only one instance of this object per CommandServer 710. This object is responsible for the following functions:
  • a Component 752 is identified by its type and name. The type enables grouping of Components 752, and the name uniquely identifies a Component 752 within the group.
  • the Component 752 is responsible for determining its name, and ComponentManager 750 handles grouping on type.
  • the lifecycle of a Component 752 is: 1) create the Component 752; 2) configure the Component 752 using a configuration specified in XML;
  • the CommandDispatcher 770 exposes a single dispatch method, which may throw an InterruptedException.
  • the CommandDispatcher interface 770 differs from the CommandExecutor 720 because it expects an initialized CommandHandler 742 rather than an XML string, and it delegates the command response functionality to the CommandHandler 742 itself.
  • the BaseCommandDispatcher uses a PooledExecutor. As a command is added, it is placed in this bounded pool, and when a thread becomes available, the CommandHandler's run() method is called.
  • the function of the LifetimeManager 760 is to keep track of any objects that require or request lifetime management.
  • a LifetimeManager 760 is an optional part of a CommandServer 710 and is not explicitly listed in the CommandServer interface 710. It can be registered as a separate Component 752, and can manage anything that implements the LifetimeManager interface 760.
  • the only objects that require Lifetime management are CommandHandlers 742.
  • the BaseCommandServer creates a CommandLifetimeManager component that is dedicated to this task of managing the lifetime of commands/CommandHandlers 742 that enter the system. CommandHandlers themselves do not implement the Lifetime Manager interface 760.
  • CommandInterceptors 758 are components that can be added to a CommandServer 710. Their function is to intercept commands before they are executed. Implementations of CommandInterceptor 758 should raise a CommandInterceptionException if evaluation fails. The BaseCommandServer 710 will evaluate all registered Interceptors 758 before calling the dispatcher. If one fails, the dispatcher will not be called and the CommandInterceptionException's getMessage() will be placed in the error block of the response.
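  • a hedged sketch of what such an interceptor might look like follows; the interface, the exception class, and the MaxSizeInterceptor shown here only paraphrase the contract described above and do not reproduce the actual XCF signatures.

```java
// Illustrative interceptor that rejects commands above a size limit before dispatch.
public class InterceptorSketch {

    static class CommandInterceptionException extends Exception {
        CommandInterceptionException(String message) { super(message); }
    }

    interface CommandInterceptor {
        // Called before the dispatcher; throwing aborts execution, and the
        // exception's getMessage() ends up in the error block of the response.
        void intercept(String xclCommand) throws CommandInterceptionException;
    }

    static class MaxSizeInterceptor implements CommandInterceptor {
        private final int maxChars;
        MaxSizeInterceptor(int maxChars) { this.maxChars = maxChars; }

        @Override
        public void intercept(String xclCommand) throws CommandInterceptionException {
            if (xclCommand.length() > maxChars) {
                throw new CommandInterceptionException(
                    "command exceeds " + maxChars + " characters");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        new MaxSizeInterceptor(1_000_000).intercept("<QUERY .../>");
    }
}
```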
  • the Acceptor 756 and Connector 754 pair is an abstraction of the communication between clients and CommandServers 710, and between one CommandServer 710 and other CommandServers 710.
  • CommandAcceptors 756 and CommandConnectors 754 extend the Component interface 752, and are therefore seen by the CommandServer 710 as Components 752 that are initialized, configured, activated and deactivated similar to all other Components 752.
  • the ComponentManager 750 manages Acceptors 756 and Connectors 754.
  • a CommandAcceptor 756 is an interface that defines how commands are accepted into a CommandServer 710. It is the responsibility of the CommandAcceptor 756 to encapsulate all the communication logic necessary to receive commands. It passes those commands (in string form) to the CommandServer 710 via its CommandExecutor interface 720. Once a command is successfully executed, it is the responsibility of the CommandAcceptor 756 to return the response to the originating client.
  • the CommandConnector 754 encapsulates all the communication logic necessary for moving commands across a "wire", but in the case of a Connector 754, it is responsible for sending commands, as opposed to receiving commands. It is a client's connection point to a CommandServer 710. For every CommandAcceptor implementation 756, there is generally a CommandConnector implementation 754.
  • the CommandConnector interface 754 extends the CommandExecutor interface 720, thereby implying that it executes commands. This enables location transparency as both CommandServer 710 and CommandConnector 754 expose the CommandExecutor interface 720.
  • CommandAcceptors 756 and CommandConnectors 754 are Components 752 that are managed by a CommandServer's ComponentManager 750.
  • the Acceptors 756 and Connectors 754 are the clients' view of the CommandServer 710.
  • Several implementations of Acceptor/Connector interfaces provide most communication needs. These classes are shown in Table 22.
  • All Connectors 754 are asynchronous from the user's point of view, even if internally they are synchronous and make use of threads and socket pools to provide the illusion of asynchronous communication.
  • Figure 9A depicts a process 900 for handling an XCL command in a CommandServer 710.
  • An XCL command is formulated, a CommandResponse object is provided, and a CommandServer's CommandExecutor interface is called 910.
  • the CommandExecutor 720 calls a CommandHandlerFactory 740 with a raw XCL command string 920.
  • the XCL command string is parsed, a registered CommandHandler 742 is found with the same type, action, and version signature as the XCL command, a CommandHandler prototype is cloned along with the runtime state information, and the clone is passed back to the CommandServer 930.
  • the CommandServer 710 gives the newly cloned CommandHandler 742 a reference to the CommandServer and the same CommandResponse object provided in the first step 940.
  • the CommandServer 710 then delegates execution of the CommandHandler 742 to the CommandDispatcher 770 by placing it in its Queue 772, 950.
  • When ready, the CommandDispatcher 770 will grab a thread from the Queue 772, 960. The CommandDispatcher 770 will then call the CommandHandler run() method 970. Once running, the CommandHandler 742 can do whatever is required to satisfy the request, making use of system services of the CommandServer 710, 980. Once a result (or error) has been generated, the CommandHandler 742 places the value in setResult(), loads its CommandResponse object setValue() with the result, and the result passes back to the caller 990.
  • Figure 9B depicts a dataflow 902 of an XCL command process steps shown in Figure 9A in a CommandServer architecture.
  • the SSE employs a Command Language based on XML, the Extensible Markup Language. This Command Language is called XCL.
  • XCL commands are issued as XML documents and search results are returned as XML documents.
  • the syntax of XCL consists of XML elements, their values and attributes, which control the behavior of the Similarity Search Engine Server.
  • a client program can define and execute searches employing the SSE Server.
  • API Application Programming Interface
  • All SSE commands are formed in XML and run through the execute interface, which is implemented for both Java and COM.
  • In Java, there are synchronous and asynchronous versions.
  • In COM, the interface is always synchronous.
  • In both versions there are two similar methods. The first accepts a string and is appropriate when the application does not make extensive use of XML, or when it wants to use the SAX parser for speed and does not employ an internal representation.
  • the other method accepts a DOM instance and opens the door to more advanced XML technologies such as XSL.
  • Figure 10 depicts the general XCL command format.
  • XCL commands look like XML documents. Each command is a document and its clauses are elements. Command options are given by element or attribute values.
  • the XCL command language provides three main types of commands for building similarity applications - a SCHEMA command that defines the document set for the similarity search, a QUERY command that searches the document set, and some administrative commands for managing documents, queries, measures, and so on.
  • the SCHEMA command has three main clauses.
  • a STRUCTURE clause describes the structure of the documents to be searched, arranging data elements into an XML hierarchy that expresses their relationships.
  • a MAPPING clause maps search terms with target values from the datasources.
  • a SEMANTICS clause indicates how similarity is to be assessed.
  • the QUERY command also has several clauses.
  • a WHERE clause indicates the structure and values for the search terms.
  • a FROM clause describes the datasources to be accessed.
  • SELECT and RESTRICT clauses describe the result set and scoring criteria.
  • an optional SEMANTICS clause overrides semantics defined in the SCHEMA.
  • the administrative commands allow the application to read, write, and delete the documents, queries, schemas, measures, parsings, choices, and datatypes used in the search. For multi-user situations, a simple locking protocol is provided.
  • XCL commands return result sets in the form of XML documents.
  • the QUERY command contains similarity scores for the documents searched.
  • the result set can return scores for entire documents or for any elements and attributes they contain. In case of problems, the result set contains error or warning messages.
  • Results can be returned synchronously or asynchronously. The synchronous calls block until the result set is ready, while the asynchronous calls return immediately with results returned via a callback coded in the client. Depending on the needs of the application, the results can be retrieved in either string or DOM format.
  • the ResultSet class used by SSE Command Language mimics the ResultSet class for JDBC, allowing applications to iterate through the results to access their contents.
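  • a sketch of iterating such a result set in the JDBC style follows; the SSEResultSet type, its method names, and the canned rows in main are assumptions made for illustration and may not match the actual client library.

```java
// Sketch of iterating an SSE result set in the JDBC ResultSet style.
public class ResultIterationSketch {

    interface SSEResultSet {
        boolean next();                 // advance to the next scored document
        double getScore();              // similarity score for the current document
        String getString(String xpath); // target value at the given path
    }

    static void printMatches(SSEResultSet results) {
        while (results.next()) {
            System.out.printf("%.2f  %s%n",
                results.getScore(),
                results.getString("Claimant/Name/Last"));
        }
    }

    public static void main(String[] args) {
        // Canned two-row result set standing in for a real similarity search result.
        SSEResultSet canned = new SSEResultSet() {
            private final double[] scores = {0.95, 0.82};
            private final String[] names = {"Smith", "Smyth"};
            private int row = -1;
            public boolean next() { return ++row < scores.length; }
            public double getScore() { return scores[row]; }
            public String getString(String xpath) { return names[row]; }
        };
        printMatches(canned);
    }
}
```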
  • An XML document begins at a top node (the root element), and elements can be nested, forming a hierarchy. The bottom or "leaf" nodes contain the document's content (data values).
  • a document set is a collection of documents with the same hierarchical layout, as defined by the schema for the search.
  • An anchor document is a hierarchy of XML elements that represent the data values to be used as search criteria. Currently, there can be only one instance of each element in the anchor document. However, the target documents can have repeated groups.
  • Figure 11 depicts a hierarchical layout 1100 that allows multiple tables 1110, 1120 to be mapped onto the search document 1160 via datasets 1130, 1140 through the use of the VDM Relational Database Driver 1150 discussed above.
  • the Database values are mapped to their corresponding places in the virtual XML document to be searched.
  • a target document 1160 is a hierarchy of values drawn from a relational database 1110, 1120. Values from the relational database 1110, 1120 are captured via ODBC or JDBC. Target documents can span multiple tables, joined by master/detail fields.
  • Documents examined by the SSE Server are virtual in the sense that they provide hierarchical representations that match the structure of the search schema while the data they contain still resides in the database tables 1110, 1120.
  • the target documents are a direct reflection of tables being tapped in the search.
  • Each valued element corresponds to a field (column) in a table, and group elements correspond to the tables themselves 1110, 1120.
  • the hierarchical layout allows multiple tables 1110, 1120 to be mapped onto the virtual XML search document 1160, even tables from other databases in the case of a cross-database search.
  • the relationship between the target documents and datasource is mapped as part of the schema defined for the search.
  • a database can have many schemas, providing different ways of searching it.
  • Figure 12 depicts the format of a SCHEMA command.
  • the SCHEMA command enables a user to manage the schema for a search document, defining the hierarchical structure of the document and mapping its elements to data sources and similarity measures.
  • a SCHEMA command for a search document comprises its STRUCTURE clause, its MAPPING clause, and its SEMANTICS clause.
  • the STRUCTURE clause defines the search terms and their relationships in XML format.
  • the MAPPING clause defines the target values and where they reside.
  • the SEMANTICS clause can include overrides to the default similarity measures, choices, and weights. Schemas can be listed, read, written, deleted, locked, and unlocked by the SCHEMA command, as required.
  • Search schemas must be coded manually according to the syntax given here. Predefined datatypes provide shortcuts for those wishing to use standard domain-oriented elements and measures. Search schemas normally reside in an SSE schema repository.
  • the "list” operator of the SCHEMA command returns a childless ⁇ SCHEMA> element for each schema in the repository. With a “read” operator, the SCHEMA command returns the schema indicated. Or if the schema name is given as "*”, the “read” operator returns all schemas in the directory.
  • the "write” operator causes the SCHEMA command to write the specified search schema into the directory, overwriting any existing schema with the same name.
  • the "delete” operator purges the specified document.
  • the SSE Server uses the hierarchical structure of the XML anchor document to express definitions, options, and overrides throughout the XCL command language.
  • the XML structure of the anchor document is defined in the STRUCTURE clause, specifying the data elements involved in the search along with their positions in the search document.
  • the SEMANTICS clause refers to this structure in mapping these elements to information sources, similarity measures, and so on.
  • a unique aspect of the XML hierarchy shown in the STRUCTURE clause is that no values are given.
  • Elements that represent search terms are shown as empty - i.e., just the XML tag. The values are to be supplied by the associated datasources. In the case of a "flat" search, all the target values are for child elements belonging to the parent.
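  • a hypothetical STRUCTURE clause illustrating these empty search-term elements is sketched below as a Java text block; the PRODUCT/MODEL element names are assumptions and do not reproduce Figure 15.

```java
// Illustrative STRUCTURE clause for a hierarchical product search.
public class StructureClauseSketch {
    static final String STRUCTURE = """
        <STRUCTURE>
          <PRODUCT>
            <NAME/>
            <MODEL>
              <NUMBER/>
              <COLOR/>
            </MODEL>
          </PRODUCT>
        </STRUCTURE>
        """;

    public static void main(String[] args) {
        System.out.println(STRUCTURE);
    }
}
```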
  • Figure 15 depicts an example of a STRUCTURE clause for a hierarchical search.
  • Figure 16 depicts the format of the MAPPING clause, which associates elements in the anchor document with target fields in the database.
  • the MAPPING clause governs the mapping between elements of the XML search document and the elements of a relational database. Its contents are: database name, location, driver, username, and password. When multiple databases are connected, the mapping also indicates the node in the search document schema to populate with data from the database.
  • Each database table or view is represented by a dataset, which gives the bindings of database fields to elements and attributes in the search schema.
  • Datasets bind with each other to join the database tables into a hierarchy that matches the structure of the search schema.
  • the MAPPING clause for a relational datasource contains a <DATASET> element for every table in the database that contains target values for the search.
  • <DATASET> contains the datasource attribute that identifies the object used as the datasource.
  • the <DATASET> also contains an <EXPRESSION> element that tells SSE that the datasource is a relational table.
  • the <DATASET> also includes a <PATH> element that indicates which element in the search schema contains the search terms for target values drawn from the table.
  • Target values are mapped to the search schema with a <FIELD> element for each field to be included.
  • the <DATASET> for a relational table also contains a <BIND> element that defines master/detail relationships with other tables. This binding resembles a JOIN operation by the DBMS, associating a foreign key in the detail table with a primary key in the master table.
  • Figure 17 depicts an example of a MAPPING clause.
  • the ⁇ MAPPING> may include two ⁇ DATASET> elements, the first to describe the master Product table, and the second to describe the detail Model table.
  • Figure 18 depicts the format of the SEMANTICS clause, which assigns measures, choices, and weights to search terms.
  • the SEMANTICS clause provides intelligence to guide the search. By default, standard measures based on datasource datatypes are assigned to the search terms. Sometimes these provide adequate results, but other times applications require measures that take into account the way the data is used.
  • New semantics are assigned with the APPLY clause, which consists of a repeatable PATH clause and up to one each of the following: MEASURE clause, CHOICE clause, and WEIGHT clause.
  • the PATH clause indicates an element in the search schema that is to receive new semantics.
  • the xpath notation traces a hierarchical path to the element beginning at the root.
  • the MEASURE clause allows the use of refined measures for the elements indicated in the APPLY clause.
  • the measure specified in the MEASURE clause takes precedence over any measure specified in the original schema.
  • the specified measure can either be a variation on the standard measure, a new measure defined using the SSE syntax, or a user-coded measure.
  • the CHOICE clause enables a different pairing algorithm to be assigned to parsed values of the elements indicated in the APPLY clause. These algorithms perform aggregation of the similarity search scores of the attributes determined by the measure algorithms.
  • the WEIGHT clause allows a relative weight to be assigned to the scores of the elements listed in the APPLY clause.
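  • a hedged sketch of an APPLY clause assigning a measure, a choice, and a weight to one element is shown below; the measure and choice names are drawn from the algorithm families mentioned in this document, but the exact spellings and values are assumptions.

```java
// Illustrative SEMANTICS clause overriding measure, choice, and weight for one path.
public class SemanticsClauseSketch {
    static final String SEMANTICS = """
        <SEMANTICS>
          <APPLY>
            <PATH>Product/Model/Number</PATH>
            <MEASURE>StringDifference</MEASURE>
            <CHOICE>SingleBest</CHOICE>
            <WEIGHT>0.75</WEIGHT>
          </APPLY>
        </SEMANTICS>
        """;

    public static void main(String[] args) {
        System.out.println(SEMANTICS);
    }
}
```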
  • Figure 19 depicts the hierarchical structure 1900 of the SCHEMA command 1910.
  • the SCHEMA command 1910 comprises a STRUCTURE clause 1915, a SEMANTICS clause 1920 and a MAPPING clause 1925.
  • the SEMANTICS clause 1920 comprises a MEASURE clause 1930 for identifying Measures 1950 to be used for scoring document attribute tokens, a CHOICE clause 1935 for identifying the Aggregation algorithms 1955 for "rolling up" token scores to obtain document scores, a WEIGHTING clause 1940 for emphasizing or de-emphasizing token scores, and a PATH clause 1945 for indicating a path to an element of a search schema in a RDMS to which the SEMANTICS clause 1920 will apply.
  • the MEASURE clause 1930 contains a partial list of MEASURES algorithms 1950 for determining token attribute scores.
  • Figure 6 above describes a more detailed list of MEASURE algorithms.
  • the CHOICE clause 1935 contains a partial list of CHOICE algorithms 1955 for aggregating token scores into document scores.
  • Figure 20 depicts the format of the QUERY command.
  • the QUERY command initiates a similarity search, which scores matches between search terms indicated in a WHERE clause and target values drawn from the relational datasource indicated in the FROM clause.
  • the RESTRICT clause and SELECT clause determine what results are returned.
  • the QUERY command looks to the search schema for the structure and semantics of the search, or to subordinate SEMANTICS clauses that override the default settings in the schema document.
  • the format of the WHERE clause is shown in Figure 20.
  • the WHERE clause indicates the anchor to be compared to target values drawn from the datasources specified in the FROM clause.
  • the anchor document is structured as a hierarchy to indicate parent/child relationships, reflecting the STRUCTURE clause of the search schema.
  • the WHERE clause takes the form of an XML document structure populated with anchor values, i.e. the values that represent the "ideal" for the search. This document's structure conforms to the structure of the search schema. However, only the elements contributing to the similarity need to be included.
  • Hierarchical relationships among elements, which would be established with JOIN operations in SQL, are represented in the SSE Command Language by the nesting of elements in the WHERE clause.
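  • an illustrative WHERE clause in which the anchor values are nested to mirror the search schema rather than joined with SQL is sketched below; the element names and values are hypothetical.

```java
// Illustrative WHERE clause: an anchor document populated with search values.
public class WhereClauseSketch {
    static final String WHERE = """
        <WHERE>
          <PRODUCT>
            <NAME>Widget</NAME>
            <MODEL>
              <NUMBER>A-100</NUMBER>
              <COLOR>Navy</COLOR>
            </MODEL>
          </PRODUCT>
        </WHERE>
        """;

    public static void main(String[] args) {
        System.out.println(WHERE);
    }
}
```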
  • the format of the FROM clause is shown in Figure 20.
  • the FROM clause associates the QUERY with the document set being searched.
  • the FROM clause identifies the set of documents to be examined in the search. These are virtual documents drawn from relational datasources according to a predefined mapping.
  • the FROM clause offers two ways to identify search documents. The first draws target values from a relational datasource through the VDM. The second presents the documents themselves as part of the FROM clause.
  • Figure 22A depicts examples of a FROM clause that indicates the search should examine the entire set for "acme_products".
  • Figure 22B depicts an example of a FROM clause that indicates the search should examine the documents shown.
  • Figure 23 depicts the format of the RESTRICT clause.
  • the RESTRICT clause places limits on the results returned by the QUERY.
  • the RESTRICT clause offers three methods for culling the results of a QUERY before they are returned to the client. When a RESTRICT clause contains multiple methods, they are applied in the order listed, each working on the result of the one before it.
  • the SCORE clause includes <START> and <END> elements (both required, neither repeating) to define the range of scores for documents to be returned. If the <START> score is greater than the <END> score, the documents receiving scores in that range are returned in descending order by score. That is, the score closest to 1.00 comes first. When the <END> score is the larger, the results are in ascending order.
  • the INDEX clause includes <START> and <END> elements (both required, neither repeating) to define a sequence of documents to return.
  • candidate documents are numbered sequentially and the documents with sequence numbers falling in the range between <START> and <END> are returned. This is useful for clients that need a fixed number of documents returned.
  • the sequence numbers must be positive integers.
  • Figure 24 depicts an example of the RESTRICT clause. This RESTRICT clause first limits the scores to those over 0.80. Then it returns the first three. If there are not at least three remaining, it returns what's left.
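  • a hedged approximation of a RESTRICT clause with that behavior (scores from 1.00 down to 0.80, then only the first three documents) is sketched below; the element spellings follow the SCORE and INDEX clauses above but are not copied from Figure 24.

```java
// Illustrative RESTRICT clause combining a SCORE filter with an INDEX filter.
public class RestrictClauseSketch {
    static final String RESTRICT = """
        <RESTRICT>
          <SCORE>
            <START>1.00</START>
            <END>0.80</END>
          </SCORE>
          <INDEX>
            <START>1</START>
            <END>3</END>
          </INDEX>
        </RESTRICT>
        """;

    public static void main(String[] args) {
        System.out.println(RESTRICT);
    }
}
```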
  • the format of the SELECT clause is shown in Figure 19.
  • the SELECT clause allows the application to determine the structure of the result set. Otherwise, the results consist of a list of all documents examined with a similarity score for each document.
  • the SELECT clause governs the contents of the result set returned to the client.
  • the client receives a list of DOCUMENT elements, each with a score that indicates its degree of similarity to the search terms in the QUERY. The score is reported as an added attribute of the <DOCUMENT> element, along with its name and schema. If the boolean for scoring is set to false, only the document name and schema are returned. Likewise, if the QUERY does not include a WHERE clause, no scoring is performed.
  • a SELECT clause that includes a structure from the search schema returns <DOCUMENT> elements containing that structure, each with the target value considered in the search. If the boolean for scoring is set to true (default), the result set includes <DETAIL> elements that contain a <PATH> element structure given in the WHERE clause and a <SCORE> element with the similarity score.
  • Figure 25 depicts an example of a SELECT clause that returns both target values and similarity scores.
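  • a hedged sketch of a SELECT clause of this kind, requesting the schema structure plus per-element scoring details, is shown below; the element names and the score attribute spelling are assumptions.

```java
// Illustrative SELECT clause requesting target values and per-element scores.
public class SelectClauseSketch {
    static final String SELECT = """
        <SELECT score="true">
          <PRODUCT>
            <NAME/>
            <MODEL>
              <NUMBER/>
            </MODEL>
          </PRODUCT>
        </SELECT>
        """;

    public static void main(String[] args) {
        System.out.println(SELECT);
    }
}
```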
  • the QUERY command also contains a SEMANTICS clause, as shown in Figure 20.
  • the SEMANTICS clause in a QUERY command has the same format as a SEMANTICS clause in a SCHEMA command, and is discussed above in the description of Figure 18 and Figure 19.
  • a SEMANTICS clause specifies the semantics to use in the QUERY, and will override the default SEMANTICS clause contained in the SCHEMA command.
  • a successful QUERY command returns a <RESULT> element whose contents are determined by the SELECT clause as just described above.
  • An unsuccessful QUERY may return an <ERROR> or <WARNING> to the client.
  • a RESPONSE format showing only scores of a similarity search is depicted in Figure 26B, and a RESPONSE format showing details of a similarity search is depicted in Figure 26C.
  • Commands other than a QUERY command return results, but not similarity scores.
  • <RESULT> contains an element that echoes the original command and contains a set of elements of the type requested.
  • a "list" operation produces a set of childless elements of the type requested, each with an identifying name attribute.
  • a "read” operation returns complete XML structures for the elements requested.
  • the ⁇ DETAIL> element depicted in Figure 26C is included when the score attribute of a QUERY command SELECT clause is set to "true". This produces a list of elements and attributes used in the WHERE clause of the QUERY command and the target values used to produce the scores. Each score is reported in a ⁇ SCORE> element of an APPLY clause along with a ⁇ WHERE> element with the xpath of the search term and a ⁇ FROM> element with the xpath of the target value. When multiple target values are involved, the xpath includes an index to indicate which one was chosen for scoring, e.g.
  • the third value (in tree order) for a product's model number would be Product/Model/Number.
  • the ⁇ DETAIL> element preserves any attributes from the original command.
  • an index attribute is added and its value indicates the document's sequence number among others in the set.
  • Figure 27 shows an example of a RESPONSE with results of a similarity search containing scores for three documents, where each document's score is based on comparing its values with the search terms, a unique name identifies the document, and the search schema used in the command is identified.
  • Figure 28 depicts the format of a DOCUMENT command.
  • the DOCUMENT command enables the application to manage document sets involved in the search.
  • the DOCUMENT command includes operations for managing the document set used in the search.
  • the "list” operation returns a childless ⁇ DOCUMENT> element for each document in the set.
  • the "read” operation retrieves documents from the datasource according to the mapping defined in the schema.
  • the "lock” and “unlock” operations provide a simple locking protocol to prevent conflicting updates in case several DOCUMENT operations are attempted at once by different clients.
  • the operation attribute returns as “locked” or “denied” to indicate the success of the operation.
  • "*" is specified instead of the document name, the "read” operation returns all documents.
  • Figure 29 depicts an example of a search document representative of the search document depicted in Figure 11 above. To carry out a search of this document, the structure would be populated with the values used in the search to form the anchor document. The same structure is used to return the results of a search, including the documents found to be similar to the search criteria, in addition to the scores indicating the degree of similarity for each document.
  • Figure 30 depicts a format of a STATISTICS command definition template, where bold italic represents optional sections.
  • the Statistics Processing Module (SPM) discussed above in regard to Figure 5 uses this definition template.
  • Figure 31 is an example of a simple STATISTICS definition.
  • the FROM clause identifies a document Schema and the SELECT clause identifies the last, first, and middle names of a claimant.
  • Figure 32 depicts a SCHEMA response to a STATISTICS generation command.
  • Figure 33 depicts the format of a BATCH command.
  • BATCH commands provide a way to collect the results of several related operations into a single XML element. Each command in the batch is executed in sequence.
  • the DATASOURCE command is used for identifying and maintaining datasources in the Relational Database Management System (RDMS).
  • the MEASURE command is used for creating and maintaining the measures for determining document attribute and token similarity scores stored in the RDMS as User Defined Functions (UDFs).
  • the CHOICE command is used for creating and maintaining aggregation (roll-up) algorithms stored in the RDMS and used by the Search Manager for determining overall document similarity scores.
  • Figure 34 depicts the overall process of setting up a schema. Prior to beginning this process, a target database must be imported into the Relational Database Management System associated with the SSE Server, as shown in Figure 1.
  • the user must have knowledge of the structure of the data within the imported database.
  • the structure knowledge is required for the user to set up a schema.
  • the VDM synthesizes XML documents from relational data
  • the SM synthesizes relational data from XML documents.
  • a schema must be established 3400 by the Client sending 3410 and the Gateway receiving 3420 a command.
  • the command is transmitted to the SSE Server from the Client using sockets, HTTP or JMS protocol.
  • the command is converted to XCL by the Gateway and it is determined if it is a SCHEMA command.
  • Figure 35 depicts an example of a SCHEMA command based on the format shown in Figure 12.
  • the Gateway determines that the command is a Schema command 3430. Since this is a SCHEMA command, the Gateway sends the SCHEMA command to the VDM 3440.
  • the VDM builds relational tables and primary key tables based on the Schema command attributes 3460. These tables are then stored for future use 3470.
  • Figures 36A, 36B, and 36C depict the overall process of executing an SSE search. After one or more schemas have been defined, the SSE is ready to accept a QUERY command. A typical QUERY command based on the format shown in Figure 19 might resemble the example QUERY command shown in Figure 37.
  • when a client issues a QUERY command 3602 that is received by the Gateway 3604, it is determined whether there is a WHERE clause in the command 3608. If there were no WHERE clause in a QUERY command 3608, the command would be examined to determine if there was a SELECT clause in the QUERY command 3612. If there were no SELECT clause 3612, a RESULT would be returned to the client 3616. If there were a SELECT clause in the QUERY command 3612, indicating a selection of the structure for the result set to be produced by the QUERY command, the QUERY command would be sent to the VDM 3614.
  • Upon receipt of the QUERY command by the VDM, the VDM extracts the SELECTed values from the RDMS 3618 and includes the SELECTed values in a RESULT set 3620, which is returned to the client 3616. If there were a WHERE clause in a QUERY command 3608, the QUERY command would be sent to the SM 3610.
  • the QUERY command is received at the SM 3630. It is then determined if the QUERY command is a side-by-side comparison 3632. If it is a side-by-side comparison 3632, a recursive process for scoring nested elements is initiated. If it is not a side-by-side comparison 3632, it is determined if the target is a valid schema 3634. If it is not a valid schema 3634, an error condition is returned to the client as a RESULT 3646. Otherwise, the process moves to a determination of a REPEATING GROUP query 3660 in Figure 36C.
  • the recursive process for scoring nested elements that is entered if the Query requires a side-by-side comparison 3632 comprises determining if a root element of a document has been scored 3636. If it has, RESULT is returned to the client 3638, otherwise it is determined if the element has unscored children 3640. If the root element has unscored children 3640, the next unscored child type is examined 3644 and it is determined if this element has unscored children 3640. If this element does not have unscored children 3640, MEASURE and CHOICE are applied to this element type 3642, and the next unscored child type is examined 3644. This process continues until the root element of the document has been scored 3636 and RESULT returned to the client.
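  • a compact sketch of this bottom-up scoring walk appears below: leaves are scored with a measure, parents aggregate child scores with a choice algorithm, and the root score becomes the document score. The Element record, the exact-match measure, and the average-based choice are stand-ins used only to illustrate the recursion, not the actual SM implementation.

```java
// Illustrative post-order scoring of a nested document: MEASURE at leaves, CHOICE at parents.
import java.util.List;

public class RecursiveScoringSketch {

    record Element(String name, String anchorValue, String targetValue, List<Element> children) {}

    // Stand-in measure: exact match scores 1.00, anything else 0.00.
    static double measure(String anchor, String target) {
        return anchor != null && anchor.equalsIgnoreCase(target) ? 1.00 : 0.00;
    }

    // Stand-in choice algorithm: simple average of child scores.
    static double choice(List<Double> childScores) {
        return childScores.stream().mapToDouble(Double::doubleValue).average().orElse(0.00);
    }

    static double score(Element e) {
        if (e.children().isEmpty()) {
            return measure(e.anchorValue(), e.targetValue());                     // leaf: apply MEASURE
        }
        List<Double> childScores =
            e.children().stream().map(RecursiveScoringSketch::score).toList();
        return choice(childScores);                                               // parent: apply CHOICE
    }

    public static void main(String[] args) {
        Element doc = new Element("Product", null, null, List.of(
            new Element("Name", "Widget", "Widget", List.of()),
            new Element("Model", null, null, List.of(
                new Element("Number", "A-100", "A100", List.of())))));
        System.out.printf("document score = %.2f%n", score(doc));
    }
}
```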
  • This process comprises retrieving an XML document from the VDM 3672 and performing a side-by-side scoring 3674 using the recursive process for scoring nested elements described above, including steps 3636, 3638, 3640, 3642 and 3644.
  • a record is dismissed 3678 if the score does not meet the restriction 3676, and the score/pkey is appended to the results 3680 if the score does meet the restriction 3676.
  • the pkeys and scores replace the FROM clause 3682, and control is returned to the Gateway to determine if there is a SELECT clause 3612, with processing continuing as described above in Figure 36A.
  • An SQL command from the example SCHEMA and QUERY commands shown above may be as follows:
  • Figure 38 depicts an example data table in the RDMS and associated RESULT from the example SQL command above.
  • Figure 39 depicts the result of the Query command described above that would be returned to the client as a RESULT within a RESPONSE. This RESPONSE corresponds to the results illustrated in Figure 38.

Abstract

The invention provides a system and method for defining a schema and sending a query to a Similarity Search Engine to determine a quantitative assessment of the similarity of attributes between an anchor record and one or more target records (Figure 1). The Similarity Search Engine makes a similarity assessment in a single pass through the target records having multiple relationship characteristics. The Similarity Search Engine is a server (190) configuration that comprises a Gateway for command and response routing (110), a Virtual Document Manager (120) for document generation, a Search Manager (130) for document scoring, and a Relational Database Management System (140) for providing data persistence, data retrieval and access to User Defined Functions (145). The Similarity Search Engine uses a unique command syntax based on the Extensible Markup Language to implement functions necessary for similarity searching and scoring.

Description

SIMILARITY SEARCH ENGINE FOR USE WITH RELATIONAL DATABASES by John R. Ripley of Round Rock, Texas
This application claims benefit of U. S. Provisional Application No. 60/356,812, filed on February 14, 2002.
Background The invention relates generally to the field of search engines for use with large enterprise databases. More particularly, the present invention enables similarity search engines that, when combined with standard relational database products, gives users a powerful set of standard database tools as well as a rich collection of proprietary similarity measurement processes that enable similarity determinations between an anchor record and target database records.
Information resources that are available contain large amounts of information that may be useful only if there exists the capability to segment the information into manageable and meaningful packets. Database technology provides adequate means for identifying and exactly matching disparate data records to provide a binary output indicative of a match. However, in many cases, users wish to determine a quantitative measure of similarity between an anchor record and target database records based on a broadly defined search criteria. This is particularly true in the case where the target records may be incomplete, contain errors, or are inaccurate. It is also sometimes useful to be able to narrow the number of possibilities for producing irrelevant matches reported by database searching programs. Traditional search methods that make use of exact, partial and range retrieval paradigms do not satisfy the content-based retrieval requirements of many users. This has led to the development of similarity search engines. Similarity search engines have been developed to satisfy the requirement for a content-based search capability that is able to provide a quantitative assessment of the similarity between an anchor record and multiple target records. The basis for many of these similarity search engines is a comparison of an anchor record band or string of data with target record bands or strings of data that are compared serially and in a sequential fashion. For example, an anchor record band may be compared with target record band #1, then target record band #2, etc., until a complete set of target record bands have been searched and a similarity score computed. The anchor record bands and each target record band contain attributes of a complete record band of a particular matter, such as an individual. For example, each record band may contain attributes comprising a named individual, address, social security number, driver's license number, and other information related to the named individual. As the anchor record band is compared with a target record band, the attributes within each record band are serially compared, such as name-name, address-address, number- number, etc. In this serial-sequential fashion, a complete set of target record bands are compared to an anchor record band to determine similarity with the anchor record band by computing similarity scores for each attribute within a record band and for each record band. Although it may be fast, there are a number of disadvantages to this "band" approach for determining a quantitative measure of similarity.
Using a "band" approach in determining similarity, if one attribute of a target record band becomes misaligned with the anchor record band, the remaining record comparisons may result in erroneous similarity scores, since each record attribute is determined relative to the previous record attribute. This becomes particularly troublesome when confronted with large enterprise databases that inevitably will produce an error, necessitating starting the scoring process anew. Another disadvantage of the "band" approach is that handling large relational databases containing multiple relationships may become quite cumbersome, slowing the scoring process. Furthermore, this approach often requires a multi-pass operation to fully process a large database. Oftentimes, these existing similarity search engines may only run under a single operating system.
There is a need for a similarity search engine that provides a system and method for determining a quantitative measure of similarity in a single pass between an anchor record and a set of multiple target records that have multiple relationship characteristics. It should be capable of operating under various operating systems in a multi-processing environment. It should have the capability to similarity search large enterprise databases without the requirement to start over again when an error is encountered.
Summary The present invention of a Similarity Search Engine (SSE) for use with relational databases is a system and method for determining a quantitative assessment of the similarity between an anchor record or document and a set of one or more target records or documents. It makes a similarity assessment in a single pass through the target records having multiple relationship characteristics. It is capable of running under various operating systems in a multi-processing environment and operates in an error-tolerant fashion with large enterprise databases. The present invention comprises a set of robust, multi-threaded components that provide a system and method for scoring and ranking the similarity of documents that may be represented as Extensible Markup Language (XML) documents. This search engine uses a unique command syntax known as the XML Command Language (XCL). At the individual attribute level, attribute similarity is quantified as a score having a value of between 0.00 and 1.00 that results from the comparison of an anchor value attribute (search criterion) vs. a target value attribute (database field) using a distance function that identifies an attribute similarity measurement. At the document or record level, which comprises a "roll-up" or aggregation of one or more attribute similarity scores determined by a parent computing or choice algorithm, document or record similarity is a value normalized to a score value of between 0.00 and 1.00 for the document or record. A single anchor document containing multiple attributes, usually arranged in a hierarchical fashion, is compared to multiple target documents also containing multiple attributes.
The example of Table 1 illustrates the interrelationships between attributes, anchor attribute values, target attribute values, distance functions and attribute similarity scores. There is generally a single set of anchor value attributes and multiple sets of target value attributes. The distance functions represent measurement algorithms to be executed to determine an attribute similarity score. There may be token level attributes at a lowest hierarchical level as well as intermediate level attributes between the highest or parent level and the lowest or leaf level of a document or record. Attribute similarity scores at the token level are determined by designated measurement functions to compute a token attribute similarity score of between 0.00 and 1.00. Choice or aggregation algorithms are designated to roll-up or aggregate scores in a hierarchical fashion to determine a document or record similarity score. Different weighting factors may also be used to modulate the relative importance of different attribute scores. The measurement functions, weighting functions, aggregation algorithms, anchor document, and target documents are generally specified in a "schema" document. In Table 1, anchor value attributes of "John", "Austin", and "Navy" are compared with target value attributes of "Jon", "Round Rock", and "Dark Blue" using distance functions "String Difference", "GeoDistance", and "SynonymCompare" to compute attribute similarity scores of "0.75", "0.95", and "1.00", respectively.
TABLE 1

    Anchor Attribute Value   Target Attribute Value   Distance Function    Similarity Score
    John                     Jon                      String Difference    0.75
    Austin                   Round Rock               GeoDistance          0.95
    Navy                     Dark Blue                SynonymCompare       1.00
In this example, all attributes are weighted equally, and the document score is determined by taking the average of similarity scores. The anchor document would compare at 0.90 vs. the target document. Although the example demonstrates the use of weighted average in determining individual scores, it is one of many possible alternatives of aggregation algorithms that may be implemented.
This Similarity Search Engine (SSE) architecture is a server configuration comprising a Gateway, a Virtual Document Manager (VDM), a Search Manager (SM) and an SQL/Relational Database Management System (RDMS). The SSE server may serve one or more clients. The Gateway provides command and response routing as well as user management functions. It accepts commands from clients and routes those commands to either the VDM or the SM. The purpose of the VDM is XML document generation, particularly schema generation. The purpose of the SM is XML document scoring, or aggregation. The VDM and the SM each receive commands from the Gateway and in turn make calls to the RDMS. The RDMS provides token attribute similarity scoring in addition to data persistence, data retrieval and access to User Defined Functions (UDFs). The UDFs include measurement algorithms for computing attribute similarity scores. The Gateway, VDM and SM are specializations of a unique generic architecture referred to as the XML Command Framework (XCF), which handles the details of threading, distribution,
"cόmmύnicati n'fresδ rce management "and general "command handling: " - - - - -
There are several system objects that the SSE relies on extensively for its operation. These include a Datasource object, a Schema object, a Query object and a Measure object. A Datasource object is a logical connection to a data store, such as a relational database, and it manages the physical connection to the data store. A Schema object, central to SSE operation, is a structural definition of a document with additional markup to provide database mapping and similarity definitions. A Query object is a command that dictates which elements of a database underlying a Schema object should be searched, their search criteria, the similarity measures to be used and which results should be considered in the final output. A Measure object is a function that operates on two strings and returns a similarity score indicative of the degree of similarity between the two strings. These Measure objects are implemented as User Defined Functions (UDFs).
A method having features of the present invention for performing similarity searching comprises the steps of receiving a request instruction from a client for initiating a similarity search, generating one or more query commands from the request instruction, each query command designating an anchor document and at least one search document, executing each query command, including computing a normalized document similarity score having a value of between 0.00 and 1.00 for each search document in each query command for indicating a degree of similarity between the anchor document and each search document, and creating a result dataset containing the computed normalized document similarity scores for each search document, and sending a response including the result dataset to the client. The step of generating one or more query commands may further comprise identifying a schema document for defining structure of search terms, mapping of datasets providing target search values to relational database locations, and designating measures, choices and weights to be used in a similarity search. The step of computing a normalized document similarity score may comprise computing attribute token similarity scores having values of between 0.00 and 1.00 for the corresponding leaf nodes of the anchor document and a search document using designated measure algorithms, multiplying each token similarity score by a designated weighting factor, aggregating the token similarity scores using designated choice algorithms for determining a document similarity score having a value of between 0.00 and 1.00 for the search document. The step of computing attribute token similarity scores may further comprise computing attribute token similarity scores in a relational database management system, the step of multiplying each token similarity score may further comprise multiplying each token similarity score in a similarity search engine, and the step of aggregating the token similarity scores may further comprise aggregating the token similarity scores in the similarity search engine. The step of generating one or more query commands may comprise populating an anchor document with search criteria values, identifying documents to be searched, defining semantics for overriding parameters specified in an associated schema document, defining a structure to be used by the result dataset, and imposing restrictions on the result dataset. The step of defining semantics may comprise designating overriding measures for determining attribute token similarity scores, designating overriding choice algorithms for aggregating token similarity scores into document similarity scores, and designating overriding weights to be applied to token similarity scores. The step of imposing restrictions may be selected from the group consisting of defining a range of similarity indicia scores to be selected and defining percentiles of similarity indicia scores to be selected. The step of computing a normalized document similarity score may further comprise computing a normalized document similarity score having a value of between 0.00 and 1.00, whereby a normalized similarity indicia value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching.
The step of computing attribute token similarity scores having values of between 0.00 and 1.00 may further comprise computing attribute token similarity scores having values of between 0.00 and 1.00, whereby an attribute token similarity value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching. The step of generating one or more query commands may further comprise generating one or more query commands whereby each query command includes attributes of command operation, name identification, and associated schema document identification. The method may further comprise receiving a schema instruction from a client, generating a schema command document comprising the steps of defining a structure of target search terms in one or more search documents, creating a mapping of database record locations to the target search terms, listing semantic elements for defining measures, weights and choices to be used in similarity searches, and storing the schema command document into a database management system. The method may further comprise the step of representing documents and commands as hierarchical XML documents. The step of sending a response to the client may further comprise sending a response including an error message and a warning message to the client. The step of sending a response to the client may further comprise sending a response to the client containing the result datasets, whereby each result dataset includes at least one normalized document similarity score, at least one search document name, a path to the search documents having a returned score, and at least one designated schema. The method may further comprise receiving a statistics instruction from a client, generating a statistics command from the statistics instruction, which may comprise the steps of identifying a statistics definition to be used for generating statistics, populating an anchor document with search criteria values, identifying documents to be searched, delineating semantics for overriding measures, parsers and choices defined in a semantics clause in an associated schema document, defining a structure to be used by a result dataset, imposing restrictions to be applied to the result dataset, identifying a schema to be used for the basis of generating statistics, designating a name for the target statistics table for storing results, executing the statistics command for generating a statistics schema with statistics table, mappings and measures, and storing the statistics schema in a database management system. The method may further comprise the step of executing a batch command comprising executing a plurality of commands in sequence for collecting results of several related operations. The method may further comprise selecting measure algorithms from the group consisting of name equivalents, foreign name equivalents, textual, sound coding, string difference, numeric, numbered difference, ranges, numeric combinations, range combinations, fuzzy, date oriented, date to range, date difference, and date combination. The method may further comprise selecting choice algorithms from the group consisting of single best, greedy sum, overall sum, greedy minimum, overall minimum, and overall maximum. Another embodiment of the present invention is a computer-readable medium containing instructions for controlling a computer system to implement the method above.
In an alternate embodiment of the present invention, a system for performing similarity searching comprises a gateway for receiving a request instruction from a client for initiating a similarity search, the gateway for generating one or more query commands from the request instruction, each query command designating an anchor document and at least one search document, a search manager for executing each query command, including means for computing a normalized document similarity score having a value of between 0.00 and 1.00 for each search document in each query command for indicating a degree of similarity between the anchor document and each search document, means for creating a result dataset containing the computed normalized document similarity scores for each search document, and the gateway for sending a response including the result dataset to the client. The means for computing a normalized similarity score may comprise a relational database management system for computing attribute token similarity scores having values of between 0.00 and 1.00 for the corresponding leaf nodes of the anchor document and a search document using designated measure algorithms, and the search manager for multiplying each token similarity score by a designated weighting factor and aggregating the token similarity scores using designated choice algorithms for determining a document similarity score having a value of between 0.00 and 1.00 for the search document. Each one or more query commands may further comprise a measure designation, and the database management system further comprises designated measure algorithms for computing a token similarity score. Each query command may comprise an anchor document populated with search criteria values, at least one search document, designated measure algorithms for determining token similarity scores, designated choice algorithms for aggregating token similarity scores into document similarity scores, designated weights for weighting token similarity scores, restrictions to be applied to a result dataset document, and a structure to be used by the result dataset. The computed document similarity scores may have a value of between 0.00 and 1.00, whereby a normalized similarity indicia value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching. The relational database management system may include means for computing an attribute token similarity score having a value of between 0.00 and 1.00, whereby a token similarity indicia value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching. Each query command may include attributes of command operation, name identification, and associated schema document identification for providing a mapping of search documents to database management system locations. The system may further comprise the gateway for receiving a schema instruction from a client, a virtual document manager for generating a schema command document, the schema command document comprising a structure of target search terms in one or more search documents, a mapping of database record locations to the target search terms, semantic elements for defining measures, weights, and choices for use in searches, and a relational database management system for storing the schema command document. The system of claim 18, wherein each result dataset may include at least one normalized document similarity score, at least one search document name, a path to the search documents having a returned score and at least one designated schema. Each result dataset may include an error message and a warning message to the client. The system may further comprise the gateway for receiving a statistics instruction from a client and for generating a statistics command from the statistics instruction, the search manager for identifying a statistics definition to be used for generating statistics, populating an anchor document with search criteria values, identifying documents to be searched, delineating semantics for overriding measures, weights and choices defined in a semantics clause in an associated schema document, defining a structure to be used by a result dataset, imposing restrictions to be applied to the result dataset, identifying a schema to be used for the basis of generating statistics, designating a name for the target statistics table for storing results, and a statistics processing module for executing the statistics command for generating a statistics schema with statistics table, mappings and measures, and storing the statistics schema in a database management system. The system may further comprise the gateway for receiving a batch command from a client for executing a plurality of commands in sequence for collecting results of several related operations. The system may further comprise measure algorithms selected from the group consisting of name equivalents, foreign name equivalents, textual, sound coding, string difference, numeric, numbered difference, ranges, numeric combinations, range combinations, fuzzy, date oriented, date to range, date difference, and date combination. The system may further comprise choice algorithms selected from the group consisting of single best, greedy sum, overall sum, greedy minimum, overall minimum, and overall maximum.
In another embodiment of the present invention a system for performing similarity searching comprises a gateway for handling all communication between a client, a virtual document manager and a search manager, the virtual document manager connected between the gateway and a relational database management system for providing document management, the search manager connected between the gateway and the relational database management system for searching and scoring documents, and the relational database management system for providing relational data management, document and measure persistence, and similarity measure execution. The virtual document manager may include a relational database driver for mapping XML documents to relational database tables. The virtual document manager may include a statistics processing module for generating statistics based on similarity search results. The relational database management system may include means for storing and executing user defined functions. The user defined functions include measurement algorithms for determining attribute token similarity scores. Another embodiment of the present invention is a method for performing similarity searching that comprises the steps of creating a search schema document by a virtual document manager, generating one or more query commands by a gateway, executing one or more query commands in a search manager and relational database management system for
' " - " determining-the degree-ofsimilarity between an anchor- document -and search-documents-, and assembling a result document containing document similarity scores of between 0.00 and 5 1.00. The step of creating a schema document may comprise designating a structure of search documents, datasets for mapping search document attributes to relational database locations, and semantics identifying measures for computing token attribute similarity search scores between search documents and an anchor document, weights for modulating token attribute similarity search scores, choices for aggregating token attribute similarity search scores into 0 document similarity search scores, and paths to the search document structure attributes. The step of generating one or more query commands may comprise designating an anchor document, search or schema documents, restrictions on result sets, structure of result sets, and semantics for overriding schema document semantics including measures, weights, choices and paths. The step of executing one or more query commands may comprise computing token attribute similarity search scores having values of between 0.00 and 1.00 for each search document and an anchor document in a relational database management system using measures, and modulating the token attribute similarity search scores using weights and aggregating the token attribute similarity scores into document similarity scores having values of between 0.00 and 1.00 in the search manager using choices. The step of assembling a result document may comprise identifying associated query commands and schema documents, document structure, paths to search terms, and similarity scores by the search manager. The search schema, the query commands, the search documents, the anchor document and the result document may be represented by hierarchical XML documents. The method may further comprise selecting measure algorithms from the group consisting of name equivalents, foreign name equivalents, textual, sound coding, string difference, numeric, numbered difference, ranges, numeric combinations, range combinations, fuzzy, date oriented, date to range, date difference, and date combination. The method may further comprise selecting choice algorithms from the group consisting of single best, greedy sum, overall sum, greedy minimum, overall minimum, and overall maximum. Another embodiment of the present invention is a computer-readable medium containing instructions for controlling a computer system to implement the method above.
Brief Description of the Drawings These and other features, aspects and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings wherein: Figure 1 depicts a high level architecture of the Similarity Search Engine (SSE);
Figure 2 depicts an example of mapping an XML document into database tables; Figure 3 depicts an example of an XML document resulting from a single READ;
Figure 4 depicts an example of a RESULT from a QUERY; Figure 5A depicts a process for handling a statistics command in a Search Manager (SM);
Figure 5B depicts a dataflow of a statistics command process in a Search Manager (SM);
Figure 6 describes the Measures implemented as UDFs; Figure 7 depicts an architecture of the XML Command Framework (XCF); Figure 8 depicts the format of a RESPONSE generated by a CommandHandler; Figure 9A depicts a process for handling an XCL command in a CommandServer; Figure 9B depicts a dataflow of an XCL command process in a CommandServer;
Figure 10 depicts a general XCL command format;
Figure 11 depicts an example of multiple tables mapped onto a search document;
Figure 12 depicts the format of a SCHEMA command; Figure 13 depicts an example of a RESPONSE from a list of SCHEMA commands;
Figure 14 depicts the format for a STRUCTURE clause;
Figure 15 depicts an example of a STRUCTURE clause for a hierarchical search;
Figure 16 depicts the format of the MAPPING clause;
Figure 17 depicts an example of a MAPPING clause; Figure 18 depicts the format of the SEMANTICS clause;
Figure 19 depicts the structure of a SCHEMA command and its related clauses;
Figure 20 depicts the format of the QUERY command;
Figure 21 depicts an example of the WHERE clause;
Figures 22A and 22B depict examples of a FROM clause; Figure 23 depicts the format of the RESTRICT clause;
Figure 24 depicts an example of the RESTRICT clause;
Figure 25 depicts an example of the SELECT clause;
Figures 26A, 26B and 26C depict formats of a RESPONSE structure;
Figure 27 depicts an example of a RESPONSE with results of a similarity search; Figure 28 depicts the format of a DOCUMENT command;
Figure 29 depicts a search document example for the layout depicted in Figure 11;
Figure 30 depicts a format of a statistics definition template; Figure 31 depicts an example of a simple statistics definition;
Figure 32 depicts a RESPONSE to a statistics generation command; Figure 33 depicts the format of a BATCH command;
Figure 34 depicts the process of setting up a schema;
Figure 35 depicts an example of a SCHEMA command;
Figure 36 depicts the process of executing an SSE search;
Figure 37 depicts an example of a QUERY command; Figure 38 depicts an example of data and similarity results of a QUERY command; and
Figure 39 depicts an example RESPONSE resulting from a QUERY command. Detailed Description of the Drawings Before describing the architecture of the Similarity Search Engine (SSE), it is useful to define and explain some of the objects used in the system. The SSE employs a command language based on XML, the Extensible Markup Language. SSE commands are issued as XML documents and search results are returned as XML documents. The specification for Extensible Markup Language (XML) 1.0 (Second Edition), W3C Recommendation 6 October 2000 is incorporated herein by reference. The syntax of the SSE Command Language XCL consists of XML elements, their values and attributes that control the behavior of the SSE. Using SSE commands, a client program can define and execute searches employing the SSE.
The SSE commands are shown here in abstract syntax notation using the following conventions:
Regular type    Shows command words to be entered as shown (uppercase or lowercase)
Italics         Stands for a value that may vary from command to command
XML tags are enclosed in angled brackets. Indentations are used to demark parent- child relationships. Tags that have special meaning for the SSE Command Language are shown in capital letters. Specific values are shown as-is, while variables are shown in italic type. The following briefly defines XML notation:
<XXX> Tag for XML element named XXX
— χχχ αttr/bωte="vα/we"/> ~ -XML element-named XXX- with specified- value for attribute - <XXX>value<IXXX> XML element named XXX containing value
<XXX> XML element named XXX containing element
<YYY>value</YYY> named YYY with the value that appears between the tags. In
</XXX> xpath notation, this structure would be written as XXX/YYY
The SSE relies primarily on several system objects for its operation. Although there are other system objects, the primary four system objects include a Datasource object, a
Schema object, a Query object and a Measure object.
A Datasource object describes a logical connection to a data store, such as a relational database. The Datasource object manages the physical connection to the data store. Although the SSE may support many different types of datasources, the preferred datasource used in the SSE is an SQL database, implemented by the vdm.RelationalDatasource class. A relational Datasource object is made up of attributes comprising Name, Driver, URL, Username and Password, as described in Table 2.
TABLE 2
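Although the attribute layout of Table 2 is not reproduced here, a relational Datasource definition carrying the five attributes named above (Name, Driver, URL, Username and Password) might be sketched as follows. This is illustrative only; the element names and the DB2 driver string are assumptions, since the exact XCL syntax for a datasource definition is not shown in this section:

    <DATASOURCE name="ClaimsDB">
      <DRIVER>COM.ibm.db2.jdbc.app.DB2Driver</DRIVER>   <!-- JDBC driver class (assumed) -->
      <URL>jdbc:db2:claims</URL>                        <!-- location of the data store -->
      <USERNAME>sse_user</USERNAME>
      <PASSWORD>sse_password</PASSWORD>
    </DATASOURCE>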
A Schema object is at the heart of everything the SSE does. A Schema object is a structural definition of a document along with additional markup to provide SQL database mapping and similarity definitions. The definition of a Schema object comprises Name, Structure, Mapping and Semantics, as described in Table 3.
TABLE 3
A Query object is an XCL command that dictates which elements of a Schema object (actually the underlying database) should be searched, their search criteria, the similarity measures to be used and which results should be considered in the final output. The Query object format is sometimes referred to as a Query By Example (QBE) because an "example" of what we are looking for is provided in the Query. Attributes of a Query object comprise a Where clause, Semantics, and Restrict, as described in Table 4.
TABLE 4
A Measure object is a function that takes in two strings and returns a score (between 0.000 and 1.000) of how similar the two strings are. These Measure objects are implemented as User Defined Functions (UDFs) and are compiled into a native library in an SQL Database. Measure objects are made up of attributes comprising Name, Function and Flags, as described in Table 5.
TABLE 5
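As an illustration of the contract just described (two strings in, a normalized score out), a trivial Measure could be sketched in Java as follows; the class and method names are hypothetical and are not the actual SSE library signatures:

    // A minimal, hypothetical Measure: returns 1.000 for an exact
    // (case-insensitive) match and 0.000 otherwise.
    public final class ExactMeasure {
        public static double score(String anchorValue, String targetValue) {
            if (anchorValue == null || targetValue == null) {
                return 0.000;
            }
            return anchorValue.equalsIgnoreCase(targetValue) ? 1.000 : 0.000;
        }
    }

A production Measure such as a string-difference or sound-coding function follows the same shape, differing only in how the score between 0.000 and 1.000 is computed.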
Turning now to Figure 1, Figure 1 depicts a high level architecture 100 of the Similarity Search Engine (SSE). The SSE architecture 100 includes an SSE Server 190 that comprises a Gateway 110, a Virtual Document Manager (VDM) 120, a Search Manager (SM) 130 and a Relational Database Management System (RDMS) 140. The Gateway 110 provides routing and user management. The VDM 120 enables XML document generation. The SM 130 performs XML document searching and scoring. The RDMS 140 (generally an SQL Database) provides token attribute scoring as well as data persistence and retrieval, and storing User Defined Functions (UDFs) 145. The SSE Server 190 is a similarity search server that may connect to one or more Clients 150 via a Client Network 160. The SSE Server also connects to an RDMS 140.
The Gateway 110 serves as a central point of contact for all client communication by responding to commands sent by one or more clients 150. The Gateway 110 supports a plurality of communication protocols with a user interface, including sockets, HTTP, and JMS. The Gateway 110 is implemented as a gateway.Server class, a direct descendent of the xcf.BaseCommandServer class available in the unique generic architecture referred to as the XML Command Framework (XCF), which handles the details of threading, distribution, communication, resource management and general command handling. The XCF is discussed below in more detail. Therefore, the Gateway 110 inherits all the default command handling and communication functions available in all XCF Command Servers. The Gateway 110 relies on several types of command handlers for user definition, user login and logout, and command routing.
To add a user to the system, the Gateway 110 makes use of a user class to encapsulate what a "user" is and implements a component class interface, which is inherited from the generic XCF architecture. Instances of XCF Component command handlers used by the Gateway 110 to add, remove or read a user definition are shown in Table 6.
TABLE 6
When a user definition has been added to the system, a user must log in and log out of the system. Instances of command handlers used only by the Gateway 110 for user login and user logout are shown in Table 7.
TABLE 7
The Gateway 110 includes several instances of command handlers inherited from the generic XCF architecture to properly route incoming XML Command Language (XCL) commands to an appropriate target, whether it is the VDM 120, the SM 130, or both. These command handlers used by the Gateway 110 for routing are shown in Table 8.
TABLE 8
Table 9 shows the routing of command types processed by the Gateway 110, and which command handler shown in Table 8 is relied upon for the command execution.
TABLE 9 The communication between the Gateway 110 and the VDM 120, and between the Gateway 110 and the SM 130 is via the XML Command Language (XCL).
The VDM 120 is responsible for XML document management, and connects between the Gateway 110 and the RDMS 140. The VDM 120 is implemented by the vdm.Server class, which is a direct descendent of the xcf.BaseCommandServer class available in the unique generic architecture referred to as the XML Command Framework (XCF), which handles the details of threading, distribution, communication, resource management and general command handling. The XCF is discussed below in more detail. Therefore, the VDM 120 inherits all the default command handling and communication functions available in all XCF
Command Servers. Unlike XML databases having proprietary storage and search formats, the VDM 120 uses existing relational tables and fields to provide dynamic XML generation capabilities without storing the XML documents.
The VDM 120 provides its document management capabilities through Document Providers. The most visible function to a Client 150 of the VDM 120 is the creation and
maintenance of SCHEMA documents, which define parameters used for similarity searching. A Document Provider is defined by the vdm.DocProvider interface and is responsible for generating and storing XML documents based on a schema definition. Although the described embodiments of the SSE Server 190 implement only one DocProvider, an SQL based document provider, a document provider can be any source that generates an XML document, provided it implements the DocProvider interface. For example, document providers may be file systems, web sites, proprietary file formats, or XML databases. For a user to retrieve relational data, the user must know where the data resides and how it is
~ " "connected." A Datasource object encapsulates alHhe-connection information.- - -
There are several types of command handlers required by the VDM 120 in order to satisfactorily execute XCL commands. These include the document related command handlers shown in Table 10.
TABLE 10
Schema related command handlers required by the VDM 120 are shown in Table 11.
TABLE 11
Datasource related command handlers required by the VDM 120 are shown in Table 12.
TABLE 12
The VDM 120 communicates with the RDMS 140 via the Java Database Connectivity (JDBC) application programming interface.
The VDM 120 includes a Relational Database Driver (RDD) 125 for providing a link between XML documents and the RDMS 140. The RDD 125 implements the DocProvider interface, supporting standard functions defined in that class, including reading, writing and deleting XML documents. The RDD 125 is initialized by calling the initialize(String map) function, where this map is an XML document describing the relationships between the XML documents to be dealt with and the relational database. For instance, consider an example XML document 210 that follows the form shown in Figure 2. When building an XML document 210 from the RDMS 230, Datasets 220 can specify that the data in /claim/claimant/name should come from the Claimants table 240 of the RDMS 230, while /claim/witness/name should come from the Witnesses table 250. Conversely, when writing an existing XML document 210 of this form out to the RDMS 230, the Datasets 220 will tell the RDD 125 that it should write any data found at /claim/claimant/name out to the "name" field of the Claimants table 240, and write the data found at /claim/witness/name out to the "name" field of the Witnesses table 250. By describing these relationships, the Datasets 220 allow the RDD 125 to read, write, and delete XML documents for the VDM 120.
Internally, these Datasets 220 define relationships that are stored in a Java model. At the beginning of initialization, the XML map 210 is parsed and used to build a hierarchy of Datasets 220, one level of hierarchy for each database table referenced in the Datasets 220. This encapsulation of the XML parsing into this one area minimizes the impact of syntax changes in the XML map 210. These Datasets 220 have an XML form and may describe a document based on a relational table or a document based on a SQL statement. If based on a relational table, then initializing the RDD 125 with these Datasets 220 will allow full read/write functionality. However, if based on an SQL statement, then the initialized RDD 125 will allow documents to be generated from the RDMS 230, but not to be written out to it. In the common usage, in which the Dataset 220 is describing a relational table, <BIND> tags define the key fields used in the master-detail relationship. The topmost Dataset's <BIND> tag simply describes its primary key, since it has no relationship with any higher-level Dataset. Dataset <PATH> tags describe where the data being read from the tables in the
RDMS 230 should be stored in the XML document 210, and vice versa when writing XML document data to the RDMS. A Dataset <EXPRESSION> tag indicates whether the Dataset describes a document based on a relational table or a document based on an SQL statement. The VDM 120 relies on three functions to provide the functionality of building XML documents from underlying RDMS. Each of these three functions returns the resultant document(s) as a String. The functions are singleRead, multipleRead and expressionRead. Consider the singleRead function: singleRead(String primaryKey, boolean createRoot, String contentFilter). In this function, primaryKey is a String that represents the primary key of the document being produced. The boolean createRoot indicates whether or not the user wants the function to wrap the resultant XML document in a root-level <RESULT> tag. The String contentFilter is an XML structure represented as a String that describes the structure that the result must be formatted in. This structure is always a cut-down version of the full document. For instance, if we initialize the RDD 125 with an example, and then call singleRead("1", true, "<claim><witness><city/></witness></claim>"), the resulting XML document may look like that shown in Figure 3. Consider the expressionRead function: expressionRead(String expression, int start, int blockSize, boolean createRoot,
String contentFilter). The expressionRead method allows the user to request a group of documents that can be described in a SQL statement. For instance, all documents whose primary key would be included in the ResultSet of "SELECT Inventory_Code FROM Products WHERE Product_Code = 'Clearance'." The results of executing the SQL expression, if there are any, are assumed to have the primary key included in the first column of the results. These primary keys are loaded into a Set as they are read, and the multipleRead is called with this information. In the event that the SQL expression describes too general a set of primary keys, expressionRead has two int parameters that allow a user to describe a subset of the keys returned by the SQL expression. For example, there are several hundred clearance items but only the first one hundred clearance items are of interest. To read the first one hundred clearance items, the SQL expression would be the same as above, start would be passed in as a value of 1, and blockSize would be 100. To read the second one hundred clearance items, start could be reset to 100, and blockSize would remain at its value of 100. The remaining two parameters, createRoot and contentFilter, function similarly to the way that they are described in singleRead.
Consider the multipleRead function: multipleRead(Set primaryKeys, boolean createRoot, String contentFilter). In multipleRead, the boolean createRoot and String contentFilter behave just as they did in singleRead. The only parameter that is different is that instead of a single primaryKey String, multipleRead takes a set of primaryKeys. The other two read functions, singleRead and expressionRead, may be considered to be special cases of the multipleRead method. A singleRead may be considered as a multipleRead called on a primaryKey set of one. When an expressionRead is executed, the results may be fed into a set that is then sent to a multipleRead.
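The following Java fragment sketches how a caller might exercise the three read functions described above. Only the signatures singleRead(String, boolean, String), expressionRead(String, int, int, boolean, String) and multipleRead(Set, boolean, String) follow the text; the driver variable, the SQL expression, and the content filter are hypothetical examples:

    import java.util.HashSet;
    import java.util.Set;

    public class ReadExample {
        // driver is assumed to be an initialized Relational Database Driver
        // exposing the read functions described above.
        public static void demonstrate(DocProvider driver) {
            String filter = "<claim><witness><city/></witness></claim>";

            // Read one document by primary key, wrapped in a root <RESULT> tag.
            String one = driver.singleRead("1", true, filter);

            // Read the first one hundred documents selected by a SQL expression
            // whose first result column is assumed to be the primary key.
            String block = driver.expressionRead(
                    "SELECT Claim_ID FROM Claims WHERE State = 'TX'",  // hypothetical SQL
                    1, 100, true, filter);

            // Read an explicit set of primary keys.
            Set keys = new HashSet();
            keys.add("1");
            keys.add("2");
            String several = driver.multipleRead(keys, true, filter);
        }
    }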
Composition of documents follows a basic algorithm. A row is taken from the topmost array of arrays, the one representing the master table of the document. The portion of the XML document that takes information from that row is built. Next, if there is a master- detail relationship, the detail table is dealt with. All rows associated with the master row are selected, and XML structures built from their information. In this manner, iterating through all of the table arrays, the document is built. Then, the master array advances to the next row, and the process begins again. When it finishes, all of the documents will have been built, and they are returned in String form.
The VDM relies on two functions for writing XML documents out to an underlying RDMS. These functions are singleWrite and multipleWrite. Consider the singleWrite function: singleWrite(String primaryKey, String document). The parameter primaryKey is the document number to be written out. The parameter document is an XML document in String form, which will be parsed and written out to the RDMS. During initialization, the driver has created a series of PreparedStatements to handle the data insertion. The driver iterates through the document, matching each leaf's context to a context in a Dataset. When a context match is made, the relevant Insert statement has another piece plugged into it. When all of the necessary data has been plugged into the prepared Insert statement, it is executed and the data is written to the RDMS. Consider the multipleWrite function: multipleWrite(Map documents).
The multipleWrite function takes as a parameter a Map which holds pairs of primary keys and documents. The multipleWrite function iterates through this Map, calling the singleWrite function with each of the pairs.
The VDM relies on three methods for deleting the data represented in an XML document from the underlying RDMS. The functions are singleDelete, multipleDelete and expressionDelete. Consider the singleDelete function: singleDelete(String primaryKey, String document).
The singleDelete method takes in a String primaryKey, which identifies the document to be deleted. While the DocProvider interface requires a second parameter, the String document, the relational driver does nothing with this information and is able to function with only the document's primary key. In order to delete a given document, the driver first iterates through the Dataset structure, executing selects for relevant columns in each table. This is required to properly map the master-detail relationship. For instance, there is no guarantee that the master table's primary key will be the same key used in the detail table. Running the Dataset's selects as if a read command had been called allows the driver access to the necessary information, and ensures that all components of the document are deleted. Once the selects have been executed, the driver iterates down through the Dataset executing each prepared delete statement with the proper relevant data now plugged in. Consider the multipleDelete function: multipleDelete(Map documents). The multipleDelete method is bottlenecked through the singleDelete method, just as multipleWrite was through singleWrite. In this case, multipleDelete takes as a parameter a Map of paired primary keys and documents. The key set of primary keys is iterated through, and singleDelete is called on each one.
Consider the expressionDelete function: expressionDelete(String expression).
The expressionDelete method takes as its sole parameter a SQL expression which describes the set of primary keys of documents which the user wishes to delete. The expression is executed, with the assumption that the first column of the resulting rows will be the primary key. These primary keys are iterated through, each being loaded into a call to singleDelete. The SM 130 is responsible for XML document and SQL searching and scoring, and connects between the Gateway 110 and the RDMS 140. The SM 130 is implemented as a search.Server class, which is a direct descendent of the xcf.BaseCommandServer class available in the unique generic architecture referred to as the XML Command Framework (XCF), which handles the details of threading, distribution, communication, resource management and general command handling. The XCF is discussed below in more detail. Therefore, the SM 130 inherits all the default command handling and communication functions available in all XCF Command Servers. The SM 130 does not maintain any of its own indexes, but uses a combination of relational indexes and User Defined Functions (UDFs) 145 to provide similarity-scoring methods in addition to traditional search techniques. The SM 130 sends commands to the RDMS 140 to cause the RDMS 140 to execute token attribute similarity scoring based on selected UDFs. The SM 130 also performs aggregation of token attribute scores from the RDMS 140 to determine document or record similarity scores using selected choice algorithms. SQL commands sent by the SM 130 to the RDMS 140 are used to execute functions within the RDMS 140 and to register UDFs 145 with the RDMS 140.
There are several types of command handlers required by the SM 130 in order to satisfactorily execute XCL commands. These include schema, datasource, measure (UDF), and choice related command handlers. The schema related command handlers are shown in Table 13.
TABLE 13
Datasource related command handlers required by the SM 130 are shown in Table 14.
TABLE 14
Measure (UDF) related command handlers required by the SM 130 are shown in Table 15.
TABLE 15
Choice related command handlers required by the SM 130 are shown in Table 16.
TABLE 16
The SM 130 communicates with the RDMS 140 via the Java Database Connectivity (JDBC) application programming interface.
A similarity search is generally initiated when the Gateway 110 receives a QUERY command containing a search request from a Client 150, and the Gateway 110 routes the QUERY command to the SM 130. The SM 130 generally executes the QUERY command by accessing a SCHEMA previously defined by a Client 150 and specified in the QUERY command, and parsing the QUERY command into a string of SQL statements. These SQL statements are sent to the RDMS 140 where they are executed to perform a similarity search of token attributes and scoring of the attributes of the target documents specified in the SCHEMA and stored in the RDMS 140. The attribute similarity scores are then returned to the SM 130 from the RDMS 140 where weighting factors specified in the SCHEMA are applied to each score and Choice algorithms specified in the SCHEMA aggregate or "roll-up" the attribute scores to obtain an overall similarity score for each target document or record specified in the SCHEMA or QUERY command. The scores are then returned to the Gateway 110 by the SM 130 in a RESULT document, which is then returned to the Client 150. As an example of attribute scoring by the SM 130, consider the following SQL statement sent by the SM 130 to the RDMS 140:
SELECT CUSTOMER.ID, STRDIFF(CUSTOMER.FIRST_NAME, 'JOHN') AS S1 FROM CUSTOMER
The measure "looks_like" is implemented by the User Defined Function (UDF) STRDIFF, which takes in two varchars (string) and returns a float(0.0 .. 1.0). In this case, the two strings are a field value (target) and literal value (search). The result of the UDF is a float (score) of the comparison. Table 17 shows an example result of this SQL statement that is returned by the RDMS 140 to the SM 130.
Table 17
Taking this example further, with more attributes, we could score more data with the SQL statement sent to the RDMS 140 by the SM 130:
SELECT CUSTOMER.ID, STRDIFF(CUSTOMER.FIRST_NAME, 'JOHN') AS S1,
STRDIFF(CUSTOMER.MIDDLE_NAME, 'R') AS S2,
STRDIFF(CUSTOMER.SURNAME, 'RIPLEY') AS S3 FROM CUSTOMER
Table 18 shows an example result of this SQL statement received by the SM 130 from the RDMS 140.
TABLE 18
The SM 130 has caused the RDMS 140 to score a series of attributes independently and the RDMS 140 has returned a set of scores shown in Table 18 to the SM 130.
In addition to this behavior, we can selectively limit the documents returned/examined by applying restriction logic to the search. Consider the SQL statement sent by the SM 130 to the RDMS 140:
SELECT CUSTOMER.ID, STRDIFF(CUSTOMER.FIRST_NAME, 'JOHN') AS S1, STRDIFF(CUSTOMER.MIDDLE_NAME, 'R') AS S2, STRDIFF(CUSTOMER.SURNAME, 'RIPLEY') AS S3 FROM CUSTOMER WHERE STRDIFF(CUSTOMER.FIRST_NAME, 'JOHN') > 0.72
In this example, only customers with a first name having a high degree of similarity (> 0.72) to "John" are of interest, regardless of the other criteria (where 0.00 represents no similarity and 1.00 represents an exact similarity or comparison match). Table 19 shows the expected result received from the RDMS 140 by the SM 130.
TABLE 19 Records 2 & 4 are excluded because of their low similarity scores of first names (0.00 and 0.50, respectively) vs. "John".
Once a set of attribute scores is returned from the RDMS 140, the SM 130 determines the overall score of a record (or document) by aggregation through use of a Choice algorithm specified in the associated SCHEMA. An example of aggregation may be simply averaging the scores of the attributes after first multiplying them by relative weight factors, as specified in a QUERY command. In the example case, all fields are weighted evenly (1.00), and therefore the score is a simple average. Figure 4 depicts an example of a RESULT document from the example of Table 19.
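A minimal sketch of this default roll-up, in Java, might look like the following; it is illustrative only, multiplying each attribute score by its relative weight and dividing by the total weight, which with equal weights reduces to the simple average described above:

    public final class WeightedAverageChoice {
        // tokenScores and weights are parallel arrays; scores are assumed to be
        // normalized values between 0.00 and 1.00.
        public static double documentScore(double[] tokenScores, double[] weights) {
            double weightedSum = 0.0;
            double totalWeight = 0.0;
            for (int i = 0; i < tokenScores.length; i++) {
                weightedSum += tokenScores[i] * weights[i];
                totalWeight += weights[i];
            }
            return totalWeight == 0.0 ? 0.0 : weightedSum / totalWeight;
        }
    }

With the weights of .70, .10 and .20 used in the name-scoring example later in this description, the same function reproduces the weighted calculation shown there.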
The Statistics Processing Module (SPM) 135 enables the acquisition of statistical information about the data stored in search tables in the RDMS 140, using the built-in functions available in the RDMS 140. This enables the definition of statistics after search data has been stored in the RDMS 140. The Statistics Processing Module (SPM) 135 gives the user the ability to specify the fields upon which they wish to obtain statistics. The list of fields selected will act as a combination when computing occurrences. For example, the most frequently occurring first, middle, and last name combination. In addition to the fields, the user will be able to provide count restriction (e.g., only those with 4 or more occurrences) along with data restriction (e.g., only those records in Texas).
Turning now to Figure 5A, Figure 5A depicts a process 500 for handling a STATISTICS command in a Search Manager (SM) 130 when the SSE Server 190 receives a STATISTICS command and a CommandHandler is invoked to handle the process. When the SM 130 receives a STATISTICS command, the Statistics Definition to be used in the generation process is identified 510. The SCHEMA (search table) from which these statistics are based is then identified 520. Next, an SQL statement is issued to extract the necessary statistical information from the SCHEMA 530. If the results of a QUERY command are not already present, a new statistics table is created to store the results of a QUERY command 540. The statistics table is then populated with the results of the QUERY command 550. A statistics SCHEMA (with mapping and measures) is generated 560. And lastly, the newly created statistics SCHEMA is added to the SM 130 so that the statistics table becomes a new search table and is exposed to the client as a searchable database 570. Figure 5B depicts the dataflow 502 in the statistics command process of Figure 5A.
Statistics Definitions are considered Components that fit into the ComponentManager architecture with their persistence directory being "statistics", as described below with regard to the CommandServer of the XCF. The management commands are handled by the ComponentAdd, ComponentRemove and ComponentRead CommandHandlers available in the CommandServer's registered CommandHandlers.
Turning back to Figure 1, the RDMS 140 is generally considered to be an SQL database, although it is not limited to this type of database. In one embodiment of the present invention, the RDMS 140 may comprise a DB2 Relational Database Management System by IBM Corporation. The SM 130 communicates with the RDMS 140 by sending commands and receiving data across a JDBC application programming interface (API). The SM 130 is able to cause the RDMS 140 to execute conventional RDBMS commands as well as commands to execute the User Defined Functions (UDFs) 145 contained in a library in the RDMS 140 for providing similarity-scoring methods in addition to traditional search techniques. The VDM 120 also communicates with the RDMS 140 via a JDBC application programming interface (API).
UDFs 145 provide an extension to a Relational Database Management System (RDMS) suite of built-in functions. The built-in functions include a series of math, string, and date functions. However, none of these built-in functions generally provide any similarity or distance functional capability needed for similarity searching. The UDFs 145 may be downloaded into the RDMS 140 by the SSE server 190 to provide the functions required for similarity searching. UDFs 145 may be written in C, C++, Java, or a database-specific procedure language. The implementations of these UDFs 145 are known as Measures. The Measures compare two strings of document attributes and generate a score that is normalized to a value between 0.00 and 1.00. They can be called from any application that has knowledge of the signature of the function, for example, parameters, type, and return type. For a RDMS 140 to be capable of calling UDFs 145, the function signatures must match what the database expects a UDF 145 to be, and the function library and entry point must be declared to the RDMS 140. The library of functions is compiled and deployed into the UDF library directory of the host RDMS 140, and the UDFs 145 are registered with the RDMS 140 by an SQL command. The measures are described below in more detail.
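For example, under DB2 the registration might resemble the following SQL statement; the library name, entry point, and parameter lengths shown are assumptions for illustration, not the actual SSE deployment values:

    CREATE FUNCTION STRDIFF(VARCHAR(254), VARCHAR(254))
      RETURNS FLOAT
      EXTERNAL NAME 'ssemeasures!strdiff'
      LANGUAGE C
      PARAMETER STYLE DB2SQL
      DETERMINISTIC
      NO SQL
      NO EXTERNAL ACTION

Once registered in this way, the measure can be referenced directly in SQL statements such as those generated by the SM 130 in the examples above.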
Turning now to Figure 6, Figure 6 describes the Measures implemented as UDFs 145 in an embodiment of the SSE. The term "tokenized Compare" is used in the Measure descriptions of Figure 6. In the present context, it means to use domain-specific (and thus domain-limited) knowledge to break the input strings into their constituent parts. For example, an attribute of a street address can be broken down into tokens comprising Number, Street Name, Street Type, and, optionally, Apartment. This may improve the quality of scoring by allowing different weights for different tokens, and by allowing different, more specific measures to be used on each token.
Turning now to Figure 7, Figure 7 depicts an architecture of the XML Command Framework (XCF) 700. The Gateway, VDM and SM described above each rely on the flexible design of the XCF 700 for core processing capability. The XCF 700 functions as XML in and XML out, that is, it generates an XML response to an XML command. It is based upon a unique XML command language XCL that strongly focuses on the needs of search applications. The details of XCL are described below. The architecture of the XCF 700 comprises the following major entities: CommandServer 710 for configuration, overall flow, and central point of contact; CommandExecutor 720 for executing XML commands and providing XML results; CommandResponse 730 for receiving XML results; CommandHandlerFactory 740 for registration and identification of CommandHandlers 742; ComponentManager 750 for management of Components 752, Acceptors 756 and Connectors 754, Interceptors 758, and LifetimeManagers 760; and CommandDispatcher 770 containing a Queue 772 for CommandHandler 742 thread management. CommandHandlers 742 process individual XML commands. Components 752 are pluggable units of functionality. Connectors 754 and Acceptors 756 provide for communication into and out of the CommandServer 710. Interceptors 758 hook to intercept incoming commands. LifetimeManagers 760 manage the lifetime of CommandHandler 742 execution. Each of these entities is defined as an interface that allows for multiple implementations of an entity. Each interface is an object that has at least one base implementation defined in XCF. Each interface is limited to the contract imposed by the interface.
The CommandServer 710 is the central point-of-contact for all things in the XCF. It is responsible for overall execution flow and provides central access to services and components of the system. Most objects that reference the XCF services are passed a CommandServer reference in a constructor or in a setter method. Central access to synchronization objects can be placed here, as all supporting objects will have access. Being the central point of most things, it is also responsible for bootstrapping, initialization and configuration.
The interface to the CommandExecutor 720 is defined as: void execute(String command, CommandResponse response), where command is the XML command to be executed by a CommandHandler 742, and CommandResponse is any object that implements the interface to the CommandResponse 730. It is this object that will be called asynchronously when the command has been completed. The CommandResponse interface 730 must be implemented when calling a
CommandExecutor 720 execute method. The contract is: void setValue(String value). Once a command has been completed, it will call this method. It is here where a particular CommandResponse 730 implementation will get a response value and process it accordingly. The CommandHandlers 742 provide means for interpreting and executing XML commands for solving a particular problem. For each problem that needs a solution, there is an assigned CommandHandler 742. First, consider a standard XML command:
<TYPE op="action" version="version"/> A CommandHandler 742 is uniquely identified in the system by the three attributes shown in Table 20. These attributes are known as the signature of the CommandHandler 742.
TABLE 20
The command handlers 742 provide template methods for the following functions: 1) ensure proper initialization of the CommandHandler 742;
2) register the CommandHandler 742 with the LifeTimeManager 760;
3) catch any uncaught exceptions; and
4) ensure proper uninitialization of the CommandHandler 742. Figure 8 depicts the format of a RESPONSE generated by a CommandHandler 742.
There are several standard CommandHandlers 742 that are responsible for overall management of the CommandServer 710. These are shown in Table 21 along with the CommandHandler PassThrough, which is not automatically registered.
TABLE 21
The CommandHandlerFactory 740 serves as a factory for CommandHandlers 742. There is only one instance of this object per CommandServer 710. This object is responsible for the following functions:
1) registering/unregistering CommandHandlers;
2) parsing incoming XML commands; 3) identifying registered CommandHandlers responsible for handling XML commands;
4) cloning a particular CommandHandler; and
5) giving a cloned CommandHandler run-time state information. A Component 752 is identified by its type and name. The type enables grouping of
Components 752 and name uniquely identifies Components 752 within the group. The Component 752 is responsible for determining its name, and ComponentManager 750 handles grouping on type. The lifecycle of a component 752 is: 1) create a Component 752; 2) configure the Component 752, specified in XML;
3) activate the Component 752;
4) add the Component 752 to the ComponentManager 750;
5) ...
6) remove the Component 752 from the ComponentManager 750; and 7) deactivate the Component 752.
A Component 752 is generally created, activated and added to the ComponentManager 750 at CommandServer 710 startup, and is removed and deactivated at CommandServer 710 shutdown. However, this procedure is not enforced and Components 752 can be added and removed at any time during the lifetime of a CommandServer 710. Since the Component interface is very flexible and lightweight, many system objects are defined as Components 752. These include CommandAcceptors 756, CommandConnectors 754, Commandlnterceptors 758, and LifetimeManager 760. If a Component 752 does not perform any function, but is simply a definition, a generic utility Component 752 implementation may be used. This utility only ensures that the configuration of the Component 752 is valid XML and it has a name="xxx" at the root level.
The CommandDispatcher 770 exposes a single method for InterruptedException. The CommandDispatcher interface 770 differs from the CommandExecutor 720 because it expects an initialized CommandHandler 742 rather than an XML string, and it delegates the command response functionality to the CommandHandler 742 itself. Internally, the BaseCommandDispatcher uses a PooledExecutor. As a command is added, it is placed in this bounded pool and when a thread becomes available, the CommandHandler's run() method is called.
The function of the LifetimeManager 760 is to keep track of any objects that require or request lifetime management. A LifetimeManager 760 is an optional part of a CommandServer 710 and is not explicitly listed in the CommandServer interface 710. It can be registered as a separate Component 752, and can manage anything that implements the LifetimeManager interface 760. The only objects that require Lifetime management are CommandHandlers 742. During its setup, the BaseCommandServer creates a CommandLifetimeManager component that is dedicated to this task of managing the lifetime of commands/CommandHandlers 742 that enter the system. CommandHandlers themselves do not implement the Lifetime Manager interface 760.
CommandInterceptors 758 are components that can be added to a CommandServer 710. Their function is to intercept commands before they are executed. Implementations of CommandInterceptor 758 should raise a CommandInterceptionException if evaluation fails. BaseCommandServer 710 will evaluate all registered Interceptors 758 before calling the dispatcher. If one fails, the dispatcher will not be called and the CommandInterceptorException's getMessage() will be placed in the error block of the response. The Acceptor 756 and Connector 754 pair is an abstraction of the communication between clients and CommandServers 710, and between a CommandServer 710 and other CommandServers 710. CommandAcceptors 756 and CommandConnectors 754 extend the Component interface 752, and are therefore seen by the CommandServer 710 as Components 752 that are initialized, configured, activated and deactivated similar to all other Components 752. The ComponentManager 750 manages Acceptors 756 and Connectors 754.
A CommandAcceptor 756 is an interface that defines how commands are accepted into a CommandServer 710. It is the responsibility of the CommandAcceptor 756 to encapsulate all the communication logic necessary to receive commands. It passes those commands (in string form) to the CommandServer 710 via its CommandExecutor interface 720. Once a command is successfully executed, it is the responsibility of the
CommandAcceptor 756 to pass the result back across the communication channel.
Similar to the CommandAcceptor 756, the CommandConnector 754 encapsulates all the communication logic necessary for moving commands across a "wire", but in the case of a Connector 754, it is responsible for sending commands, as opposed to receiving commands. It is a client's connection point to a CommandServer 710. For every CommandAcceptor implementation 756, there is generally a CommandConnector implementation 754. The CommandConnector interface 754 extends the CommandExecutor interface 720, thereby implying that it executes commands. This enables location transparency as both CommandServer 710 and CommandConnector 754 expose the CommandExecutor interface 720.
As discussed, CommandAcceptors 756 and CommandConnectors 754 are Components 752 that are managed by a CommandServer's ComponentManager 750. The Acceptors 756 and Connectors 754 are the clients' view of the CommandServer 710. Several implementations of Acceptor/Connector interfaces provide most communication needs. These classes are shown in Table 22.
TABLE 22
All Connectors 754 are asynchronous to a user, even if internally they make use of threads and socket pools to provide an illusion of asynchronous communication.
Turning now to Figure 9, Figure 9A depicts a process 900 for handling an XCL command in a CommandServer 710. An XCL command is formulated, a CommandResponse object is provided, and a CommandServer's CommandExecutor interface is called 910. Inside the CommandServer 710, the CommandExecutor 720 calls a CommandHandlerFactory 740 with a raw XCL command string 920. Inside the CommandHandlerFactory 740, the XCL command string is parsed, a registered CommandHandler 742 is found with the same TYPE, action and version signature as the XCL command, a CommandHandler prototype is cloned with the runtime state information, and it is passed back to the CommandServer 930. The CommandServer 710 gives the newly cloned CommandHandler 742 a reference and the same CommandResponse object provided in the first step 940. The CommandServer 710 then delegates execution of the CommandHandler 742 to the CommandDispatcher 770 by placing it in its Queue 772, 950. When ready, the CommandDispatcher 770 will grab a thread from the Queue 772, 960. The CommandDispatcher 770 will then call the CommandHandler run() method 970. Once running, the CommandHandler 742 can do whatever is required to satisfy the request, making use of system services of the CommandServer 710, 980. Once a result (or error) has been generated, the CommandHandler 742 places the value in setResult(), loads its CommandResponse object setValue() with the result, and the result passes back to the caller 990. Figure 9B depicts a dataflow 902 of the XCL command process steps shown in Figure 9A in a CommandServer architecture.
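A client-side sketch of this flow, in Java, might look like the following. Only execute(String, CommandResponse) and setValue(String) are taken from the interfaces described above; the response class, the printing behavior, and the example command value are illustrative assumptions:

    public class QueryClient {

        // Minimal CommandResponse implementation: it is called back
        // asynchronously with the XML result once the command completes.
        static class PrintingResponse implements CommandResponse {
            public void setValue(String value) {
                System.out.println("RESPONSE received:");
                System.out.println(value);
            }
        }

        public static void run(CommandExecutor executor) {
            // Example command taken from the SCHEMA discussion below.
            String command = "<SCHEMA op=\"list\" name=\"*\"/>";
            executor.execute(command, new PrintingResponse());
            // Control returns immediately; the result arrives via setValue().
        }
    }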
The SSE employs a Command Language based on XML, the Extensible Markup Language. This Command Language is called XCL. XCL commands are issued as XML documents and search results are returned as XML documents. The syntax of XCL consists of XML elements, their values and attributes, which control the behavior of the Similarity Search Engine Server. Using XCL commands, a client program can define and execute searches employing the SSE Server.
This description introduces the Application Programming Interface (API) for the SSE Server. All SSE commands are formed in XML and run through the execute interface, which is implemented for both Java and COM. For Java, there are synchronous and asynchronous versions. For COM, the interface is always synchronous. For both versions, there are similar methods. The first accepts a string and would be appropriate when the application does not make extensive use of XML, or when it wants to use the SAX parser for speed and does not employ an internal representation. The other method accepts a DOM instance and opens the door to more advanced XML technologies such as XSL.
Turning now to Figure 10, Figure 10 depicts the general XCL command format. Logically, XCL commands look like XML documents. Each command is a document and its clauses are elements. Command options are given by element or attribute values. The XCL command language provides three main types of commands for building similarity applications - a SCHEMA command that defines the document set for the similarity search, a QUERY command that searches the document set, and some administrative commands for managing documents, queries, measures, and so on. The SCHEMA command has three main clauses. A STRUCTURE clause describes the structure of the documents to be searched, arranging data elements into an XML hierarchy that expresses their relationships. A MAPPING clause maps search terms with target values from the datasources. A SEMANTICS clause indicates how similarity is to be assessed. The QUERY command also has several clauses. A WHERE clause indicates the structure and values for the search terms. A FROM clause describes the datasources to be accessed. SELECT and RESTRICT clauses describe the result set and scoring criteria. And an optional SEMANTICS clause overrides semantics defined in the SCHEMA. The administrative commands allow the application to read, write, and delete the documents, queries, schemas, measures, parsings, choices, and datatypes used in the search. For multi-user situations, a simple locking protocol is provided. XCL commands return result sets in the form of XML documents. The QUERY command contains similarity scores for the documents searched. The result set can return scores for entire documents or for any elements and attributes they contain. In case of problems, the result set contains error or warning messages. Results can be returned synchronously or asynchronously. The synchronous calls block until the result set is ready, while the asynchronous calls return immediately with results returned via a callback coded in the client. Depending on the needs of the application, the results can be retrieved in either string or DOM format. The ResultSet class used by SSE Command Language mimics the ResultSet class for JDBC, allowing applications to iterate through the results to access their contents.
All XCL commands operate on XML documents and produce document sets as results. An XML document begins at a top node (the root element), and elements can be nested, forming a hierarchy. The bottom or "leaf" nodes contain the document's content (data values). A document set is a collection of documents with the same hierarchical layout, as defined by the schema for the search. An anchor document is a hierarchy of XML elements that represent the data values to be used as search criteria. Currently, there can be only one instance of each element in the anchor document. However, the target documents can have repeated groups.
Turning now to Figure 11, Figure 11 depicts a hierarchical layout 1100 that allows multiple tables 1110, 1120 to be mapped onto the search document 1160 via datasets 1130, 1140 through the use of the VDM Relational Database Driver 1150 discussed above. The Database values are mapped to their corresponding places in the virtual XML document to be searched. A target document 1160 is a hierarchy of values drawn from a relational database 1110, 1120. Values from the Relational Database 1110, 1120 are captured via ODBC or JDBC. Target documents can span multiple tables, joined by master/detail fields. Documents examined by the SSE Server are virtual in the sense that they provide hierarchical representations that match the structure of the search schema while the data they contain still resides in the database tables 1110, 1120. In many cases, the target documents are a direct reflection of tables being tapped in the search. Each valued element corresponds to a field (column) in a table, and group elements correspond to the tables themselves 1110, 1120. The hierarchical layout allows multiple tables 1110, 1120 to be mapped onto the virtual XML search document 1160, even tables from other databases in the case of a cross-database search. The relationship between the target documents and datasource is mapped as part of the schema defined for the search. A database can have many schemas, providing different ways of searching it.
Turning now to Figure 12, Figure 12 depicts the format of a SCHEMA command. The SCHEMA command enables a user to manage the schema for a search document, defining the hierarchical structure of the document and mapping its elements to data sources and similarity measures. A SCHEMA command for a search document comprises its STRUCTURE clause, its MAPPING clause, and its SEMANTICS clause. The STRUCTURE clause defines the search terms and their relationships in XML format. The MAPPING clause defines the target values and where they reside. The SEMANTICS clause can include overrides to the default similarity measures, choices, and weights. Schemas can be listed, read, written, deleted, locked, and unlocked by the SCHEMA command, as required. Search schemas must be coded manually according to the syntax given here. Predefined datatypes provide shortcuts for those wishing to use standard domain-oriented elements and measures. Search schemas normally reside in an SSE schema repository. The "list" operator of the SCHEMA command returns a childless <SCHEMA> element for each schema in the repository. With a "read" operator, the SCHEMA command returns the schema indicated. Or if the schema name is given as "*", the "read" operator returns all schemas in the directory. The "write" operator causes the SCHEMA command to write the specified search schema into the directory, overwriting any existing schema with the same name. The "delete" operator purges the specified document. The "lock" and "unlock" operations provide a simple locking protocol to prevent conflicting updates in case several DOCUMENT operations are attempted at once by different clients. The operation attribute returns as "locked" or "denied" to indicate the success of the operation. For example, the command <SCHEMA op="list" name="*"/> calls for a list of schemas shown in Figure 13. Turning now to Figure 14, Figure 14 depicts the format for a STRUCTURE clause.
The SSE Server uses the hierarchical structure of the XML anchor document to express definitions, options, and overrides throughout the XCL command language. The XML structure of the anchor document is defined in the STRUCTURE clause, specifying the data elements involved in the search along with their positions in the search document. The SEMANTICS clause refers to this structure in mapping these elements to information sources, similarity measures, and so on. A unique aspect of the XML hierarchy shown in the STRUCTURE clause is that no values are given. Elements that represent search terms are shown as empty - i.e., just the XML tag. The values are to be supplied by the associated datasources. In the case of a "flat" search, all the target values are for child elements belonging to the parent. In a "hierarchical" search, the search terms may occur at different levels of the hierarchy with multiple occurrences of values for child elements. However, the anchor document cannot specify repeated values. Figure 15 depicts an example of a STRUCTURE clause for a hierarchical search. Turning now to Figure 16, Figure 16 depicts the format of the MAPPING clause, which associates elements in the anchor document with target fields in the database. The MAPPING clause governs the mapping between elements of the XML search document and the elements of a relational database. Its contents are: database name, location, driver, username, and password. When multiple databases are connected, the mapping also indicates the node in the search document schema to populate with data from the database. Each database table or view is represented by a dataset, which gives the bindings of database fields to elements and attributes in the search schema. Datasets bind with each other to join the database tables into a hierarchy that matches the structure of the search schema. The MAPPING clause for a relational datasource contains a <DATASET> element for every table in the database that contains target values for the search. <DATASET> contains the datasource attribute that identifies the object used as the datasource. The <DATASET> also contains an <EXPRESSION> element that tells SSE that the datasource is a relational table. The <DATASET> also includes a <PATH> element that indicates which element in the search schema contains the search terms for target values drawn from the table. Target values are mapped to the search schema with a <FIELD> element for each field to be included. The <DATASET> for a relational table also contains a <BIND> element that defines master/detail relationships with other tables. This binding resembles a JOIN operation by the DBMS, associating a foreign key in the detail table with a primary key in the master table. Figure 17 depicts an example of a MAPPING clause. For example, the <MAPPING> may include two <DATASET> elements, the first to describe the master Product table, and the second to describe the detail Model table.
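Since Figure 17 is not reproduced here, the following sketch only illustrates how the elements just named might nest for the Product/Model example; the attribute names, table names, and field names are assumptions rather than the exact Figure 17 syntax:

    <MAPPING>
      <DATASET datasource="CatalogDB">
        <EXPRESSION>Product</EXPRESSION>                     <!-- master table -->
        <PATH>/product</PATH>
        <FIELD column="NAME">/product/name</FIELD>
        <BIND key="PRODUCT_ID"/>                             <!-- primary key of the master -->
      </DATASET>
      <DATASET datasource="CatalogDB">
        <EXPRESSION>Model</EXPRESSION>                       <!-- detail table -->
        <PATH>/product/model</PATH>
        <FIELD column="MODEL_NAME">/product/model/name</FIELD>
        <BIND master="PRODUCT_ID" detail="PRODUCT_ID"/>      <!-- master/detail join -->
      </DATASET>
    </MAPPING>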
Turning now to Figure 18, Figure 18 depicts the format of the SEMANTICS clause, which assigns measures, choices, and weights to search terms. The SEMANTICS clause provides intelligence to guide the search. By default, standard measures based on datasource datatypes are assigned to the search terms. Sometimes these provide adequate results, but other times applications require measures that take into account the way the data is used. New semantics are assigned with the APPLY clause, which consists of a repeatable PATH clause and up to one each of the following: MEASURE clause, CHOICE clause, and WEIGHT clause. The PATH clause indicates an element in the search schema that is to receive new semantics. The xpath notation traces a hierarchical path to the element beginning at the root. When several elements are to receive the same semantics, they can be listed in the same <APPLY> clause. The MEASURE clause allows the use of refined measures for the elements indicated in the APPLY clause. For those elements, the measure specified in the MEASURE clause takes precedence over any measure specified in the original schema. The specified measure can either be a variation on the standard measure, a new measure defined using the SSE syntax, or a user-coded measure. The CHOICE clause enables a different pairing algorithm to be assigned to parsed values of the elements indicated in the APPLY clause. These algorithms perform aggregation of the similarity search scores of the attributes determined by the measure algorithms. The WEIGHT clause allows a relative weight to be assigned to the scores of the elements listed in the APPLY clause. By default, all elements and attributes belonging to the same parent are assigned equal weights. That is, the scores of the child elements and attributes are averaged to produce the score for the parent. If necessary, the scores are normalized to produce an overall score in the range 0.00 to 1.00. For example, in scoring a name, <LAST> might be assigned a WEIGHT of .70, <MIDDLE> a WEIGHT of .10, and <FIRST> a WEIGHT of .20. The resulting score would then be calculated as: score = (.70)*(score<LAST>) + (.10)*(score<MIDDLE>) + (.20)*(score<FIRST>). Without the WEIGHT clause, the calculation would be: score = (score<LAST> + score<MIDDLE> + score<FIRST>)/3.
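Purely by way of illustration, the name-weighting example above might be expressed in a SEMANTICS clause roughly as follows; the path values are assumptions, the weights are those given in the example, and the actual format of the clause is shown in Figure 18.

<SEMANTICS>
  <APPLY>
    <PATH>Name/LAST</PATH>
    <WEIGHT>0.70</WEIGHT>
  </APPLY>
  <APPLY>
    <PATH>Name/MIDDLE</PATH>
    <WEIGHT>0.10</WEIGHT>
  </APPLY>
  <APPLY>
    <PATH>Name/FIRST</PATH>
    <WEIGHT>0.20</WEIGHT>
  </APPLY>
</SEMANTICS>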
Turning to Figure 19, Figure 19 depicts the hierarchical structure 1900 of the SCHEMA command 1910. As described above, the SCHEMA command 1910 comprises a STRUCTURE clause 1915, a SEMANTICS clause 1920 and a MAPPING clause 1925. The SEMANTICS clause 1920 comprises a MEASURE clause 1930 for identifying Measures 1950 to be used for scoring document attribute tokens, a CHOICE clause 1935 for identifying the Aggregation algorithms 1955 for "rolling up" token scores to obtain document scores, a WEIGHTING clause 1940 for emphasizing or de-emphasizing token scores, and a PATH clause 1945 for indicating a path to an element of a search schema in an RDMS to which the SEMANTICS clause 1920 will apply. The MEASURE clause 1930 contains a partial list of MEASURE algorithms 1950 for determining token attribute scores. Figure 6 above describes a more detailed list of MEASURE algorithms. The CHOICE clause 1935 contains a partial list of CHOICE algorithms 1955 for aggregating token scores into document scores. Turning now to Figure 20, Figure 20 depicts the format of the QUERY command. The QUERY command initiates a similarity search, which scores matches between search terms indicated in a WHERE clause and target values drawn from the relational datasource indicated in the FROM clause. The RESTRICT clause and SELECT clause determine what results are returned. The QUERY command looks to the search schema for the structure and semantics of the search, or to subordinate SEMANTICS clauses that override the default settings in the schema document.
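Purely by way of illustration, the overall shape of a QUERY command described above might be sketched as follows; the placeholder comments are assumptions, and the defined format is the one depicted in Figure 20.

<QUERY>
  <WHERE><!-- anchor document populated with the search values --></WHERE>
  <FROM><!-- relational datasource or explicit documents to be searched --></FROM>
  <RESTRICT><!-- optional limits on the results returned --></RESTRICT>
  <SELECT><!-- optional structure for the result set --></SELECT>
  <SEMANTICS><!-- optional overrides of the schema's measures, choices, and weights --></SEMANTICS>
</QUERY>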
The format of the WHERE clause is shown in Figure 20. The WHERE clause indicates the anchor to be compared to target values drawn from the datasources specified in the FROM clause. The anchor document is structured as a hierarchy to indicate parent/child relationships, reflecting the STRUCTURE clause of the search schema. For the SSE Server, the WHERE clause takes the form of an XML document structure populated with anchor values, i.e. the values that represent the "ideal" for the search. This document's structure conforms to the structure of the search schema. However, only the elements contributing to the similarity need to be included. Hierarchical relationships among elements, which would be established with JOIN operations in SQL, are represented in SSE Command Language by the nesting of elements in the WHERE clause. No matter where they occur in the document structure, all elements included in the WHERE clause are scored against the target values drawn from the associated datasource. Unlike its SQL counterpart, the SSE Server's WHERE clause does not always qualify or select a collection of records for further processing. In a similarity search, every target value receives a score. The results returned to the application client can be controlled with RESTRICT and SELECT clauses, but the similarity search looks at every document. The SSE's WHERE clause tells the SSE Server which elements and attributes to score. A more direct comparison in SQL might be the list of data items in the main clause of the command. Figure 21 depicts an example of the WHERE clause. A WHERE clause is required in any QUERY that does similarity scoring. Without a WHERE clause, a QUERY can still return documents according to the SELECT clause.
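Purely by way of illustration, a WHERE clause carrying anchor values might be sketched as follows; the element names and the values "JOE" and "SMITH" are assumptions chosen to mirror the SQL example given later in this description, and the actual example appears in Figure 21.

<WHERE>
  <PERSONS>
    <FIRST>JOE</FIRST>
    <LAST>SMITH</LAST>
  </PERSONS>
</WHERE>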
The format of the FROM clause is shown in Figure 20. The FROM clause associates the QUERY with the document set being searched. The FROM clause identifies the set of documents to be examined in the search. These are virtual documents drawn from relational datasources according to a predefined mapping. The FROM clause offers two ways to identify search documents. The first draws target values from a relational datasource through the VDM. The second presents the documents themselves as part of the FROM clause. Figure 22A depicts an example of a FROM clause that indicates the search should examine the entire set for "acme_products". Figure 22B depicts an example of a FROM clause that indicates the search should examine the documents shown. Turning now to Figure 23, Figure 23 depicts the format of the RESTRICT clause. The RESTRICT clause places limits on the results returned by the QUERY. The RESTRICT clause offers three methods for culling the results of a QUERY before they are returned to the client. When a RESTRICT clause contains multiple methods, they are applied in the order listed, each working on the result of the one before it. The SCORE clause includes <START> and <END> elements (both required, neither repeating) to define the range of scores for documents to be returned. If the <START> score is greater than the <END> score, the documents receiving scores in that range are returned in descending order by score. That is, the score closest to 1.00 comes first. When the <END> score is the larger, the results are in ascending order. The INDEX clause includes <START> and <END> elements (both required, neither repeating) to define a sequence of documents to return. For this purpose, candidate documents are numbered sequentially and the documents with sequence numbers falling in the range between <START> and <END> are returned. This is useful for clients that need a fixed number of documents returned. The sequence numbers must be positive integers. Figure 24 depicts an example of the RESTRICT clause. This RESTRICT clause first limits the scores to those over 0.80. Then it returns the first three. If fewer than three remain, it returns those that do.
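Purely by way of illustration, the RESTRICT clause example described above (scores over 0.80, then the first three documents) might be sketched as follows; the element values are assumptions consistent with that description, and the actual example appears in Figure 24.

<RESTRICT>
  <SCORE>
    <START>1.00</START>
    <END>0.80</END>
  </SCORE>
  <INDEX>
    <START>1</START>
    <END>3</END>
  </INDEX>
</RESTRICT>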
The format of the SELECT clause is shown in Figure 20. The SELECT clause allows the application to determine the structure of the result set. Otherwise, the results consist of a list of all documents examined with a similarity score for each document. The SELECT clause governs the contents of the result set returned to the client. By default, the client receives a list of DOCUMENT elements, each with a score that indicates its degree of similarity to the search terms in the QUERY. The score is reported as an added attribute for the <DOCUMENT> element, along with its name and schema. If the boolean for scoring is set to false, only the document name and schema are returned. Likewise, if the QUERY does not include a WHERE clause, no scoring is performed. A SELECT clause that includes a structure from the search schema returns <DOCUMENT> elements containing that structure, each with the target value considered in the search. If the boolean for scoring is set to true (default), the result set includes <DETAIL> elements that contain a <PATH> element structure given in the WHERE clause and a <SCORE> element with the similarity score. Figure 25 depicts an example of a SELECT clause that returns both target values and similarity scores.
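Purely by way of illustration, a SELECT clause requesting both target values and scores might be sketched as follows; the score attribute reflects the boolean for scoring mentioned above, the element names are assumptions, and the actual example appears in Figure 25.

<SELECT score="true">
  <PERSONS>
    <FIRST/>
    <LAST/>
  </PERSONS>
</SELECT>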
The QUERY command also contains a SEMANTICS clause, as shown in Figure 20. The SEMANTICS clause in a QUERY command has the same format as a SEMANTICS clause in a SCHEMA command, and is discussed above in the description of Figure 18 and Figure 19. A SEMANTICS clause specifies the semantics to use in the QUERY, and will override the default SEMANTICS clause contained in the SCHEMA command. For details on the MEASURE, CHOICE, WEIGHT, and PATH clauses comprising a SEMANTICS clause, refer to their descriptions above with respect to Figure 18 and Figure 19 relative to the SCHEMA command.
For traditional exact-match searches, results are just a list of the documents that satisfy the search's matching criteria. However, similarity searches normally regard all documents as similar to some degree, so the result of a similarity search is a list of all the documents searched, each with a similarity score that tells how similar it is to the search criteria. Optionally, the client can limit the result set to documents with a specified degree of similarity - for example a score of 90% or above - according to the requirements of the application. The client may also request details showing the anchor and target values that were compared to produce the document score. Turning now to Figure 26, Figure 26A depicts the format of a RESPONSE structure.
A successful QUERY command returns a <RESULT> element whose contents are determined by the SELECT clause as just described above. An unsuccessful QUERY may return an <ERROR> or <WARNING> to the client. A RESPONSE format showing only scores of a similarity search is depicted in Figure 26B, and a RESPONSE format showing details of a similarity search is depicted in Figure 26C. Commands other than a QUERY command return results, but not similarity scores. For these other commands, <RESULT> contains an element that echoes the original command and contains a set of elements of the type requested. A "list" operation produces a set of childless elements of the type requested, each with an identifying name attribute. A "read" operation returns complete XML structures for the elements requested. The <DETAIL> element depicted in Figure 26C is included when the score attribute of a QUERY command SELECT clause is set to "true". This produces a list of elements and attributes used in the WHERE clause of the QUERY command and the target values used to produce the scores. Each score is reported in a <SCORE> element of an APPLY clause along with a <WHERE> element with the xpath of the search term and a <FROM> element with the xpath of the target value. When multiple target values are involved, the xpath includes an index to indicate which one was chosen for scoring, e.g. the third value (in tree order) for a product's model number would be Product/Model[3]/Number. In addition to the name attribute, which indicates which result document the details concern, the <DETAIL> element preserves any attributes from the original command. When the DETAIL concerns an unnamed document, such as the result of an embedded QUERY, an index attribute is added and its value indicates the document's sequence number among others in the set. Figure 27 shows an example of a RESPONSE with results of a similarity search containing scores for three documents, where each document's score is based on comparing its values with the search terms, a unique name identifies each document, and the search schema used in the command is indicated.
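Purely by way of illustration, a RESPONSE reporting only scores might be sketched as follows; the document names, schema name, and score values are invented placeholders, the name, schema, and score attributes follow the description above, and the actual example appears in Figure 27.

<RESPONSE>
  <RESULT>
    <DOCUMENT name="1001" schema="persons" score="0.94"/>
    <DOCUMENT name="1002" schema="persons" score="0.87"/>
    <DOCUMENT name="1003" schema="persons" score="0.55"/>
  </RESULT>
</RESPONSE>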
Turning now to Figure 28, Figure 28 depicts the format of a DOCUMENT command. The DOCUMENT command enables the application to manage document sets involved in the search. The DOCUMENT command includes operations for managing the document set used in the search. The "list" operation returns a childless <DOCUMENT> element for each document in the set. The "read" operation retrieves documents from the datasource according to the mapping defined in the schema. The "lock" and "unlock" operations provide a simple locking protocol to prevent conflicting updates in case several DOCUMENT operations are attempted at once by different clients. The operation attribute returns as "locked" or "denied" to indicate the success of the operation. When "*" is specified instead of the document name, the "read" operation returns all documents. Likewise, the "*" tells the "delete", "lock", "unlock", and "index" operations to affect all documents in the set. Currently, the "list" operation requires name="*" and returns only the first 100 documents. Search documents need an identifier to serve as the primary key. The document name can be anything as long as it is unique within the set. Where documents are drawn from relational datasources, it is customary to use the primary key for the root table as the document name. Figure 29 depicts an example of a search document representative of the search document depicted in Figure 11 above. To carry out a search of this document, the structure would be populated with the values used in the search to form the anchor document. The same structure is used to return the results of a search, including the documents found to be similar to the search criteria, in addition to the scores indicating the degree of similarity for each document.
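Purely by way of illustration, DOCUMENT commands using the operations described above might be sketched as follows; the document name "1001" is an assumption, and the defined format is the one depicted in Figure 28.

<DOCUMENT op="list" name="*"/>
<DOCUMENT op="read" name="1001"/>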
Turning now to Figure 30, Figure 30 depicts a format of a STATISTICS command definition template, where bold italic represents optional sections. The Statistics Processing Module (SPM), discussed above in regard to Figure 5, uses this definition template. Figure 31 is an example of a simple STATISTICS definition. The FROM clause identifies a document schema and the SELECT clause identifies the last, first, and middle name of a claimant. Figure 32 depicts a SCHEMA response to a STATISTICS generation command. Turning now to Figure 33, Figure 33 depicts the format of a BATCH command. BATCH commands provide a way to collect the results of several related operations into a single XML element. Each command in the batch is executed in sequence.
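Purely by way of illustration, a BATCH command collecting two of the operations described above might be sketched as follows; the contained commands are placeholders, and the defined format is the one depicted in Figure 33.

<BATCH>
  <SCHEMA op="list" name="*"/>
  <DOCUMENT op="list" name="*"/>
</BATCH>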
There are additional commands that are used for administrative and maintenance purposes. The DATASOURCE command is used for identifying and maintaining datasources in the Relational Database Management System (RDMS). The MEASURE command is used for creating and maintaining the measures for determining document attribute and token similarity scores stored in the RDMS as User Defined Functions (UDFs). The CHOICE command is used for creating and maintaining aggregation (roll-up) algorithms stored in the RDMS and used by the Search Manager for determining overall document similarity scores. Turning now to Figure 34, Figure 34 depicts the overall process of setting up a schema. Prior to beginning this process, a target database must be imported into the Relational Database Management System associated with the SSE Server, as shown in Figure 1. In addition, the user must have knowledge of the structure of the data within the imported database. This knowledge of the data structure is required for the user to set up a schema. With reference to Figure 1, the VDM synthesizes XML documents from relational data, and the SM synthesizes relational data from XML documents. Referring to Figure 34, a schema must be established 3400 by the Client sending 3410 and the Gateway receiving 3420 a command. The command is transmitted to the SSE Server from the Client using sockets, HTTP or JMS protocol. The command is converted to XCL by the Gateway, and it is determined whether it is a SCHEMA command. Figure 35 depicts an example of a SCHEMA command based on the format shown in Figure 12. Turning back to Figure 34, after a client issues a SCHEMA command 3410 and the command is received by the Gateway 3420, the Gateway determines that the command is a SCHEMA command 3430. Since this is a SCHEMA command, the Gateway sends the SCHEMA command to the VDM 3440. When the SCHEMA command is received by the VDM 3450, the VDM builds relational tables and primary key tables based on the SCHEMA command attributes 3460. These tables are then stored for future use 3470.
Turning now to Figure 36, Figures 36A, 36B, and 36C depict the overall process of executing an SSE search. After one or more schemas have been defined, the SSE is ready to accept a QUERY command. A typical QUERY command based on the format shown in
Figure 20 might resemble the example QUERY command shown in Figure 37. Turning now to Figure 36A, when a client issues a QUERY command 3602 that is received by the Gateway 3604, it is determined if there is a WHERE clause in the command 3608. If there were no WHERE clause in a QUERY command 3608, the command would be examined to determine if there was a SELECT clause in the QUERY command 3612. If there were no SELECT clause 3612, RESULT would be returned to the client 3616. If there were a SELECT clause in the QUERY command 3612, indicating a selection of the structure for the result set to be produced by the QUERY command, the QUERY command would be sent to the VDM 3614. Upon receipt of the QUERY command by the VDM, the VDM extracts the SELECTed values from the RDMS 3618 and includes the SELECTed values in a RESULT set 3620, which is returned to the client 3616. If there were a WHERE clause in a QUERY command 3608, the QUERY command would be sent to the SM 3610.
Turning now to Figure 36B, the QUERY command is received at the SM 3630. It is then determined if the QUERY command is a side-by-side comparison 3632. If it is a side-by-side comparison 3632, a recursive process for scoring nested elements is initiated. If it is not a side-by-side comparison 3632, it is determined if the target is a valid schema 3634. If it is not a valid schema 3634, an error condition is returned to the client as a RESULT 3646. Otherwise, the process moves to a determination of a REPEATING GROUP query 3660 in Figure 36C. The recursive process for scoring nested elements that is entered if the QUERY requires a side-by-side comparison 3632 comprises determining if a root element of a document has been scored 3636. If it has, a RESULT is returned to the client 3638; otherwise it is determined if the element has unscored children 3640. If the root element has unscored children 3640, the next unscored child type is examined 3644 and it is determined if this element has unscored children 3640. If this element does not have unscored children 3640, MEASURE and CHOICE are applied to this element type 3642, and the next unscored child type is examined 3644. This process continues until the root element of the document has been scored 3636 and a RESULT is returned to the client 3638.
Turning now to Figure 36C, if the target is a valid schema 3634 from Figure 36B, a determination is made of whether the QUERY is a REPEATING GROUP query 3660. If it is a REPEATING GROUP query 3660, a score and primary key are determined for every record in the underlying dataset. If it is not a REPEATING GROUP query 3660, the process continues by beginning a SQL statement with the primary key 3662, building a UDF call for every attribute and measure 3664, building FROM/JOIN clauses for all tables used 3666, building WHERE clauses for any restrictive measure used 3668, and executing the SQL statement 3670. Next, for every record in the SQL result set, the overall score for the record is calculated using the weights in the SEMANTICS clause 3684, the record is dismissed 3690 if the score does not meet the restriction 3686, and the score/pkey is appended to the results 3688 if the score does meet the restriction 3686. The pkeys and scores replace the FROM clause 3682, control is returned to the Gateway to determine if there is a SELECT clause 3612, and processing continues as described above in Figure 36A. If the QUERY is a REPEATING GROUP query 3660, a score and primary key are determined for every record in the underlying dataset. This process comprises retrieving an XML document from the VDM 3672 and performing a side-by-side scoring 3674 using the recursive process for scoring nested elements described above, including steps 3636, 3638, 3640, 3642 and 3644. A record is dismissed 3678 if the score does not meet the restriction 3676, and the score/pkey is appended to the results 3680 if the score does meet the restriction 3676. The pkeys and scores replace the FROM clause 3682, control is returned to the Gateway to determine if there is a SELECT clause 3612, and processing continues as described above in Figure 36A. An SQL command from the example SCHEMA and QUERY commands shown above may be as follows:
SELECT PKEY, STRDIFF(PERSONS.FIRST, "JOE"), STRDIFF(PERSONS.LAST, "SMITH")
Turning now to Figure 38, Figure 38 depicts an example data table in the RDMS and associated RESULT from the example SQL command above. Figure 39 depicts the result of the QUERY command described above that would be returned to the client as a RESULT within a RESPONSE. This RESPONSE corresponds to the results illustrated in Figure 38.
Although the present invention has been described in detail with reference to certain preferred embodiments, it should be apparent that modifications and adaptations to those embodiments might occur to persons skilled in the art without departing from the spirit and scope of the present invention.

Claims

What is claimed is:
1. A method for performing similarity searching, comprising the steps of: receiving a request instruction from a client for initiating a similarity search; generating one or more query commands from the request instruction, each query command designating an anchor document and at least one search document; executing each query command, including: computing a normalized document similarity score having a value of between 0.00 and 1.00 for each search document in each query command for indicating a degree of similarity between the anchor document and each search document; creating a result dataset containing the computed normalized document similarity scores for each search document; and sending a response including the result dataset to the client.
2. The method of claim 1, wherein the step of generating one or more query commands further comprises identifying a schema document for defining structure of search terms, mapping of datasets providing target search values to relational database locations, and designating measures, choices and weight to be used in a similarity search.
3. The method of claim 1, wherein the step of computing a normalized document similarity score comprises: computing attribute token similarity scores having values of between 0.00 and 1.00 for the corresponding leaf nodes of the anchor document and a search document using designated measure algorithms; multiplying each token similarity score by a designated weighting factor; aggregating the token similarity scores using designated choice algorithms for determining a document similarity score having a value of between 0.00 and 1.00 for the search document.
4. The method of claim 3, wherein: the step of computing attribute token similarity scores further comprises computing attribute token similarity scores in a relational database management system; the step of multiplying each token similarity score further comprises multiplying each token similarity score in a similarity search engine; and the step of aggregating the token similarity scores further comprises aggregating the token similarity scores in the similarity search engine.
5. The method of claim 1, wherein the step of generating one or more query commands comprises: populating an anchor document with search criteria values; identifying documents to be searched; defining semantics for overriding parameters specified in an associated schema document; defining a structure to be used by the result dataset; and imposing restrictions on the result dataset.
6. The method of claim 5, wherein the step of defining semantics comprises: designating overriding measures for determining attribute token similarity scores; designating overriding choice algorithms for aggregating token similarity scores into document similarity scores; and designating overriding weights to be applied to token similarity scores.
7. The method of claim 5, wherein the step of imposing restrictions is selected from the group consisting of defining a range of similarity indicia scores to be selected and defining percentiles of similarity indicia scores to be selected.
8. The method of claim 1, wherein the step of computing a normalized document similarity score further comprises computing a normalized document similarity score having a value of between 0.00 and 1.00, whereby a normalized similarity indicia value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching.
9. The method of claim 3, wherein the step of computing attribute token similarity scores having values of between 0.00 and 1.00 further comprises computing attribute token similarity scores having values of between 0.00 and 1.00, whereby an attribute token similarity value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching.
10. The method of claim 1, wherein the step of generating one or more query commands further comprises generating one or more query commands whereby each query command includes attributes of command operation, name identification, and associated schema document identification.
11. The method of claim 1, further comprising: receiving a schema instruction from a client; generating a schema command document comprising the steps of: defining a structure of target search terms in one or more search documents; creating a mapping of database record locations to the target search terms; listing semantic elements for defining measures, weights and choices to be used in similarity searches; and storing the schema command document into a database management system.
12. The method of claim 1, further comprising the step of representing documents and commands as hierarchical XML documents.
13. The method of claim 1, wherein the step of sending a response to the client further comprises sending a response including an error message and a warning message to the client.
14. The method of claim 1, wherein the step of sending a response to the client further comprises sending a response to the client containing the result datasets, whereby each result dataset includes at least one normalized document similarity score, at least one search document name, a path to the search documents having a returned score, and at least one designated schema.
15. The method of claim 1, further comprising: receiving a statistics instruction from a client; generating a statistics command from the statistics instruction, comprising the steps of: identifying a statistics definition to be used for generating statistics; populating an anchor document with search criteria values; identifying documents to be searched; delineating semantics for overriding measures, parsers and choices defined in a semantics clause in an associated schema document; defining a structure to be used by a result dataset; imposing restrictions to be applied to the result dataset; identifying a schema to be used for the basis of generating statistics; designating a name for the target statistics table for storing results; executing the statistics command for generating a statistics schema with statistics table, mappings and measures; and storing the statistics schema in a database management system.
16. The method of claim 1, further comprising the step of executing a batch command comprising executing a plurality of commands in sequence for collecting results of several related operations.
17. The method of claim 3, further comprising selecting measure algorithms from the group consisting of name equivalents, foreign name equivalents, textual, sound coding, string difference, numeric, numbered difference, ranges, numeric combinations, range combinations, fuzzy, date oriented, date to range, date difference, and date combination.
18. The method of claim 3, further comprising selecting choice algorithms from the group consisting of single best, greedy sum, overall sum, greedy minimum, overall minimum, and overall maximum.
19. A computer-readable medium containing instructions for controlling a computer system to implement the method of claim 1.
20. A system for performing similarity searching, comprising: a gateway for receiving a request instruction from a client for initiating a similarity search; the gateway for generating one or more query commands from the request instruction, each query command designating an anchor document and at least one search document; a search manager for executing each query command, including: means for computing a normalized document similarity score having a value of between 0.00 and 1.00 for each search document in each query command for indicating a degree of similarity between the anchor document and each search document; means for creating a result dataset containing the computed normalized document similarity scores for each search document; and the gateway for sending a response including the result dataset to the client.
21. The system of claim 20, wherein the means for computing a normalized similarity score comprises: a relational database management system for computing attribute token similarity scores having values of between 0.00 and 1.00 for the corresponding leaf nodes of the anchor document and a search document using designated measure algorithms; and the search manager for multiplying each token similarity score by a designated weighting factor and aggregating the token similarity scores using designated choice algorithms for determining a document similarity score having a value of between 0.00 and 1.00 for the search document.
22. The system of claim 20, wherein: each one or more query commands further comprises a measure designation; and the database management system further comprises designated measure algorithms for computing a token similarity score.
23. The system of claim 20, wherein each query command comprises: an anchor document populated with search criteria values; at least one search document; designated measure algorithms for determining token similarity scores; designated choice algorithms for aggregating token similarity scores into document similarity scores; designated weights for weighting token similarity scores; restrictions to be applied to a result dataset document; and a structure to be used by the result dataset.
24. The system of claim 20, wherein the computed document similarity scores have a value of between 0.00 and 1.00, whereby a normalized similarity indicia value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching.
25. The system of claim 21, wherein the relational database management system includes means for computing an attribute token similarity score having a value of between 0.00 and 1.00, whereby a token similarity indicia value of 0.00 represents no similarity matching, a value of 1.00 represents exact similarity matching, and values between 0.00 and 1.00 represent degrees of similarity matching.
26. The system of claim 20, wherein each query command includes attributes of command operation, name identification, and associated schema document identification for providing a mapping of search documents to database management system locations.
27. The system of claim 20, further comprising: the gateway for receiving a schema instruction from a client; a virtual document manager for generating a schema command document; the schema command document comprising: a structure of target search terms in one or more search documents; a mapping of database record locations to the target search terms; semantic elements for defining measures, weights, and choices for use in searches; and a relational database management system for storing the schema command document.
28. The system of claim 20, wherein each result dataset includes at least one normalized document similarity score, at least one search document name, a path to the search documents having a returned score and at least one designated schema.
29. The system of claim 20, wherein each result dataset includes an error message and a warning message to the client.
30. The system of claim 20, further comprising: the gateway for receiving a statistics instruction from a client and for generating a statistics command from the statistics instruction; the search manager for identifying a statistics definition to be used for generating statistics, populating an anchor document with search criteria values, identifying documents to be searched, delineating semantics for overriding measures, weights and choices defined in a semantics clause in an associated schema document, defining a structure to be used by a result dataset, imposing restrictions to be applied to the result dataset, identifying a schema to be used for the basis of generating statistics, designating a name for the target statistics table for storing results; and a statistics processing module for executing the statistics command for generating a statistics schema with statistics table, mappings and measures, and storing the statistics schema in a database management system.
31. The system of claim 20, further comprising the gateway for receiving a batch command from a client for executing a plurality of commands in sequence for collecting results of several related operations.
32. The system of claim 21, wherein the measure algorithms are selected from the group consisting of name equivalents, foreign name equivalents, textual, sound coding, string difference, numeric, numbered difference, ranges, numeric combinations, range combinations, fuzzy, date oriented, date to range, date difference, and date combination.
33. The system of claim 21, wherein the choice algorithms are selected from the group consisting of single best, greedy sum, overall sum, greedy minimum, overall minimum, and overall maximum.
34. A system for performing similarity searching, comprising: a gateway for handling all communication between a client, a virtual document manager and a search manager; the virtual document manager connected between the gateway and a relational database management system for providing document management; the search manager connected between the gateway and the relational database management system for searching and scoring documents; and the relational database management system for providing relational data management, document and measure persistence, and similarity measure execution.
35. The system of claim 34, wherein the virtual document manager includes a relational database driver for mapping XML documents to relational database tables.
36. The system of claim 34, wherein the virtual document manager includes a statistics processing module for generating statistics based on similarity search results.
37. The system of claim 34, wherein the relational database management system includes means for storing and executing user defined functions.
38. The system of claim 37, wherein the user defined functions include measurement algorithms for determining attribute token similarity scores.
39. A method for performing similarity searching, comprising the steps of: creating a search schema document by a virtual document manager; generating one or more query commands by a gateway; executing one or more query commands in a search manager and relational database management system for determining the degree of similarity between an anchor document and search documents; and assembling a result document containing document similarity scores of between 0.00 and 1.00.
40. The method of claim 39, wherein the step of creating a schema document comprises designating a structure of search documents, datasets for mapping search document attributes to relational database locations, and semantics identifying measures for computing token attribute similarity search scores between search documents and an anchor document, weights for modulating token attribute similarity search scores, choices for aggregating token attribute similarity search scores into document similarity search scores, and paths to the search document structure attributes.
41. The method of claim 39, wherein the step of generating one or more query commands comprises designating an anchor document, search or schema documents, restrictions on result sets, structure of result sets, and semantics for overriding schema document semantics including measures, weights, choices and paths.
42. The method of claim 39, wherein the step of executing one or more query commands comprises: computing token attribute similarity search scores having values of between 0.00 and 1.00 for each search document and an anchor document in a relational database management system using measures; and modulating the token attribute similarity search scores using weights and aggregating the token attribute similarity scores into document similarity scores having values of between 0.00 and 1.00 in the search manager using choices.
43. The method of claim 39, wherein the step of assembling a result document comprises identifying associated query commands and schema documents, document structure, paths to search terms, and similarity scores by the search manager.
44. The method of claim 39, wherein the search schema, the query commands, the search documents, the anchor document and the result document are represented by hierarchical XML documents.
45. The method of claim 40, further comprising selecting measure algorithms from the group consisting of name equivalents, foreign name equivalents, textual, sound coding, string difference, numeric, numbered difference, ranges, numeric combinations, range combinations, fuzzy, date oriented, date to range, date difference, and date combination.
46. The method of claim 40, further comprising selecting choice algorithms from the group consisting of single best, greedy sum, overall sum, greedy minimum, overall minimum, and overall maximum.
47. A computer-readable medium containing instructions for controlling a computer system to implement the method of claim 39.
PCT/US2003/004685 2002-02-14 2003-02-14 Similarity search engine for use with relational databases WO2003069510A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU2003219777A AU2003219777A1 (en) 2002-02-14 2003-02-14 Similarity search engine for use with relational databases
EP03716051A EP1476826A4 (en) 2002-02-14 2003-02-14 Similarity search engine for use with relational databases
CA002475962A CA2475962A1 (en) 2002-02-14 2003-02-14 Similarity search engine for use with relational databases

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US35681202P 2002-02-14 2002-02-14
US60/356,812 2002-02-14
US10/365,828 2003-02-13
US10/365,828 US6829606B2 (en) 2002-02-14 2003-02-13 Similarity search engine for use with relational databases

Publications (1)

Publication Number Publication Date
WO2003069510A1 true WO2003069510A1 (en) 2003-08-21

Family

ID=27737554

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/004685 WO2003069510A1 (en) 2002-02-14 2003-02-14 Similarity search engine for use with relational databases

Country Status (5)

Country Link
US (2) US6829606B2 (en)
EP (1) EP1476826A4 (en)
AU (1) AU2003219777A1 (en)
CA (1) CA2475962A1 (en)
WO (1) WO2003069510A1 (en)

US9218373B2 (en) * 2012-10-10 2015-12-22 Business Objects Software Ltd. In-memory data profiling
US8862566B2 (en) 2012-10-26 2014-10-14 Equifax, Inc. Systems and methods for intelligent parallel searching
WO2014087714A1 (en) 2012-12-04 2014-06-12 NTT Docomo, Inc. Information processing device, server device, dialogue system and program
US10592480B1 (en) 2012-12-30 2020-03-17 Aurea Software, Inc. Affinity scoring
US9942334B2 (en) 2013-01-31 2018-04-10 Microsoft Technology Licensing, Llc Activity graphs
US10438254B2 (en) 2013-03-15 2019-10-08 Ebay Inc. Using plain text to list an item on a publication system
US9183499B1 (en) 2013-04-19 2015-11-10 Google Inc. Evaluating quality based on neighbor features
US10007897B2 (en) * 2013-05-20 2018-06-26 Microsoft Technology Licensing, Llc Auto-calendaring
JP6107429B2 (en) * 2013-05-30 2017-04-05 富士通株式会社 Database system, search method and program
US9443015B1 (en) * 2013-10-31 2016-09-13 Allscripts Software, Llc Automatic disambiguation assistance for similar items in a set
US9025892B1 (en) 2013-12-02 2015-05-05 Qbase, LLC Data record compression with progressive and/or selective decomposition
US9223833B2 (en) 2013-12-02 2015-12-29 Qbase, LLC Method for in-loop human validation of disambiguated features
US9922032B2 (en) 2013-12-02 2018-03-20 Qbase, LLC Featured co-occurrence knowledge base from a corpus of documents
US9230041B2 (en) 2013-12-02 2016-01-05 Qbase, LLC Search suggestions of related entities based on co-occurrence and/or fuzzy-score matching
US9201744B2 (en) 2013-12-02 2015-12-01 Qbase, LLC Fault tolerant architecture for distributed computing systems
US9223875B2 (en) 2013-12-02 2015-12-29 Qbase, LLC Real-time distributed in memory search architecture
US9424294B2 (en) 2013-12-02 2016-08-23 Qbase, LLC Method for facet searching and search suggestions
US9317565B2 (en) 2013-12-02 2016-04-19 Qbase, LLC Alerting system based on newly disambiguated features
US9424524B2 (en) 2013-12-02 2016-08-23 Qbase, LLC Extracting facts from unstructured text
US9177262B2 (en) 2013-12-02 2015-11-03 Qbase, LLC Method of automated discovery of new topics
US9619571B2 (en) 2013-12-02 2017-04-11 Qbase, LLC Method for searching related entities through entity co-occurrence
US9355152B2 (en) 2013-12-02 2016-05-31 Qbase, LLC Non-exclusionary search within in-memory databases
US9348573B2 (en) 2013-12-02 2016-05-24 Qbase, LLC Installation and fault handling in a distributed system utilizing supervisor and dependency manager nodes
US9547701B2 (en) 2013-12-02 2017-01-17 Qbase, LLC Method of discovering and exploring feature knowledge
WO2015084726A1 (en) 2013-12-02 2015-06-11 Qbase, LLC Event detection through text analysis template models
US9336280B2 (en) 2013-12-02 2016-05-10 Qbase, LLC Method for entity-driven alerts based on disambiguated features
KR20160124742A (en) 2013-12-02 2016-10-28 Qbase, LLC Method for disambiguating features in unstructured text
US9659108B2 (en) 2013-12-02 2017-05-23 Qbase, LLC Pluggable architecture for embedding analytics in clustered in-memory databases
US9544361B2 (en) 2013-12-02 2017-01-10 Qbase, LLC Event detection through text analysis using dynamic self evolving/learning module
CN106462575A (en) 2013-12-02 2017-02-22 Qbase, LLC Design and implementation of clustered in-memory database
US9984427B2 (en) 2013-12-02 2018-05-29 Qbase, LLC Data ingestion module for event detection and increased situational awareness
US9208204B2 (en) 2013-12-02 2015-12-08 Qbase, LLC Search suggestions using fuzzy-score matching and entity co-occurrence
US9542477B2 (en) 2013-12-02 2017-01-10 Qbase, LLC Method of automated discovery of topics relatedness
US10157222B2 (en) 2013-12-12 2018-12-18 Samuel S. Epstein Methods and apparatuses for content preparation and/or selection
US11017003B2 (en) 2013-12-12 2021-05-25 Samuel S. Epstein Methods and apparatuses for content preparation and/or selection
US9361317B2 (en) 2014-03-04 2016-06-07 Qbase, LLC Method for entity enrichment of digital content to enable advanced search functionality in content management systems
US20160259888A1 (en) * 2015-03-02 2016-09-08 Sony Corporation Method and system for content management of video images of anatomical regions
WO2017082875A1 (en) * 2015-11-10 2017-05-18 Hewlett Packard Enterprise Development Lp Data allocation based on secure information retrieval
US10223429B2 (en) 2015-12-01 2019-03-05 Palantir Technologies Inc. Entity data attribution using disparate data sets
CN105528403B (en) * 2015-12-02 2020-01-03 Xiaomi Technology Co., Ltd. Target data identification method and device
CN106126643B (en) * 2016-06-23 2018-01-02 Beijing Baidu Netcom Science and Technology Co., Ltd. Distributed processing method and device for stream data
US11080301B2 (en) 2016-09-28 2021-08-03 Hewlett Packard Enterprise Development Lp Storage allocation based on secure data comparisons via multiple intermediaries
US10466965B2 (en) * 2017-02-22 2019-11-05 Paypal, Inc. Identification of users across multiple platforms
US10310471B2 (en) * 2017-02-28 2019-06-04 Accenture Global Solutions Limited Content recognition and communication system
US10984030B2 (en) * 2017-03-20 2021-04-20 International Business Machines Corporation Creating cognitive intelligence queries from multiple data corpuses
US10691652B2 (en) 2018-03-29 2020-06-23 International Business Machines Corporation Similarity-based clustering search engine
US10970471B2 (en) * 2018-04-23 2021-04-06 International Business Machines Corporation Phased collaborative editing
US11030222B2 (en) 2019-04-09 2021-06-08 Fair Isaac Corporation Similarity sharding
EP3843016A1 (en) 2019-12-23 2021-06-30 Commissariat à l'Energie Atomique et aux Energies Alternatives Computer-implemented method for screening data

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6850252B1 (en) * 1999-10-05 2005-02-01 Steven M. Hoffberg Intelligent electronic appliance system and method
US5579471A (en) * 1992-11-09 1996-11-26 International Business Machines Corporation Image query system and method
US6182069B1 (en) * 1992-11-09 2001-01-30 International Business Machines Corporation Video query system and method
GB2281645A (en) * 1993-09-03 1995-03-08 Ibm Control of access to a networked system
US5615109A (en) * 1995-05-24 1997-03-25 Eder; Jeff Method of and system for generating feasible, profit maximizing requisition sets
US5893095A (en) * 1996-03-29 1999-04-06 Virage, Inc. Similarity engine for content-based retrieval of images
US5915250A (en) * 1996-03-29 1999-06-22 Virage, Inc. Threshold-based comparison
US6374237B1 (en) * 1996-12-24 2002-04-16 Intel Corporation Data set selection based upon user profile
US6070240A (en) * 1997-08-27 2000-05-30 Ensure Technologies Incorporated Computer access control
US6061691A (en) * 1998-08-31 2000-05-09 Maxagrid International, Inc. Method and system for inventory management
US6226656B1 (en) * 1998-11-12 2001-05-01 Sourcefinder, Inc. System and method for creating, generating and processing user-defined generic specs
US6550057B1 (en) * 1999-08-31 2003-04-15 Accenture Llp Piecemeal retrieval in an information services patterns environment
US6529909B1 (en) * 1999-08-31 2003-03-04 Accenture Llp Method for translating an object attribute converter in an information services patterns environment
US6434568B1 (en) * 1999-08-31 2002-08-13 Accenture Llp Information services patterns in a netcentric environment
US6442748B1 (en) * 1999-08-31 2002-08-27 Accenture Llp System, method and article of manufacture for a persistent state and persistent object separator in an information services patterns environment
US6618727B1 (en) * 1999-09-22 2003-09-09 Infoglide Corporation System and method for performing similarity searching
US6691109B2 (en) * 2001-03-22 2004-02-10 Turbo Worx, Inc. Method and apparatus for high-performance sequence comparison

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6041323A (en) * 1996-04-17 2000-03-21 International Business Machines Corporation Information search method, information search device, and storage medium for storing an information search program
US6446065B1 (en) * 1996-07-05 2002-09-03 Hitachi, Ltd. Document retrieval assisting method and system for the same and document retrieval service using the same
US6038561A (en) * 1996-10-15 2000-03-14 Manning & Napier Information Services Management and analysis of document information text

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1476826A4 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007126698A1 (en) 2006-04-26 2007-11-08 Microsoft Corporation Significant change search alerts
EP2024879A1 (en) * 2006-04-26 2009-02-18 Microsoft Corporation Significant change search alerts
JP2009535691A (en) * 2006-04-26 2009-10-01 マイクロソフト コーポレーション Significant change search alert
EP2024879A4 (en) * 2006-04-26 2009-11-04 Microsoft Corp Significant change search alerts
US8108388B2 (en) 2006-04-26 2012-01-31 Microsoft Corporation Significant change search alerts
EP2778980A1 (en) * 2013-03-14 2014-09-17 Wal-Mart Stores, Inc. Attribute-based document searching
WO2019133206A1 (en) * 2017-12-29 2019-07-04 Kensho Technologies, Llc Search engine for identifying analogies
US10915586B2 (en) 2017-12-29 2021-02-09 Kensho Technologies, Llc Search engine for identifying analogies

Also Published As

Publication number Publication date
EP1476826A1 (en) 2004-11-17
CA2475962A1 (en) 2003-08-21
US7020651B2 (en) 2006-03-28
US20030182282A1 (en) 2003-09-25
US6829606B2 (en) 2004-12-07
EP1476826A4 (en) 2007-11-21
US20050055345A1 (en) 2005-03-10
AU2003219777A1 (en) 2003-09-04

Similar Documents

Publication Publication Date Title
US6829606B2 (en) Similarity search engine for use with relational databases
US7386554B2 (en) Remote scoring and aggregating similarity search engine for use with relational databases
US6738759B1 (en) System and method for performing similarity searching using pointer optimization
US8082243B2 (en) Semantic discovery and mapping between data sources
US9792351B2 (en) Tolerant and extensible discovery of relationships in data using structural information and data analysis
US7092936B1 (en) System and method for search and recommendation based on usage mining
US6618727B1 (en) System and method for performing similarity searching
US7707168B2 (en) Method and system for data retrieval from heterogeneous data sources
US20040064449A1 (en) Remote scoring and aggregating similarity search engine for use with relational databases
US7505985B2 (en) System and method of generating string-based search expressions using templates
US8180758B1 (en) Data management system utilizing predicate logic
US7769770B2 (en) Secondary index and indexed view maintenance for updates to complex types
US20100017395A1 (en) Apparatus and methods for transforming relational queries into multi-dimensional queries
US20060195420A1 (en) System and method of joining data obtained from horizontally and vertically partitioned heterogeneous data stores using string-based location transparent search expressions
US8527502B2 (en) Method, system and computer-readable media for software object relationship traversal for object-relational query binding
US7801882B2 (en) Optimized constraint and index maintenance for non updating updates
US20070027849A1 (en) Integrating query-related operators in a programming language
KR20060045622A (en) Extraction, transformation and loading designer module of a computerized financial system
WO2000075849A2 (en) Method and apparatus for data access to heterogeneous data sources
US8639717B2 (en) Providing access to data with user defined table functions
US20060015809A1 (en) Structured-document management apparatus, search apparatus, storage method, search method and program
US20060010106A1 (en) SMO scripting optimization
Näsholm, Extracting data from NoSQL databases: a step towards interactive visual analysis of NoSQL data
US20040044692A1 (en) Collection storage system
Truong, Indexing nearest neighbor queries

Legal Events

Date Code Title Description
AK Designated states
Kind code of ref document: A1
Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents
Kind code of ref document: A1
Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)

WWE Wipo information: entry into national phase (Ref document number: 2475962; Country of ref document: CA)

WWE Wipo information: entry into national phase (Ref document number: 2003716051; Country of ref document: EP)

WWP Wipo information: published in national office (Ref document number: 2003716051; Country of ref document: EP)

NENP Non-entry into the national phase (Ref country code: JP)

WWW Wipo information: withdrawn in national office (Country of ref document: JP)