US6330552B1 - Database query cost model optimizer - Google Patents

Database query cost model optimizer

Info

Publication number
US6330552B1
Authority
US
United States
Prior art keywords
query
elapsed time
operators
blocking
operator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US09/162,638
Inventor
Christopher M. Farrar
Harry A. Leslie
Pedro Celis
Diana L. Shak
Jay H. Vaishnav
Michael J. Skarpelos
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tandem Computers Inc
Hewlett Packard Enterprise Development LP
Original Assignee
Compaq Computer Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Compaq Computer Corp filed Critical Compaq Computer Corp
Priority to US09/162,638
Assigned to TANDEM COMPUTERS, INCORPORATED reassignment TANDEM COMPUTERS, INCORPORATED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CELIS, PEDRO, VAISHNAV, JAY H., FARRAR, CHRISTOPHER M., LESLIE, HARRY A., SHAK, DIANA L., SKARPELOS, MICHAEL J.
Application granted granted Critical
Publication of US6330552B1
Assigned to COMPAQ COMPUTER CORPORATION reassignment COMPAQ COMPUTER CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TANDEM COMPUTERS INC.
Assigned to COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P. reassignment COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: COMPAQ COMPUTER CORPORATION
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. reassignment HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P.
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP reassignment HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/02 - Marketing; Price estimation or determination; Fundraising
    • G06Q30/0283 - Price estimation or determination
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 - TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S - TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 - Data processing: database and file management or data structures
    • Y10S707/99931 - Database or file accessing
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 - TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S - TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 - Data processing: database and file management or data structures
    • Y10S707/99931 - Database or file accessing
    • Y10S707/99932 - Access augmentation or optimizing

Definitions

  • Number of “probes” (also called the repeat count): this value represents the number of times the operator will be executed. Typically, this value will be 1 (one), but in some cases (e.g. the inner scan of a nested loops join) it can be greater than one.
  • FF(x) represents the weighting assigned to component x.
  • FF(x) multiplied by x yields the elapsed time for component x.
  • FF(x) is determined empirically through a calibration process. The calibration process is performed by executing a query to determine its actual elapsed time and then adjusting the value of FF(x) until the cost model predicts the actual elapsed time for the execution of the query.
  • nm - physical_memory indicates the amount of normal memory which does not have physical memory underneath it and is thus subject to page faults. Of course, if available physical memory exceeds normal memory, then the formula should not result in a negative elapsed time. Therefore, max(0, nm - physical_memory) selects the maximum of zero or the value of nm - physical_memory to ensure that pft, the page-fault elapsed time, will always be non-negative.
  • CPU elapsed time, cput, consists of both the elapsed time for the CPU component and the elapsed time for page faults.
  • I/O elapsed time, sit(s,k), consists of both the elapsed time for seeks and the elapsed time for kilobyte transfers.
  • mt(m, mkb) = FF(m)*m + FF(mkb)*mkb
  • Message elapsed time, mt(m,mkb), consists of the elapsed time for messages, taking into consideration the number of messages and the number of kilobytes transferred for all messages.
  • elapsedTime = max(sit(s,k), mt(m,mkb), cput) + blk_idlt
  • This formula relies on the assumption that, for a given resource vector, I/O, messaging and CPU can overlap. Therefore, the formula takes the maximum of their associated elapsed times and adds that value to the blocking/idle time.
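  • To make the preceding elapsed-time formulas concrete, here is a minimal Python sketch of a single resource vector and its elapsed-time computation. The FF weights, the physical-memory constant and the exact page-fault term are illustrative assumptions (the patent derives the FF values through calibration); only the structure of the formulas follows the text above:

        # Minimal sketch of a resource vector and its elapsed-time formula.
        # Component names follow the patent's abbreviations; the FF weights
        # below are made-up calibration constants, not values from the patent.
        from dataclasses import dataclass

        FF = {"cpu": 0.001, "seek": 0.01, "kb": 0.002, "msg": 0.005, "mkb": 0.001, "pf": 0.02}
        PHYSICAL_MEMORY_KB = 4096  # assumed available physical memory

        @dataclass
        class SimpleCostVector:
            cpu: float = 0.0       # CPU, in units of 1000 instructions
            s: float = 0.0         # seeks (random I/Os)
            k: float = 0.0         # kilobytes of I/O transferred
            nm: float = 0.0        # normal memory (KB)
            pm: float = 0.0        # persistent memory (KB)
            m: float = 0.0         # messages
            mkb: float = 0.0       # kilobytes sent in messages
            tds: float = 0.0       # temporary disk space (KB)
            probes: float = 1.0    # repeat count
            blk_idlt: float = 0.0  # blocking/idle time

            def elapsed_time(self) -> float:
                pft = FF["pf"] * max(0.0, self.nm - PHYSICAL_MEMORY_KB)  # page-fault time (assumed form)
                cput = FF["cpu"] * self.cpu + pft                        # CPU elapsed time
                sit = FF["seek"] * self.s + FF["kb"] * self.k            # I/O elapsed time
                mt = FF["msg"] * self.m + FF["mkb"] * self.mkb           # message elapsed time
                # I/O, messaging and CPU are assumed to overlap within one vector.
                return max(sit, mt, cput) + self.blk_idlt

        print(SimpleCostVector(cpu=500, s=20, k=1000).elapsed_time())
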
  • the total resource usage may be calculated as a weighted sum of each component in the resource vector.
  • a database administrator may provide these weights through a user interface.
  • a cost object consists of the four SimpleCostVector objects set forth below.
  • “current process,” as used in connection with the second, third and fourth objects listed below, refers to the operator whose cost is being considered.
  • TC: This vector represents the resource usage necessary for the associated operator in the expression tree to execute in its entirety.
  • Query parallelism typically has no effect on TC.
  • For repeat counts greater than one, this vector represents cumulative usage for all probes.
  • CPFR (first row): This vector represents resource usage necessary for the associated operator in the expression tree to produce its first row after any blocking activity has occurred. For repeat counts greater than one, this vector represents the resource usage necessary to produce the first row in an average probe, not the first row of the first probe.
  • CPLR (last row): This vector represents resource usage necessary for the associated operator in the expression tree to produce its last row after any blocking activity has occurred. Thus, for repeat counts greater than one, this vector represents cumulative usage for all probes.
  • If an operator must wait for one or more of its descendants to fully complete before it can produce its first row, it is called a “blocking” operator.
  • Blocking operators include sort and hash-group-by operators.
  • a blocking operator consists of a blocking phase when it builds some data structure (e.g. a sorted temporary file in the case of sort, a hash table in the case of hash-group-by, etc.) and an output phase in which it uses the data structure from the blocking phase to produce its output rows.
  • CPTB represents resources used by a blocking operator and its descendants during the blocking operator's blocking phase. CPTB will equal the zero vector for non-blocking operators having no blocking descendants. For repeat counts greater than one, CPTB represents the average usage per probe.
  • a presently preferred embodiment of the present invention computes the elapsed time for the Cost object in question, as follows:
  • ET(CPTB) and ET(CPFR) represent the elapsed times of the CPTB vector and the CPFR vector respectively for the Cost object in question.
  • the formula takes into account the elapsed time to produce the first row plus the elapsed time of any preceding blocking activity.
  • ET(CPTB) and ET(CPLR) represent the elapsed times of the CPTB vector and the CPLR vector respectively for the Cost object in question.
  • RC(CPTB) represents the repeat count for the CPTB vector.
  • ET(CPTB)*RC(CPTB) represents the cumulative blocking time of all probes. This formula takes into account the elapsed time to produce the last row of the last probe, plus the elapsed time of any blocking activity over all probes.
  • the optimizer When optimizing for a performance goal of minimal resource usage, the optimizer does not calculate an elapsed time, but instead calculates a weighted sum of each component in the TC vector.
  • a database administrator may assign the relative weights to each component of the TC vector. If the administrator wants to limit the usage of a particular resource, the administrator can assign a greater relative weight to the component of the TC vector that reflects the usage of that resource.
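  • The three performance goals can be compared with a small, hedged sketch. The first-row and last-row formulas below (ET(CPTB) + ET(CPFR) and ET(CPTB)*RC(CPTB) + ET(CPLR)) follow the description above; the toy ET() helper, the component names and the administrator weights are assumptions introduced for illustration:

        # Sketch only: ET() is a stand-in for the per-vector elapsed-time
        # formula given earlier; the weights are illustrative values an
        # administrator might supply, not values from the patent.
        def ET(vec):
            # toy elapsed time: max of overlapping components plus blocking/idle time
            return max(vec.get("io", 0.0), vec.get("msg", 0.0), vec.get("cpu", 0.0)) + vec.get("blk_idlt", 0.0)

        def first_row_time(cost):
            # blocking activity that precedes the first row, plus first-row work
            return ET(cost["CPTB"]) + ET(cost["CPFR"])

        def last_row_time(cost):
            # cumulative blocking over all probes, plus last-row work
            return ET(cost["CPTB"]) * cost["RC_CPTB"] + ET(cost["CPLR"])

        def total_resource_usage(tc, weights):
            # weighted sum of TC components for the minimal-resource goal
            return sum(weights.get(name, 0.0) * value for name, value in tc.items())

        plan = {"CPTB": {"cpu": 2, "io": 3}, "CPFR": {"cpu": 1}, "CPLR": {"cpu": 8, "io": 6},
                "RC_CPTB": 1, "TC": {"cpu": 10, "io": 9, "normal_mem_kb": 512}}
        weights = {"cpu": 1.0, "io": 2.0, "normal_mem_kb": 0.1}  # assumed DBA weights
        print(first_row_time(plan), last_row_time(plan), total_resource_usage(plan["TC"], weights))
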
  • a vector with zeros in all components (except repeat count) is called a zero vector and is represented as “Z.”
  • a vector having a non-zero Blocking/Idle time component of x and zeros in all other components (except repeat count) is called a blocking/idle vector. It is represented as I(x).
  • SimpleCostVector objects can be added and subtracted. SimpleCostVectors can be multiplied and divided by scalars. These operations are denoted as +_V, -_V, *_V and /_V respectively.
  • Vector addition (+_V) involves adding the corresponding components (except repeat count) of two vectors, as in traditional linear algebra. The repeat count component of the result vector retains the repeat count of the first vector specified. (When adding average resource usage vectors, i.e. CPTB, the repeat counts for both vectors should be the same.)
  • Vector subtraction (-_V) involves subtracting the corresponding components (except repeat count) of two vectors. Since the present invention does not allow negative resource usage, any resulting negative component is converted to zero. The repeat count component of the result vector retains the repeat count of the first vector specified.
  • Multiplying a SimpleCostVector object by a scalar (*_V) involves multiplying each component, except Normal Memory, Persistent Memory and repeat count, by the specified scalar. Normal Memory and Persistent Memory remain unchanged because memory is reused. Thus, the number of times an operator executes does not affect its memory consumption. Similarly, dividing a SimpleCostVector object by a scalar (/_V) involves dividing each component (except Normal Memory, Persistent Memory and repeat count) by the specified scalar.
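  • A minimal sketch of these operations over a dictionary-based vector, assuming illustrative component names ("nm", "pm" and "rc" stand for the Normal Memory, Persistent Memory and repeat count components):

        # Sketch of +_V, -_V, *_V and /_V over a dict-based vector.
        UNSCALED = {"nm", "pm", "rc"}   # components untouched by scalar multiply/divide

        def v_add(a, b):
            out = {k: a.get(k, 0.0) + b.get(k, 0.0) for k in set(a) | set(b) if k != "rc"}
            out["rc"] = a["rc"]          # result keeps the first vector's repeat count
            return out

        def v_sub(a, b):
            # negative resource usage is not allowed, so clamp each component at zero
            out = {k: max(0.0, a.get(k, 0.0) - b.get(k, 0.0)) for k in set(a) | set(b) if k != "rc"}
            out["rc"] = a["rc"]
            return out

        def v_scale(a, factor):
            # memory is reused, so nm and pm (and rc) are not scaled; divide with factor = 1/s
            return {k: (v if k in UNSCALED else v * factor) for k, v in a.items()}

        v1 = {"cpu": 10.0, "io": 5.0, "nm": 100.0, "pm": 0.0, "rc": 1.0}
        v2 = {"cpu": 2.0, "io": 15.0, "nm": 50.0, "pm": 0.0, "rc": 1.0}
        print(v_add(v1, v2), v_sub(v1, v2), v_scale(v1, 3))
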
  • the present invention involves two additional forms of vector addition: overlapped vector addition (denoted +_O), and blocking vector addition (denoted +_B).
  • Overlapped vector addition takes into account that in some cases I/O and messaging partially overlap, i.e., they can occur at the same time.
  • the CPU, Normal Memory, Persistent Memory and Temporary Disk Space components get added together according to the vector addition described above, but the I/O and message related components use the following formulas for overlapped addition:
  • s_result = MAX(s_v1, s_v2) + FF(io)*MIN(s_v1, s_v2)
  • k_result = MAX(k_v1, k_v2) + FF(io)*MIN(k_v1, k_v2)
  • m_result = MAX(m_v1, m_v2) + FF(msg)*MIN(m_v1, m_v2)
  • mkb_result = MAX(mkb_v1, mkb_v2) + FF(msg)*MIN(mkb_v1, mkb_v2)
  • the “result” subscript refers to components in the resulting vector, and the subscripts “v1” and “v2” refer to components of the two vectors being added with overlapped addition.
  • Overlapped addition for the Blocking/Idle time component requires a more sophisticated formula; two Blocking/Idle time components cannot simply be added together.
  • Consider two vectors v1 and v2, each potentially having a non-zero Blocking/Idle time component.
  • the time the operator associated with v1 is active effectively reduces the Blocking/Idle time of v2, and vice versa.
  • To determine the elapsed time of a vector we use the formula presented above. Here is the formula for adding Blocking/Idle time components:
  • blk_idle_result = MAX(0, blk_idle_v1 - (ET(v2) - blk_idle_v2)) + MAX(0, blk_idle_v2 - (ET(v1) - blk_idle_v1))
  • ET(v) represents the elapsed time of vector v.
  • the term (ET(v2) - blk_idle_v2) is the amount of time the operator associated with vector v2 was active.
  • the term blk_idle_v1 - (ET(v2) - blk_idle_v2) represents the amount of time the operator associated with vector v1 was idle, less the amount of time the operator associated with vector v2 was active.
  • the MAX functions in this case merely ensure a non-negative result.
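  • The overlapped-addition rules above can be sketched as follows. The FF(io) and FF(msg) overlap factors and the single "io"/"msg" components are simplifications (the patent treats seeks, kilobyte transfers, messages and message kilobytes separately); the Blocking/Idle interaction follows the formula just given:

        # Sketch of overlapped vector addition (+_O).  elapsed_time() is a toy
        # stand-in for the elapsed-time formula given earlier; FF_IO and FF_MSG
        # are assumed overlap factors, not calibrated values from the patent.
        FF_IO, FF_MSG = 0.5, 0.5

        def elapsed_time(v):
            return max(v["io"], v["msg"], v["cpu"]) + v["blk_idlt"]

        def overlapped_add(v1, v2):
            out = {}
            out["cpu"] = v1["cpu"] + v2["cpu"]  # CPU (and memory, disk space) add normally
            # I/O and message components: MAX of the two plus a discounted MIN
            out["io"] = max(v1["io"], v2["io"]) + FF_IO * min(v1["io"], v2["io"])
            out["msg"] = max(v1["msg"], v2["msg"]) + FF_MSG * min(v1["msg"], v2["msg"])
            # one operator's active time eats into the other's idle time, and vice versa
            active1 = elapsed_time(v1) - v1["blk_idlt"]
            active2 = elapsed_time(v2) - v2["blk_idlt"]
            out["blk_idlt"] = max(0.0, v1["blk_idlt"] - active2) + max(0.0, v2["blk_idlt"] - active1)
            return out

        a = {"cpu": 10.0, "io": 5.0, "msg": 0.0, "blk_idlt": 1.0}
        b = {"cpu": 2.0, "io": 15.0, "msg": 0.0, "blk_idlt": 2.0}
        print(overlapped_add(a, b))
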
  • Blocking vector addition (+_B) is used when two activities cannot overlap at all, as when one of them is a blocking operator, e.g., a sort.
  • the hash join cannot return a row to its parent until it has received all rows from its right child, so strictly speaking only its first phase is blocking.
  • v1 +_B v2 = v1 +_V v2 +_V I(ET(v1) + ET(v2) - ET(v1 +_V v2))
  • I(x) represents a vector having a Blocking/Idle time component of x and zeros in all other components (except repeat count).
  • Table 1 below illustrates blocking vector addition. To simplify the presentation, assume that each vector has only 3 components—CPU, I/O and Blocking/Idle time—and that elapsed time is MAX(CPU, I/O)+Blocking/Idle time. In other words, for a given operator, its CPU and I/O overlap.
  • Vector v 1 has a CPU of 10 and an I/O of 5 and a Blocking/Idle time of 1, so its elapsed time is 11 (Blocking/Idle time plus the maximum of CPU and I/O).
  • Vector v 2 has a CPU of 2 and an I/O of 15 and a Blocking/Idle time of 2, so its elapsed time is 17 (Blocking/Idle time plus the maximum of CPU and I/O).
  • Adding v 1 and v 2 using vector addition involves adding each respective component resulting in a CPU of 12, an I/O of 20 and a Blocking/Idle time of 3. Thus, the elapsed time of this resulting vector is 23 (Blocking/Idle time plus the maximum of CPU and I/O).
  • Blocking vector addition accurately takes into account disallowing overlap.
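  • Working the same example through the blocking-addition formula above: the extra idle vector is I(ET(v1) + ET(v2) - ET(v1 +_V v2)) = I(11 + 17 - 23) = I(5), so v1 +_B v2 has a CPU of 12, an I/O of 20 and a Blocking/Idle time of 3 + 5 = 8. Its elapsed time is therefore MAX(12, 20) + 8 = 28, which equals the sum of the two individual elapsed times (11 + 17), exactly what disallowing overlap requires.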
  • the function etMIN takes two vectors and returns the one having minimum elapsed time; etMAX takes two vectors and returns the one having maximum elapsed time.
  • the function vecMIN takes two vectors and produces a resulting vector that represents minimum resource usage from the two input vectors.
  • vecMAX takes two vectors and produces a resulting vector that represents maximum resource usage from the two input vectors.
  • Table 2 below explains how vecMIN and vecMAX produce each component in the resulting vector, respectively.
  • the repeat count (also called number of probes) indicates the number of times an operator associated with a particular vector will execute. Typically an operator executes only once, but in some cases (e.g. the inner scan of a nested loops join) an operator can execute multiple times. Given a vector v, the abbreviation RC(v) represents its repeat count.
  • Vector normalization refers to the conversion of a vector from one repeat count to another. Given a vector v and a target repeat count rc, the abbreviation N(v,rc) is used to represent normalizing v to a repeat count rc. When normalizing a vector v to a repeat count rc, its Normal Memory and Persistent Memory components remain unchanged since an operator's memory consumption does not change when it executes repeatedly. The repeat count of the normalized vector becomes rc by definition of normalization. All other components are calculated with the following formula:
  • C_v represents a component in vector v, and C_N(v,rc) represents the corresponding component in the normalized vector, i.e. C_N(v,rc) = (C_v * RC(v)) / rc.
  • C_v*RC(v) represents the cumulative amount of component C_v over all probes. Dividing the result by rc makes it an average amount over rc number of probes. Table 3 below summarizes the vector algebra abbreviations used throughout this patent (the table itself is not reproduced in this text; the abbreviations are +_V, -_V, *_V, /_V, +_O, +_B, Z, I(x), ET(v), RC(v), N(v,rc), etMIN/etMAX and vecMIN/vecMAX, as defined above):
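  • A minimal sketch of N(v, rc), assuming a dictionary-based vector in which "nm" and "pm" are the memory components and "rc" the repeat count:

        # Sketch of N(v, rc): convert a vector from its own repeat count to rc.
        # Memory components are unchanged; every other component is converted to
        # a cumulative amount over all probes and then averaged over rc probes.
        MEMORY = {"nm", "pm"}

        def normalize(v, rc):
            out = {}
            for name, value in v.items():
                if name == "rc":
                    continue
                out[name] = value if name in MEMORY else value * v["rc"] / rc
            out["rc"] = rc
            return out

        print(normalize({"cpu": 4.0, "io": 2.0, "nm": 64.0, "pm": 0.0, "rc": 10.0}, 5.0))
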
  • FIG. 3 illustrates a generalized cost roll-up procedure 300 .
  • Each operator 310 , 312 , 314 , in the query tree keeps cost information (i.e. a Cost object) about itself independent of its children, called its preliminaryCost 316 .
  • Each operator also keeps track of the cost information it reported up to its parent, called rollUpCost 318 .
  • rolling up a cost vector involves combining a rollUpCost received from the operator's child with the operator's own preliminaryCost to produce its own rollUpCost which the operator will in turn report up to its parent.
  • the parent operator will in turn combine its preliminaryCost with the rollUpCost it receives to produce a rollUpCost for its parent.
  • the roll up proceeds recursively up the query tree in this manner until it reaches the root node in the query tree.
  • a given operator's rollUpCost reflects the cost of the entire subtree rooted at that operator.
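  • The recursion itself might look like the sketch below, where combine() stands for whichever operator-specific roll-up formulas apply (unary, union, join, and so on); the class and the scalar toy costs are illustrative, not the patent's interfaces:

        # Sketch of the bottom-up cost roll-up over a query tree.  "combine" is a
        # stand-in for the operator-specific roll-up formulas described below.
        class Operator:
            def __init__(self, name, preliminary_cost, children=()):
                self.name = name
                self.preliminary_cost = preliminary_cost   # cost of this operator alone
                self.children = list(children)
                self.roll_up_cost = None                   # cost of the whole subtree

        def roll_up(op, combine):
            child_costs = [roll_up(child, combine) for child in op.children]
            op.roll_up_cost = combine(op.preliminary_cost, child_costs)
            return op.roll_up_cost

        # toy combine: scalar costs, parent cost plus the sum of child roll-ups
        plan = Operator("join", 3.0, [Operator("scan_a", 2.0), Operator("scan_b", 4.0)])
        print(roll_up(plan, lambda prelim, kids: prelim + sum(kids)))
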
  • FIG. 3 shows a parent operator 310 having only a single child 312 .
  • the child 312 has only a single grandchild 314 .
  • For operators with more than one child, the present process first combines all child roll-up costs into a single merged roll-up cost and then uses that merged cost with the unary roll-up scheme depicted in FIG. 3 .
  • FIG. 4 illustrates the merging of child costs 410 , 412 and rolling up the merged cost 414 with the parent's preliminary cost 416 .
  • Table 4 lists the cost object abbreviations used throughout this document. (The table itself is not reproduced in this text; as the abbreviations are used in the formulas below, TC, CPFR, CPLR and CPTB name the four vectors described above, and the subscripts PP, PR, CR, L, R and M denote the parent's preliminary cost, the parent's roll-up cost, the child's roll-up cost, the left and right children's roll-up costs, and the merged cost of two children, respectively.)
  • FIG. 5 depicts a unary roll-up involving a non-blocking parent having a preliminaryCost 510 and receiving a roll-up cost 512 from its descendent operators.
  • the preliminaryCost 510 and the child rollUpCost 512 are combined to yield a parent rollUpCost 514 .
  • the roll-up formulas for this case are, as follows:
  • TC_PR = TC_PP +_V TC_CR
  • CPFR_PR = CPFR_PP +_B CPFR_CR
  • CPLR_PR = (CPLR_PP +_O (CPLR_CR -_V CPFR_CR)) +_V CPFR_CR
  • CPTB_PR = N(CPTB_CR, RC(CPLR_PP))
  • CPLR_CR -_V CPFR_CR represents the child's cumulative cost except for its first row.
  • the work associated with this term overlaps with the parent's activity, so it is added to the parent's preliminary cost using overlapped addition, hence the term (CPLR_PP +_O (CPLR_CR -_V CPFR_CR)).
  • CPFR_CR is added using vector addition since the activity for the child to produce its first row does not overlap with the parent.
  • The CPTB blocking vector of the parent is simply the child's blocking vector normalized to the parent's repeat count, since a non-blocking parent contributes no blocking activity of its own.
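  • A structural sketch of this non-blocking unary roll-up. The helper functions are deliberately simplified stand-ins for +_V, -_V, +_O, +_B and N (the real definitions are given above); only the shape of the four roll-up formulas follows the text:

        # Sketch of the non-blocking unary roll-up.  The helpers are toy
        # stand-ins; the dict keys name the four cost vectors.
        def v_add(a, b): return {k: a.get(k, 0) + b.get(k, 0) for k in set(a) | set(b)}
        def v_sub(a, b): return {k: max(0, a.get(k, 0) - b.get(k, 0)) for k in set(a) | set(b)}
        def o_add(a, b): return v_add(a, b)      # real version discounts overlapping I/O, messages, idle time
        def b_add(a, b): return v_add(a, b)      # real version adds idle time so elapsed times do not overlap
        def normalize(v, rc): return dict(v)     # real version rescales to the target repeat count

        def roll_up_nonblocking_unary(pp, cr, parent_rc):
            # pp: parent's preliminaryCost, cr: child's rollUpCost
            return {
                "TC":   v_add(pp["TC"], cr["TC"]),
                "CPFR": b_add(pp["CPFR"], cr["CPFR"]),
                "CPLR": v_add(o_add(pp["CPLR"], v_sub(cr["CPLR"], cr["CPFR"])), cr["CPFR"]),
                "CPTB": normalize(cr["CPTB"], parent_rc),
            }

        zero = {"cpu": 0}
        cost = lambda c: {"TC": {"cpu": c}, "CPFR": {"cpu": c / 10}, "CPLR": {"cpu": c}, "CPTB": zero}
        print(roll_up_nonblocking_unary(cost(5), cost(20), parent_rc=1))
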
  • FIG. 6 depicts a unary roll-up involving a blocking parent. As illustrated in this figure, the child rollUpCost 610 is combined with the blocking parent's preliminaryCost 612 to yield the parent rollUpCost.
  • the roll-up formulas for this case are:
  • TC_PR = TC_PP +_V TC_CR
  • CPFR_PR = CPFR_PP
  • CPTB_PR = (CPTB_PP +_O (CPLR_CR /_V RC(CPTB_PP))) +_B N(CPTB_CR, RC(CPTB_PP))
  • the formula for TC PR uses vector addition to accumulate all resource usage.
  • CPFR PP represents the cost of generating the first row once the blocking activity has completed.
  • CPFR PP represents the cost to read the first row from the sorted result set and send it to the parent.
  • CPLR PP for a blocking operator represents the cost of generating the last row once the blocking activity has completed.
  • CPLR PP represents the cost to read all rows from the sorted result set and send them to the parent.
  • CPTB PP represents the cost of the actual blocking activity (e.g. in the case of a sort, the cost of producing the sorted result set).
  • the last row and first row costs rolled up from the child reflect blocking activity since they took place during the parent operator's blocking phase, so they are combined with the parent's blocking cost.
  • the parent rolls up only its preliminary costs: CPFR PP and CPLR PP .
  • the term (CPLR_CR /_V RC(CPTB_PP)) converts the child's rolled up last row cost to an average cost per probe to make the addition with the parent's blocking vector meaningful.
  • the term (CPTB_PP +_O (CPLR_CR /_V RC(CPTB_PP))) involves adding the parent's blocking activity to the child's non-blocking activity, hence the overlapped addition.
  • the term N(CPTB_CR, RC(CPTB_PP)) represents the child's blocking activity (normalized to the parent's repeat count), so we add it in using blocking addition.
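  • A structural sketch of the blocking unary roll-up, with the same caveats as the previous sketch; the CPLR line below is an assumption, since the extracted text only says that the blocking parent rolls up its own preliminary first- and last-row costs:

        # Sketch of the blocking unary roll-up (e.g. a sort above its child).
        # The helpers are simplified stand-ins for +_V, +_O, +_B, /_V and N.
        def v_add(a, b):  return {k: a.get(k, 0) + b.get(k, 0) for k in set(a) | set(b)}
        def o_add(a, b):  return v_add(a, b)   # real version discounts overlap
        def b_add(a, b):  return v_add(a, b)   # real version forbids overlap by adding idle time
        def v_div(a, s):  return {k: v / s for k, v in a.items()}
        def normalize(v, rc): return dict(v)   # real version rescales to the target repeat count

        def roll_up_blocking_unary(pp, cr, rc_pp):
            # pp: blocking parent's preliminaryCost, cr: child's rollUpCost,
            # rc_pp: repeat count of the parent's blocking vector CPTB_PP
            return {
                "TC":   v_add(pp["TC"], cr["TC"]),
                "CPFR": dict(pp["CPFR"]),            # first row costed after blocking completes
                "CPLR": dict(pp["CPLR"]),            # assumed, see note above
                "CPTB": b_add(o_add(pp["CPTB"], v_div(cr["CPLR"], rc_pp)),
                              normalize(cr["CPTB"], rc_pp)),
            }
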
  • TC_M = TC_L +_V TC_R
  • CPFR_M = etMIN(CPFR_L, CPFR_R) +_O vecMIN(CPFR_L, CPFR_R)
  • CPLR_M = CPLR_L +_O CPLR_R
  • Overlapped addition is used when computing CPFR M because the left and right child of a union act independently (logically speaking), so when one has issued an I/O, the other can utilize the CPU or send a message.
  • TC_M = TC_L +_V TC_R
  • CPFR_M = etMIN(CPFR_L, CPFR_R) +_O vecMIN(CPFR_L, CPFR_R)
  • CPLR_M = CPLR_L +_O (CPLR_R +_V (RC(CPTB_R) *_O (CPTB_R -_V vecMIN(CPTB_R, N(CPTB_L, RC(CPTB_R))))))
  • CPLR_M = CPLR_R +_O (CPLR_L +_V (RC(CPTB_L) *_O (CPTB_L -_V vecMIN(CPTB_L, N(CPTB_R, RC(CPTB_L))))))
  • CPTB_M = etMIN(CPTB_L, N(CPTB_R, RC(CPTB_L))) +_O vecMIN(CPTB_L, N(CPTB_R, RC(CPTB_L)))
  • the formula for TC M as a total cost computation, involves regular vector addition.
  • CPTB M resembles that of CPFR M with the added complication of normalizing the right leg's blocking vectors to the repeat count of the left leg.
  • blocking vectors represent average usage per probe, so they need to be normalized to the appropriate repeat count before proceeding.
  • Since a simple union can begin processing rows as soon as one of its blocking legs has begun sending rows, we make the merged blocking vector the child blocking vector with minimum elapsed time, adjusted upwards to account for interference between the two blocking legs.
  • CPTB_L +_O CPTB_R involves an implicit assumption that no activity can begin until both blocking children have completed their work. Since, typically, a union operator does not wait until both blocking children have completed their work, this alternative formula is usually too conservative and provides a worst case cost.
  • ET(CPTB_L) and ET(N(CPTB_R, RC(CPTB_L))) are compared to determine which leg had the slowest total blocking vector. (Again, the right leg's blocking vector is normalized to the left leg's repeat count since blocking vectors represent average usage per probe.) Assuming a faster left leg, the term CPTB_R -_V vecMIN(CPTB_R, N(CPTB_L, RC(CPTB_R))) is a good estimate of the amount of CPTB_R not reflected in CPTB_M.
  • CPTB_L -_V vecMIN(CPTB_L, N(CPTB_R, RC(CPTB_L))) represents the amount of CPTB_L not reflected in CPTB_M.
  • the appropriate result vector is then added to CPLR M and thus, in the final costing, we avoid losing the resource usage that the result vector represents.
  • CPTB reflects an average usage per probe
  • CPLR reflects cumulative usage.
  • CPLR_C = CPLR_N -_V CPFR_N
  • TC_M = TC_L +_V TC_R
  • CPFR_M = CPFR_L +_O CPFR_R
  • CPLR_M = CPLR_L +_O CPLR_R
  • the formula for TC M as a total cost computation, involves regular vector addition.
  • CPFR M is simply the overlapped add of CPFR for both legs.
  • TC_M = TC_L +_V TC_R
  • CPFR_M = CPFR_L +_O CPFR_R
  • CPLR_M = CPLR_L +_O (CPLR_R +_V (RC(CPTB_R) *_O (CPTB_R -_V vecMIN(CPTB_L, CPTB_R))))
  • CPLR_M = CPLR_R +_O (CPLR_L +_V (RC(CPTB_L) *_O (CPTB_L -_V vecMIN(CPTB_L, CPTB_R))))
  • CPTB_M = etMIN(CPTB_L, N(CPTB_R, RC(CPTB_L))) +_O vecMIN(CPTB_L, N(CPTB_R, RC(CPTB_L)))
  • the formula for TC M as a total cost computation, involves regular vector addition.
  • CPFR M is simply the overlapped sum of CPFR for each of its children. We use overlapped addition on the assumption that when one child has issued an I/O the other can send messages or utilize CPU.
  • the MJ operator When exactly one of an MJ operator's children has a blocking leg, the MJ operator combines the two legs using the same strategy as the union operator: i.e. converting the non-blocking leg to a blocking leg (using the exact same transformation rules as the union operator) and then proceeding as if it had two blocking legs using the formulas described above.
  • All output rows from an NLJ operator's left child 712 eventually get sent to the right child 714 , so the NLJ operator's left child 712 can be viewed as providing rows directly to the right child 714 .
  • Thus, one can treat the left child 712 as a unary child of the NLJ operator's right child 714 . By making this transformation, one can use the unary roll-up formulas to merge the two legs.
  • TC_M = TC_L +_V TC_R
  • CPLR_M = (CPLR_R +_O (CPLR_L -_V CPFR_L)) +_V CPFR_L
  • CPFR M relies on the assumption that the first probe in the right child produces the actual first row.
  • all probes after the first probe may overlap with the previous probe, which suggests using overlapped addition (CPFR_L +_O CPFR_R) instead of blocking addition.
  • TC_M = TC_L +_V TC_R
  • CPFR_M = CPFR_L +_B CPFR_R
  • CPLR_M = (CPLR_R +_O (CPLR_L -_V CPFR_L)) +_V CPFR_L
  • CPTB_M = N(CPTB_L, RC(CPLR_R))
  • the left leg may be viewed as logically coming underneath the right leg, and by assumption only the left leg blocks, so CPTB L becomes CPTB M after normalizing CPTB L to the right leg's repeat count.
  • TC_M = TC_L +_V TC_R
  • CPTB_M = CPTB_R +_O (CPLR_L /_V RC(CPTB_R))
  • the formula for CPTB M is a simplification of the unary blocking roll-up formula taking into account that the left leg has no blocking activity.
  • TC_M = TC_L +_V TC_R
  • CPTB_M = (CPTB_R +_O (CPLR_L /_V RC(CPTB_R))) +_B N(CPTB_L, RC(CPTB_R))
  • the hash join (HJ) operator does not merge the roll-up costs of its two children and then use a unary roll-up formula to produce its own roll-up cost. Instead, it uses intermediate cost vectors and values calculated and saved off during preliminary costing and combines these directly with the roll-up costs of its two children to produce its own roll-up cost.
  • a hash join consists of potentially three stages. Stage 1 involves taking rows produced by the right (inner) child, building a main memory hash table out of a cluster of rows that fits in memory and writing the remaining overflow clusters to disk. Stage 2 involves taking rows produced by the left (outer) child and, if the row in question belongs to a main memory cluster, probing the hash table built in stage 1 ; if the row in question belongs to an overflow cluster, it gets written to an overflow cluster on disk. Stage 3 joins corresponding overflow clusters produced in stages 1 and 2 . If stages 1 and 2 did not produce overflow clusters, stage 3 does not need to execute.
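  • For orientation, the three stages can be sketched as a toy partition-style hash join in Python. The memory limit, cluster handling and all names below are simplifications introduced for illustration; only the stage structure mirrors the description above:

        from collections import defaultdict

        def hash_join(right_rows, left_rows, key, max_memory_keys):
            # Stage 1: build an in-memory hash table from the right (inner) input,
            # spilling keys that do not fit into "overflow clusters".
            memory_table, right_overflow = defaultdict(list), defaultdict(list)
            for r in right_rows:
                k = key(r)
                if k in memory_table or len(memory_table) < max_memory_keys:
                    memory_table[k].append(r)
                else:
                    right_overflow[k].append(r)

            # Stage 2: probe with the left (outer) input; rows belonging to an
            # overflow cluster are written to a matching left overflow cluster.
            result, left_overflow = [], defaultdict(list)
            for l in left_rows:
                k = key(l)
                if k in memory_table:
                    result.extend((r, l) for r in memory_table[k])
                elif k in right_overflow:
                    left_overflow[k].append(l)

            # Stage 3: join corresponding overflow clusters; if stages 1 and 2
            # produced no overflow, this loop simply does nothing.
            for k, lrows in left_overflow.items():
                result.extend((r, l) for r in right_overflow[k] for l in lrows)
            return result

        print(hash_join([("r", i % 3) for i in range(6)], [("l", j % 3) for j in range(4)],
                        key=lambda row: row[1], max_memory_keys=2))
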
  • Since no activity above the HJ operator can proceed until stage 1 completes, we regard stage 1 as blocking activity, so we place its resource usage in the blocking vector of the HJ preliminary cost, i.e., CPTB_PP.
  • Preliminary costing produces separate vectors for resource usage in stage 2 and stage 3 .
  • both vectors represent cumulative usage over all probes.
  • Preliminary costing also determines what fraction of STG 2 pertains to first row production and what fraction of STG 3 pertains to first row production. We denote these two fractions as frFrac 2 and frFrac 3 respectively.
  • Preliminary costing produces its preliminary first and last row costs using the following formulas:
  • TC_PR = (TC_L +_V TC_R) +_V TC_PP
  • CPFR_PR = (frFrac2 *_V ((CPLR_L -_V CPFR_L) +_O STG2)) +_B (frFrac3 *_V STG3)
  • CPLR_PR = ((CPLR_L -_V CPFR_L) +_O STG2) +_B STG3
  • CPTB_PR = (CPFR_L +_V CPTB_L) +_O (((((CPLR_R -_V CPFR_R) /_V RC(CPLR_R)) +_O CPTB_PP) +_V CPFR_R) +_B CPTB_R)
  • the formula for TC PR as a total cost computation, involves regular vector addition.
  • In the computation of CPLR_PR, no ancestor activity can begin until the left child has produced at least one row, so the left child's first row cost belongs with the total blocking roll-up cost, CPTB_PR. The remaining portion of the left child's last row activity overlaps with stage 2, hence the term ((CPLR_L -_V CPFR_L) +_O STG2). Stage 3, however, can not proceed until stage 2 finishes, so blocking addition is used when adding its resource usage.
  • the computation for CPFR PR follows the same approach as the computation for CPLR PR with the additional complication of multiplying the stage 2 resource usage (including the left child's last row activity) and the stage 3 resource usage by the corresponding first row fractions—frFrac 2 and frFrac 3 respectively.
  • To compute CPTB_PR, one first recognizes that all of the right child's last row activity, except the portion necessary to produce the first row, overlaps with the HJ operator's blocking activity, hence the term ((CPLR_R -_V CPFR_R) /_V RC(CPLR_R)) +_O CPTB_PP. (Note that dividing by RC(CPLR_R) converts last row resource usage from a cumulative cost over all probes to an average cost per probe, and also note that the HJ operator's blocking activity, CPTB_PP, represents stage 1.) Next, CPFR_R, which was subtracted out earlier, is added back, and then the right child's blocking cost, CPTB_R, is added using blocking addition. As mentioned earlier, the left child's blocking activity includes the activity necessary to produce its first row, hence the term CPFR_L +_V CPTB_L. Since the left leg's blocking activity overlaps with the HJ operator, it is added using overlapped addition.

Abstract

A method and apparatus is disclosed for accurately estimating the cost of a database query, including the total computer resources used and estimated elapsed time for the production of a first row and last row of an operator involved in the query and/or the total resources used and elapsed time for returning the overall response to a database query. The method and apparatus accurately accounts for resources used and elapsed time associated with blocking operators, such as sorts and hash joins, which cannot produce a first row until they have completed their operations.

Description

CROSS-REFERENCE TO RELATED APPLICATION
This application is related to the co-pending U.S. patent application Ser. No. 09/162,245 of Christopher M. Farrar, Harry A. Leslie, Pedro Celis and Diana L. Shak, entitled “Histogram Synthesis Modeler for a Database Query Optimizer,” filed Sep. 28, 1998. This patent application is incorporated herein by reference.
BACKGROUND OF THE INVENTION
This invention relates to the field of database query optimizers, and more particularly, to an improved database query optimizer that can select a query plan based upon different performance goals, such as returning the first or last row of a query in the minimum amount of time, or minimizing the use of computer resources such as computer memory.
Computers have the capability of storing vast amounts of data. For example, computers can store and retain data related to thousands of employees of large multi-national corporations, including the departments in which they work, their employee numbers, salaries, job descriptions, geographical locations, etc. In order to extract selected pieces of that data from such large computerized databases, users can present a query to the database system in the form of an SQL statement. For example, an SQL statement may ask the database system to list the names of all employees having employee numbers 1001 to 2000. A properly structured SQL statement will result in a list of records that satisfy the question or “query.” In this example, the query would produce the names of 1000 employees, assuming that the employees had sequential employee numbers.
Once the user inputs a query into the computer, an SQL compiler operates on the query to develop an efficient way to extract the desired information from the database. Typically, the compiler generates a large number of different, but logically equivalent, plans for executing the same query. These “plans” are typically represented in computer memory as query trees, wherein each node of the tree includes a relational operator, such as a “sort” or “merge” operator. “Relational operators” are operators that receive one or more tables as input and produce a new table as an output. Join, Union and Union All are examples of operators that receive two tables as inputs. Group-by and Sort are examples of relational operators that receive only one table as input. The optimizer program selects the query tree with the lowest estimated cost to respond to the query. In database parlance, “cost” is usually measured in terms of the amount of computer resources utilized by the computer in executing the SQL statement, for example, the number of I/O's or CPU instructions.
A major problem with existing optimizers is that, in many cases, they do not properly estimate the cost of carrying out the query. For example, known optimizers first estimate the number of CPU instructions, I/O operations and, in distributed systems, the number of messages that would be needed to carry out the SQL statement. See, e.g., P. G. Selinger, et al., “Access Path Selection in a Relational Database Management System,” Proceedings of the ACM-SIGMOD International Conference on Management of Data, June 1979, and L. F. Mackert and G. M. Lohman, “R* Optimizer Validation and Performance Evaluation for Distributed Queries,” Proceedings of the Twelfth International Conference on Very Large Data Bases, Kyoto, Japan, August 1986. Once the number of such instructions, I/O operations and messages is predicted, such optimizers assign a cost to each operator, add up the cost associated with the execution of each operator and thus produce a total predicted cost for the particular plan. The costs of the various plans are calculated and compared. Then, the optimizer can select the lowest cost plan for execution.
Unfortunately, because the computer can conduct some operations in parallel while others must be conducted serially, these optimizers cannot properly predict which plan will produce the first row or the last row in a minimum amount of time. Minimizing the amount of time needed to produce the first or last row of a query is often more important than the cost of the query in terms of the physical resources utilized in executing the query. Thus, there is a need for an improved database cost model that accurately accounts for speed in returning an answer to a query as well as the resources utilized. In short, time may be the most important “cost” associated with a query.
Known prior art optimizers also do not properly account for memory utilization. In situations where the memory of the computer is limited and/or where the algorithm utilizes large amounts of data, memory utilization is an important parameter to consider in determining the cost of the query. It will be recognized that the execution of an SQL statement will cause the computer to utilize certain resources such as, for example, CPU instructions and disk seeks. These types of resources clearly have some elapsed time associated with their execution. Therefore, one can predict that, all else being equal, an SQL statement that requires more instructions and more seeks than another statement will take longer to return a row than the other statement. However, the relationship between memory utilization and elapsed time is much more complicated and not at all intuitive. For example, a statement that uses more memory may actually execute faster than one which uses less memory. This complicated relationship between memory utilization and elapsed time may be one of the reasons that prior art optimizers have failed to account for memory utilization.
In view of all of the above, there is a need for an improved optimizer. In particular, there is a need for an optimizer that can consider and account for flexible performance goals, such as how quickly a plan returns a first row, or a last row, or selecting a plan that will minimize the usage of certain types of computer resources, and one that properly accounts for memory utilization.
SUMMARY OF THE INVENTION
For most database queries, the requested information can be obtained in various ways. Each way of obtaining the information involves a series of operations on the database called a “plan.” The present invention is directed to a method, and related software and devices, for use with database query optimizers, for calculating the cost of implementing various plans and selecting the plan that best accommodates the particular desired performance goals. For example, the optimizer of the present invention can choose plans based on one of at least three performance goals: elapsed time to produce the first row of a query, elapsed time to produce the last row of a query and total resource usage based on a user supplied weighting of resource components.
When a user inputs a query into the computer, the SQL compiler operates on the query statement to produce an executable query plan. The compiling process typically includes a number of discrete steps which are handled by different components of the compiler. First, a “parser” component verifies the syntax of the original SQL statement. If the syntax is correct, it produces a syntactically correct query tree. A “binder” component then checks the semantic content of the tree. Then, a “normalizer” component transforms the semantically correct query tree into canonical form. The canonical tree represents, typically, a very large number of logically equivalent ways of processing the query posed by the SQL statement. The “optimizer” component then operates on the canonical tree to generate a set of the logically equivalent query trees. According to the present invention, the optimizer then estimates the cost associated with carrying out each plan and selects the plan that best achieves the desired goal.
For the sake of clarity of explanation, the cost estimating process can be viewed as involving two phases. In the first phase, the downward portion of a depth-first query tree traversal, the optimizer assigns a “cost” to each operator in the query tree. A “cost” consists of a set of “resource vectors” which in turn represent resources utilized by the operator to satisfy a particular performance goal. The resources include CPU instructions, number of disk seeks, kilobytes of I/O transfers, normal and persistent memory utilized, number of messages and kilobytes transferred by those messages, temporary disk storage space used, the number of times the operator will be executed, and the amount of time the operator spends (i) waiting for results from descendent operators in the query tree and (ii) waiting caused by the use of multiple resources that can not be used concurrently. This last value, which measures types of waiting time, is called “blocking/idle time”.
In the second phase of the process, known as cost roll-up, during the upward traversal of the query tree, the optimizer combines the costs associated with each node of the plan. The present invention includes a sophisticated vector algebra designed to accurately model the interaction between operators in the query tree when performing cost roll-up. Specifically, the roll-up formulas of the present invention use the sophisticated vector algebra to model the blocking behavior of some operators and also how operators in the query tree overlap. (A blocking operator must wait, i.e. “block”, until it receives all rows from all its children before it can return a single row to its parent.) The cost roll-up formulas and the underlying vector algebra on which they are based are described below.
The present invention also includes various formulas that allow elapsed time to be computed from the resource vectors, so that various plans can be evaluated, not only in terms of resource usage, but also in terms of elapsed time to produce a first or last row.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a computer system for implementing a preferred embodiment of the present invention.
FIG. 2 is a flow chart illustrating the steps of a preferred embodiment of the present invention.
FIG. 3 is an illustration of a generalized cost roll-up from grandchild to child to parent operator.
FIG. 4 is an illustration of the merging of child costs and the roll-up of the merged costs with the parent's costs.
FIG. 5 is an illustration of a unary roll-up involving a non-blocking parent.
FIG. 6 is an illustration of a unary roll-up involving a blocking parent.
FIG. 7 is an illustration of a transformation in which, in a Nested Loops Join (“NLJ”), the left child is treated as a unary child of the NLJ operator's right child.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
An execution plan for a particular database query consists of relational operators forming the nodes of a query tree. Each operator in such a query tree has an associated cost. The optimizer of the present invention combines the costs associated with each operator in the query tree to produce the total cost of the query plan. The optimizer then uses these costs to determine, amongst many logically equivalent plans, the plan which best accommodates the desired goal. The optimizer selects this plan, and it may then be executed. As previously mentioned, the cost of the various operators is combined according to a special vector calculus. The “cost” of a plan may be described in terms of the total physical resources of the computer engaged in implementing the query, or it may be measured in terms of the time required to produce the first or last row of data resulting from the query.
Overview of the Computer System
The present invention may be implemented on a computer system, such as that illustrated in FIG. 1. FIG. 1 is a block diagram of a data processing system 100. The data processing system 100 has at least one processor 102 and associated computer memory 104. The memory 104 includes at least the “parser” 106, “binder” 108, “normalizer” 110 and “optimizer” 112 software components of the SQL compiler for implementing the tasks described herein. The steps of the described embodiments of the present invention may be performed when instructions contained in memory 104 are executed by processor 102.
System 100 also includes a network connection 114, such as a connection to a LAN or a WAN. System 100 includes an input device(s) 118 such as a keyboard, touch screen, or the like. System 100 includes an output device(s) 120 such as a printer, display screen or the like. System 100 also includes a computer readable medium input device 122 and a computer readable medium 124. Computer readable medium 124 can be any appropriate medium that has instructions and data stored thereon. These instructions are loaded from computer readable medium 124 into computer memory 104. Instructions can also be loaded into memory in the form of a carrier wave over network connection 114. Thus, the instructions and data in computer memory 104 can be loaded into memory via an input device, such as a network, a LAN or a WAN, or can be loaded from a computer readable medium such as a disk, tape or other appropriate computer readable media. The instructions comprising each of the software programs described herein can also be downloaded in the form of a carrier wave over a network connection.
System 100 includes an operating system (not shown). A person of ordinary skill in the art will understand that the memory 104 and computer readable media 124 may also contain additional information, such as other application programs, operating systems, other data, etc., which are not shown in the figure for the sake of clarity. It also will be understood that data processing system 100 (or any other data processing system described herein) can include numerous elements not shown in FIG. 1, such as additional data, software, and/or information in memory, disk drives, keyboards, display devices, network connections, additional memory, additional CPUs, LANs, input/output lines, etc.
Overview of the Inventive Process
FIG. 2 illustrates a presently preferred process according to the subject invention. As shown in FIG. 2, a query in SQL format is first presented to the database system 210, for example via input device 118. The parser 106, binder 108, normalizer 110 and optimizer software 112 operate on the query statement, as set forth above and as illustrated at 212, 216 and 218, to generate a set of logically equivalent query trees. Each node of these query trees represents a relational operator. Based upon the type of operator represented, the optimizer 112 assigns a preliminary cost 220 to that operator. The preliminary cost consists of a set of vectors that each represent the amount of resources the operator will use with respect to a particular performance goal while executing on behalf of the query. The optimizer combines the cost objects of all operators in the query tree according to the “roll-up” formulas described below and thus produces a cost for the overall query plan (222). After the optimizer compares the costs of logically equivalent plans and chooses the least expensive plan with respect to a specified performance goal 224, it delivers the selected plan to the database execution program, which in turn implements 226 the selected plan.
Costing Primitives
This section describes the underlying costing primitives necessary for understanding subsequent roll-up formulas.
A Cost object consists of four SimpleCostVector objects which in turn each contain ten CostScalar objects as vector components. Each SimpleCostVector object measures an amount of resource usage. To determine the less expensive of two Cost objects, one must specify, not only the two Cost objects, but also a performance goal; two costs may compare differently depending on the specified performance goal.
A SimpleCostVector object consists of ten scalar components which respectively measure the following resource usage associated with a query operator:
1) CPU (cpu)
A count of CPU instructions. In the presently preferred embodiment, each unit value equates to 1000 machine instructions. Therefore, a value of 1 means 1000 machine instructions, a value of 2 means 2000 machine instructions, and so on.
2) Seeks (s)
Number of random I/Os, e.g., the number of head positionings of a disk drive required for non-sequential reads.
3) I/O Transfer (k)
Kilobytes of I/O transferred.
4) Normal Memory (nm)
Amount of memory (in Kilobytes) needed to hold working buffers, hash tables, etc., during the execution of an operator.
5) Persistent Memory (pm)
Amount of memory (in Kilobytes) which persists after operator completion.
6) Messages (m)
A count of the number of messages between processes running on different nodes in a cluster.
7) Message Transfer (mkb)
Number of Kilobytes transferred for all messages.
8) Temporary Disk Space (tds)
Amount of disk space (in Kilobytes) used for temporary files.
9) Repeat Count (rcount)
Also known as number of “probes.” This value represents the number of times the operator will be executed. Typically, this value will be 1 (one), but in some cases (e.g. the inner scan of a nested loops join) it can be greater than one.
10) Blocking/Idle Time (blk_idlt)
The amount of time spent waiting for child operators in the expression tree.
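By way of illustration only, such a vector might be represented as the following Python structure; the field names are informal abbreviations for the ten components listed above and are not the patent's identifiers:

from dataclasses import dataclass

@dataclass
class SimpleCostVector:
    cpu: float = 0.0                # CPU instruction count, in units of 1000 instructions
    seeks: float = 0.0              # number of random I/Os
    kb: float = 0.0                 # Kilobytes of I/O transferred
    normal_memory: float = 0.0      # KB of working memory used during execution
    persistent_memory: float = 0.0  # KB of memory that persists after operator completion
    messages: float = 0.0           # count of messages between processes on different nodes
    message_kb: float = 0.0         # KB transferred for all messages
    temp_disk_kb: float = 0.0       # KB of temporary disk space
    rcount: float = 1.0             # repeat count (number of probes)
    blk_idlt: float = 0.0           # blocking/idle time spent waiting on child operators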
Computing Elapsed Times for SimpleCostVector Objects
Given a resource vector, the formulas below may be used to compute the estimated elapsed time associated with a SimpleCostVector. In these formulas, FF(x) represents the weighting assigned to component x. FF(x) multiplied by x yields the elapsed time for component x. In the present embodiment of the invention, for each component x, FF(x) is determined empirically through a calibration process. The calibration process is performed by executing a query to determine its actual elapsed time and then adjusting the value of FF(x) until the cost model predicts the actual elapsed time for the execution of the query.
Page Fault Time
pft=max(0, nm−physical_memory)*FF(normal_memory_page_faults)
The elapsed time for page faults, pft, depends on available physical memory and the amount of normal memory used to execute the query. The term nm−physical_memory indicates the amount of normal memory which does not have physical memory underneath it and is thus subject to page faults. Of course, if available physical memory exceeds normal memory, then the formula should not result in a negative elapsed time. Therefore, max(0, nm−physical_memory) selects the maximum of zero or the value for nm−physical_memory to ensure that pft will always be non-negative.
CPU Elapsed Time
cput=cpu*FF(cpu)+pft
CPU elapsed time, cput, consists of both the elapsed time for the CPU component and elapsed time for page faults.
I/O Elapsed Time
sit(s,k)=FF(s)*s+FF(k)*k
I/O elapsed time, sit(s,k), consists of both the elapsed time for seeks and elapsed time for Kilobyte transfers.
Message Elapsed Time
mt(m,mkb)=FF(m)*m+FF(mkb)*mkb
Message elapsed time, mt(m,mkb), consists of the elapsed time for messages, taking into consideration the number of messages and the number of kilobytes transferred for all messages.
Elapsed Time for a Vector
Given a vector, the following formula yields the elapsed time for the execution of the associated operator:
elapsedTime = max(sit(s,k), mt(m,mkb), cput) + blk_idlt
This formula relies on the assumption that, for a given resource vector, I/O, messaging and CPU can overlap. Therefore, the formula takes the maximum of their associated elapsed times and adds that value to the blocking/idle time.
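For illustration only, the formulas above might be combined as follows; the weighting values FF(x) and the available physical memory shown here are placeholder assumptions, since in practice each FF(x) is determined by calibration:

# Hypothetical weighting factors FF(x); in practice these are calibrated empirically.
FF = {"cpu": 0.001, "seeks": 0.01, "kb": 0.0005,
      "msg": 0.002, "msg_kb": 0.0004, "page_fault": 0.02}

def page_fault_time(nm, physical_memory):
    # pft = max(0, nm - physical_memory) * FF(normal_memory_page_faults)
    return max(0.0, nm - physical_memory) * FF["page_fault"]

def cpu_time(cpu, nm, physical_memory):
    # cput = cpu * FF(cpu) + pft
    return cpu * FF["cpu"] + page_fault_time(nm, physical_memory)

def io_time(s, k):
    # sit(s, k) = FF(s)*s + FF(k)*k
    return FF["seeks"] * s + FF["kb"] * k

def message_time(m, mkb):
    # mt(m, mkb) = FF(m)*m + FF(mkb)*mkb
    return FF["msg"] * m + FF["msg_kb"] * mkb

def elapsed_time(cpu, s, k, m, mkb, nm, blk_idlt, physical_memory=100000):
    # I/O, messaging and CPU are assumed to overlap, so take the maximum of their
    # elapsed times and add the blocking/idle time.
    return max(io_time(s, k), message_time(m, mkb),
               cpu_time(cpu, nm, physical_memory)) + blk_idlt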
Total Resource Usage
Given a resource vector and a performance goal of optimizing for minimal resource usage, the total resource usage may be calculated as a weighted sum of each component in the resource vector. A database administrator may provide these weights through a user interface.
Cost Object
A cost object consists of the four SimpleCostVector objects set forth below. In this context, “current process,” as used in connection with the second, third and fourth objects listed below, refers to the operator whose cost is being considered.
1) Total Cost (TC)
This vector represents the resource usage necessary for the associated operator in the expression tree to execute in its entirety. Query parallelism typically has no effect on TC. For repeat counts greater than one, this vector represents cumulative usage for all probes.
2) Current Process First Row (CPFR)
This vector represents resource usage necessary for the associated operator in the expression tree to produce its first row after any blocking activity has occurred. For repeat counts greater than one, this vector represents the resource usage necessary to produce the first row in an average probe, not the first row of the first probe.
3) Current Process Last Row (CPLR)
This vector represents resource usage necessary for the associated operator in the expression tree to produce its last row after any blocking activity has occurred. Thus, for repeat counts greater than one, this vector represents cumulative usage for all probes.
4) Current Process Total Blocking (CPTB)
If an operator must wait for one or more of its descendants to fully complete before it can produce its first row, it is called a “blocking” operator. Examples of blocking operators include sort and hash-group-by operators. Typically, a blocking operator consists of a blocking phase when it builds some data structure (e.g. a sorted temporary file in the case of sort, a hash table in the case of hash-group-by, etc.) and an output phase in which it uses the data structure from the blocking phase to produce its output rows. CPTB represents resources used by a blocking operator and its descendants during the blocking operator's blocking phase. CPTB will equal the zero vector for non-blocking operators having no blocking descendants. For repeat counts greater than one, CPTB represents the average usage per probe.
Computing Elapsed Times for Cost Objects
Given a Cost object and a performance goal specifying either first row or last row optimization, a presently preferred embodiment of the present invention computes the elapsed time for the Cost object in question, as follows:
Elapsed Time for a First Row Performance Goal
The following formula may be used to calculate the elapsed time for a plan to produce the first row:
elapsedTime=ET(CPFR)+ET(CPTB)
In this formula, ET(CPTB) and ET(CPFR) represent the elapsed times of the CPTB vector and the CPFR vector respectively for the Cost object in question. The formula takes into account the elapsed time to produce the first row plus the elapsed time of any preceding blocking activity.
Elapsed Time for a Last Row Performance Goal
The following formula may be used to calculate the elapsed time for a plan to produce the last row:
elapsedTime=ET(CPLR)+ET(CPTB)*RC(CPTB)
In this formula, ET(CPTB) and ET(CPLR) represent the elapsed times of the CPTB vector and the CPLR vector respectively for the Cost object in question. RC(CPTB) represents the repeat count for the CPTB vector. The term ET(CPTB)*RC(CPTB) represents the cumulative blocking time of all probes. This formula takes into account the elapsed time to produce the last row of the last probe, plus the elapsed time of any blocking activity over all probes.
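A minimal sketch of these two elapsed time computations, assuming ET(·) and RC(·) have already been evaluated to numbers (the function names are illustrative, not the patent's):

def first_row_elapsed_time(et_cpfr, et_cptb):
    # elapsedTime = ET(CPFR) + ET(CPTB)
    return et_cpfr + et_cptb

def last_row_elapsed_time(et_cplr, et_cptb, rc_cptb):
    # elapsedTime = ET(CPLR) + ET(CPTB) * RC(CPTB)
    return et_cplr + et_cptb * rc_cptb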
Minimal Resource Performance Goal
When optimizing for a performance goal of minimal resource usage, the optimizer does not calculate an elapsed time, but instead calculates a weighted sum of each component in the TC vector. In a presently preferred embodiment, a database administrator may assign the relative weights to each component of the TC vector. If the administrator wants to limit the usage of a particular resource, the administrator can assign a greater relative weight to the component of the TC vector that reflects the usage of that resource.
Algebra for SimpleCostVector Objects
In order to roll up Cost objects associated with physical operators in a query tree, the following algebra is used for SimpleCostVector objects.
Zero Vectors
A vector with zeros in all components (except repeat count) is called a zero vector and is represented as “Z.” A vector having a non-zero Blocking/Idle time component of x and zeros in all other components (except repeat count) is called a blocking/idle vector. It is represented as I(x).
Vector Arithmetic
SimpleCostVector objects can be added and subtracted. SimpleCostVectors can be multiplied and divided by scalars. These operations are denoted as +V, −V, *V and /V respectively. Vector addition (+V) involves adding the corresponding components (except repeat count) of two vectors, as in traditional linear algebra. The repeat count component of the result vector retains the repeat count of the first vector specified. (When adding average resource usage vectors, i.e. CPTB, the repeat counts for both vectors should be the same.) Similarly, vector subtraction (−V) involves subtracting the corresponding components (except repeat count) of two vectors. Since the present invention does not allow negative resource usage, any resulting negative component is converted to zero. The repeat count component of the result vector retains the repeat count of the first vector specified.
Multiplying a SimpleCostVector object by a scalar involves multiplying each component, except Normal Memory, Persistent Memory and repeat count, by the specified scalar. Normal Memory and Persistent Memory remain unchanged because memory is reused. Thus, the number of times an operator executes does not affect its memory consumption. Similarly, dividing a SimpleCostVector object by a scalar involves dividing each component (except Normal Memory, Persistent Memory and repeat count) by the specified scalar.
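As a minimal sketch only (the component abbreviations follow the list above; the names are assumptions, not the patent's identifiers), the traditional vector operations might look like this:

# A vector is represented as a dict keyed by component abbreviation.
COMPONENTS = ["cpu", "s", "k", "nm", "pm", "m", "mkb", "tds", "rcount", "blk_idlt"]
MEMORY = {"nm", "pm"}   # unchanged by scalar multiplication/division (memory is reused)
SKIP = {"rcount"}       # repeat count is always taken from the first operand

def vec_add(v1, v2):
    out = dict(v1)
    for c in COMPONENTS:
        if c not in SKIP:
            out[c] = v1[c] + v2[c]
    return out

def vec_sub(v1, v2):
    out = dict(v1)
    for c in COMPONENTS:
        if c not in SKIP:
            out[c] = max(0.0, v1[c] - v2[c])   # negative resource usage is not allowed
    return out

def vec_scale(v, scalar):
    # Covers both *V and /V (pass 1/scalar for division).
    out = dict(v)
    for c in COMPONENTS:
        if c not in SKIP and c not in MEMORY:
            out[c] = v[c] * scalar
    return out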
The present invention involves two additional forms of vector addition: overlapped vector addition (denoted +O), and blocking vector addition (denoted +B). Overlapped vector addition takes into account that in some cases I/O and messaging partially overlap, i.e., they can occur at the same time. Thus, the CPU, Normal Memory, Persistent Memory and Temporary Disk Space components get added together according to the vector addition described above, but the I/O and message related components use the following formulas for overlapped addition:
s_result = MAX(s_v1, s_v2) + FF(io)*MIN(s_v1, s_v2)
k_result = MAX(k_v1, k_v2) + FF(io)*MIN(k_v1, k_v2)
m_result = MAX(m_v1, m_v2) + FF(msg)*MIN(m_v1, m_v2)
mkb_result = MAX(mkb_v1, mkb_v2) + FF(msg)*MIN(mkb_v1, mkb_v2)
In the formulas above, the “result” subscript refers to components in the resulting vector, and the subscripts “v1” and “v2” refer to components of the two vectors being added with overlapped addition.
The same basic formula applies to each component, so a single explanation will suffice. If the two vectors pertain to completely overlapped activity, then the larger of the two components will become the resulting component, hence the term MAX(s_v1, s_v2). Typically, however, the overlap will not be complete; the portion of activity that does not overlap is proportional to the size of the smaller component, hence the term FF(x)*MIN(s_v1, s_v2). Thus, FF(x)=0 results in complete overlap while FF(x)=1 results in no overlap at all.
Overlapped addition for the Blocking/Idle time component requires a more sophisticated formula; two Blocking/Idle time components cannot simply be added together. Consider two vectors v1 and v2 each potentially having a non-zero Blocking/Idle time component. The time the operator associated with v1 is active effectively reduces the Blocking/Idle time of v2 and vice versa. Only after Blocking/Idle time components of both v1 and v2 have been reduced by the elapsed time of the other vector (but not below zero) can we add the two components together. To determine the elapsed time of a vector, we use the formula presented above. Here is the formula for adding Blocking/Idle time components:
blk_idlt_result = MAX(0, blk_idlt_v1 − (ET(v2) − blk_idlt_v2)) + MAX(0, blk_idlt_v2 − (ET(v1) − blk_idlt_v1))
where ET(v) represents the elapsed time of vector v.
The term (ET(v2) − blk_idlt_v2) is the amount of time the operator associated with vector v2 was active. The term blk_idlt_v1 − (ET(v2) − blk_idlt_v2) represents the amount of time the operator associated with vector v1 was idle less the amount of time the operator associated with vector v2 was active. The MAX functions in this case merely ensure a non-negative result.
Consider the following simplified example. Assume v1 has an elapsed time of 20, of which 8 is idle time. Assume also that vector v2 has an elapsed time of 15, of which 10 is idle time. The resulting idle time of v1 +O v2 is
MAX(0, 8 − (15 − 10)) + MAX(0, 10 − (20 − 8)) = MAX(0, 3) + MAX(0, −2) = 3 + 0 = 3.
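For illustration only, the following sketch shows overlapped addition of a single resource component and of the Blocking/Idle time component, reproducing the example above (the function names and the FF value are assumptions):

def overlapped_component(c1, c2, ff):
    # result = MAX(c1, c2) + FF*MIN(c1, c2); FF = 0 means full overlap, FF = 1 means none.
    return max(c1, c2) + ff * min(c1, c2)

def overlapped_blk_idlt(et1, blk1, et2, blk2):
    # Each vector's idle time is reduced by the time the other vector is active,
    # then the two (non-negative) remainders are added.
    active1 = et1 - blk1
    active2 = et2 - blk2
    return max(0, blk1 - active2) + max(0, blk2 - active1)

# Worked example from the text: v1 has elapsed time 20 with 8 idle,
# v2 has elapsed time 15 with 10 idle.
print(overlapped_blk_idlt(20, 8, 15, 10))   # -> 3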
Repeated overlapped addition involves adding a vector to itself a repeated number of times using overlapped addition. We denote this with the symbol *O. For example, given vector v and scalar c, the term c *O v represents vector v added to itself c times using overlapped addition. Normal memory, persistent memory and repeat count remain unchanged.
Blocking Vector Addition
Consider two vectors v1 and v2 in which v2 pertains to a blocking operator. (A blocking operator, e.g. a sort, cannot return a row to its parent until it has received all rows from one or more of its children. A hash join cannot return a row to its parent until it has received all rows from its right child, so, strictly speaking, only its first phase is blocking.) Since v2 is blocking, the following formula must hold:
ET(v1) + ET(v2) = ET(v1 + v2).
However, neither traditional vector addition nor overlapped vector addition has this property. Therefore, the present invention defines blocking vector addition, +B, which has the appropriate characteristic:
v1 +B v2 = v1 +V v2 +V I(ET(v1) + ET(v2) − ET(v1 +V v2)).
Recall that I(x) represents a vector having a Blocking/Idle time component of x and zeros in all other components (except repeat count).
Table 1 below illustrates blocking vector addition. To simplify the presentation, assume that each vector has only 3 components—CPU, I/O and Blocking/Idle time—and that elapsed time is MAX(CPU, I/O)+Blocking/Idle time. In other words, for a given operator, its CPU and I/O overlap.
TABLE 1

Vector                                  CPU   I/O   Blocking/Idle   Elapsed Time
v1                                      10    5     1               11
v2 (blocking)                           2     15    2               17
v1 +V v2                                12    20    3               23
I(ET(v1) + ET(v2) − ET(v1 +V v2))       0     0     5               5
v1 +B v2                                12    20    8               28
Vector v1 has a CPU of 10 and an I/O of 5 and a Blocking/Idle time of 1, so its elapsed time is 11 (Blocking/Idle time plus the maximum of CPU and I/O). Vector v2 has a CPU of 2 and an I/O of 15 and a Blocking/Idle time of 2, so its elapsed time is 17 (Blocking/Idle time plus the maximum of CPU and I/O). Adding v1 and v2 using vector addition involves adding each respective component resulting in a CPU of 12, an I/O of 20 and a Blocking/Idle time of 3. Thus, the elapsed time of this resulting vector is 23 (Blocking/Idle time plus the maximum of CPU and I/O).
Notice that the elapsed times for v1 and v2 do not add up to the elapsed time of v1 +V v2. This comes about because, with normal vector addition, we implicitly overlap v1's I/O with v2's CPU and vice versa, but this is inappropriate since v2 represents resources used by a blocking operator.
Blocking vector addition takes this into account by disallowing the overlap. First a vector having a Blocking/Idle time of ET(v1) + ET(v2) − ET(v1 +V v2) = 11 + 17 − 23 = 5 is created. All other components (except repeat count) have a value of zero.
Finally, we calculate the blocking sum, v1 +B v2 = v1 +V v2 +V I(ET(v1) + ET(v2) − ET(v1 +V v2)), where ET(v1) + ET(v2) = ET(v1 +B v2) = 28.
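For illustration only, the following sketch reproduces the Table 1 numbers under the simplified three-component model (the names are assumptions, not the patent's):

def et(v):
    cpu, io, blk = v
    return max(cpu, io) + blk           # CPU and I/O of one operator overlap

def add_v(v1, v2):
    return tuple(a + b for a, b in zip(v1, v2))

def idle(x):
    return (0, 0, x)                    # I(x): only the blocking/idle component is non-zero

def add_b(v1, v2):
    # v1 +B v2 = v1 +V v2 +V I(ET(v1) + ET(v2) - ET(v1 +V v2))
    plain = add_v(v1, v2)
    return add_v(plain, idle(et(v1) + et(v2) - et(plain)))

v1 = (10, 5, 1)                         # elapsed time 11
v2 = (2, 15, 2)                         # blocking; elapsed time 17
result = add_b(v1, v2)
print(result, et(result))               # -> (12, 20, 8) 28, matching Table 1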
Vector Comparison Functions
The function etMIN takes two vectors and returns the one having minimum elapsed time; etMAX takes two vectors and returns the one having maximum elapsed time. The function vecMIN takes two vectors and produces a resulting vector that represents minimum resource usage from the two input vectors. Similarly, vecMAX takes two vectors and produces a resulting vector that represents maximum resource usage from the two input vectors.
Table 2 below explains how vecMIN and vecMAX produce each component in the resulting vector, respectively.
TABLE 2

Component              vecMIN (vecMAX) result
CPU                    From input vector with minimum (maximum) CPU elapsed time
Seeks                  From input vector with minimum (maximum) I/O elapsed time
I/O Transfer           From input vector with minimum (maximum) I/O elapsed time
Messages               From input vector with minimum (maximum) message elapsed time
Message Transfer       From input vector with minimum (maximum) message elapsed time
Repeat Count           From first specified input vector in all cases
All other components   From input vector with minimum (maximum) component value
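A minimal sketch of vecMIN following Table 2, assuming each input vector is a dictionary that also carries precomputed I/O, message and CPU elapsed times; the key names are illustrative assumptions (vecMAX is symmetric, choosing the maximum instead):

def vec_min(v1, v2):
    pick_io  = v1 if v1["io_et"]  <= v2["io_et"]  else v2   # governs seeks, I/O transfer
    pick_msg = v1 if v1["msg_et"] <= v2["msg_et"] else v2   # governs messages, message transfer
    pick_cpu = v1 if v1["cpu_et"] <= v2["cpu_et"] else v2   # governs CPU
    out = {
        "cpu": pick_cpu["cpu"],
        "s": pick_io["s"], "k": pick_io["k"],
        "m": pick_msg["m"], "mkb": pick_msg["mkb"],
        "rcount": v1["rcount"],          # repeat count always from the first vector
    }
    # All other components come from the vector with the minimum component value.
    for c in ("nm", "pm", "tds", "blk_idlt"):
        out[c] = min(v1[c], v2[c])
    return out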
Vector Normalization
The repeat count (also called number of probes) indicates the number of times an operator associated with a particular vector will execute. Typically an operator executes only once, but in some cases (e.g. the inner scan of a nested loops join) an operator can execute multiple times. Given a vector v, the abbreviation RC(v) represents its repeat count.
Recall that for a Cost object, the vector CPTB represents average resource usage per probe rather than cumulative resource usage. Whenever such “average usage per probe” vectors are added together, both vectors should pertain to the same number of probes (i.e. have the same repeat count). “Vector normalization” refers to the conversion of a vector from one repeat count to another. Given a vector v and a target repeat count rc, the abbreviation N(v,rc) is used to represent normalizing v to a repeat count rc. When normalizing a vector v to a repeat count rc, its Normal Memory and Persistent Memory components remain unchanged since an operator's memory consumption does not change when it executes repeatedly. The repeat count of the normalized vector becomes rc by definition of normalization. All other components are calculated with the following formula:
C_N(v,rc) = C_v * RC(v) / rc
where C_v represents a component in vector v and C_N(v,rc) represents the corresponding component in the normalized vector.
The term C_v * RC(v) represents the cumulative amount of component C_v over all probes. Dividing the result by rc makes it an average amount over rc number of probes. Table 3 below summarizes the vector algebra abbreviations used throughout this patent:
TABLE 3

Vector Abbreviation   Explanation
+V, −V, *V, /V        traditional vector operations
+O                    overlapped vector addition
*O                    repeated overlapped vector addition
+B                    blocking vector addition
N(v,rc)               vector v normalized to repeat count of rc
Z                     zero vector
I(x)                  vector with idle component x and zeros in all other components
RC(v)                 repeat count of vector v
ET(v)                 elapsed time of vector v
vecMIN(v1,v2)         vector of minimum resource usage from two input vectors v1 and v2
vecMAX(v1,v2)         vector of maximum resource usage from two input vectors v1 and v2
etMIN(v1,v2)          vector with minimum elapsed time from vectors v1 and v2
etMAX(v1,v2)          vector with maximum elapsed time from vectors v1 and v2
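A minimal sketch of N(v, rc), the normalization operation summarized in Table 3, with memory components and repeat count handled as described above (names are illustrative, not the patent's):

def normalize(v, rc):
    out = {}
    for c, value in v.items():
        if c in ("nm", "pm"):
            out[c] = value                       # memory is reused across probes
        elif c == "rcount":
            out[c] = rc                          # by definition of normalization
        else:
            out[c] = value * v["rcount"] / rc    # cumulative amount averaged over rc probes
    return out

# Example: a per-probe average over 4 probes re-expressed as an average over 2 probes.
v = {"cpu": 10, "s": 2, "k": 8, "nm": 100, "pm": 0, "m": 1, "mkb": 4,
     "tds": 0, "rcount": 4, "blk_idlt": 3}
print(normalize(v, 2)["cpu"])                    # -> 20.0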
Rolling Up Cost Objects
FIG. 3 illustrates a generalized cost roll-up procedure 300. Each operator 310, 312, 314, in the query tree keeps cost information (i.e. a Cost object) about itself independent of its children, called its preliminaryCost 316. Each operator also keeps track of the cost information it reported up to its parent, called rollUpCost 318. For leaf nodes in a query tree, preliminaryCost=rollUpCost. From the perspective of a given operator in a query tree, rolling up a cost vector involves combining a rollUpCost received from the operator's child with the operator's own preliminaryCost to produce its own rollUpCost which the operator will in turn report up to its parent. The parent operator will in turn combine its preliminaryCost with the rollUpCost it receives to produce a rollUpCost for its parent. The roll up proceeds recursively up the query tree in this manner until it reaches the root node in the query tree. Thus, a given operator's rollUpCost reflects the cost of the entire subtree rooted at that operator.
FIG. 3 shows a parent operator 310 having only a single child 312. The child 312, in turn, has only a single grandchild 314. When an operator has multiple children (e.g., join or union), the present process first combines all child roll-up costs into a single merged roll-up cost and then uses that merged cost with the unary roll-up scheme depicted in FIG. 3.
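For orientation only, the recursive roll-up just described might be sketched as follows; the class, function and parameter names are illustrative, and the operator-specific combination rules are supplied as functions:

class PlanNode:
    def __init__(self, preliminary_cost, children=()):
        self.preliminary_cost = preliminary_cost
        self.children = list(children)

def roll_up(node, merge_children, combine_with_parent):
    # Leaf nodes: preliminaryCost == rollUpCost.
    if not node.children:
        return node.preliminary_cost
    child_costs = [roll_up(c, merge_children, combine_with_parent)
                   for c in node.children]
    merged = child_costs[0]
    for cost in child_costs[1:]:
        merged = merge_children(merged, cost)    # operator-specific merge (union, join, ...)
    # Unary roll-up of the merged child cost into this operator's cost.
    return combine_with_parent(node.preliminary_cost, merged)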
FIG. 4 illustrates the merging of child costs 410, 412 and rolling up the merged cost 414 with the parent's preliminary cost 416. As explained below, the particular details of child cost merging and cost roll-up depend on the characteristics of the operators involved. Table 4 below lists the cost object abbreviations used throughout this document:
TABLE 4

Cost Abbreviation   Explanation
PP                  parent preliminary cost
PR                  parent roll-up cost
CR                  child roll-up cost
L                   left child roll-up cost
R                   right child roll-up cost
M                   merged children roll-up cost
N                   non-blocking cost
C                   non-blocking cost converted to blocking
Non-blocking Unary Roll-up
FIG. 5 depicts a unary roll-up involving a non-blocking parent having a preliminaryCost 510 and receiving a roll-up cost 512 from its descendent operators. The preliminaryCost 510 and the child rollUpCost 512 are combined to yield a parent rollUpCost 514. The roll-up formulas for this case are, as follows:
TC_PR = TC_PP +V TC_CR
CPFR_PR = CPFR_PP +B CPFR_CR
CPLR_PR = (CPLR_PP +O (CPLR_CR −V CPFR_CR)) +V CPFR_CR
CPTB_PR = N(CPTB_CR, RC(CPLR_PP))
Since TC represents an accumulation of all resources, we use vector addition. However, the formula for CPFR involves adding the parent's preliminary cost with the child's roll-up cost. Blocking addition is used because a parent can not produce its first row until it receives at least one row from its child.
The formula for CPLR is more involved. The term (CPLR_CR −V CPFR_CR) represents the child's cumulative cost except for its first row. The work associated with this term overlaps with the parent's activity, so it is added to the parent's preliminary cost using overlapped addition, hence the term (CPLR_PP +O (CPLR_CR −V CPFR_CR)). Finally CPFR_CR is added using vector addition since the activity for the child to produce its first row does not overlap with the parent.
Since the parent does not block, the parent simply reports the blocking activity reported up from its child. Thus, we normalize the blocking vector (CPTB) to the parent's repeat count but otherwise leave CPTB unchanged.
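For illustration only, the following sketch applies these four non-blocking unary roll-up formulas under the simplified model of Table 1 (CPU, I/O and blocking/idle time, plus a repeat count), with complete CPU/I/O overlap assumed for overlapped addition; the function and component names are assumptions rather than the patent's identifiers:

def et(v):
    return max(v["cpu"], v["io"]) + v["blk"]

def add_v(v1, v2):
    return {"cpu": v1["cpu"] + v2["cpu"], "io": v1["io"] + v2["io"],
            "blk": v1["blk"] + v2["blk"], "rcount": v1["rcount"]}

def sub_v(v1, v2):
    return {"cpu": max(0, v1["cpu"] - v2["cpu"]), "io": max(0, v1["io"] - v2["io"]),
            "blk": max(0, v1["blk"] - v2["blk"]), "rcount": v1["rcount"]}

def add_o(v1, v2):
    # Complete overlap assumed here (FF = 0); blocking/idle uses the reduction rule.
    blk = (max(0, v1["blk"] - (et(v2) - v2["blk"])) +
           max(0, v2["blk"] - (et(v1) - v1["blk"])))
    return {"cpu": max(v1["cpu"], v2["cpu"]), "io": max(v1["io"], v2["io"]),
            "blk": blk, "rcount": v1["rcount"]}

def add_b(v1, v2):
    # v1 +B v2 = v1 +V v2 +V I(ET(v1) + ET(v2) - ET(v1 +V v2))
    plain = add_v(v1, v2)
    plain["blk"] += max(0, et(v1) + et(v2) - et(plain))
    return plain

def normalize(v, rc):
    # No memory components exist in this simplified model, so all components scale.
    return {"cpu": v["cpu"] * v["rcount"] / rc, "io": v["io"] * v["rcount"] / rc,
            "blk": v["blk"] * v["rcount"] / rc, "rcount": rc}

def non_blocking_unary_roll_up(pp, cr):
    # pp: parent preliminary Cost, cr: child roll-up Cost; each is a dict of
    # the four vectors TC, CPFR, CPLR and CPTB.
    return {
        "TC":   add_v(pp["TC"], cr["TC"]),
        "CPFR": add_b(pp["CPFR"], cr["CPFR"]),
        "CPLR": add_v(add_o(pp["CPLR"], sub_v(cr["CPLR"], cr["CPFR"])), cr["CPFR"]),
        "CPTB": normalize(cr["CPTB"], pp["CPLR"]["rcount"]),
    }

# Example: a leaf child rolled up into a non-blocking parent, each with one probe.
mk = lambda cpu, io, blk: {"cpu": cpu, "io": io, "blk": blk, "rcount": 1}
child = {"TC": mk(5, 10, 0), "CPFR": mk(1, 2, 0), "CPLR": mk(5, 10, 0), "CPTB": mk(0, 0, 0)}
parent = {"TC": mk(4, 0, 0), "CPFR": mk(1, 0, 0), "CPLR": mk(4, 0, 0), "CPTB": mk(0, 0, 0)}
print(non_blocking_unary_roll_up(parent, child)["TC"])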
Blocking Unary Roll-up
FIG. 6 depicts a unary roll-up involving a blocking parent. As illustrated in this figure, the child rollUpCost 610 is combined with the blocking parent's preliminaryCost 612 to yield the parent rollUpCost. The roll-up formulas for this case are:
TC_PR = TC_PP +V TC_CR
CPFR_PR = CPFR_PP
CPLR_PR = CPLR_PP
CPTB_PR = (CPTB_PP +O (CPLR_CR /V RC(CPTB_PP))) +B N(CPTB_CR, RC(CPTB_PP))
The formula for TC_PR uses vector addition to accumulate all resource usage.
For a blocking operator, CPFR_PP represents the cost of generating the first row once the blocking activity has completed. (For example, in the case of a sort, CPFR_PP represents the cost to read the first row from the sorted result set and send it to the parent.) CPLR_PP for a blocking operator represents the cost of generating the last row once the blocking activity has completed. (For example, in the case of a sort, CPLR_PP represents the cost to read all rows from the sorted result set and send them to the parent.) CPTB_PP represents the cost of the actual blocking activity (e.g. in the case of a sort, the cost of producing the sorted result set).
With a blocking parent operator, the last row and first row costs rolled up from the child reflect blocking activity since they took place during the parent operator's blocking phase, so they are combined with the parent's blocking cost. Thus, for its first and last row costs, the parent rolls up only its preliminary costs: CPFR_PP and CPLR_PP.
When calculating the cost roll-up for total blocking, the child's blocking and last row costs are added to the parent's blocking cost, since all of the child's activity took place during the parent's blocking phase. The term (CPLR_CR /V RC(CPTB_PP)) converts the child's rolled-up last row cost to an average cost per probe to make the addition with the parent's blocking vector meaningful. The term (CPTB_PP +O (CPLR_CR /V RC(CPTB_PP))) involves adding the parent's blocking activity with the child's non-blocking activity, hence the overlapped addition. The term N(CPTB_CR, RC(CPTB_PP)) represents the child's blocking activity (normalized to the parent's repeat count), so we add it in using blocking addition.
Union Operator
Since a union operator has two children, the roll-up costs for each child are merged into a single roll-up cost before using the unary roll-up formulas described earlier.
Two Non-blocking Legs
The formulas below indicate how a union operator's left and right children can be merged when neither leg has any blocking operators:
TC_M = TC_L +V TC_R
CPFR_M = etMIN(CPFR_L, CPFR_R) +O vecMIN(CPFR_L, CPFR_R)
CPLR_M = CPLR_L +O CPLR_R
The formula for TC_M, as for all total cost computations, involves normal vector addition.
To determine CPFR_M, we need to recognize that the first row produced by either of the two legs becomes the first row produced by the union operator itself, hence the term etMIN(CPFR_L, CPFR_R). Unfortunately, the two legs do not act in isolation; they interfere with one another at least to some degree. The term vecMIN(CPFR_L, CPFR_R) measures this interference. Consider any component of a vector, say CPU. If the left child uses 100 units of CPU to produce its first row and the right child uses 75 units of CPU to produce its first row, then CPU interference only occurs during the first 75 units.
Overlapped addition is used when computing CPFR_M because the left and right child of a union act independently (logically speaking), so when one has issued an I/O, the other can utilize the CPU or send a message.
To determine CPLR_M, we need to recognize that both its children must complete before a union operator can produce its last row. Since the left and right child of a union act independently, we simply use an overlapped add.
Two Blocking Legs
The formulas below indicate how to merge a union operator's left and right children when both legs have blocking operators:
TC_M = TC_L +V TC_R
CPFR_M = etMIN(CPFR_L, CPFR_R) +O vecMIN(CPFR_L, CPFR_R)
if (ET(CPTB_L) < ET(N(CPTB_R, RC(CPTB_L))))
CPLR_M = CPLR_L +O (CPLR_R +V (RC(CPTB_R) *O (CPTB_R −V vecMIN(CPTB_R, N(CPTB_L, RC(CPTB_R))))))
else
CPLR_M = CPLR_R +O (CPLR_L +V (RC(CPTB_L) *O (CPTB_L −V vecMIN(CPTB_L, N(CPTB_R, RC(CPTB_L))))))
end
CPTB_M = etMIN(CPTB_L, N(CPTB_R, RC(CPTB_L))) +O vecMIN(CPTB_L, N(CPTB_R, RC(CPTB_L)))
The formula for TC_M, as a total cost computation, involves regular vector addition.
The formula for CPFR_M remains unchanged from the previous case of two non-blocking legs.
The formula for CPTB_M resembles that of CPFR_M with the added complication of normalizing the right leg's blocking vectors to the repeat count of the left leg. (Remember, blocking vectors represent average usage per probe, so they need to be normalized to the appropriate repeat count before proceeding.) Since a simple union can begin processing rows as soon as one of its blocking legs has begun sending rows, we make the merged blocking vector the child blocking vector with minimum elapsed time, adjusted upwards to account for interference between the two blocking legs. (An alternative formula, CPTB_L +O CPTB_R, involves an implicit assumption that no activity can begin until both blocking children have completed their work. Since, typically, a union operator does not wait until both blocking children have completed their work, this alternative formula is usually too conservative and provides a worst case cost.)
The formula for CPLR_M is more involved. First, ET(CPTB_L) and ET(N(CPTB_R, RC(CPTB_L))) are compared to determine which leg has the slower total blocking vector. (Again, the right leg's blocking vector is normalized to the left leg's repeat count since blocking vectors represent average usage per probe.) Assuming a faster left leg, the term CPTB_R −V vecMIN(CPTB_R, N(CPTB_L, RC(CPTB_R))) is a good estimate of the amount of CPTB_R not reflected in CPTB_M. If we encounter a faster right leg, then the term CPTB_L −V vecMIN(CPTB_L, N(CPTB_R, RC(CPTB_L))) represents the amount of CPTB_L not reflected in CPTB_M. The appropriate result vector is then added to CPLR_M and thus, in the final costing, we avoid losing the resource usage that the result vector represents.
Finally, recall that CPTB reflects an average usage per probe, whereas CPLR reflects cumulative usage. The term RC(CPTB_R) *O (CPTB_R −V vecMIN(CPTB_R, N(CPTB_L, RC(CPTB_R)))) (or RC(CPTB_L) *O (CPTB_L −V vecMIN(CPTB_L, N(CPTB_R, RC(CPTB_L)))) if we encounter a slower left leg) converts average usage to cumulative usage. Overlapped multiplication is used since successive probes can overlap with each other.
One Blocking Leg and One Non-blocking Leg
In the case of a union with exactly one blocking leg, the non-blocking child's cost is converted to look like a set of blocking vectors and then the two children are combined using the formulas for both legs blocking described earlier. The transformation of a non-blocking leg into a blocking leg uses the following formulas:
TC_C = TC_N
CPFR_C = Z
CPLR_C = CPLR_N −V CPFR_N
CPTB_C = CPFR_N
These formulas simply move CPFR into the blocking vector. Since last row elapsed time combines CPTB and CPLR, CPFR is subtracted from CPLR. This reflects the fact that waiting for a first row from a child operator is a form of blocking.
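A minimal sketch of this conversion, assuming a Cost object represented as a dictionary of its four vectors and taking the zero vector and the clamped subtraction −V as inputs (all names are illustrative):

def to_blocking(cost, zero_vector, sub_v):
    # Converts a non-blocking leg's cost (N) into blocking form (C).
    return {
        "TC":   cost["TC"],                            # TC_C   = TC_N
        "CPFR": zero_vector,                           # CPFR_C = Z
        "CPLR": sub_v(cost["CPLR"], cost["CPFR"]),     # CPLR_C = CPLR_N -V CPFR_N
        "CPTB": cost["CPFR"],                          # CPTB_C = CPFR_N
    }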
Merge Join Operator
Since a merge join (MJ) operator has two children, there is a need to merge the roll-up costs for each child into a single roll-up cost before using the unary roll-up formulas described earlier.
Two Non-blocking Legs
The formulas below indicate how to combine an MJ operator's left and right children when neither leg has any blocking operators:
TC_M = TC_L +V TC_R
CPFR_M = CPFR_L +O CPFR_R
CPLR_M = CPLR_L +O CPLR_R
The formula for TC_M, as a total cost computation, involves regular vector addition.
Since both its legs must produce their respective first rows before an MJ operator can produce its first row, CPFR_M is simply the overlapped add of CPFR for both legs.
The computation for CPLR_M is the same as for the Union operator.
Two Blocking Legs
The formulas below indicate how to combine an MJ operator's left and right child when both children have blocking operators:
TC_M = TC_L +V TC_R
CPFR_M = CPFR_L +O CPFR_R
if (ET(CPTB_L) < ET(N(CPTB_R, RC(CPTB_L))))
CPLR_M = CPLR_L +O (CPLR_R +V (RC(CPTB_R) *O (CPTB_R −V vecMIN(CPTB_L, CPTB_R))))
else
CPLR_M = CPLR_R +O (CPLR_L +V (RC(CPTB_L) *O (CPTB_L −V vecMIN(CPTB_L, CPTB_R))))
end
CPTB_M = etMIN(CPTB_L, N(CPTB_R, RC(CPTB_L))) +O vecMIN(CPTB_L, N(CPTB_R, RC(CPTB_L)))
The formula for TC_M, as a total cost computation, involves regular vector addition.
Since an MJ requires rows from both its children before it can produce a row itself, CPFR_M is simply the overlapped sum of CPFR for each of its children. We use overlapped addition on the assumption that when one child has issued an I/O the other can send messages or utilize CPU.
Both a union operator and an MJ operator must receive their slower child's last row before they can produce their own last row. Thus, the union operator's formulas for CPLR_M and CPTB_M also apply to an MJ operator.
One Blocking Leg and One Non-blocking Leg
When exactly one of an MJ operator's children has a blocking leg, the MJ operator combines the two legs using the same strategy as the union operator: i.e. converting the non-blocking leg to a blocking leg (using the exact same transformation rules as the union operator) and then proceeding as if it had two blocking legs using the formulas described above.
Nested Loops Join Operator
As illustrated in FIG. 7, since a nested loops join (NLJ) operator 710 has two children 712, 714, the roll-up costs for each child 712, 714 are merged into a single roll-up cost before using the unary roll-up formulas described earlier.
All output rows from an NLJ operator's left child 712 eventually get sent to the right child 714, so the NLJ operator's left child 712 can be viewed as providing rows directly to the right child 714. Thus, for purposes of merging the children, one can treat the left child 712 as a unary child of the NLJ operator's right child 714. By making this transformation, one can use the unary roll-up formulas to merge the two legs.
Both Legs Non-blocking
The formulas below indicate how to merge an NLJ operator's left and right child when neither leg has any blocking operators:
TC_M = TC_L +V TC_R
CPFR_M = CPFR_L +B CPFR_R
CPLR_M = (CPLR_R +O (CPLR_L −V CPFR_L)) +V CPFR_L
The formulas for TC_M, CPFR_M and CPLR_M come directly from the unary non-blocking roll-up formulas. By assumption, neither leg blocks, so CPTB_M is a zero vector.
The formula for CPFR_M relies on the assumption that the first probe in the right child produces the actual first row. When the first row requires multiple probes, all probes after the first probe may overlap with the previous probe, which suggests using overlapped addition (CPFR_L +O CPFR_R) instead of blocking addition. One could also choose to use regular vector addition (CPFR_L +V CPFR_R) as a compromise. In the majority of cases, however, the first probe will result in the first row. Therefore, in the presently preferred embodiment, use of the blocking version of the formula is preferred.
Left Leg Blocking, Right Leg Non-blocking
The formulas below indicate how to merge an NLJ operator's left and right child when only the left leg has at least one blocking operator:
TC_M = TC_L +V TC_R
CPFR_M = CPFR_L +B CPFR_R
CPLR_M = (CPLR_R +O (CPLR_L −V CPFR_L)) +V CPFR_L
CPTB_M = N(CPTB_L, RC(CPLR_R))
The formulas for TC_M, CPFR_M and CPLR_M come directly from the unary non-blocking roll-up formulas.
The left leg may be viewed as logically coming underneath the right leg, and by assumption only the left leg blocks, so CPTB_L becomes CPTB_M after normalizing CPTB_L to the right leg's repeat count.
Left Leg Non-Blocking, Right Leg Blocking
The formulas below indicate how to merge an NLJ operator's left and right children when only the right leg has at least one blocking operator:
TC_M = TC_L +V TC_R
CPFR_M = CPFR_R
CPLR_M = CPLR_R
CPTB_M = CPTB_R +O (CPLR_L /V RC(CPTB_R))
The formulas for TC_M, CPFR_M and CPLR_M come directly from the unary blocking roll-up formulas.
The formula for CPTB_M is a simplification of the unary blocking roll-up formula taking into account that the left leg has no blocking activity.
Both Legs Blocking
The formulas below indicate how to merge an NLJ operator's left and right children when both legs have at least one blocking operator:
TC_M = TC_L +V TC_R
CPFR_M = CPFR_R
CPLR_M = CPLR_R
CPTB_M = (CPTB_R +O (CPLR_L /V RC(CPTB_R))) +B N(CPTB_L, RC(CPTB_R))
The formulas for TC_M, CPFR_M, CPLR_M and CPTB_M all come directly from the unary blocking roll-up formulas.
Hash Join Operator
Unlike binary operators described earlier, the hash join (HJ) operator does not merge the roll-up costs of its two children and then use a unary roll-up formula to produce its own roll-up cost. Instead, it uses intermediate cost vectors and values calculated and saved off during preliminary costing and combines these directly with the roll-up costs of its two children to produce its own roll-up cost.
Hash Join Specific Vectors and Values
A hash join consists of potentially three stages. Stage 1 involves taking rows produced by the right (inner) child, building a main memory hash table out of a cluster of rows that fits in memory and writing the remaining overflow clusters to disk. Stage 2 involves taking rows produced by the left (outer) child and, if the row in question belongs to a main memory cluster, probing the hash table built in stage 1; if the row in question belongs to an overflow cluster, it gets written to an overflow cluster on disk. Stage 3 joins corresponding overflow clusters produced in stages 1 and 2. If stages 1 and 2 did not produce overflow clusters, stage 3 does not need to execute.
Since no activity above the HJ operator can proceed until stage 1 completes, we regard stage 1 as blocking activity, so we place its resource usage in the blocking vector of the HJ preliminary cost, i.e., CPTB_PP.
Preliminary costing produces separate vectors for resource usage in stage 2 and stage 3. We denote these two vectors as STG2 and STG3, respectively. For repeat counts greater than one, both vectors represent cumulative usage over all probes.
Preliminary costing also determines what fraction of STG2 pertains to first row production and what fraction of STG3 pertains to first row production. We denote these two fractions as frFrac2 and frFrac3 respectively. Preliminary costing produces its preliminary first and last row costs using the following formulas:
CPFR_PP = frFrac2 * STG2 +B frFrac3 * STG3
CPLR_PP = STG2 +B STG3
For HJ roll-up, however, we do not use CPFR_PP and CPLR_PP directly. Instead we use their constituent parts—frFrac2, frFrac3, STG2 and STG3—which preliminary costing has stored along with the traditional vectors of the preliminary cost.
Hash Join Roll-up Formulas
The formulas below indicate how to perform final cost roll-up for an HJ operator:
TC_PR = (TC_L +V TC_R) +V TC_PP
CPFR_PR = (frFrac2 *V ((CPLR_L −V CPFR_L) +O STG2)) +B (frFrac3 *V STG3)
CPLR_PR = ((CPLR_L −V CPFR_L) +O STG2) +B STG3
CPTB_PR = (CPFR_L +V CPTB_L) +O (((((CPLR_R −V CPFR_R) /V RC(CPLR_R)) +O CPTB_PP) +V CPFR_R) +B CPTB_R)
The formula for TC_PR, as a total cost computation, involves regular vector addition.
In the computation of CPLR_PR, no ancestor activity can begin until the left child has produced at least one row, so the left child's first row cost belongs with the total blocking roll-up cost, CPTB_PR. The remaining portion of the left child's last row activity overlaps with stage 2, hence the term ((CPLR_L −V CPFR_L) +O STG2). Stage 3, however, can not proceed until stage 2 finishes, so blocking addition is used when adding its resource usage.
The computation for CPFR_PR follows the same approach as the computation for CPLR_PR with the additional complication of multiplying the stage 2 resource usage (including the left child's last row activity) and the stage 3 resource usage by the corresponding first row fractions—frFrac2 and frFrac3 respectively.
To compute CPTB_PR, one first recognizes that all of the right child's last row activity, except the portion necessary to produce the first row, overlaps with the HJ operator's blocking activity, hence the term ((CPLR_R −V CPFR_R) /V RC(CPLR_R)) +O CPTB_PP. (Note that dividing by RC(CPLR_R) converts last row resource usage from a cumulative cost over all probes to an average cost per probe, and also note that the HJ operator's blocking activity, CPTB_PP, represents stage 1.) Next, CPFR_R, which was subtracted out earlier, is added back, and then the right child's blocking cost, CPTB_R, is added using blocking addition. As mentioned earlier, the left child's blocking activity includes the activity necessary to produce its first row, hence the term CPFR_L +V CPTB_L. Since the left leg's blocking activity overlaps with the HJ operator, it is added using overlapped addition.
Several preferred embodiments of the present invention have been described. Nevertheless, it will be understood that various other modifications may be made without departing from the spirit and scope of the present invention. It will be understood, for example, that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using various techniques for implementing the functionality described herein. The invention may be implemented in any appropriate operating system using the appropriate programming languages and/or programming techniques. Thus, the present invention is not limited to the presently preferred embodiments described herein, but may be altered in a variety of ways which will be apparent to persons skilled in the art based on the present description.

Claims (8)

We claim:
1. A process for predicting an elapsed time for executing a query, wherein a plan for executing the query includes a plurality of operators forming a query tree and at least one operator is a blocking operator, comprising:
assigning a preliminary cost to each operator;
combining the costs of at least some of the operators, including the at least one blocking operator; and
converting the combined costs into a predicted elapsed time to execute the plan;
wherein the predicted elapsed time is the time predicted for returning the first row of the query's response.
2. A process for predicting an elapsed time for executing a query, wherein a plan for executing the query includes a plurality of operators forming a query tree and at least one operator is a blocking operator, comprising:
assigning a preliminary cost to each operator;
combining the costs of at least some of the operators, including the at least one blocking operator; and
converting the combined costs into a predicted elapsed time to execute the plan;
wherein the predicted elapsed time is the time predicted for returning the last row of the query's response.
3. A process for predicting an elapsed time for executing a query, wherein a plan for executing the query includes a plurality of operators that will utilize resources during their execution, wherein the operators form a query tree and wherein at least two of the operators compete for the same resources, comprising:
assigning a preliminary cost to each operator;
combining the costs of at least the operators that compete for the same resources; and
converting the combined costs into a predicted elapsed time to execute the plan while accounting for the effect of the competition for resources on the predicted elapsed time;
wherein the predicted elapsed time is the time predicted for returning the first row of the query's response.
4. A process for predicting an elapsed time for executing a query, wherein a plan for executing the query includes a plurality of operators that will utilize resources during their execution, wherein the operators form a query tree and wherein at least two of the operators compete for the same resources, comprising:
assigning a preliminary cost to each operator;
combining the costs of at least the operators that compete for the same resources; and
converting the combined costs into a predicted elapsed time to execute the plan while accounting for the effect of the competition for resources on the predicted elapsed time;
wherein the predicted elapsed time is the time predicted for returning the last row of the query's response.
5. A process for predicting an elapsed time for executing a query, wherein a plan for executing the query includes a plurality of operators that will utilize resources during their execution, the operators form a query tree and at least one of the operators utilizes two of the resources in a non-concurrent manner, comprising:
assigning preliminary costs to the operators;
combining the costs of at least one of the operators that utilize resources in a non-concurrent manner with at least one other operator; and
converting the combined costs into a predicted elapsed time to execute the plan while accounting for the effect of the non-concurrent utilization of resources;
wherein the predicted elapsed time is the time predicted for returning the first row of the query's response.
6. A process for predicting an elapsed time for executing a query, wherein a plan for executing the query includes a plurality of operators that will utilize resources during their execution, the operators form a query tree and at least one of the operators utilizes two of the resources in a non-concurrent manner, comprising:
assigning preliminary costs to the operators;
combining the costs of at least one of the operators that utilize resources in a non-concurrent manner with at least one other operator; and
converting the combined costs into a predicted elapsed time to execute the plan while accounting for the effect of the non-concurrent utilization of resources;
wherein the predicted elapsed time is the time predicted for returning the last row of the query's response.
7. A process for predicting an elapsed time for executing a query in a database system having memory, wherein a plan for executing the query will use more of the memory than is physically available for the execution, and wherein the plan includes a plurality of operators forming a query tree, comprising:
assigning preliminary costs to the operators;
combining the costs of a plurality of the operators; and
converting the combined costs into a predicted elapsed time to execute the plan while accounting for the effect of the insufficient memory on the predicted elapsed time;
wherein the predicted elapsed time is the time predicted for returning the first row of the query's response.
8. A process for predicting an elapsed time for executing a query in a database system having memory, wherein a plan for executing the query will use more of the memory than is physically available for the execution, and wherein the plan includes a plurality of operators forming a query tree, comprising:
assigning preliminary costs to the operators;
combining the costs of a plurality of the operators; and
converting the combined costs into a predicted elapsed time to execute the plan while accounting for the effect of the insufficient memory on the predicted elapsed time;
wherein the predicted elapsed time is the time predicted for returning the last row of the query's response.
US09/162,638 1998-09-28 1998-09-28 Database query cost model optimizer Expired - Lifetime US6330552B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/162,638 US6330552B1 (en) 1998-09-28 1998-09-28 Database query cost model optimizer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/162,638 US6330552B1 (en) 1998-09-28 1998-09-28 Database query cost model optimizer

Publications (1)

Publication Number Publication Date
US6330552B1 true US6330552B1 (en) 2001-12-11

Family

ID=22586494

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/162,638 Expired - Lifetime US6330552B1 (en) 1998-09-28 1998-09-28 Database query cost model optimizer

Country Status (1)

Country Link
US (1) US6330552B1 (en)

Cited By (78)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020046208A1 (en) * 2000-10-12 2002-04-18 Johan Andersson Objects in a computer system
US6470337B1 (en) * 1998-12-24 2002-10-22 Nec Corporation Information retrieval system using a hierarchical index for narrowing a retrieval result and its method and storing medium with information retrieval program stored therein
US20030018618A1 (en) * 2001-03-15 2003-01-23 International Business Machines Corporation Representation for data used in query optimization
US20030055813A1 (en) * 2001-05-15 2003-03-20 Microsoft Corporation Query optimization by sub-plan memoization
US20030172010A1 (en) * 2002-03-08 2003-09-11 Agile Software Corporation System and method for analyzing data
US20030172008A1 (en) * 2002-03-08 2003-09-11 Agile Software Corporation System and method for managing and monitoring supply costs
US20030212701A1 (en) * 2002-05-10 2003-11-13 International Business Machines Corporation Method, system, and program for selecting a join order for tables subject to a join operation
US6760684B1 (en) * 1999-04-30 2004-07-06 Oracle International Corporation Method and mechanism for profiling a system
US6763359B2 (en) * 2001-06-06 2004-07-13 International Business Machines Corporation Learning from empirical results in query optimization
US6778976B2 (en) * 2000-04-28 2004-08-17 International Business Machines Corporation Selectivity estimation for processing SQL queries containing having clauses
US20040205082A1 (en) * 2003-04-14 2004-10-14 International Business Machines Corporation System and method for querying XML streams
US20050081010A1 (en) * 2003-10-09 2005-04-14 International Business Machines Corporation Method and system for autonomic performance improvements in an application via memory relocation
US20050102613A1 (en) * 2003-11-07 2005-05-12 Microsoft Corporation Generating a hierarchical plain-text execution plan from a database query
US20050120000A1 (en) * 2003-09-06 2005-06-02 Oracle International Corporation Auto-tuning SQL statements
US20060026176A1 (en) * 2004-07-29 2006-02-02 International Business Machines Corporation Fee-based model based on database federation and query support
US20060080264A1 (en) * 2004-10-08 2006-04-13 International Business Machines Corporation System, method and program to estimate cost of a product and/or service
US7058622B1 (en) * 2001-12-26 2006-06-06 Tedesco Michael A Method, apparatus and system for screening database queries prior to submission to a database
US20060155654A1 (en) * 2002-08-13 2006-07-13 Frederic Plessis Editor and method for editing formulae for calculating the price of a service and a system for automatic costing of a service
US20060167845A1 (en) * 2005-01-25 2006-07-27 International Business Machines Corporation Selection of optimal plans for FIRST-N-ROW queries
US20060200496A1 (en) * 2005-02-22 2006-09-07 Transparency Software, Inc. Organization action incidents
US20060259442A1 (en) * 2005-05-17 2006-11-16 International Business Machines Corporation System method and program product to estimate cost of integrating and utilizing heterogeneous data sources
US20070043696A1 (en) * 2005-08-19 2007-02-22 Haas Peter J Method for estimating the cost of query processing
US20070050282A1 (en) * 2005-08-25 2007-03-01 Sas Institute Inc. Financial risk mitigation optimization systems and methods
US7233939B1 (en) * 2002-04-30 2007-06-19 Oracle International Corporation Systems and methods of optimizing database queries for efficient delivery of query data subsets
US20070162426A1 (en) * 2006-01-10 2007-07-12 Brown Douglas P Closed-loop predicate analysis
US20070162425A1 (en) * 2006-01-06 2007-07-12 International Business Machines Corporation System and method for performing advanced cost/benefit analysis of asynchronous operations
US20070208694A1 (en) * 2002-11-14 2007-09-06 Seisint, Inc. Query scheduling in a parallel-processing database system
US20070226090A1 (en) * 2006-03-08 2007-09-27 Sas Institute Inc. Systems and methods for costing reciprocal relationships
US20080065435A1 (en) * 2006-08-25 2008-03-13 John Phillip Ratzloff Computer-implemented systems and methods for reducing cost flow models
US7353212B1 (en) * 2000-05-23 2008-04-01 Microsoft Corporation Method and structure for assigning a transaction cost
US7392242B1 (en) * 2004-02-27 2008-06-24 Hyperion Solutions Corporation Query costing in a multidimensional database
US20080195578A1 (en) * 2007-02-09 2008-08-14 Fabian Hueske Automatically determining optimization frequencies of queries with parameter markers
US20080319926A1 (en) * 2007-06-25 2008-12-25 Akm Kamrul Alam Method for dynamic usage billing
US20090018880A1 (en) * 2007-07-13 2009-01-15 Bailey Christopher D Computer-Implemented Systems And Methods For Cost Flow Analysis
US20090030888A1 (en) * 2007-07-25 2009-01-29 Birendra Kumar Sahu Techniques for scoring and comparing query execution plans
US7590508B1 (en) * 2004-10-22 2009-09-15 Sprint Communications Company L.P. Method and system for forecasting usage costs and computer capacity
US20100082602A1 (en) * 2008-07-05 2010-04-01 Archana Sulochana Ganapathi Predicting Performance Of Multiple Queries Executing In A Database
US20100080284A1 (en) * 2008-09-26 2010-04-01 Qualcomm Incorporated Resolving geometric relationships among video data units
US20100080285A1 (en) * 2008-09-26 2010-04-01 Qualcomm Incorporated Determining availability of video data units
US20100080296A1 (en) * 2008-09-26 2010-04-01 Qualcomm Incorporated Locating motion vectors for video data units
US8046362B2 (en) 2008-04-24 2011-10-25 Lexisnexis Risk & Information Analytics Group, Inc. Statistical record linkage calibration for reflexive and symmetric distance measures at the field and field value levels without the need for human interaction
US8090733B2 (en) 2008-07-02 2012-01-03 Lexisnexis Risk & Information Analytics Group, Inc. Statistical measure and calibration of search criteria where one or both of the search criteria and database is incomplete
US20120143863A1 (en) * 2005-12-13 2012-06-07 Muse Green Investments Intelligent data retrieval system
US8200518B2 (en) 2008-02-25 2012-06-12 Sas Institute Inc. Computer-implemented systems and methods for partial contribution computation in ABC/M models
US8332857B1 (en) * 2008-12-30 2012-12-11 Teradota Us, Inc. Database system having a regulator that performs workload regulation based on optimizer estimates
EP2575052A1 (en) * 2011-09-30 2013-04-03 Tata Consultancy Services Limited Testing SQL query writing skills
US8516488B1 (en) * 2010-11-09 2013-08-20 Teradata Us, Inc. Adjusting a resource estimate in response to progress of execution of a request
US8924981B1 (en) 2010-11-12 2014-12-30 Teradat US, Inc. Calculating priority indicators for requests in a queue
US8966493B1 (en) * 2010-11-09 2015-02-24 Teradata Us, Inc. Managing execution of multiple requests in a job using overall deadline for the job
US9015171B2 (en) 2003-02-04 2015-04-21 Lexisnexis Risk Management Inc. Method and system for linking and delinking data records
US9020830B2 (en) 2011-03-08 2015-04-28 Apptio, Inc. Hierarchy based dependent object relationships
US9189505B2 (en) 2010-08-09 2015-11-17 Lexisnexis Risk Data Management, Inc. System of and method for entity representation splitting without the need for human interaction
US9275050B2 (en) 2011-10-24 2016-03-01 Apptio, Inc. Global dictionaries using universal primitives
US9350561B1 (en) 2015-05-27 2016-05-24 Apptio, Inc. Visualizing the flow of resources in an allocation model
US9384511B1 (en) 2015-12-16 2016-07-05 Apptio, Inc. Version control for resource allocation modeling
US9411859B2 (en) 2009-12-14 2016-08-09 Lexisnexis Risk Solutions Fl Inc External linking based on hierarchical level weightings
US9495419B1 (en) * 2015-12-16 2016-11-15 International Business Machines Corporation Runtime optimization for multi-index access
US9529863B1 (en) 2015-12-21 2016-12-27 Apptio, Inc. Normalizing ingested data sets based on fuzzy comparisons to known data sets
US9910892B2 (en) 2008-07-05 2018-03-06 Hewlett Packard Enterprise Development Lp Managing execution of database queries
US9996570B2 (en) 2015-03-18 2018-06-12 International Business Machines Corporation Index traversals utilizing alternative in-memory search structure and system memory costing
US10019477B2 (en) 2015-04-06 2018-07-10 International Business Machines Corporation Anticipatory query completion by pattern detection
US10157356B2 (en) 2016-12-14 2018-12-18 Apptio, Inc. Activity based resource allocation modeling
US10268980B1 (en) 2017-12-29 2019-04-23 Apptio, Inc. Report generation based on user responsibility
US10268979B2 (en) 2015-09-28 2019-04-23 Apptio, Inc. Intermediate resource allocation tracking in data models
US10289721B2 (en) * 2016-07-14 2019-05-14 International Business Machines Corporation Query management based on amount of data change
US10325232B2 (en) 2013-09-20 2019-06-18 Apptio, Inc. Allocating heritage information in data models
US10324951B1 (en) 2017-12-29 2019-06-18 Apptio, Inc. Tracking and viewing model changes based on time
US10387815B2 (en) 2015-09-29 2019-08-20 Apptio, Inc. Continuously variable resolution of resource allocation
US10417591B2 (en) * 2013-07-03 2019-09-17 Apptio, Inc. Recursive processing of object allocation rules
US10474974B2 (en) 2016-09-08 2019-11-12 Apptio, Inc. Reciprocal models for resource allocation
US10482407B2 (en) 2016-11-14 2019-11-19 Apptio, Inc. Identifying resource allocation discrepancies
US10726367B2 (en) 2015-12-28 2020-07-28 Apptio, Inc. Resource allocation forecasting
US10937036B2 (en) 2012-11-13 2021-03-02 Apptio, Inc. Dynamic recommendations taken over time for reservations of information technology resources
US10936978B2 (en) 2016-09-20 2021-03-02 Apptio, Inc. Models for visualizing resource allocation
US11151493B2 (en) 2015-06-30 2021-10-19 Apptio, Inc. Infrastructure benchmarking based on dynamic cost modeling
US11327968B2 (en) * 2020-04-02 2022-05-10 Sap Se Optimizing output data formats to improve query performance in database systems
US20230042738A1 (en) * 2021-08-04 2023-02-09 Cysiv, Inc. Database System with Run-Time Query Mode Selection
US11775552B2 (en) 2017-12-29 2023-10-03 Apptio, Inc. Binding annotations to data objects

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5367675A (en) 1991-12-13 1994-11-22 International Business Machines Corporation Computer automated system and method for optimizing the processing of a query in a relational database system by merging subqueries with the query
US5301317A (en) * 1992-04-27 1994-04-05 International Business Machines Corporation System for adapting query optimization effort to expected execution time
US5630120A (en) 1992-10-12 1997-05-13 Bull, S.A. Method to help in optimizing a query from a relational data base management system
US5765146A (en) * 1993-11-04 1998-06-09 International Business Machines Corporation Method of performing a parallel relational database query in a multiprocessor environment
US5548755A (en) 1995-02-17 1996-08-20 International Business Machines Corporation System for optimizing correlated SQL queries in a relational database using magic decorrelation
US5619692A (en) 1995-02-17 1997-04-08 International Business Machines Corporation Semantic optimization of query order requirements using order detection by normalization in a query compiler system
US5546570A (en) 1995-02-17 1996-08-13 International Business Machines Corporation Evaluation strategy for execution of SQL queries involving recursion and table queues
US5689708A (en) * 1995-03-31 1997-11-18 Showcase Corporation Client/server computer systems having control of client-based application programs, and application-program control means therefor
US5689696A (en) 1995-12-28 1997-11-18 Lucent Technologies Inc. Method for maintaining information in a database used to generate high biased histograms using a probability function, counter and threshold values
US5761654A (en) 1996-06-05 1998-06-02 Oracle Corporation Memory structure and method for tuning a database statement using a join-tree data structure representation, including selectivity factors, of a master table and detail table
US5875445A (en) * 1997-05-29 1999-02-23 Oracle Corporation Performance-related estimation using pseudo-ranked trees
US6023695A (en) * 1997-10-31 2000-02-08 Oracle Corporation Summary table management in a computer system
US6026391A (en) * 1997-10-31 2000-02-15 Oracle Corporation Systems and methods for estimating query response times in a computer system
JPH11203288A (en) * 1998-01-09 1999-07-30 Canon Inc Database reference device
US6212514B1 (en) * 1998-07-31 2001-04-03 International Business Machines Corporation Data base optimization method for estimating query and trigger procedure costs

Non-Patent Citations (13)

* Cited by examiner, † Cited by third party
Title
Article by Clifford A. Lynch entitled "Selectivity Estimation and Query Optimization in Large Databases with Highly Skewed Distributions of Column Values" published by University of California dated 1988 pp. 240-251.
Article by Haas et al. entitled "Sampling-Based Estimation of the Number of Distinct Values of an Attribute" published by Proceedings of the 21st VLDB Conference dated 1995 pp. 311-321.
Article by Leonard D. Shapiro entitled "Join Processing in Database Systems with Large Main Memories" published by ACM Transactions on Database Systems dated 1986, vol. 11, No. 3, pp. 239-264.
Article by Mackert et al. entitled "R* Optimizer Validation and Performance Evaluation for Distributed Queries" published by Proceedings of the 12th International Conference on Very Large Databases, dated 1986 pp. 149-159.
Article by Piatetsky-Shapiro et al. entitled "Accurate Estimation of the Number of Tuples Satisfying a Condition" published by ACM dated 1984 pp. 256-276.
Article by Poosala et al. entitled "Improved Histograms for Selectivity Estimation of Range Predicates" SIGMOD Jun. 1996, pp. 294-305.
Article by Selinger et al., entitled "Access Path Selection in a Relational Database Management System" published by ACM dated 1979, pp. 23-34.
Article by Sun et al. entitled "An Instant and Accurate Size Estimation Method for Joins and Selection in a Retrieval-Intensive Environment." SIGMOD, May 1993, pp. 79-88.
Article by Yannis E. Ioannidis entitled "Universality of Serial Histograms" published by Proceedings of the 19th VLDB Conference dated 1993 pp. 256-267.
Database Searcher: "Advance cost estimates on DIALOG for search results. (Price Watch)"; Jul.-Aug. 1991, v7, n6, p. 38.*
IBM Technical Disclosure Bulletin: "Effective Approach to Query I/O Parallelism Using Sequential Prefetch and Horizontal Data Partitions"; Sep. 1993, vol. 36, No. 9A, pp. 541-544.*
Spiliopoulou et al: "A cost model for the estimation of query execution time in parallel environment supporting pipelines"; Computers and Artificial Intelligence, 1996, vol. 15, No. 4, pp. 341-368, (Abstract Only).*
Subramanian et al: "Query optimization in multidatabase systems"; Distributed and Parallel Databases, Apr. 1998, vol. 6, No. 2, pp. 183-210, (Abstract Only).*

Cited By (165)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6470337B1 (en) * 1998-12-24 2002-10-22 Nec Corporation Information retrieval system using a hierarchical index for narrowing a retrieval result and its method and storing medium with information retrieval program stored therein
US6760684B1 (en) * 1999-04-30 2004-07-06 Oracle International Corporation Method and mechanism for profiling a system
US6778976B2 (en) * 2000-04-28 2004-08-17 International Business Machines Corporation Selectivity estimation for processing SQL queries containing having clauses
US7353212B1 (en) * 2000-05-23 2008-04-01 Microsoft Corporation Method and structure for assigning a transaction cost
US20020046208A1 (en) * 2000-10-12 2002-04-18 Johan Andersson Objects in a computer system
US20030018618A1 (en) * 2001-03-15 2003-01-23 International Business Machines Corporation Representation for data used in query optimization
US7007009B2 (en) * 2001-03-15 2006-02-28 International Business Machines Corporation Representation for data used in query optimization
US6850925B2 (en) * 2001-05-15 2005-02-01 Microsoft Corporation Query optimization by sub-plan memoization
US20030055813A1 (en) * 2001-05-15 2003-03-20 Microsoft Corporation Query optimization by sub-plan memoization
US7240044B2 (en) * 2001-05-15 2007-07-03 Microsoft Corporation Query optimization by sub-plan memoization
US20050033730A1 (en) * 2001-05-15 2005-02-10 Microsoft Corporation Query optimization by sub-plan memoization
US6763359B2 (en) * 2001-06-06 2004-07-13 International Business Machines Corporation Learning from empirical results in query optimization
US7058622B1 (en) * 2001-12-26 2006-06-06 Tedesco Michael A Method, apparatus and system for screening database queries prior to submission to a database
US8386296B2 (en) * 2002-03-08 2013-02-26 Agile Software Corporation System and method for managing and monitoring supply costs
US20030172008A1 (en) * 2002-03-08 2003-09-11 Agile Software Corporation System and method for managing and monitoring supply costs
US20030172010A1 (en) * 2002-03-08 2003-09-11 Agile Software Corporation System and method for analyzing data
US7233939B1 (en) * 2002-04-30 2007-06-19 Oracle International Corporation Systems and methods of optimizing database queries for efficient delivery of query data subsets
US20030212701A1 (en) * 2002-05-10 2003-11-13 International Business Machines Corporation Method, system, and program for selecting a join order for tables subject to a join operation
US6980981B2 (en) 2002-05-10 2005-12-27 International Business Machines Corporation Method, system, and program for selecting a join order for tables subject to a join operation
US8719074B2 (en) * 2002-08-13 2014-05-06 Sap Ag Editor and method for editing formulae for calculating the price of a service and a system for automatic costing of a service
US20060155654A1 (en) * 2002-08-13 2006-07-13 Frederic Plessis Editor and method for editing formulae for calculating the price of a service and a system for automatic costing of a service
US20070208694A1 (en) * 2002-11-14 2007-09-06 Seisint, Inc. Query scheduling in a parallel-processing database system
US9037606B2 (en) 2003-02-04 2015-05-19 Lexisnexis Risk Solutions Fl Inc. Internal linking co-convergence using clustering with hierarchy
US9043359B2 (en) 2003-02-04 2015-05-26 Lexisnexis Risk Solutions Fl Inc. Internal linking co-convergence using clustering with no hierarchy
US9384262B2 (en) 2003-02-04 2016-07-05 Lexisnexis Risk Solutions Fl Inc. Internal linking co-convergence using clustering with hierarchy
US9020971B2 (en) 2003-02-04 2015-04-28 Lexisnexis Risk Solutions Fl Inc. Populating entity fields based on hierarchy partial resolution
US9015171B2 (en) 2003-02-04 2015-04-21 Lexisnexis Risk Management Inc. Method and system for linking and delinking data records
US7392239B2 (en) * 2003-04-14 2008-06-24 International Business Machines Corporation System and method for querying XML streams
US20080215559A1 (en) * 2003-04-14 2008-09-04 Fontoura Marcus F System and method for querying xml streams
US20040205082A1 (en) * 2003-04-14 2004-10-14 International Business Machines Corporation System and method for querying XML streams
US8825629B2 (en) 2003-09-06 2014-09-02 Oracle International Corporation Method for index tuning of a SQL statement, and index merging for a multi-statement SQL workload, using a cost-based relational query optimizer
US20050120000A1 (en) * 2003-09-06 2005-06-02 Oracle International Corporation Auto-tuning SQL statements
US7805411B2 (en) * 2003-09-06 2010-09-28 Oracle International Corporation Auto-tuning SQL statements
US7747606B2 (en) 2003-09-06 2010-06-29 Oracle International Corporation Automatic SQL tuning advisor
US7739263B2 (en) 2003-09-06 2010-06-15 Oracle International Corporation Global hints
US7664730B2 (en) 2003-09-06 2010-02-16 Oracle International Corporation Method and system for implementing a SQL profile
US7664778B2 (en) 2003-09-06 2010-02-16 Oracle International Corporation SQL tuning sets
US7634456B2 (en) 2003-09-06 2009-12-15 Oracle International Corporation SQL structure analyzer
US8983934B2 (en) 2003-09-06 2015-03-17 Oracle International Corporation SQL tuning base
US20050187917A1 (en) * 2003-09-06 2005-08-25 Oracle International Corporation Method for index tuning of a SQL statement, and index merging for a multi-statement SQL workload, using a cost-based relational query optimizer
US20050177557A1 (en) * 2003-09-06 2005-08-11 Oracle International Corporation Automatic prevention of run-away query execution
US20050138015A1 (en) * 2003-09-06 2005-06-23 Oracle International Corporation High load SQL driven statistics collection
US20050125393A1 (en) * 2003-09-06 2005-06-09 Oracle International Corporation SQL tuning sets
US20050119999A1 (en) * 2003-09-06 2005-06-02 Oracle International Corporation Automatic learning optimizer
US20050120001A1 (en) * 2003-09-06 2005-06-02 Oracle International Corporation SQL structure analyzer
US20050125427A1 (en) * 2003-09-06 2005-06-09 Oracle International Corporation Automatic SQL tuning advisor
US20050125452A1 (en) * 2003-09-06 2005-06-09 Oracle International Corporation SQL profile
US20050125398A1 (en) * 2003-09-06 2005-06-09 Oracle International Corporation Global hints
US20050081010A1 (en) * 2003-10-09 2005-04-14 International Business Machines Corporation Method and system for autonomic performance improvements in an application via memory relocation
US20050102613A1 (en) * 2003-11-07 2005-05-12 Microsoft Corporation Generating a hierarchical plain-text execution plan from a database query
US7167848B2 (en) * 2003-11-07 2007-01-23 Microsoft Corporation Generating a hierarchical plain-text execution plan from a database query
US8019751B2 (en) 2004-02-27 2011-09-13 Oracle International Corporation Query costing in a multidimensional database
US20080270365A1 (en) * 2004-02-27 2008-10-30 Baccash Jonathan M Query costing in a multidimensional database
US7392242B1 (en) * 2004-02-27 2008-06-24 Hyperion Solutions Corporation Query costing in a multidimensional database
US20060026176A1 (en) * 2004-07-29 2006-02-02 International Business Machines Corporation Fee-based model based on database federation and query support
US7792878B2 (en) * 2004-07-29 2010-09-07 International Business Machines Corporation Fee-based model based on database federation and query support
US20070282626A1 (en) * 2004-10-08 2007-12-06 Yue Zhang System, Method and Program to Estimate Cost of a Product and/or Service
US7469235B2 (en) 2004-10-08 2008-12-23 International Business Machines Corporation System, method and program to estimate cost of a product and/or service
US7315844B2 (en) 2004-10-08 2008-01-01 International Business Machines Corporation System, method and program to estimate cost of a product and/or service
US20060080264A1 (en) * 2004-10-08 2006-04-13 International Business Machines Corporation System, method and program to estimate cost of a product and/or service
US7996332B1 (en) 2004-10-22 2011-08-09 Sprint Communications Company L.P. Method and system for forecasting usage costs and computer capacity
US7590508B1 (en) * 2004-10-22 2009-09-15 Sprint Communications Company L.P. Method and system for forecasting usage costs and computer capacity
US20060167845A1 (en) * 2005-01-25 2006-07-27 International Business Machines Corporation Selection of optimal plans for FIRST-N-ROW queries
US20060200496A1 (en) * 2005-02-22 2006-09-07 Transparency Software, Inc. Organization action incidents
US8145579B2 (en) 2005-05-17 2012-03-27 International Business Machines Corporation System, method and program product to estimate cost of integrating and utilizing heterogeneous data sources
US20060259442A1 (en) * 2005-05-17 2006-11-16 International Business Machines Corporation System method and program product to estimate cost of integrating and utilizing heterogeneous data sources
US20110040703A1 (en) * 2005-05-17 2011-02-17 International Business Machines Corporation System, Method and Program Product to Estimate Cost of Integrating and Utilizing Heterogeneous Data Sources
US7636735B2 (en) 2005-08-19 2009-12-22 International Business Machines Corporation Method for estimating the cost of query processing
US20070043696A1 (en) * 2005-08-19 2007-02-22 Haas Peter J Method for estimating the cost of query processing
US20070050282A1 (en) * 2005-08-25 2007-03-01 Sas Institute Inc. Financial risk mitigation optimization systems and methods
US7624054B2 (en) 2005-08-25 2009-11-24 Sas Institute Inc. Financial risk mitigation optimization systems and methods
US20120143863A1 (en) * 2005-12-13 2012-06-07 Muse Green Investments Intelligent data retrieval system
US20070162425A1 (en) * 2006-01-06 2007-07-12 International Business Machines Corporation System and method for performing advanced cost/benefit analysis of asynchronous operations
US7860858B2 (en) * 2006-01-10 2010-12-28 Teradata Us, Inc. Closed-loop predicate analysis
US20070162426A1 (en) * 2006-01-10 2007-07-12 Brown Douglas P Closed-loop predicate analysis
US20070226090A1 (en) * 2006-03-08 2007-09-27 Sas Institute Inc. Systems and methods for costing reciprocal relationships
US7634431B2 (en) * 2006-03-08 2009-12-15 Sas Institute Inc. Systems and methods for costing reciprocal relationships
US7813948B2 (en) 2006-08-25 2010-10-12 Sas Institute Inc. Computer-implemented systems and methods for reducing cost flow models
US20080065435A1 (en) * 2006-08-25 2008-03-13 John Phillip Ratzloff Computer-implemented systems and methods for reducing cost flow models
US20080195578A1 (en) * 2007-02-09 2008-08-14 Fabian Hueske Automatically determining optimization frequencies of queries with parameter markers
US20080222092A1 (en) * 2007-02-09 2008-09-11 Fabian Hueske Automatically determining optimization frequencies of queries with parameter markers
US7987178B2 (en) 2007-02-09 2011-07-26 International Business Machines Corporation Automatically determining optimization frequencies of queries with parameter markers
US8200586B2 (en) * 2007-06-25 2012-06-12 International Business Machines Corporation Method for dynamic usage billing
US20080319926A1 (en) * 2007-06-25 2008-12-25 Akm Kamrul Alam Method for dynamic usage billing
US8024241B2 (en) 2007-07-13 2011-09-20 Sas Institute Inc. Computer-implemented systems and methods for cost flow analysis
US20090018880A1 (en) * 2007-07-13 2009-01-15 Bailey Christopher D Computer-Implemented Systems And Methods For Cost Flow Analysis
US20090030888A1 (en) * 2007-07-25 2009-01-29 Birendra Kumar Sahu Techniques for scoring and comparing query execution plans
US7941425B2 (en) * 2007-07-25 2011-05-10 Teradata Us, Inc. Techniques for scoring and comparing query execution plans
US8200518B2 (en) 2008-02-25 2012-06-12 Sas Institute Inc. Computer-implemented systems and methods for partial contribution computation in ABC/M models
US9836524B2 (en) 2008-04-24 2017-12-05 Lexisnexis Risk Solutions Fl Inc. Internal linking co-convergence using clustering with hierarchy
US8316047B2 (en) 2008-04-24 2012-11-20 Lexisnexis Risk Solutions Fl Inc. Adaptive clustering of records and entity representations
US8195670B2 (en) 2008-04-24 2012-06-05 Lexisnexis Risk & Information Analytics Group Inc. Automated detection of null field values and effectively null field values
US8250078B2 (en) 2008-04-24 2012-08-21 Lexisnexis Risk & Information Analytics Group Inc. Statistical record linkage calibration for interdependent fields without the need for human interaction
US8266168B2 (en) 2008-04-24 2012-09-11 Lexisnexis Risk & Information Analytics Group Inc. Database systems and methods for linking records and entity representations with sufficiently high confidence
US8275770B2 (en) 2008-04-24 2012-09-25 Lexisnexis Risk & Information Analytics Group Inc. Automated selection of generic blocking criteria
US9031979B2 (en) 2008-04-24 2015-05-12 Lexisnexis Risk Solutions Fl Inc. External linking based on hierarchical level weightings
US8572052B2 (en) 2008-04-24 2013-10-29 LexisNexis Risk Solution FL Inc. Automated calibration of negative field weighting without the need for human interaction
US8046362B2 (en) 2008-04-24 2011-10-25 Lexisnexis Risk & Information Analytics Group, Inc. Statistical record linkage calibration for reflexive and symmetric distance measures at the field and field value levels without the need for human interaction
US8135680B2 (en) 2008-04-24 2012-03-13 Lexisnexis Risk Solutions Fl Inc. Statistical record linkage calibration for reflexive, symmetric and transitive distance measures at the field and field value levels without the need for human interaction
US8135681B2 (en) 2008-04-24 2012-03-13 Lexisnexis Risk Solutions Fl Inc. Automated calibration of negative field weighting without the need for human interaction
US8135679B2 (en) 2008-04-24 2012-03-13 Lexisnexis Risk Solutions Fl Inc. Statistical record linkage calibration for multi token fields without the need for human interaction
US8484168B2 (en) 2008-04-24 2013-07-09 Lexisnexis Risk & Information Analytics Group, Inc. Statistical record linkage calibration for multi token fields without the need for human interaction
US8489617B2 (en) 2008-04-24 2013-07-16 Lexisnexis Risk Solutions Fl Inc. Automated detection of null field values and effectively null field values
US8495077B2 (en) 2008-04-24 2013-07-23 Lexisnexis Risk Solutions Fl Inc. Database systems and methods for linking records and entity representations with sufficiently high confidence
US8135719B2 (en) 2008-04-24 2012-03-13 Lexisnexis Risk Solutions Fl Inc. Statistical record linkage calibration at the field and field value levels without the need for human interaction
US8285725B2 (en) 2008-07-02 2012-10-09 Lexisnexis Risk & Information Analytics Group Inc. System and method for identifying entity representations based on a search query using field match templates
US8090733B2 (en) 2008-07-02 2012-01-03 Lexisnexis Risk & Information Analytics Group, Inc. Statistical measure and calibration of search criteria where one or both of the search criteria and database is incomplete
US8572070B2 (en) 2008-07-02 2013-10-29 LexisNexis Risk Solution FL Inc. Statistical measure and calibration of internally inconsistent search criteria where one or both of the search criteria and database is incomplete
US8190616B2 (en) 2008-07-02 2012-05-29 Lexisnexis Risk & Information Analytics Group Inc. Statistical measure and calibration of reflexive, symmetric and transitive fuzzy search criteria where one or both of the search criteria and database is incomplete
US8639691B2 (en) 2008-07-02 2014-01-28 Lexisnexis Risk Solutions Fl Inc. System for and method of partitioning match templates
US8639705B2 (en) 2008-07-02 2014-01-28 Lexisnexis Risk Solutions Fl Inc. Technique for recycling match weight calculations
US8484211B2 (en) 2008-07-02 2013-07-09 Lexisnexis Risk Solutions Fl Inc. Batch entity representation identification using field match templates
US8661026B2 (en) 2008-07-02 2014-02-25 Lexisnexis Risk Solutions Fl Inc. Entity representation identification using entity representation level information
US8495076B2 (en) 2008-07-02 2013-07-23 Lexisnexis Risk Solutions Fl Inc. Statistical measure and calibration of search criteria where one or both of the search criteria and database is incomplete
US9189523B2 (en) * 2008-07-05 2015-11-17 Hewlett-Packard Development Company, L.P. Predicting performance of multiple queries executing in a database
US9910892B2 (en) 2008-07-05 2018-03-06 Hewlett Packard Enterprise Development Lp Managing execution of database queries
US20100082602A1 (en) * 2008-07-05 2010-04-01 Archana Sulochana Ganapathi Predicting Performance Of Multiple Queries Executing In A Database
US8634457B2 (en) * 2008-09-26 2014-01-21 Qualcomm Incorporated Determining availability of video data units
US8724697B2 (en) 2008-09-26 2014-05-13 Qualcomm Incorporated Locating motion vectors for video data units
US20100080284A1 (en) * 2008-09-26 2010-04-01 Qualcomm Incorporated Resolving geometric relationships among video data units
US8660176B2 (en) 2008-09-26 2014-02-25 Qualcomm Incorporated Resolving geometric relationships among video data units
US20100080285A1 (en) * 2008-09-26 2010-04-01 Qualcomm Incorporated Determining availability of video data units
US20100080296A1 (en) * 2008-09-26 2010-04-01 Qualcomm Incorporated Locating motion vectors for video data units
US8332857B1 (en) * 2008-12-30 2012-12-11 Teradata Us, Inc. Database system having a regulator that performs workload regulation based on optimizer estimates
US9836508B2 (en) 2009-12-14 2017-12-05 Lexisnexis Risk Solutions Fl Inc. External linking based on hierarchical level weightings
US9411859B2 (en) 2009-12-14 2016-08-09 Lexisnexis Risk Solutions Fl Inc External linking based on hierarchical level weightings
US9189505B2 (en) 2010-08-09 2015-11-17 Lexisnexis Risk Data Management, Inc. System of and method for entity representation splitting without the need for human interaction
US9501505B2 (en) 2010-08-09 2016-11-22 Lexisnexis Risk Data Management, Inc. System of and method for entity representation splitting without the need for human interaction
US8516488B1 (en) * 2010-11-09 2013-08-20 Teradata Us, Inc. Adjusting a resource estimate in response to progress of execution of a request
US8966493B1 (en) * 2010-11-09 2015-02-24 Teradata Us, Inc. Managing execution of multiple requests in a job using overall deadline for the job
US8924981B1 (en) 2010-11-12 2014-12-30 Teradata Us, Inc. Calculating priority indicators for requests in a queue
US9305275B2 (en) 2011-03-08 2016-04-05 Apptio, Inc. Platform for rapid development of applications
US9020830B2 (en) 2011-03-08 2015-04-28 Apptio, Inc. Hierarchy based dependent object relationships
EP2575052A1 (en) * 2011-09-30 2013-04-03 Tata Consultancy Services Limited Testing SQL query writing skills
US9275050B2 (en) 2011-10-24 2016-03-01 Apptio, Inc. Global dictionaries using universal primitives
US10937036B2 (en) 2012-11-13 2021-03-02 Apptio, Inc. Dynamic recommendations taken over time for reservations of information technology resources
US10417591B2 (en) * 2013-07-03 2019-09-17 Apptio, Inc. Recursive processing of object allocation rules
US10325232B2 (en) 2013-09-20 2019-06-18 Apptio, Inc. Allocating heritage information in data models
US9996569B2 (en) 2015-03-18 2018-06-12 International Business Machines Corporation Index traversals utilizing alternate in-memory search structure and system memory costing
US9996570B2 (en) 2015-03-18 2018-06-12 International Business Machines Corporation Index traversals utilizing alternative in-memory search structure and system memory costing
US10282443B2 (en) 2015-04-06 2019-05-07 International Business Machines Corporation Anticipatory query completion by pattern detection
US10019477B2 (en) 2015-04-06 2018-07-10 International Business Machines Corporation Anticipatory query completion by pattern detection
US9350561B1 (en) 2015-05-27 2016-05-24 Apptio, Inc. Visualizing the flow of resources in an allocation model
US11151493B2 (en) 2015-06-30 2021-10-19 Apptio, Inc. Infrastructure benchmarking based on dynamic cost modeling
US10268979B2 (en) 2015-09-28 2019-04-23 Apptio, Inc. Intermediate resource allocation tracking in data models
US10387815B2 (en) 2015-09-29 2019-08-20 Apptio, Inc. Continuously variable resolution of resource allocation
US9384511B1 (en) 2015-12-16 2016-07-05 Apptio, Inc. Version control for resource allocation modeling
US9495419B1 (en) * 2015-12-16 2016-11-15 International Business Machines Corporation Runtime optimization for multi-index access
US9747338B2 (en) * 2015-12-16 2017-08-29 International Business Machines Corporation Runtime optimization for multi-index access
US9720968B2 (en) * 2015-12-16 2017-08-01 International Business Machines Corporation Runtime optimization for multi-index access
US9898506B2 (en) 2015-12-16 2018-02-20 International Business Machines Corporation Runtime optimization for multi-index access
US20170177669A1 (en) * 2015-12-16 2017-06-22 International Business Machines Corporation Runtime optimization for multi-index access
US9529863B1 (en) 2015-12-21 2016-12-27 Apptio, Inc. Normalizing ingested data sets based on fuzzy comparisons to known data sets
US10726367B2 (en) 2015-12-28 2020-07-28 Apptio, Inc. Resource allocation forecasting
US10289721B2 (en) * 2016-07-14 2019-05-14 International Business Machines Corporation Query management based on amount of data change
US10474974B2 (en) 2016-09-08 2019-11-12 Apptio, Inc. Reciprocal models for resource allocation
US10936978B2 (en) 2016-09-20 2021-03-02 Apptio, Inc. Models for visualizing resource allocation
US10482407B2 (en) 2016-11-14 2019-11-19 Apptio, Inc. Identifying resource allocation discrepancies
US10157356B2 (en) 2016-12-14 2018-12-18 Apptio, Inc. Activity based resource allocation modeling
US10324951B1 (en) 2017-12-29 2019-06-18 Apptio, Inc. Tracking and viewing model changes based on time
US10268980B1 (en) 2017-12-29 2019-04-23 Apptio, Inc. Report generation based on user responsibility
US11775552B2 (en) 2017-12-29 2023-10-03 Apptio, Inc. Binding annotations to data objects
US11327968B2 (en) * 2020-04-02 2022-05-10 Sap Se Optimizing output data formats to improve query performance in database systems
US20230042738A1 (en) * 2021-08-04 2023-02-09 Cysiv, Inc. Database System with Run-Time Query Mode Selection
US11755579B2 (en) * 2021-08-04 2023-09-12 Cysiv, Inc. Database system with run-time query mode selection

Similar Documents

Publication Publication Date Title
US6330552B1 (en) Database query cost model optimizer
US6263345B1 (en) Histogram synthesis modeler for a database query optimizer
US7330848B2 (en) Method and apparatus for generating statistics on query expressions for optimization
US6625593B1 (en) Parallel query optimization strategies for replicated and partitioned tables
Graefe et al. Dynamic query evaluation plans
US7184998B2 (en) System and methodology for generating bushy trees using a left-deep tree join enumeration algorithm
US8612368B2 (en) Systems and methods for processing machine learning algorithms in a MapReduce environment
US7010516B2 (en) Method and system for rowcount estimation with multi-column statistics and histograms
US7136850B2 (en) Self tuning database retrieval optimization using regression functions
US5548755A (en) System for optimizing correlated SQL queries in a relational database using magic decorrelation
US5864840A (en) Evaluation of existential and universal subquery in a relational database management system for increased efficiency
US6665664B2 (en) Prime implicates and query optimization in relational databases
US20030172059A1 (en) Database system providing methodology for eager and opportunistic property enforcement
US8380699B2 (en) System and method for optimizing queries
Trummer et al. Solving the join ordering problem via mixed integer linear programming
US20110082856A1 (en) System and method for optimizing queries
Theodoratos et al. Data Currency Quality Factors in Data Warehouse Design.
Baldacci et al. A cost model for SPARK SQL
US6598044B1 (en) Method for choosing optimal query execution plan for multiple defined equivalent query expressions
Jakobson et al. CALIDA: a system for integrated retrieval from multiple heterogeneous databases
Claussen et al. Optimization and evaluation of disjunctive queries
Dong et al. A framework for optimizing distributed workflow executions
Valduriez et al. Query optimization for database programming languages
Raza et al. Autonomic view of query optimizers in database management systems
Henderson et al. BLUTune: Tuning up IBM Db2 with ML

Legal Events

Date Code Title Description
AS Assignment

Owner name: TANDEM COMPUTERS, INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FARRAR, CHRISTOPHER M.;LESLIE, HARRY A.;CELIS, PEDRO;AND OTHERS;REEL/FRAME:009778/0460;SIGNING DATES FROM 19990111 TO 19990211

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: COMPAQ COMPUTER CORPORATION, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TANDEM COMPUTERS INC.;REEL/FRAME:018207/0889

Effective date: 19981231

AS Assignment

Owner name: COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:COMPAQ COMPUTER CORPORATION;REEL/FRAME:018420/0216

Effective date: 20010620

AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: CHANGE OF NAME;ASSIGNOR:COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P.;REEL/FRAME:018454/0084

Effective date: 20021001

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001

Effective date: 20151027