WO1994020903A1 - Transaction queue management - Google Patents

Transaction queue management

Info

Publication number
WO1994020903A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
transaction
node unit
data
tqm
Prior art date
Application number
PCT/SE1994/000172
Other languages
French (fr)
Inventor
Sven Nauckhoff
Original Assignee
Sven Nauckhoff
Priority date
Filing date
Publication date
Application filed by Sven Nauckhoff filed Critical Sven Nauckhoff
Publication of WO1994020903A1 publication Critical patent/WO1994020903A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/466Transaction processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management

Definitions

  • This invention concerns a method in accordance with the preamble of claim 1. This invention also concerns a system for performing the method.
  • a distributed computer system comprises a number of computer node units, in short also called node units, connected in a data communications network.
  • Each node unit comprises a computer, an operating system, an application program and a data communications device.
  • the computer comprises a central processing unit, a data storage unit, a data input unit, e.g. a keyboard or a disk drive, and a data output unit, e.g. a display device.
  • the application program may e.g. be a word processing program or a local database program comprising a database management system.
  • the data communications device comprises an input/output port to the data communications network, and hardware and software for communicating data from a first node unit to a second node unit via the data communications network.
  • transaction oriented applications, i.e. applications that support transaction processing
  • a distributed computer system with different application programs in different computer node units connected by a data communications network.
  • for definitions of "transaction" and "transaction processing", and other terms and expressions used in the present text, see appendix A.
  • since the distributed computer system described above constitutes a complex database, the technical problems that arise are the ones met in the context of distributed databases, and a database will be used in the present application as an example of a transaction oriented application program.
  • One of the technical problems is to synchronize the usage of information that is common to different transactions. Another problem is to fulfil the requirement that distributed transactions are to have concurrent access to shared data residing in different node units. Other interrelated technical problems are to reduce and control redundancy, to avoid inconsistency and to maintain integrity of the data in the distributed computer system. Yet another technical problem is to manage safe recovery in case of a system failure.
  • a distributed transaction is, according to prior art, processed concurrently occupying a first node unit and at least one second node unit, and intermediate results are communicated between said node units over the network.
  • a transaction is controlled, i.e. initiated, managed and terminated, by the database management system in the first node unit.
  • updates are made in both the first node unit and in the second node unit.
  • at least two messages are sent between the controlling first node unit and the second node unit.
  • a first message is an order from the first node unit to the second node unit to make its updates permanent, and a second message is a confirmation from the second node unit to the first node unit that the updates now have been made permanent.
  • the transaction is committed by the first node unit.
  • in connection with this commitment there is an uncertainty, due to possible unreliability in the data communications network, about whether the updates are carried out and made permanent or not. For example, a system failure may occur when said confirmation message is sent and make it impossible to commit the transaction despite the fact that the updates in the second node unit have been made permanent.
  • This kind of uncertainty concerning the unreliability of transferring messages in connection with the step of committing a transaction makes it difficult to maintain integrity in a distributed database.
  • the main object of the invention is to provide a method for synchronization of usage of data shared or needed by different transactions executed in the same node or in different nodes in a distributed computer system.
  • a further object is to provide a method for controlling redundancy in a distributed computer system.
  • Another object is to provide a method for performing distributed transactions in order to avoid inconsistency and to maintain integrity in a distributed computer system.
  • Another object of the invention is to provide a method that supports safe recovery of data in case of a system failure in a distributed computer system.
  • Another object is to control the computer of a node unit to start and execute a certain transaction at a certain point in time.
  • Yet another object of the invention is to provide a method for communication of application data and control data between applications in a distributed computer system.
  • Another object of the invention is to provide a method for reliable transfer of data between geographically separate node units in a distributed computer system.
  • Still another object of the invention is to provide a method for the management of transactions in a distributed data processing system utilized for work flow management in an organisation, e.g. a commercial enterprise.
  • Another object of the invention is to provide a system for execution of the method according to the invention.
  • Fig. 1 shows an illustration of a distributed computer system comprising a plurality of node units connected by means of a data communications network
  • Fig. 2 shows a schematic specification of a transaction T
  • Fig. 3a shows a schematic specification of a transaction T1
  • Fig. 3b shows a schematic specification of a transaction T2
  • Fig. 4 shows a flow chart describing an embodiment of the inventive method
  • Fig. 5a shows a schematic overview of a first embodiment of a queue item
  • Fig. 5b shows a schematic overview of a second embodiment of a queue item
  • Fig. 5c shows a schematic picture of transaction queues in a node unit
  • FIG. 6 shows an illustration of an application of the method according to the invention involving two node units
  • Fig 7 shows a flow chart describing an embodiment of the invention according to Fig 6 and the flow chart of Fig 4.
  • Fig 8 shows schematically two node units connected by a communications network
  • Fig 9 schematically shows components of an embodiment of the invention applied in two node units.
  • Appendix A is a list of definitions of terms used in the present application.
  • Appendix B is a table showing the characteristics of different TQM message types
  • Appendix C is a table showing a protocol for transitions of communication elements resulting from TQM messages and control signals
  • Appendix D is a table showing different cases of process element transitions.
  • a distributed data management system is very useful to an organisation with different members executing specific tasks and managing data of more or less general interest to the organisation as a whole.
  • Such a system could be implemented as a distributed database system according to prior art.
  • see prior art and the definition given in e.g. C.J. Date, An Introduction to Database Systems.
  • a close analysis of the activity in many organisations, such as major enterprises, has led the inventor to the conclusion that the needed system in fact may be regarded as a special case of a distributed database.
  • distributed transactions do not need concurrent access to all data in the database.
  • Fig. 1 shows a distributed computer system 18 comprising geographically separate node units 20, 22, 24 connected by a data communications network 26, e.g. a local area network, LAN, or a wide area network, WAN.
  • Each node comprises a data processing unit, such as a workstation, a PC, a server or any other computer capable of managing, storing and processing data.
  • two nodes 22 and 24 comprise workstations supporting interaction between user and machine.
  • Each node unit 20, 22, 24 is provided with a data managing system such as an operating system or an application platform, e.g. DOS, OS/2 or Windows, running an application program, e.g. a local database management system.
  • a node unit 20 may also comprise a mainframe database system or any other independently working subsystem.
  • This embodiment of the invention will be described by means of an example in which each node unit in the distributed system comprises a local database management system, a DBMS. It is, however, within the scope of the invention to have different data managing or data processing systems in different node units.
  • the database management system in each node unit is responsible for the control of redundancy, consistency and integrity in that particular node unit as well as for local recovery management in case of a system failure.
  • a node 20, 22, 24 and the scope of a local database management system is confined to the closely connected hardware units that communicate without using the network 26.
  • a node unit may itself comprise several transaction environments managed by different program systems residing in the same node unit, e.g. a database management system and a word processing system.
  • a transaction T is initiated in a first node unit 22, whereby said transaction T involves processing of information stored in the form of digital data both in the first node unit 22 and in a second node unit 24.
  • by processing information it is understood that digital data is detected, erased, replaced, modified, added or handled in any other way.
  • Fig 2 shows schematically a specification of the transaction T comprising a number of operations op1,...,op6.
  • Operations op1, op2 and op3 are symbolically described as a1, a2 and a3, respectively, whereby the "a" means that the operation processes data stored in the first node unit 22.
  • operations op4, op5 and op6 are described as b1, b2 and b3, respectively, whereby the "b" means that the operation processes data stored in the second node unit 24.
  • the transaction T comprises two sets of operations a1,a2,a3 and b1,b2,b3, each set involving the node units 22 and 24, respectively.
  • the operations op1-op6 of transaction T are executed concurrently occupying both the first node unit 22 and the second node unit 24, entailing uncertainty in the commit operation.
  • the transaction T is divided into at least two transactions, namely a first transaction T1 comprising the set of operations a1,a2,a3 and a second transaction T2 comprising the set of operations b1,b2,b3.
  • Fig 3a shows a specification of transaction T1
  • Fig 3b shows a specification of transaction T2 resulting from the division of transaction T.
  • the transactions T1 and T2 are executed in a sequence, whereby transaction T1 is executed solely in node unit 22 and transaction T2 solely in node unit 24.
  • a transaction is executed in the node unit in which the digital data current for the specific transaction is stored, within which node unit a database management system is handling data in a single database.
  • the single database management system in the node units handles the integrity and recovery problems with the aid of methods according to prior art, e.g. COMMIT and ROLLBACK, while the distributed information is synchronized and transactions are distributed according to the inventive method. If application data stored in the first node unit 22 also is needed by transaction T2 in node unit 24, that particular data is transferred to said node unit 24 via the data communications network.
  • Fig 4 shows a flow chart that schematically describes the steps of the above mentioned embodiment of the inventive method.
  • in step 30 the transaction T is initiated in a first node unit 22.
  • Said transaction T is in step 32 then divided into two transactions T1 and T2, whereupon the transaction T1 is initiated and executed in step 34.
  • the steps 30-34 are carried out in the first node unit 22 as one transaction, i.e. either all steps are carried out or none of them.
  • control data and possibly application data is transferred from the first node unit 22 to the second node unit 24 via the data communication means 26.
  • in step 38 the transaction T2 is initiated and executed in the second node unit 24.
  • Transaction T is initiated in first node unit.
  • Transaction T1 is initiated and executed in the first node unit. Data and the specification of the transaction sequence are transferred to the second node unit.
  • Transaction T2 is initiated in second node unit, and so on.
  • the first operation is executed.
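The division of transaction T into one local transaction per node unit, as described above for Figs. 2-3b, can be sketched as follows. This is an illustrative sketch only; the class and function names are assumptions, not the patent's implementation.

```python
# Sketch: dividing transaction T into local transactions T1 and T2,
# one per node unit, to be executed in sequence (Figs. 2, 3a, 3b).
from dataclasses import dataclass, field


@dataclass
class Operation:
    name: str   # e.g. "a1" or "b1"
    node: str   # node unit whose data the operation processes


@dataclass
class Transaction:
    node: str
    operations: list = field(default_factory=list)


def divide(operations):
    """Split a transaction's operations into one local transaction per node."""
    parts = {}
    for op in operations:
        parts.setdefault(op.node, Transaction(node=op.node)).operations.append(op)
    return list(parts.values())


# Transaction T of Fig. 2: a1..a3 touch node unit 22, b1..b3 touch node unit 24.
T_ops = [Operation(f"a{i}", "node22") for i in (1, 2, 3)] + \
        [Operation(f"b{i}", "node24") for i in (1, 2, 3)]
T1, T2 = divide(T_ops)
```

T1 then executes solely in node unit 22 and T2 solely in node unit 24, matching the sequential execution described in the flow chart of Fig. 4.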
  • control data and application data is transferred between at least two node units by means of a queue item and a transaction queue.
  • a first embodiment of a queue item 40, symbolically depicted in Fig 5a, comprises firstly digital control data 42 that will cause a transaction to be executed and secondly associated application data 44 to be transferred from a first node unit to a second node unit.
  • a second embodiment of a queue item 41, shown in Fig 5b, comprises control data 46 that will cause a transaction to be executed and that will cause associated application data 48 to be transferred from a first node unit to a second node unit.
  • the application data 48 is stored in a file or some other storage structure and is associated to the queue item by means of a link 50, e.g. a data pointer.
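The two queue-item embodiments of Figs. 5a and 5b can be sketched as simple data structures. The field names below are illustrative assumptions; the patent only specifies the control data, the application data, and the link 50.

```python
# Sketch of the two queue-item embodiments (field names are assumptions).
from dataclasses import dataclass


@dataclass
class QueueItemInline:           # Fig. 5a: application data carried inline
    control_data: dict           # causes the destination transaction to run
    application_data: bytes      # transferred together with the control data


@dataclass
class QueueItemLinked:           # Fig. 5b: application data stored elsewhere
    control_data: dict
    application_data_link: str   # e.g. a file path or data pointer (link 50)


item_a = QueueItemInline({"transaction": "T2"}, b"payload")
item_b = QueueItemLinked({"transaction": "T2"}, "/data/items/54.dat")
```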
  • a queue item 40,41 is compiled and put on a transaction queue at the end of a first transaction executed in a first node unit.
  • a queue item is stored in a local database managed by a local database management system.
  • for each queue item, data about its transport state is stored and referred to as the data state of said queue item.
  • the data state comprises data e.g. about the time a queue item is created, sent and received. The data state of a queue item will be explained later in the text in conjunction with a detailed description of an embodiment of the invention.
  • a number of unordered transaction queues are established as communication channels between originating node units and destination node units in a distributed system.
  • a transaction queue is unordered since the order of successful commit of the queue items in the transaction queue is not predictable due to transmission delays and local database manager locking.
  • a first transaction T1 executed in a first node unit 22, which transaction T1 creates and sends a queue item to a second node unit 24, will also be called an originating transaction executed in an originating node unit.
  • a second transaction T2 executed in a second node unit 24 will also be called a destination transaction executed in a destination node unit.
  • a destination transaction may also be an originating transaction for subsequent transactions. It is within the scope of the invention that the originating node unit and the destination node unit may be the same node unit. For example, two different application programs residing in the same node unit may communicate via a transaction queue.
  • a transaction queue uses the present data communications network for the physical transport of a queue item. Since each node unit has a separate database management system and means for data communication, the reliability of the communication depends on the underlying support for integrity, consistency and recoverability of data.
  • a transaction queue is defined by the two node units it connects, and by the path used to communicate data between them.
  • One transaction queue is established for each direction of communication between the two node units.
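The definition above, a queue identified by the two node units it connects and the path between them, with one queue per direction, can be sketched minimally. The tuple layout is an assumption for illustration only.

```python
# Sketch: a transaction queue is defined by the node units it connects and
# the communication path; one queue exists per direction (an assumption
# about representation, not the patented data structure).
def queue_id(origin, destination, path):
    return (origin, destination, path)


q_forward = queue_id("node22", "node24", "network26")
q_reverse = queue_id("node24", "node22", "network26")
assert q_forward != q_reverse   # a separate queue for each direction
```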
  • when a queue item is injected into a transaction queue in an originating node unit it is stored in a first queue database.
  • the queue database will be explained later in the text in conjunction with a detailed description of an embodiment. Queue items thus injected in the transaction queue are marked with their time of injection and are queuing to be sent to the destination.
  • the above mentioned queue item data state comprises information about the time at which a queue item is to be transferred to a destination node.
  • a queue item is selected from a transaction queue in the originating node unit at a time depending on the data state and is then transferred to the destination node unit via the data communications network.
  • said queue item is stored in a second queue database, in which queue items are queuing to be processed in the destination node unit.
  • the queue item data state also comprises information about the time when a queue item is to be processed in the destination node unit.
  • the queue database may contain queue items belonging to different transaction queues.
  • the queue item to be processed is selected among the queue items in different transaction queues and is chosen on the basis of said queue item data state.
  • In each transaction queue there is one queue item that is the next one to be processed in that particular transaction queue.
  • the queue item selected for processing is then the first of the first queue items of the different transaction queues.
  • Fig. 5c shows an illustration of three transaction queues t-queue 1, t-queue 2, t-queue 3 in a node unit.
  • Each transaction queue contains four queue items I11, I12, I13, I14; I21, I22, I23, I24; I31, I32, I33, I34, respectively.
  • the time axis represents the points in time in which a queue item is to be processed.
  • the second queue item I12 in transaction queue 1 is to be processed at the time t3.
  • the order within a transaction queue may be based on the time each queue item was created, the time it arrived in the destination node unit, or on a definite prescription of the time a certain queue item shall be processed. The last may be the case if a certain transaction has to be initiated at a certain point in time. In the example of Fig. 5c the queue order between the transaction queues is graphically depicted. It is clear that queue item I11 of t-queue 1 is picked from the transaction queue at the time t1, the queue item I21 of t-queue 2 at the time t2, the queue item I12 of t-queue 1 at the time t3 and so on.
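The selection rule of Fig. 5c, among the first items of the different transaction queues, always pick the one with the earliest processing time, can be sketched with a heap. Item names and times follow the figure; the code itself is an illustration, not the patented implementation.

```python
# Sketch of the Fig. 5c selection rule: merge several transaction queues
# by the processing time attached to each queue item's data state.
import heapq

queues = {
    "t-queue 1": [("I11", 1), ("I12", 3), ("I13", 5), ("I14", 7)],
    "t-queue 2": [("I21", 2), ("I22", 4), ("I23", 6), ("I24", 8)],
}


def drain_in_order(queues):
    """Return queue items across all queues in processing-time order."""
    heap = [(items[0][1], name, 0) for name, items in queues.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        t, name, idx = heapq.heappop(heap)       # earliest first item
        order.append(queues[name][idx][0])
        if idx + 1 < len(queues[name]):          # expose the next item
            heapq.heappush(heap, (queues[name][idx + 1][1], name, idx + 1))
    return order


# I11 at t1, then I21 at t2, then I12 at t3, and so on.
assert drain_in_order(queues)[:3] == ["I11", "I21", "I12"]
```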
  • a transaction queue is a unidirectional (one-way) connection between two node units. For example, addressing of node units or indication of paths lies outside the inventive method. It is a requirement that the underlying communication method used by the inventive method in a given node unit is capable of transferring a message to another given node unit with some degree of predictability.
  • the transaction queues are sequential at both ends, i.e. an originating transaction injects queue items in a sequential fashion.
  • the fact of actual injection and actual subsequent processing is defined as a result of successful commit of the involved transactions. Therefore, factors such as database manager locking, transport delays etc. will introduce uncertainty as to the sequence of transaction commit events as viewed from the outside of the involved transactions. This characteristic is typical for prior art database transaction management systems.
  • Fig 6 shows schematically an example of an application of the invention comprising a first transaction T1 executed in a first node unit 22, see also Fig 1, and a second transaction T2 executed in a second node unit 24.
  • the first transaction T1 can communicate a queue item 54 from the first node unit 22 to the second node unit 24 by an intermediate transaction queue 52.
  • Fig 7 is a flow chart that in more detail shows steps of the application according to Fig 6, which steps are comprised in one embodiment of the steps 34-38 of Fig 4.
  • in step 58 the first transaction T1 is initiated in a first node unit 22 by e.g. a user or by a preceding transaction. Operations of transaction T1 processing application data in the first node unit 22 are then executed in step 60.
  • a queue item 54 is compiled in step 62 and is then put on the transaction queue 52 in step 64.
  • the first transaction T1 is committed in step 66. Note that the compilation of the queue item 54 (Fig. 6) and the injection of said queue item 54 into the transaction queue are carried out within the transaction T1. Thus the queue item 54 will not exist unless the transaction T1 has been successfully completed and committed.
  • in step 68 the transfer of the queue item 54 is initiated in the first node unit 22, and then said queue item 54 is transferred by the transaction queue 52 in step 70.
  • in step 72 the queue item 54 is received in the second node unit 24 and the thus successful transfer is committed in step 74.
  • a subsequent transaction T2 is thus not initiated and executed until a complete transfer of the queue item 54 has been fulfilled.
  • Processing of the queue item 54 in the second node unit 24 is initiated in step 76, either immediately or at a predetermined moment in accordance with an internal system clock, and said processing is carried out in step 78.
  • in step 80 the second transaction T2 is initiated by the control data comprised in the queue item 54, and in step 82 operations of transaction T2 processing application data in the second node unit 24 are executed. Finally the second transaction T2 is committed in step 84.
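The key property of steps 58-66 above, that the queue item is compiled and injected inside transaction T1 and therefore never exists unless T1 commits, can be sketched as follows. The class and function names are illustrative assumptions.

```python
# Sketch: the queue item is created *inside* the T1 transaction boundary
# (steps 58-66 of Fig. 7); if T1 aborts, the item never becomes visible.
class Node:
    def __init__(self):
        self.committed = []   # durable application updates
        self.queue = []       # transaction queue (item visible only on commit)


def run_t1(node, make_item, fail=False):
    staged = []                      # work done inside the T1 boundary
    staged.append("T1 operations")   # steps 58-60: process application data
    staged.append(make_item())       # steps 62-64: compile and enqueue item
    if fail:
        return                       # T1 rolls back: nothing becomes visible
    node.committed.extend(staged[:-1])   # step 66: commit T1 ...
    node.queue.append(staged[-1])        # ... including the queue item


node22 = Node()
run_t1(node22, lambda: {"item": 54}, fail=True)
assert node22.queue == []            # item does not exist without commit
run_t1(node22, lambda: {"item": 54})
assert node22.queue == [{"item": 54}]
```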
  • Transactions are executed in a sequence with a time interval between the completion of a first transaction T1 and the initiation of a second transaction T2. During this time interval, changes in a first database made by the first transaction T1 are visible to a third transaction T3 following queue items being transferred from other nodes in the network.
  • a queue item may be injected in a queue by the first transaction T1 executing in a first node unit N1, said queue item being destined to be subsequently executed by the second transaction T2 in a second node unit N2.
  • a third transaction T3 may inject a queue item that represents a future fourth transaction T4 to be executed in the second node unit N2. In node unit N2, due to transport delays and locking, the fourth transaction T4 may actually be executed and committed before the second transaction T2.
  • the nature of the transactions T2 and T4 is therefore such that they act in an additive fashion as regards common information.
  • the inventive method may be executed by means of a general computer in conjunction with especially designed hardware installed in a node unit or software executed in said node unit.
  • the invention may be implemented as a transaction queue management unit, henceforth also called a TQM unit, which comprises a set of functional components here implemented as computer programs.
  • Each node unit in a distributed computer system according to the invention may comprise a TQM unit.
  • Fig 8 shows schematically a first node unit 86 and a second node unit 88 connected by means of a data communications network 26.
  • the first node unit 86 comprises a first application program 90, e.g. a database management system, and a first TQM unit 94.
  • the first TQM unit in its turn comprises a first application program interface 98.
  • the second node unit 88 comprises a second application program 92 and a second TQM unit 96, said second TQM 96 unit in its turn comprises a second application program interface 100.
  • the first TQM unit 94 is arranged as an interface between the application program 90 and the communications network 26.
  • the design of the application program interface 98, through which the first application program 90 communicates with the first TQM unit 94, is dependent on said application program 90.
  • the configuration of the second node unit 88 is similar to that of the first node unit 86.
  • the first application program 90 communicates with the second application program 92 through the first TQM unit 94, via the communications network and through the second TQM unit 96. This way transactions can be distributed among different application programs, e.g. with a Paradox® database in the first node unit 86 and a Focus® database in the second node unit 88.
  • define transaction queues, which includes specification of: a. system components and capabilities in the local node units that communicate via a certain transaction queue, b. the means of communication between a first, destination node unit and a second, originating node unit, and of the associated parameters, c. the method to initiate a destination transaction in a node unit and of the method to compile corresponding digital control data needed to initiate said destination transaction.
  • Allow an originating transaction to: a. specify one or more destination transactions to be performed after the originating transaction terminates, by compiling control information, b. specify associated application data to be passed to a destination transaction, c. store the above mentioned control data and application data as part of the successfully committed originating transaction.
  • the TQM unit comprises a number of functional program modules, of which some are designed as callable functions residing in a program library. Others are formed as database transaction programs which are initiated in local database systems executing in different nodes in the distributed computer system.
  • the TQM unit in each node unit also uses the database facilities in the node unit to store information on transaction queues and their component items, so that associated database objects are defined and available in each node unit.
  • a TQM unit according to one embodiment of the invention comprises nine main components, preferably in the form of separate modules.
  • Fig 9 schematically depicts a first node unit 102 and a second node unit 104 connected by means of a data communications network 26, whereby said node units 102 and 104 comprise a first TQM unit 106 and a second TQM unit 108, respectively.
  • the two node units 102 and 104 each comprise components of the same kind referred to as e.g. the first originating application program OAP1 and the second originating application program OAP2, see further explanation below.
  • when referring to the component in general, the reference will be used without the tag number 1 or 2, e.g. OAP, and likewise for other components.
  • the following components may be comprised in a TQM unit:
  • An originating application program OAP1, OAP2, which contains application dependent code to create queue items.
  • the OAP is an interface between the application program and other TQM unit components, see also reference 98, 100 in Fig 8.
  • a destination application program DAP1, DAP2, which contains application dependent code to receive queue items.
  • the DAP is an interface between the application program and other TQM unit components, see also reference 98, 100 in Fig 8.
  • a queue programming interface QPI1, QPI2 through which the OAP and DAP issue requests to a transaction queue manager TQM, see below, in order to put a queue item in a transaction queue or to retrieve a queue item from a transaction queue.
  • the QPI contains logic to define addresses to the TQM and to pass parameters to said TQM.
  • the component QPI supports, among other functions, the functions PUT_ITEM, that injects a queue item into a transaction queue, and GET_ITEM, that retrieves a queue item from a transaction queue. Functions supported by QPI will be described in more detail later in the text.
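The PUT_ITEM and GET_ITEM functions named above can be sketched as a minimal in-memory stand-in for the queue database. The signatures and queue naming are assumptions for illustration; the patent does not specify them.

```python
# Sketch of the queue programming interface (QPI) functions PUT_ITEM and
# GET_ITEM, backed here by an in-memory dict instead of a queue database.
from collections import deque


class QueueProgrammingInterface:
    def __init__(self):
        self._queues = {}   # queue name -> deque of queue items

    def put_item(self, queue_name, item):
        """Inject a queue item into the named transaction queue."""
        self._queues.setdefault(queue_name, deque()).append(item)

    def get_item(self, queue_name):
        """Retrieve the next queue item, or None if the queue is empty."""
        q = self._queues.get(queue_name)
        return q.popleft() if q else None


qpi = QueueProgrammingInterface()
qpi.put_item("node86->node88", {"transaction": "T2"})
assert qpi.get_item("node86->node88") == {"transaction": "T2"}
assert qpi.get_item("node86->node88") is None
```

In the described embodiment these calls would execute inside the originating and destination transactions, so that an injected item only becomes visible on successful commit.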
  • a transaction queue manager TQM1, TQM2 that ensures that queue items are communicated correctly between an originating node unit and a destination node unit. Note that the first node unit 102 and the second node unit 104 by turns may appear as originating node unit and destination node unit.
  • the TQM further maintains data state variables for the queue items.
  • the data state variables describe the status of a queue item, such as if it is current or not. The data state variables will be further described later in the text. Parameters are passed to the transaction queue manager TQM through logical rules incorporated in the above mentioned queue programming interface QPI.
  • a deferred process initiator DPI1, DPI2 initiates at defined time intervals either the TQM or the DAP to perform processing based on the contents of the queue database QDB, see below.
  • a queue database QDB1, QDB2 in which queue specifications, queue items and their data state variables are stored.
  • a timed initiator TI automatically initiates transactions based on elapsed time intervals or at a defined time of day.
  • the TQM unit relies on each node unit to provide a computer system clock.
  • An administrative interface handler AIH1, AIH2 provides functions that constitute a queue administration interface through which a person or a program may issue requests to the transaction queue manager TQM in order to administrate said TQM. Administration comprises the designing, defining and setting up of transaction queues and destination transactions, i.e. the establishment of the infrastructure of the transaction queue management environment.
  • An operations interface handler OIH1, OIH2 provides functions that constitute a queue operations interface, through which a person or a program may issue requests to the Transaction Queue Manager TQM in order to manage the operations of said TQM. Operation comprises monitoring and maintaining the functionality and performance of the transaction queue management arrangement.
  • TQM Messages: the components of the TQM units communicate with each other through program calls, TQM Messages and TQM Triggers.
  • a queue item is sent between two node units as a TQM Message.
  • TQM Messages and TQM Triggers will be explained further below.
  • the control data flow of a TQM unit 106, 108 within a node unit 102, 104 and between two node units 102 and 104 will now be explained by means of an example describing a transfer of a queue item between the two node units 102 and 104.
  • In Fig. 9, ">" means a program call.
  • a first transaction T1 (not shown) is executed in the first node unit 102 under the control of the first originating application program OAP1; a first queue item 54 (not shown) is compiled and the function PUT_ITEM is called through the first queue programming interface QPI1 by program call 110 as a part of said first transaction T1.
  • the PUT_ITEM call 110 is then passed to the first transaction queue manager TQM1 through a TQM Message 112 created by the first queue programming interface QPI1, the TQM Message comprising the queue item 54.
  • the TQM Message 112 is then processed by TQM1, the processing comprising a unique identification item being added to or coupled to the queue item 54.
  • this identification item is constructed using date and time with a predetermined resolution, but other identification codes are possible within the scope of the invention.
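The construction of such an identification item can be sketched as follows. The patent does not specify the exact format or resolution, so the timestamp layout (microseconds) and the tie-breaking sequence suffix below are assumptions:

```python
from datetime import datetime, timezone
from itertools import count

_seq = count()  # process-local tie-breaker; an assumption, not part of the patent

def make_item_id(now=None):
    # Build a queue-item identifier from date and time with a predetermined
    # resolution (microseconds assumed here).  The appended sequence number
    # keeps two items created within the same clock tick distinct, and the
    # fixed-width layout keeps the identifiers lexicographically ordered.
    now = now or datetime.now(timezone.utc)
    return f"{now.strftime('%Y%m%dT%H%M%S%f')}-{next(_seq):04d}"
```

Because later items always receive a later timestamp or a higher sequence number, successive identifiers sort in creation order.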
  • An entry concerning the TQM Message 112 is then added to the first queue database QDB1 through program call 113, and a TQM Message 114 including the queue item 54 is sent to the second transaction queue manager TQM2 in the second node unit 104 via the data communications network 26.
  • a queue item may be transported in any fashion, e.g. via a telecommunication network or through interchange of floppy disks carried by couriers.
  • a transaction queue management transaction is started in said second TQM unit 108.
  • the TQM Message 114 is processed by the second transaction queue manager TQM2 and, since the TQM Message 114 in this case contains a new queue item 54, said queue item 54 is added to the second queue database QDB2 through a program call 116. If the changes in QDB2 are successfully committed, which is indicated by a return parameter from the program called by program call 116, a TQM Message 117 is sent back from the TQM2 to the TQM1 to acknowledge receipt of the TQM Message 114 and the first queue item 54.
  • the transaction T2 is then started in the second destination application program DAP2 by the TQM2 through a TQM Trigger 118. Otherwise said transaction is subsequently started by the second timed initiator TI2. At intervals predefined to the TI2, the second deferred process initiator DPI2 is started by said TI2 through a TQM Trigger 120.
  • the queue database QDB2 is searched by the DPI2 through a program call 122 and, depending on the queue specifications and the current status of the queue items stored in the QDB2, either the DAP2 or the TQM2 is initiated to execute the required processing through TQM Trigger 124 or 126, respectively.
  • initiated by TQM Trigger 118 or 124, the DAP2 then repeatedly calls the function GET_ITEM through program calls 128 to the second queue programming interface QPI2.
  • Each such program call 128 is passed by the QPI2 to the TQM2 through a program call 130, whereupon the TQM2 retrieves the queue item that is the next one to be processed, which in this case is the first queue item 54, from the QDB2 through program call 132 and marks the queue item 54 with a control item as having been processed by the DAP2.
  • a queue item is marked when it is accessed by the DAP (GET_ITEM) and is part of the common transaction.
  • the mark as well as the resulting application data is made visible to the outside components when the DAP transaction is committed.
  • when the queue database and the application data are stored by the same database manager and kept synchronized by the common local database transaction manager, the changes are part of a common transaction.
  • a TQM Message 132 is generated and sent to the TQM1 in the first node after the transaction has been committed. These cases are specified in the state transition diagram explained later in this text. If the transaction T2 specified by said first queue item 54 requires access to data residing in a third node unit, the transaction T2 processing the first queue item 54 comprises the specification of a second queue item that in its turn will cause a subsequent transaction T3 (not shown) to be executed according to the above described method. When processing a TQM Message, the transaction queue manager TQM executes dependent on the current data state for the queue item to which said message refers. This control information is described below in conjunction with a description of TQM Message types.
  • the component QPI may support the functions PUT_ITEM, GET_ITEM, REINSTATE_ITEM, PEEK_ITEM and SKIP_ITEM, implemented as callable subroutines invocable from programs that are designed as database transactions.
  • Each function returns an output parameter that contains information on the execution of the function, and in case of rejection of the function call provides the application program with information on the cause.
  • the following input parameters are used in the function calls:
  • Queue - specifies the unique identity of a destination database system, whereby the name is e.g. a code or an in ⁇ direct reference;
  • Destination_transaction - specifies the unique identity of a destination transaction within the destination TQM unit that is to process the current queue item, whereby the name is e.g. a code or an indirect reference; Info - specifies application information that is to be communicated to the destination transaction and is e.g. stored in a data file;
  • Fixup_trans - is the name of a local transaction through which an originating transaction handles error conditions. A return parameter, returned by all five functions, contains information on the successful execution of the function and, in the case of unsuccessful execution, provides the application program with information on the cause.
  • each thus injected queue item becomes part of the specified transaction queue.
  • the function GET_ITEM with the input parameters Queue, Destination_transaction and Info is used by a destination transaction to retrieve a current queue item from the transaction queue. Following the successful retrieval of the current queue item, it is marked as processed and the queue position is changed in such a way that the queue item that follows the retrieved and processed item becomes the new current item.
  • when a transaction that has issued at least one GET_ITEM call is committed, the queue item thus retrieved and processed is removed from the transaction queue.
  • the function REINSTATE_ITEM using no input parameter is used by a destination transaction to indicate that the processing of the queue item retrieved from the transaction queue by the previous GET_ITEM call is not carried out and that the item is to be reinstated and marked as unprocessed. Following a successful reinstatement of the previously retrieved but unprocessed queue item the queue position is changed in such a way that said queue item again is the new current one.
  • the function PEEK_ITEM with the input parameters Queue, Destination_transaction and Info is used by a destination transaction to retrieve information on the current queue item in a transaction queue.
  • the PEEK_ITEM function merely reads information and does not change the processing state of the current queue item or change its current queue position.
  • the function SKIP_ITEM with the input parameter Queue is used by a destination transaction to move the queue position of the current queue item to the next position.
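The five QPI functions described above can be sketched as a small in-memory interface. The function names follow the text; the storage representation, the return conventions and the single-queue bookkeeping are assumptions (in the patent, queue items live in the queue database QDB and the calls are coupled to database transactions):

```python
from collections import defaultdict

class QueueProgrammingInterface:
    """Hedged sketch of the QPI functions PUT_ITEM, GET_ITEM,
    REINSTATE_ITEM, PEEK_ITEM and SKIP_ITEM described in the text."""

    def __init__(self):
        self._queues = defaultdict(list)   # queue name -> list of item records
        self._pos = defaultdict(int)       # queue name -> current queue position
        self._last_get = {}                # queue name -> position of last GET_ITEM

    def put_item(self, queue, destination_transaction, info, fixup_trans=None):
        # Inject a queue item; it becomes part of the specified transaction queue.
        self._queues[queue].append({"dest": destination_transaction,
                                    "info": info, "fixup": fixup_trans,
                                    "processed": False})
        return "OK"

    def get_item(self, queue):
        # Retrieve the current item, mark it as processed, advance the position.
        pos, items = self._pos[queue], self._queues[queue]
        if pos >= len(items):
            return None                    # rejection: no current queue item
        items[pos]["processed"] = True
        self._last_get[queue] = pos
        self._pos[queue] = pos + 1
        return items[pos]["info"]

    def reinstate_item(self, queue):
        # Undo the previous GET_ITEM: mark unprocessed, restore the position.
        pos = self._last_get.pop(queue, None)
        if pos is None:
            return "REJECTED"
        self._queues[queue][pos]["processed"] = False
        self._pos[queue] = pos
        return "OK"

    def peek_item(self, queue):
        # Read the current item without changing its state or queue position.
        pos, items = self._pos[queue], self._queues[queue]
        return items[pos]["info"] if pos < len(items) else None

    def skip_item(self, queue):
        # Move the queue position to the next item without processing.
        if self._pos[queue] < len(self._queues[queue]):
            self._pos[queue] += 1
```

The removal of processed items at commit time, and the coupling to the surrounding database transaction, are deliberately left out of this sketch.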
  • the Queue Database QDB is used to store queue items and different types of information comprising:
  • the above mentioned data state values for a queue item are described below, in conjunction with a description of the TQM Message that uses and transfers the information given by the data state.
  • the queue database QDB is maintained by the transaction queue manager TQM but is also accessed by the deferred process initiator DPI and other TQM unit components.
  • the queue database QDB is utilized for the handling of control information used in different processes of the transaction queue management unit and for the handling of queue items, comprising the following types of information and purposes:
  • Information needed by the transaction queue manager TQM when application data is transferred to another node unit, e.g. information on available transport protocols, on whether data is transmitted separately or as a sequence of messages, and on the state of transmission of this data.
  • Activation of a queue item, i.e. as part of the processing of a PUT_ITEM call or when receiving a new queue item from another node, a new queue item is inserted into the queue database by the transaction queue manager TQM.
  • Monitoring of queue items, i.e. said queue items in the queue database QDB are monitored by the deferred process initiator DPI in an originating system.
  • Miscellaneous activities such as maintenance of queue specifications, error handling, performance monitoring and start and stop of queues.
  • a TQM Trigger is used to control the initiation of the processing of a queue item within the TQM unit of a node unit. It is generated by the transaction queue manager TQM or the deferred process initiator DPI and has the form of digital control signals or program calls that cause the operating components of a node to initiate processing of a given transaction directed to a queue item in a given transaction queue.
  • a TQM trigger is internal to a node unit (TQM unit) and may e.g. be the result of a timed event or a program call.
  • the initiation of a transaction at a TQM node may also be caused by the receipt of a TQM Message from another component in the same or in another node unit.
  • a TQM Message is used to control the initiation of the processing of a given queue item.
  • a TQM Message is generated by the queue programming interface QPI or the transaction queue manager TQM, whereby control items are compiled and transferred to other TQM unit components either within a node unit or to the TQM unit components of another node unit.
  • a TQM Message is itself processed in the transaction queue manager TQM of the receiving unit. When sent and received within the same TQM unit, said processing is executed within the scope of the current transaction, i.e. the processing initiated by the control item comprised in a TQM Message is not executed by the destination component until or unless the transaction that has originated the TQM Message has been committed. If the originating transaction fails to be committed successfully, the current TQM Message is discarded by the sending system.
  • a resend count element is provided in each component able to send and receive a message.
  • the content of the resend count element can be set to zero, which indicates that no resend is needed, and is increased by one for each needed resend request. If the content of the resend count element exceeds a preset value, referred to as the resend limit, a systematic error in the transfer mechanism exists. The resend count is kept in each queue item.
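A minimal sketch of this resend accounting, with the preset resend limit as an assumed parameter:

```python
def note_resend(item, resend_limit=5):
    # Increase the resend count kept in the queue item by one step and
    # report whether the resend limit has been exceeded, which the text
    # treats as a systematic error in the transfer mechanism.
    item["resend_count"] = item.get("resend_count", 0) + 1
    return item["resend_count"] > resend_limit
```

The value of the resend limit would in practice be part of the queue specification; `5` here is purely illustrative.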
  • a TQM Message comprises a message creation time item MCT which is defined by the generating system as the time at which the sending component or sending system created the message. If a message is received at a time that falls outside the message validity period MVP, the message will be ignored by the receiving system and the message is discarded without further processing.
  • the time is defined as the date and time of day TOD based on e.g. Greenwich Mean Time.
  • the rule for decision of message validity is then preferably expressed as:
  • the message validity period MVP for a given transaction queue is preferably dimensioned based on the maximum time it will take for a message to travel between the originating system and the destination system. If the MVP is too short there will be a high probability that valid messages will be unnecessarily discarded and, on the other hand, if the MVP is too long the queue items will be stored for an unnecessarily long time in the queue database. The time during which a queue item is stored in the queue database is referred to as the retention period. In order to compensate for minor errors in time measuring units, i.e. time differences between node units, a time value TDELTA is used in the preferred embodiment of the invention to increase the robustness of the inventive method.
  • the value of TDELTA is chosen to reflect the maximum time difference that reasonably can be expected to exist between two node units connected by a transaction queue.
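The extracted text omits the validity formula itself, so the inequality below is only a plausible reading of the rule: a message is accepted when the receiving system's time of day TOD falls inside the validity window opened at the message creation time MCT, with TDELTA widening the window to absorb clock differences between node units.

```python
def message_is_valid(mct, tod, mvp, tdelta):
    # Accept the message if it arrives no later than MVP after its creation
    # time MCT, allowing up to TDELTA of clock skew in either direction.
    # The exact inequality is an assumption, not quoted from the patent.
    return (mct - tdelta) <= tod <= (mct + mvp + tdelta)
```

All four values are here treated as numbers on a common time axis (e.g. seconds since an epoch); the patent expresses them as date and time of day.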
  • a TQM Message comprises items containing the following control information that is used to control the processing of a TQM Message in a transaction queue manager TQM:
  • this value is e.g. specified as a set of values, from which the receiving node may choose the most suitable.
  • message type 7, which is sent by a destination TQM unit to the originating TQM unit. The sender cannot know whether the retention time period has elapsed or not at the originating system, so it phrases this uncertainty by giving "D,E" as requested states. Note, however, that the "Receiver curr?" and "Receiver Req'd" states are for information only, i.e. not used as part of the TQM protocols.
  • Application Information, which contains or refers to the application data that is communicated from the originating transaction to the destination transaction.
  • in a first embodiment the size of the application dependent information is restricted to fit within one TQM Message.
  • in a second embodiment the application data is transported in a sequence of TQM Messages, and in a third embodiment said data is transported using existing transport means such as file transfer means.
  • the TQM data state is stored in the queue database QDB.
  • a queue item is defined as active for a given transaction queue if the identification of the queue item is registered in the QDB. Deactivation of an active queue item entails removal of the item from the QDB.
  • the TQM data state is composed of state elements, comprising a communication element and a process element.
  • the data state elements are each stored as a one character value in each active queue item in the QDB.
  • the state elements may also be given a default value for a given transaction queue.
  • the communication element is relevant for queue items both at the originating and destination sides of a transaction queue.
  • A. the queue item may be activated by the originating TQM unit, i.e. it may be accepted that an originating application program OAP issues a PUT_ITEM call to create a new queue item. This is the default value for the originating side of a transaction queue.
  • B. the queue item is active and has been created in the originating TQM unit but has not yet been acknowledged as received by the destination TQM unit. The originating TQM unit will continue sending messages about the queue item until it is acknowledged as received by the destination system or until the resend threshold is reached.
  • C. the queue item is active and has been created in the originating TQM unit and has been acknowledged as received by the destination TQM unit but has not yet been acknowledged as queuing by the destination TQM unit.
  • the originating system will continue sending messages about the queue item until acknowledged as queuing by the destination TQM unit or until the resend threshold is reached.
  • D. the queue item is active and has been created at the originating TQM unit and has been acknowledged as received by the destination TQM unit and has been acknowledged as queuing by the destination TQM unit but has not yet reached the end of its retention time period.
  • E. the queue item is active and has been created at the originating TQM unit and has been acknowledged as received by the destination TQM unit and has been acknowledged as queuing by the destination TQM unit and has reached the end of its retention time period. This item may be deactivated by the originating TQM unit at any time.
  • Z. the queue item may be activated by the destination TQM unit, i.e. a TQM Message from an originating TQM unit may be accepted. This is the default value for the destination side of a transaction queue.
  • Y. the queue item is active and has been created at the destination TQM unit but an acknowledgement response has not yet been received from the originating TQM unit.
  • X. the queue item is active and has been created at the destination TQM unit and an acknowledgement response has been received from the originating TQM unit but the item has not yet reached the end of its retention time period.
  • W. the queue item is active and has been created at the destination TQM unit and an acknowledgement response has been received from the originating TQM unit and the item has reached the end of its retention time period. If processed by the DAP, the TQM process element has the value "P" and the queue item may be deactivated by this destination system at any time.
  • the TQM process element is relevant only for the destination side of a transaction queue. All queue items at the originating side have the same value.
  • For a given queue item, defined by a unique queue item identification and queue name, the following values are defined for the TQM process element:
  • U. the queue item resides at the origination side of a transaction queue. This is also the default value for inactive entries at the origination side of a transaction queue.
  • N. the queue item is active and resides at the destination side of a transaction queue but has not yet been processed successfully by the destination application program DAP to which it is related.
  • P. the queue item is active and resides at the destination side of a transaction queue and has been processed successfully by the destination application program to which it is related. This is also the default value for inactive entries at the destination side of a transaction queue.
  • the table in Appendix B shows an example of message types 1-13 appearing during communication between a first, originating system, in the table labelled "Orig", and a second, destination system labelled "Dest".
  • System "Syst" and component "Comp" are specified under the headings "Sender" and "Receiver", respectively.
  • the components able to send messages in this embodiment are the transaction queue manager TQM, the queue programming interface QPI and the deferred process initiator DPI.
  • the state elements can assume the data state values A, B, C, D, E, Z, Y, X and W.
  • a message type 1 is characterized in that the sender is the component QPI in the originating system, the receiver is the component TQM in the originating system, the current state for sender value is A, the Assumed Current State for Receiver value is A and the Requested State value is B.
  • OK1 - is caused by the processing of a Type 1 message, cf. Appendix B, i.e. a successful execution of a PUT_ITEM call. Associated application information is stored and, depending on the specifications for the transaction queue, an output Type 2 message may be generated. The resend count is set to zero.
  • OK2 - is caused by the processing of a Type 3 message, i.e. a resend request initiated by the deferred process initiator DPI. An output Type 2 message is generated and the resend count is increased.
  • OK3 - is caused by the processing of a Type 4 message from a destination system acknowledging that a sent queue item has been successfully received.
  • the current Time of Day value TOD is stored as a value called TOK3, which thus represents the time when the OK3 transition took place.
  • An output Type 5 message containing TOK3 is generated and the resend count is set to zero.
  • OK4 - is caused by the processing of a Type 6 message, i.e. a resend is initiated by the deferred process initiator DPI or by the processing of a faulty Type 4 message from the destination TQM unit acknowledging that a sent item has been received.
  • An output Type 5 message containing TOK3 is generated and the resend count is increased.
  • OK5 - is caused by the processing of a Type 7 message from the destination TQM unit acknowledging that a transferred queue item is queuing.
  • the message contains the time value TOK10 from the destination system.
  • the time value TOK5 = MAX(TOD + TDELTA, TOK10) is calculated and is stored in the queue database QDB.
  • OK6 - is caused by the processing of a Type 8 message, i.e. a message initiated by the deferred process initiator DPI having found that the retention period of a queue item has expired.
  • OK7 - is caused by the processing of a Type 9 message issued when the current TQM unit has decided to deactivate a queue item. This may occur at the same time as OK6 or at a later time.
  • OK8 - activates an item in the destination TQM unit and is caused by the processing of a Type 2 message from the originating TQM unit containing a new queue item.
  • the transition OK8 is coupled with the transition OK16 described below.
  • An output Type 4 message acknowledging the receipt of the new queue item is generated.
  • OK9 - is caused by the processing of a faulty Type 2 message from the originating TQM unit.
  • An output Type 4 message acknowledging the receipt is generated.
  • OK10 - is caused by the processing of a Type 5 message from the originating TQM unit responding to the receipt of an acknowledgement message.
  • the Type 5 message contains the time value TOK3 from the originating TQM unit. The time value TOK10 = MAX(TOD + TDELTA, TOK3) is calculated and is stored in the queue database QDB. An output Type 7 message containing TOK10 is generated.
  • OK11 - is caused by the processing of a Type 5 message from the originating TQM unit responding to the receipt of a faulty Type 4 message from the destination unit.
  • An output Type 7 message containing TOK10 is generated.
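The two acknowledgement-time calculations above, TOK10 = MAX(TOD + TDELTA, TOK3) and TOK5 = MAX(TOD + TDELTA, TOK10), follow the same pattern and can be sketched as:

```python
def acknowledgement_time(tod, tdelta, peer_tok):
    # Taking the maximum guarantees that the stored acknowledgement time is
    # never earlier than the time value reported by the peer system, even
    # when the clocks of the two node units disagree by up to TDELTA.
    # All values are assumed to be numbers on a common time axis.
    return max(tod + tdelta, peer_tok)
```

The same helper thus computes TOK10 from TOK3 at the destination side and TOK5 from TOK10 at the originating side.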
  • OK12 - is caused by the processing of a Type 10 message sent by the Deferred Process Initiator when it has detected that the retention period has expired for a queue item.
  • OK13 - is caused by the processing of a Type 11 message sent when the system decides to deactivate a queue item. This may occur at the same time as transition OK12 or at a later time.
  • IG1 - is requested by a late TQM Message. The request is ignored and no output message is generated.
  • IG2 - is requested by a Type 5 message from an originating TQM unit. The request is ignored and an output Type 12 message is generated.
  • ER1 - is initiated by a local event in a TQM system and is requested based on information held in the local queue database. This transition should never occur and, if requested, indicates faulty program logic in the TQM unit.
  • ER2 - is caused by a message that should have been discarded because its message validity period MVP has expired, or by this or a related TQM system having been erroneously recovered following a database failure.
  • ER3 - is invalid.
  • the current state indicates that the node is an originating system and the requested state is valid only for a destination TQM unit. This request is caused by faulty program logic or by corrupt queue database content.
  • ER4 - is invalid.
  • the current state indicates that the node is a destination TQM unit and the requested state is valid only for an originating TQM unit. This request is caused by faulty program logic or by corrupt queue database content.
  • Each queue item is as mentioned above provided with at least one process element describing the processing status of said queue item.
  • a processing element can, as described above, assume a value U, N or P.
  • a table showing different cases of transitions in an example of a process element is presented in Appendix D. In the table, the current value of the process element "Curr. Value" and the requested values "Requested Value" can assume the mentioned values U, N and P.
  • the different cases of process element transitions are as follows, wherein transition:
  • OK14 - is caused by the processing of any TQM Message in an originating TQM unit.
  • OK15 - is caused by the processing of any TQM Message in a destination TQM unit, except for the first Type 2 message or a Type 3 message concerning a particular queue item.
  • OK16 - is caused by the processing of a Type 2 message in a destination TQM unit, i.e, the receipt of a new queue item from the originating TQM unit.
  • the transition is coupled with the communication element transition OK8 described above and cannot occur at any other time.
  • the value N is an indicator to the deferred process initiator DPI to initiate the destination application program DAP to process the queue item attached to the current process element.
  • OK17 - is caused by the processing of a Type 13 message in a destination TQM unit, i.e. the destination application program DAP has issued a successful GET_ITEM call through the queue programming interface QPI.
  • GET_ITEM calls can only be processed against queue items with the processing state value N stored in the process element; therefore only one Type 13 message, i.e. a GET_ITEM call, can be processed for each active queue element.
  • ER5 - is invalid since it can only be performed in a desti- nation TQM unit and the current state indicates that this is an originating TQM unit.
  • ER6 - is invalid since it can only be performed in an originating TQM unit and the current state indicates that this is a destination TQM unit.
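As a small illustration of how the process element steers the deferred process initiator (the value N tells the DPI to initiate the DAP, and a successful GET_ITEM moves the item to P, transition OK17), here is a hedged sketch; the dictionary representation of queue items is an assumption:

```python
def items_to_process(queue_items):
    # The DPI scans the queue database and selects the active destination-side
    # items whose process element is N, i.e. not yet processed successfully
    # by the destination application program DAP.
    return [item for item in queue_items if item["process_element"] == "N"]

def record_get_item(item):
    # A successful GET_ITEM (Type 13 message, transition OK17) may only be
    # processed against an item whose process element is N; it then moves
    # to P, so only one GET_ITEM is processed per active queue element.
    if item["process_element"] != "N":
        raise ValueError("GET_ITEM allowed only for process element value N")
    item["process_element"] = "P"
```

A second GET_ITEM against the same item is rejected, mirroring the rule that only one Type 13 message can be processed for each active queue element.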
  • Atomic An atomic operation is indivisible.
  • a transaction should be atomic, which means that it either is executed in its entirety or is totally cancelled.
  • a sequence of operations that is fundamentally not atomic can be made to look as if it really were atomic from an external point of view.
  • COMMIT A COMMIT operation signals successful end-of-transaction. It tells the transaction manager that a logical unit of work has been successfully completed, and that all of the updates made by that unit of work can now be "committed" or made permanent. Cf. ROLLBACK.
  • Destination node unit A destination node unit is a node unit in which a queue item is received from an originating node unit and in which a destination transaction is executed initiated by said queue item.
  • Destination transaction A destination transaction is a second transaction caused to be executed by a queue item compiled in a first, originating transaction.
  • Integrity The problem of integrity is the problem of ensuring that the data in the database is accurate. Inconsis ⁇ tency is an example of lack of integrity.
  • Originating node unit An originating node unit is a first node unit in which a first, originating transaction is executed and from which a queue item is transferred to a second, destination node unit.
  • Originating transaction An originating transaction is a first transaction that compiles a queue item that will cause a second, destination transaction to be executed.
  • ROLLBACK A ROLLBACK operation signals unsuccessful end-of-transaction. It tells the transaction manager that something has gone wrong, and all of the updates made by the logical unit of work so far must be "rolled back" or undone. Cf. COMMIT.
  • Transaction A transaction is a logical unit of work that comprises at least one operation, and in general a sequence of several operations.
  • a transaction in a database may consist of a sequence of several database operations that transforms a consistent state of the database into another consistent state, without necessarily preserving consistency in all intermediate states.
  • Transaction manager A system component that provides atomicity or semblance of atomicity in a transaction processing system.
  • Transaction processing A system that supports transaction processing guarantees that if a transaction executes some updates and then a failure occurs before the transaction reaches its normal termination, then those updates will be undone.

Abstract

Method for performing transactions and for communicating control data and application data in a distributed data processing system comprising node units (102, 104), wherein a first transaction (T) is initiated in a first node unit (102), the first transaction (T) being divisible into a first subtransaction (T1) and a second subtransaction (T2) such that the first subtransaction (T1) is executable given access solely to data stored in the first node unit (102) and the second subtransaction (T2) is executable given access to data stored in a second node unit (104). An arrangement for execution of the method comprises a queue programming interface (QPI1, QPI2), a transaction queue manager (TQM1, TQM2), a deferred process initiator (DPI1, DPI2), a queue database (QDB1, QDB2), a timed initiator (TI1, TI2), an administrative interface handler (AIH1, AIH2) and an operations interface handler (OIH1, OIH2), said functional components being arranged to exchange control signals and application data.

Description

TRANSACTION QUEUE MANAGEMENT
This invention concerns a method in accordance with the preamble of claim 1. This invention also concerns a system for performing the method.
BACKGROUND OF THE INVENTION
A distributed computer system according to prior art comprises a number of computer node units, in short also called node units, connected in a data communications network. Each node unit comprises a computer, an operating system, an application program and a data communications device. The computer comprises a central processing unit, a data storage unit, a data input unit, e.g. a keyboard or a disk drive, and a data output unit, e.g. a display device. The application program may e.g. be a word processing program or a local database program comprising a database management system. The data communications device comprises an input/output port to the data communications network, and hardware and software for communicating data from a first node unit to a second node unit via the data communications network.
Problems in Prior Art
A number of technical problems arise when integrated transaction oriented applications, i.e. applications that support transaction processing, are implemented in a distributed computer system with different application programs in different computer node units connected by a data communications network. For a definition of "transaction" and "transaction processing", and other terms and expressions used in the present text, see Appendix A. The distributed computer system described above constitutes a complex database; the technical problems that arise are the ones met in the context of distributed databases, and a database will be used in the present application as an example of a transaction oriented application program.
One of the technical problems is to synchronize the usage of information that is common to different transactions. Another problem is to fulfil the requirement that distributed transactions are to have concurrent access to shared data residing in different node units. Other interrelated technical problems are to reduce and control redundancy, to avoid inconsistency and to maintain integrity of the data in the distributed computer system. Yet another technical problem is to manage safe recovery in case of a system failure.
The above mentioned technical problems all concern consistency, directly or indirectly, and are linked to the fact that a logical unit of work, a transaction, intended to be a single atomic (see Appendix A) transaction, in most cases is a sequence of several operations. A transaction transforms a database from a first consistent state into a second consistent state, without necessarily preserving consistency in all intermediate states. A method according to prior art commonly applied to achieve a semblance of atomicity is based on the well known COMMIT and ROLLBACK concepts (see Appendix A), which work well with transactions in a single database, where a single database is a database with one database management system, in contrast to a distributed database. When all the operations of a transaction are carried out, the transaction is committed and the updates in the database are therewith confirmed. If an operation fails and the transaction cannot be committed as successfully carried out, all updates made up to the failing step are undone through a rollback operation that resets the database to the state it held before the transaction was started. An operation may fail for example if a system crash occurs between two updates, or if an arithmetic overflow occurs on the second of two updates.
In a distributed system with distributed transactions the COMMIT and ROLLBACK concepts cannot, however, be adopted with full reliability. For example, a distributed transaction is, according to prior art, processed concurrently, occupying a first node unit and at least one second node unit, and intermediate results are communicated between said node units over the network. A transaction is controlled, i.e. initiated, managed and terminated, by the database management system in the first node unit. As a result of the transaction, updates are made both in the first node unit and in the second node unit. When the transaction is to be committed and the updates are to be made permanent, at least two messages are sent between the controlling first node unit and the second node unit. A first message is an order from the first node unit to the second node unit to make its updates permanent, and a second message is a confirmation from the second node unit to the first node unit that the updates have now been made permanent. Thereafter, the transaction is committed by the first node unit. When executing this commitment there is an uncertainty, due to possible unreliability in the data communications network, about whether the updates have been carried out and made permanent or not. For example, a system failure may occur when said confirmation message is sent and make it impossible to commit the transaction, despite the fact that the updates in the second node unit have been made permanent. This kind of uncertainty concerning the reliability of transferring messages in connection with the step of committing a transaction makes it difficult to maintain integrity in a distributed database.
In the relevant technical literature, e.g. C.J. Date, An Introduction to Database Systems, requirements for distributed databases, as well as indications of general solutions to the above mentioned problems, are described. However, there is currently no solution to the above mentioned problems according to the state of the art. Thus, there is no distributed database product on the market which can solve the above mentioned technical problems.

OBJECTS OF THE INVENTION
The main object of the invention is to provide a method for synchronization of usage of data shared or needed by different transactions executed in the same node or in different nodes in a distributed computer system.
A further object is to provide a method for controlling redundancy in a distributed computer system.
Another object is to provide a method for performing distributed transactions in order to avoid inconsistency and to maintain integrity in a distributed computer system.
Another object of the invention is to provide a method that supports safe recovery of data in case of a system failure in a distributed computer system.
Another object is to control the computer of a node unit to start and execute a certain transaction at a certain point in time.
Yet another object of the invention is to provide a method for communication of application data and control data between applications in a distributed computer system.
Another object of the invention is to provide a method for reliable transfer of data between geographically separate node units in a distributed computer system.
Still another object of the invention is to provide a method for the management of transactions in a distributed data processing system utilized for work flow management in an organisation, e.g. a commercial enterprise.
Another object of the invention is to provide a system for execution of the method according to the invention. These objects, and other objects which are apparent from the description, are achieved by providing a method in accordance with the characterizing part of claim 1. Further features of the method, and of a system for performing the method according to the invention, are described in the other claims.
BRIEF DESCRIPTION OF DRAWINGS AND APPENDICES
For a more complete understanding of the present invention and of further objects and advantages thereof, reference is now made to the following description taken in conjunction with the accompanying drawings and appendices, in which:
Fig. 1 shows an illustration of a distributed computer system comprising a plurality of node units connected by means of a data communications network,
Fig. 2 shows a schematic specification of a transaction T,
Fig. 3a shows a schematic specification of a transaction Tl,
Fig. 3b shows a schematic specification of a transaction T2,
Fig. 4 shows a flow chart describing an embodiment of the inventive method,
Fig. 5a shows a schematic overview of a first embodiment of a queue item,
Fig. 5b shows a schematic overview of a second embodiment of a queue item,
Fig. 5c shows a schematic picture of transaction queues in a node unit,
Fig. 6 shows an illustration of an application of the method according to the invention involving two node units,

Fig. 7 shows a flow chart describing an embodiment of the invention according to Fig. 6 and the flow chart of Fig. 4,
Fig. 8 shows schematically two node units connected by a communications network,

Fig. 9 schematically shows components of an embodiment of the invention applied in two node units.
Appendix A is a list of definitions of terms used in the present application,
Appendix B is a table showing the characteristics of different TQM message types,
Appendix C is a table showing a protocol for transitions of communication elements resulting from TQM messages and control signals,
Appendix D is a table showing different cases of process element transitions.
DESCRIPTION OF EMBODIMENTS OF THE INVENTION
A distributed data management system is very useful to an organisation with different members executing specific tasks and managing data of more or less general interest to the organisation as a whole. Such a system could be implemented as a distributed database system according to prior art. According to prior art and the definition given in e.g. C.J. Date, An Introduction to Database Systems, there is, when a distributed transaction is executed, a constant requirement for concurrent access to data stored in different node units. However, a close analysis of the activity in many organisations, such as major enterprises, has led the inventor to the conclusion that the needed system in fact may be regarded as a special case of a distributed database. According to an embodiment of the invention, distributed transactions do not need concurrent access to all data in the database.

A Distributed Computer System
Fig. 1 shows a distributed computer system 18 comprising geographically separate node units 20, 22, 24 connected by a data communications network 26, e.g. a local area network, LAN, or a wide area network, WAN. Each node comprises a data processing unit, such as a workstation, a PC, a server or any other computer capable of managing, storing and processing data. In the embodiment depicted in Fig. 1, two nodes 22 and 24 comprise workstations supporting interaction between user and machine. Each node unit 20, 22, 24 is provided with a data managing system such as an operating system or an application platform, e.g. DOS, OS/2 or Windows, running an application program, e.g. a local database management system. A node unit 20 may also comprise a mainframe database system or any other independently working subsystem. This embodiment of the invention will be described by means of an example in which each node unit in the distributed system comprises a local database management system, a DBMS. It is, however, within the scope of the invention to have different data managing or data processing systems in different node units. The database management system in each node unit is responsible for the control of redundancy, consistency and integrity in that particular node unit, as well as for local recovery management in case of a system failure. A node 20, 22, 24 and the scope of a local database management system are confined to the closely connected hardware units that communicate without using the network 26. According to the inventive method, all transactions are executed within an environment defined by a node unit and said scope of a local system. This environment is hereafter also called a transaction environment. A node unit may itself comprise several transaction environments managed by different program systems residing in the same node unit, e.g. a database management system and a word processing system.
Distribution of a Transaction
Now referring to Fig. 1 and Fig. 2, an example of a distributed transaction according to prior art will be described. A transaction T is initiated in a first node unit 22, whereby said transaction T involves processing of information stored in the form of digital data both in the first node unit 22 and in a second node unit 24. By processing information it is understood that digital data is detected, erased, replaced, modified, added or handled in any other way.
Fig. 2 shows schematically a specification of the transaction T comprising a number of operations op1,...,op6. Operations op1, op2 and op3 are symbolically described as a1, a2 and a3, respectively, whereby the "a" means that the operation processes data stored in the first node unit 22. Likewise, operations op4, op5 and op6 are described as b1, b2 and b3, respectively, whereby the "b" means that the operation processes data stored in the second node unit 24. Thus, the transaction T comprises two sets of operations a1,a2,a3 and b1,b2,b3, each set involving the node units 22 and 24, respectively. According to prior art, the operations op1-op6 of transaction T are executed concurrently, occupying both the first node unit 22 and the second node unit 24, entailing uncertainty in the commit operation.
According to the inventive method the transaction T is divided into at least two transactions, namely a first transaction T1 comprising the set of operations a1,a2,a3 and a second transaction T2 comprising the set of operations b1,b2,b3. Fig. 3a shows a specification of transaction T1 and Fig. 3b shows a specification of transaction T2 resulting from the division of transaction T. Transaction T1 comprises, beyond said operations a1,a2,a3, also at least one control operation a4..an, where n=0,1,2..., whereby said control operation causes data to be transferred from the first node unit 22 to the second node unit 24 and the second transaction T2 to be executed in the second node unit 24. The transactions T1 and T2 are executed in a sequence, whereby transaction T1 is executed solely in node unit 22 and transaction T2 solely in node unit 24. Thus, a transaction is executed in the node unit in which the digital data relevant to the specific transaction is stored, within which node unit a database management system is handling data in a single database. The single database management system in each node unit handles the integrity and recovery problems with the aid of methods according to prior art, e.g. COMMIT and ROLLBACK, while the distributed information is synchronized and transactions are distributed according to the inventive method. If application data stored in the first node unit 22 is also needed by transaction T2 in node unit 24, that particular data is transferred to said node unit 24 via the data communications network.
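The division described above can be sketched in a few lines of Python. This is an illustrative sketch only: the tuple representation of operations and the `divide_transaction` function are assumptions for the example, not part of the described method.

```python
# Hypothetical sketch: divide a transaction's operations into per-node
# sub-transactions, preserving the order in which node units first appear.
def divide_transaction(operations):
    """operations: list of (operation_name, node_unit) pairs."""
    sub_transactions = {}   # node unit -> ordered list of its operations
    order = []              # node units in first-touch order
    for op, node in operations:
        if node not in sub_transactions:
            sub_transactions[node] = []
            order.append(node)
        sub_transactions[node].append(op)
    # Each sub-transaction except the last would also end with a control
    # operation (a4..an) that transfers data and triggers the next one.
    return [(node, sub_transactions[node]) for node in order]

# Transaction T from Fig. 2: a-operations on node 22, b-operations on node 24.
T = [("a1", 22), ("a2", 22), ("a3", 22), ("b1", 24), ("b2", 24), ("b3", 24)]
print(divide_transaction(T))
# [(22, ['a1', 'a2', 'a3']), (24, ['b1', 'b2', 'b3'])]  -- T1 and T2
```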
Fig. 4 shows a flow chart that schematically describes the steps of the above mentioned embodiment of the inventive method. In step 30 the transaction T is initiated in a first node unit 22. Said transaction T is then, in step 32, divided into two transactions T1 and T2, whereupon the transaction T1 is initiated and executed in step 34. The steps 30-34 are carried out in the first node unit 22 as one transaction, i.e. either all steps are carried out or none of them. Then, in step 36, control data and possibly application data is transferred from the first node unit 22 to the second node unit 24 via the data communication means 26. In step 38 the transaction T2 is initiated and executed in the second node unit 24.
Analysis and Division of a Transaction
The analysis of the division of work into a sequence of reliable, dependent transactions is in most cases made when a certain application is designed. The division and distribution of transactions is based on knowledge of node units, their stored data and their processing requirements. In a first embodiment of the invention the following steps are comprised:
1. Transaction T is initiated in the first node unit.
2. The whole set of operations in transaction T is analyzed.

3. A sequence of transactions T1...TN is established and it is decided what data should be transferred to a second node unit.
4. Transaction T1 is initiated and executed in the first node unit.

5. Data and a specification of the transaction sequence are transferred to the second node unit.
6. Transaction T2 is initiated in the second node unit, and so on.
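The steps of the first embodiment above can be sketched as a simple driver loop. As a rough illustration only: local transactions are modelled as plain functions and `transfer` stands in for the network hand-over of step 5; all names are assumptions for the example.

```python
# Hypothetical sketch of the first embodiment: the whole transaction is
# analyzed up front, then each node executes its part before handing over.
def run_distributed(sequence, transfer):
    """sequence: list of (node_unit, local_transaction) pairs (step 3).
    transfer(data, node): step 5 -- move data on to the next node unit."""
    data = {}
    for i, (node, local_txn) in enumerate(sequence):
        data = local_txn(data)                   # steps 4/6: execute locally
        if i + 1 < len(sequence):
            transfer(data, sequence[i + 1][0])   # step 5: hand over

    return data

log = []
seq = [(22, lambda d: {**d, "a": "done"}),   # T1 in the first node unit
       (24, lambda d: {**d, "b": "done"})]   # T2 in the second node unit
result = run_distributed(seq, lambda d, node: log.append(node))
print(result, log)   # {'a': 'done', 'b': 'done'} [24]
```

Note that in the described method the hand-over itself is part of the committed local transaction; this sequential sketch only shows the ordering, not the commit boundaries.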
In a second embodiment the following steps are comprised:
1. Transaction T is initiated in a first node unit.
2. The first operation is executed.
3. If the second operation should be executed in a second node unit, then an analysis is made as to what information has to be transferred to the second node unit.
4. Data and a specification of the rest of the transaction are transferred to the second node unit, and so on, operation by operation.
Transfer of Data and Initiation of a Transaction

The main object of the invention is thus achieved by a method in which transactions to be executed in the same node unit or in different node units are executed either in parallel or in an ordered and predictable sequence, whereby access is given only to local data, of which some may have been transferred by a preceding transaction. According to the inventive method, control data and application data is transferred between at least two node units by means of a queue item and a transaction queue.
A first embodiment of a queue item 40, symbolically depicted in Fig. 5a, comprises firstly digital control data 42 that will cause a transaction to be executed and secondly associated application data 44 to be transferred from a first node unit to a second node unit.
A second embodiment of a queue item 41, shown in Fig. 5b, comprises control data 46 that will cause a transaction to be executed and that will cause associated application data 48 to be transferred from a first node unit to a second node unit. In said second embodiment the application data 48 is stored in a file or some other storage structure and is associated with the queue item by means of a link 50, e.g. a data pointer.
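The two queue-item embodiments of Figs. 5a and 5b can be sketched as two small data structures. The field names are illustrative assumptions; the patent does not prescribe any particular layout.

```python
from dataclasses import dataclass

@dataclass
class QueueItemEmbedded:
    """Fig. 5a: the application data is carried inline in the queue item."""
    control_data: dict        # causes the destination transaction to execute
    application_data: bytes   # transferred together with the item

@dataclass
class QueueItemLinked:
    """Fig. 5b: the application data is stored apart and referenced."""
    control_data: dict
    application_data_link: str   # e.g. a file path or data pointer (link 50)

item_a = QueueItemEmbedded(control_data={"transaction": "T2"},
                           application_data=b"payload")
item_b = QueueItemLinked(control_data={"transaction": "T2"},
                         application_data_link="/data/t2_input.dat")
```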
A queue item 40, 41 is compiled and put on a transaction queue at the end of a first transaction executed in a first node unit. In a node unit a queue item is stored in a local database managed by a local database management system. In connection with each queue item, data about its transport state is stored and referred to as the data state of said queue item. The data state comprises data e.g. about the time a queue item is created, sent and received. The data state of a queue item will be explained later in the text in conjunction with a detailed description of an embodiment of the invention.
According to the inventive method a number of unordered transaction queues, one queue in each direction for each pair of node units, are established as communication channels between originating node units and destination node units in a distributed system. A transaction queue is unordered since the order of successful commit of the queue items in the transaction queue is not predictable, due to transmission delays and local database manager locking. In this specification a first transaction T1, executed in a first node unit 22, which transaction T1 creates and sends a queue item to a second node unit 24, will also be called an originating transaction executed in an originating node unit. Likewise, a second transaction T2 executed in a second node unit 24 will also be called a destination transaction executed in a destination node unit. See also the definitions in Appendix A. A destination transaction may in its turn also be an originating transaction for subsequent transactions. It is within the scope of the invention that the originating node unit and the destination node unit may be the same node unit. For example, two different application programs residing in the same node unit may communicate via a transaction queue. A transaction queue uses the present data communications network for the physical transport of a queue item. Since each node unit has a separate database management system and means for data communication, the reliability of the communication depends on the underlying support for integrity, consistency and recoverability of data.
A transaction queue is defined by the two node units it connects, and by the path used to communicate data between them. One transaction queue is established for each direction of communication between the two node units. When a queue item is injected into a transaction queue in an originating node unit it is stored in a first queue database. The queue database will be explained later in the text in conjunction with a detailed description of an embodiment. Queue items thus injected in the transaction queue are marked with their time of injection and are queuing to be sent to the destination. The above mentioned queue item data state comprises information about the time at which a queue item is to be transferred to a destination node. A queue item is selected from a transaction queue in the originating node unit at a time depending on the data state and is then transferred to the destination node unit via the data communications network. In the destination node unit said queue item is stored in a second queue database, in which queue items are queuing to be processed in the destination node unit.
The queue item data state also comprises information about the time when a queue item is to be processed in the destination node unit. In the destination node unit the queue database may contain queue items belonging to different transaction queues. The queue item to be processed is selected among the queue items in the different transaction queues and is chosen on the basis of said queue item data state. In each transaction queue there is one queue item that is the next one to be processed in that particular transaction queue. The queue item selected for processing is then the first of the first queue items of the different transaction queues. Fig. 5c shows an illustration of three transaction queues t-queue 1, t-queue 2, t-queue 3 in a node unit. Each transaction queue contains four queue items I11,I12,I13,I14; I21,I22,I23,I24; I31,I32,I33,I34, respectively. The time axis represents the points in time at which a queue item is to be processed. For example, the second queue item I12 in transaction queue 1 is to be processed at the time t3. The order within a transaction queue may be based on the time each queue item was created, the time it arrived in the destination node unit or on a definite prescription of the time a certain queue item shall be processed. The last may be the case if a certain transaction has to be initiated at a certain point in time. In the example of Fig. 5c the queue order between the transaction queues is graphically depicted. It is clear that queue item I11 of t-queue 1 is picked from the transaction queue at the time t1, the queue item I21 of t-queue 2 at the time t2, the queue item I12 of t-queue 1 at the time t3 and so on.
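The inter-queue selection of Fig. 5c — always process the first item of whichever transaction queue has the earliest scheduled time — can be sketched with a heap of queue heads. The concrete times beyond t1, t2, t3 are illustrative assumptions, since the figure only fixes the first few.

```python
import heapq

# Items as (process_time, item_id); each transaction queue is internally
# ordered by its data state. Times after t3 are assumed for the example.
t_queues = {
    "t-queue 1": [(1, "I11"), (3, "I12"), (6, "I13"), (10, "I14")],
    "t-queue 2": [(2, "I21"), (5, "I22"), (8, "I23"), (11, "I24")],
    "t-queue 3": [(4, "I31"), (7, "I32"), (9, "I33"), (12, "I34")],
}

def process_order(queues):
    """Repeatedly pick the first item of whichever transaction queue has
    the earliest scheduled processing time (the queue item data state)."""
    heads = [(items[0][0], name, 0) for name, items in queues.items() if items]
    heapq.heapify(heads)
    order = []
    while heads:
        _, name, idx = heapq.heappop(heads)
        order.append(queues[name][idx][1])
        if idx + 1 < len(queues[name]):
            nxt = queues[name][idx + 1]
            heapq.heappush(heads, (nxt[0], name, idx + 1))
    return order

order = process_order(t_queues)
print(order[:3])   # ['I11', 'I21', 'I12'] -- matches the order in Fig. 5c
```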
A transaction queue is a unidirectional (one-way) connection between two node units. Matters such as the addressing of node units or the indication of paths lie outside the inventive method. It is a requirement that the underlying communication method used by the inventive method in a given node unit is capable of transferring a message to another given node unit with some degree of predictability. The transaction queues are sequential at both ends, i.e. an originating transaction injects queue items in a sequential fashion. However, the fact of actual injection and actual subsequent processing is defined as a result of successful commit of the involved transactions. Therefore, factors such as database manager locking, transport delays etc. will introduce uncertainty as to the sequence of transaction commit events as viewed from the outside of the involved transactions. This characteristic is typical for prior art database transaction management systems.
Fig. 6 shows schematically an example of an application of the invention comprising a first transaction T1 executed in a first node unit 22, see also Fig. 1, and a second transaction T2 executed in a second node unit 24. The first transaction T1 can communicate a queue item 54 from the first node unit 22 to the second node unit 24 via an intermediate transaction queue 52.
Fig. 7 is a flow chart that shows in more detail the steps of the application according to Fig. 6, which steps are comprised in one embodiment of the steps 34-38 of Fig. 4. In step 58 the first transaction T1 is initiated in a first node unit 22 by e.g. a user or by a preceding transaction. Operations of transaction T1 processing application data in the first node unit 22 are then executed in step 60. A queue item 54 is compiled in step 62 and is then put on the transaction queue 52 in step 64. The first transaction T1 is committed in step 66. Note that the compilation of the queue item 54 (Fig. 6) and the injection of said queue item 54 into the transaction queue are carried out within the transaction T1. Thus the queue item 54 will not exist unless the transaction T1 has been successfully completed and committed. In step 68 the transfer of the queue item 54 is initiated in the first node unit 22, and then said queue item 54 is transferred by the transaction queue 52 in step 70. In step 72 the queue item 54 is received in the second node unit 24 and the thus successful transfer is committed in step 74. A subsequent transaction T2 is thus not initiated and executed until a complete transfer of the queue item 54 has been fulfilled. Processing of the queue item 54 in the second node unit 24 is initiated in step 76, immediately or at a predetermined moment in accordance with an internal system clock, and said processing is carried out in step 78. In step 80 the second transaction T2 is initiated by the control data comprised in the queue item 54 and in step 82 operations of transaction T2 processing application data in the second node unit 24 are executed. Finally the second transaction T2 is committed in step 84.
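The essential property of the sequence above — the queue item is created inside transaction T1 and so exists only if T1 commits, and T2 runs only after the transfer itself has been committed — can be sketched as follows. The classes and functions are assumptions for illustration; real implementations would use the local database management system's commit machinery.

```python
# Hypothetical sketch of the protocol of Fig. 7, with in-memory "databases".
class Node:
    def __init__(self):
        self.queue_db = []   # local queue database (committed items only)

def run_t1(origin, make_item):
    staged = [make_item()]           # steps 60-64: execute ops, compile item,
                                     # put it on the queue -- all inside T1
    origin.queue_db.extend(staged)   # step 66: COMMIT -- item becomes durable

def transfer(origin, dest):
    for item in list(origin.queue_db):   # steps 68-70: transfer each item
        dest.queue_db.append(item)       # step 72: received at destination
        origin.queue_db.remove(item)     # step 74: transfer committed

def run_t2(dest, execute):
    for item in list(dest.queue_db):     # steps 76-84: process queue items
        execute(item["control"])         # T2 initiated by the control data
        dest.queue_db.remove(item)       # T2 committed

n22, n24 = Node(), Node()
run_t1(n22, lambda: {"control": "start T2"})
transfer(n22, n24)
executed = []
run_t2(n24, executed.append)
print(executed)   # ['start T2']
```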
Time Interval Between Transactions
Transactions are executed in a sequence with a time interval between the completion of a first transaction T1 and the initiation of a second transaction T2. During this time interval, changes in a first database made by the first transaction T1 are visible to a third transaction T3 following queue items being transferred from other nodes in the network. A queue item may be injected in a queue by the first transaction T1 executing in a first node unit N1, said queue item being destined to be subsequently executed by the second transaction T2 in a second node unit N2. Now, a third transaction T3 may inject a queue item that represents a future fourth transaction T4 to be executed in the second node unit N2. In node unit N2, due to transport delays and locking, the fourth transaction T4 may actually be executed and committed before the second transaction T2. The nature of the transactions T2 and T4 must therefore be such that they act in an additive fashion with regard to common information.
DETAILED DESCRIPTION OF AN EMBODIMENT
The method according to the invention will now be further explained by a detailed description of an embodiment of the invention. The inventive method may be executed by means of a general computer in conjunction with especially designed hardware installed in a node unit, or by software executed in said node unit. This specification describes a transaction queue management unit, henceforth also called a TQM unit, which comprises a set of functional components here implemented as computer programs. Each node unit in a distributed computer system according to the invention may comprise a TQM unit.
Fig. 8 shows schematically a first node unit 86 and a second node unit 88 connected by means of a data communications network 26. The first node unit 86 comprises a first application program 90, e.g. a database management system, and a first TQM unit 94. The first TQM unit in its turn comprises a first application program interface 98. In a similar way, the second node unit 88 comprises a second application program 92 and a second TQM unit 96, said second TQM unit 96 in its turn comprising a second application program interface 100. The first TQM unit 94 is arranged as an interface between the application program 90 and the communications network 26. The design of the application program interface 98, through which the first application program 90 communicates with the first TQM unit 94, depends on said application program 90. The configuration of the second node unit 88 is similar to that of the first node unit 86. The first application program 90 communicates with the second application program 92 through the first TQM unit 94, via the communications network and through the second TQM unit 96. This way transactions can be distributed among different application programs, e.g. with a Paradox® database in the first node unit 86 and a Focus® database in the second node unit 88.
TQM Unit Functions
A TQM unit according to an embodiment of the inventive method comprises functions that:
1. Support definition of transaction queues, which includes specification of:
a. system components and capabilities in the local node units that communicate via a certain transaction queue,
b. the means of communication between a first, destination node unit and a second, originating node unit, and of the associated parameters,
c. the method to initiate a destination transaction in a node unit and the method to compile the corresponding digital control data needed to initiate said destination transaction.
2. Allow an originating transaction to:
a. specify one or more destination transactions to be performed after the originating transaction terminates, by compiling control information,
b. specify associated application data to be passed to a destination transaction,
c. store the above mentioned control data and application data as part of the successfully committed originating transaction.
3. Ensure that data needed by a destination transaction, and associated data, is reliably and timely transported, without loss or duplication, to the node unit in which the destination transaction is to be executed, and stored there.
4. Prompt the processing of a destination transaction by initiating its execution until the associated data is accepted and said destination transaction is successfully committed.
5. Ensure that a transaction is executed exactly once.
6. Allow a destination transaction to also be an originating transaction.
7. Implement transport protocols based on and making efficient use of any available data communications protocols that are supported by communicating node units.
8. Monitor usage characteristics, availability and performance of node unit components as well as the underlying data communications means.
9. Handle temporary and foreseeable errors and monitor the TQM unit for detection of systematic malfunctions.
10. Communicate faults and disturbances by controlling alarm equipment or by compiling and sending messages to a certain node unit for error handling.
11. Allow responsible personnel to discard faulty or otherwise improper queue items.
The TQM unit comprises a number of functional program modules, of which some are designed as callable functions residing in a program library. Others are formed as database transaction programs which are initiated in local database systems executing in different nodes in the distributed computer system. The TQM unit in each node unit also uses the database facilities in the node unit to store information on transaction queues and their component items, so that associated database objects are defined and available in each node unit.
TQM Unit Components

Each node unit in a distributed computer system according to the invention may comprise a TQM unit. A TQM unit according to one embodiment of the invention comprises nine main components, preferably in the form of separate modules. Fig. 9 schematically depicts a first node unit 102 and a second node unit 104 connected by means of a data communications network 26, whereby said node units 102 and 104 comprise a first TQM unit 106 and a second TQM unit 108, respectively. The two node units 102 and 104 each comprise components of the same kind, referred to as e.g. the first originating application program OAP1 and the second originating application program OAP2, see further explanation below. When referring to the component in general, the reference will be used without the tag number 1 or 2, e.g. OAP, and likewise for the other components. The following components may be comprised in a TQM unit:
1. An originating application program OAP1, OAP2 which contains application dependent code to create queue items. The OAP is an interface between the application program and other TQM unit components, see also references 98, 100 in Fig. 8.
2. A destination application program DAP1, DAP2 which contains application dependent code to receive queue items. The DAP is an interface between the application program and other TQM unit components, see also references 98, 100 in Fig. 8.
3. A queue programming interface QPI1, QPI2 through which the OAP and DAP issue requests to a transaction queue manager TQM, see below, in order to put a queue item in a transaction queue or to retrieve a queue item from a transaction queue. The QPI contains logic to define addresses to the TQM and to pass parameters to said TQM. The component QPI supports, among other functions, the function PUT_ITEM, that injects a queue item into a transaction queue, and the function GET_ITEM, that retrieves a queue item from a transaction queue. Functions supported by the QPI will be described in more detail later in the text.
4. A transaction queue manager TQM1, TQM2 that ensures that queue items are communicated correctly between an originating node unit and a destination node unit. Note that the first node unit 102 and the second node unit 104 by turns may appear as originating node unit and destination node unit. The TQM further maintains data state variables for the queue items. The data state variables describe the status of a queue item, such as whether it is current or not. The data state variables will be further described later in the text. Parameters are passed to the transaction queue manager TQM through logical rules incorporated in the above mentioned queue programming interface QPI.
5. A deferred process initiator DPI1, DPI2 that initiates, at defined time intervals, either the TQM or the DAP to perform processing based on the contents of the queue database QDB, see below.
6. A queue database QDB1, QDB2 in which queue specifications, queue items and their data state variables are stored.
7. A timed initiator TI1, TI2 that automatically initiates transactions based on elapsed time intervals or at a defined time of day. The TQM unit relies on each node unit to provide a computer system clock.
8. An administrative interface handler AIH1, AIH2 that provides functions that constitute a queue administration interface, through which a person or a program may issue requests to the transaction queue manager TQM in order to administrate said TQM. Administration comprises the designing, defining and setting up of transaction queues and destination transactions, i.e. the establishment of the infrastructure of the transaction queue management environment.

9. An operations interface handler OIH1, OIH2 that provides functions that constitute a queue operations interface, through which a person or a program may issue requests to the transaction queue manager TQM in order to manage the operations of said TQM. Operation comprises monitoring and maintaining the functionality and performance of the transaction queue management arrangement.
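The PUT_ITEM and GET_ITEM functions named under component 3 above might look as follows in outline. The class, the signatures and the queue-name convention are assumptions for the sketch; the patent names the functions but not their form.

```python
from collections import deque

class QueueProgrammingInterface:
    """Sketch of the QPI component: a thin layer through which the OAP
    and DAP put items into and get items from transaction queues."""
    def __init__(self):
        self._queues = {}   # queue name -> deque of queue items

    def put_item(self, queue_name, item):
        """PUT_ITEM: inject a queue item into the named transaction queue."""
        self._queues.setdefault(queue_name, deque()).append(item)

    def get_item(self, queue_name):
        """GET_ITEM: retrieve the next queue item, or None when empty."""
        q = self._queues.get(queue_name)
        return q.popleft() if q else None

qpi = QueueProgrammingInterface()
qpi.put_item("22->24", {"transaction": "T2"})
first = qpi.get_item("22->24")
second = qpi.get_item("22->24")
print(first, second)   # {'transaction': 'T2'} None
```

In the described arrangement the QPI would of course pass these requests on to the TQM rather than hold the queues itself; the sketch only shows the calling interface.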
The components of the TQM units communicate with each other through program calls, TQM Messages and TQM Triggers. E.g. a queue item is sent between two node units as a TQM Message. TQM Messages and TQM Triggers will be explained further below. The control data flow of a TQM unit 106, 108 within a node unit 102, 104 and between two node units 102 and 104 will now be explained by means of an example describing a transfer of a queue item between the two node units 102 and 104. In the schematic description of the system structure of Fig. 9, ----> denotes a program call, >>>>> denotes a TQM Trigger and =====> denotes a TQM Message. A first transaction T1 (not shown) is executed in the first node unit 102 under the control of the first originating application program OAP1. As a part of said first transaction T1, a first queue item 54 (not shown) is compiled and the function PUT_ITEM is called through the first queue programming interface QPI1 by program call 110. The PUT_ITEM call 110 is then passed to the first transaction queue manager TQM1 through a TQM Message 112 created by the first queue programming interface QPI1, the TQM Message comprising the queue item 54. The TQM Message 112 is then processed by TQM1, the processing comprising a unique identification item being added to or coupled to the queue item 54. In the preferred embodiment this identification item is constructed using date and time with a predetermined resolution, but other identification codes are possible within the scope of the invention. An entry concerning the TQM Message 112 is then added to the first queue database QDB1 through program call 113, and a TQM Message 114 including the queue item 54 is sent to the second transaction queue manager TQM2 in the second node unit 104 via the data communications network 26. In principle, a queue item may be transported in any fashion, e.g. via a telecommunication network or through interchange of floppy disks carried by couriers.
When the second TQM unit 108 in the second node 104 has received the TQM Message 114, a transaction queue management transaction is started in said second TQM unit 108. The TQM Message 114 is processed by the second transaction queue manager TQM2 and, since the TQM Message 114 in this case contains a new queue item 54, said queue item 54 is added to the second queue database QDB2 through a program call 116. If the changes in QDB2 are successfully committed, which is indicated by a return parameter from the program called by program call 116, a TQM Message 117 is sent back from the TQM2 to the TQM1 to acknowledge receipt of the TQM Message 114 and the first queue item 54. If the specification of the first queue item 54 says that a transaction T2 (not shown) is to be started immediately, the transaction T2 is then started in the second destination application program DAP2 by the TQM2 through a TQM Trigger 118. Otherwise said transaction is subsequently started by the second timed initiator TI2. At intervals predefined to the TI2, the second deferred process initiator DPI2 is started by said TI2 through a TQM Trigger 120. The queue database QDB2 is searched by the DPI2 through a program call 122 and, depending on the queue specifications and the current status of the queue items stored in the QDB2, either the DAP2 or the TQM2 is initiated to execute the required processing through TQM Trigger 124 or 126, respectively. When triggered by either TQM Trigger 118 or 124, the DAP2 then repeatedly calls the function GET_ITEM through program calls 128 to the second queue programming interface QPI2. Each such program call 128 is passed by the QPI2 to the TQM2 through a program call 130, whereupon the TQM2 retrieves the queue item that is next to be processed, which in this case is the first queue item 54, from the QDB2 through program call 132 and marks the queue item 54 with a control item as having been processed by the DAP2.
A queue item is marked when it is accessed by the DAP (GET_ITEM), and the marking is part of the common transaction. The mark as well as the resulting application data is made visible to the outside components when the DAP transaction is committed. The queue database and the application data are stored by the same database manager and kept synchronized by the common local database transaction manager; the changes are part of a common transaction. In some cases a TQM Message 132 is generated and sent to the TQM1 in the first node after the transaction has been committed. These cases are specified in the state transition diagram explained later in this text. If the transaction T2 specified by said first queue item 54 requires access to data residing in a third node unit, the transaction T2 processing the first queue item 54 comprises the specification of a second queue item that in its turn will cause a subsequent transaction T3 (not shown) to be executed according to the above described method. When processing a TQM Message, the transaction queue manager TQM executes dependent on the current data state for the queue item to which said message refers. This control information is described below in conjunction with a description of TQM Message types.
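The essence of the transfer just described is that a unique identification item lets the destination node discard duplicate deliveries while still acknowledging them, so the originating node may safely resend. A minimal sketch of this exchange (class and attribute names are illustrative assumptions; commit handling and the network layer are reduced to direct calls):

```python
import itertools

class OriginatingTQM:
    """Originating side: assign a unique id, record the item, send it on.

    The patent derives the identification item from date and time with a
    predetermined resolution; a simple counter stands in for it here.
    """
    _seq = itertools.count(1)

    def __init__(self, network):
        self.qdb = {}              # stands in for QDB1
        self.network = network

    def put_item(self, info):
        item_id = next(self._seq)
        self.qdb[item_id] = info
        self.network.deliver(item_id, info)   # the TQM Message 114
        return item_id

class DestinationTQM:
    """Destination side: store new items exactly once, always acknowledge."""

    def __init__(self):
        self.qdb = {}              # stands in for QDB2
        self.acks = []

    def deliver(self, item_id, info):
        if item_id not in self.qdb:           # a duplicate delivery is ignored
            self.qdb[item_id] = info
        self.acks.append(item_id)             # the acknowledgement, TQM Message 117
```

Delivering the same queue item twice (e.g. after a resend) yields two acknowledgements but only one entry in the destination queue database.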
The Queue Programming Interface QPI
The component QPI may support the functions PUT_ITEM, GET_ITEM, REINSTATE_ITEM, PEEK_ITEM and SKIP_ITEM, implemented as callable subroutines invocable from programs that are designed as database transactions. Each function returns an output parameter that contains information on the execution of the function and, in case of rejection of the function call, provides the application program with information on the cause. The following input parameters are used in the function calls:
Queue - specifies the unique identity of a destination database system, whereby the name is e.g. a code or an indirect reference;
Destination_transaction - specifies the unique identity of a destination transaction within the destination TQM unit that is to process the current queue item, whereby the name is e.g. a code or an indirect reference;

Info - specifies application information that is to be communicated to the destination transaction and is e.g. stored in a data file;
Fixup_trans - is the name of a local transaction through which an originating transaction handles error conditions. In addition, a return parameter, returned by all five functions, contains information on the successful execution of the function and, in the case of unsuccessful execution, provides the application program with information on the cause.
The function PUT_ITEM with the input parameters Queue, Destination_transaction, Info and optionally Fixup_trans injects a queue item into a transaction queue, allowing a first, originating transaction to inject a queue item for later processing in a second, destination transaction. When a transaction that has issued at least one PUT_ITEM call is committed, each thus injected queue item becomes part of the specified transaction queue.
The function GET_ITEM with the input parameters Queue, Destination_transaction and Info is used by a destination transaction to retrieve a current queue item from the transaction queue. Following the successful retrieval of the current queue item, it is marked as processed and the queue position is changed in such a way that the queue item that follows the retrieved and processed item becomes the new current item. When a transaction that has issued at least one GET_ITEM call is committed, the queue item thus retrieved and processed is removed from the transaction queue.
The function REINSTATE_ITEM, using no input parameter, is used by a destination transaction to indicate that the processing of the queue item retrieved from the transaction queue by the previous GET_ITEM call is not carried out and that the item is to be reinstated and marked as unprocessed. Following a successful reinstatement of the previously retrieved but unprocessed queue item, the queue position is changed in such a way that said queue item again becomes the current one.
The function PEEK_ITEM with the input parameters Queue, Destination_transaction and Info is used by a destination transaction to retrieve information on the current queue item in a transaction queue. The PEEK_ITEM function merely reads information and does not change the processing state of the current queue item or change its current queue position.
The function SKIP_ITEM with the input parameter Queue is used by a destination transaction to move the queue position of the current queue item to the next position.
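The behaviour of the five QPI functions described above can be sketched as an in-memory interface. All names besides the five function names are assumptions, commit/rollback semantics are not modelled, and SKIP_ITEM is interpreted here as rotating the current item to the back of the queue:

```python
from collections import deque

class QueueProgrammingInterface:
    """Illustrative, in-memory sketch of the five QPI calls.

    A real QPI would forward each call to the TQM as a TQM Message inside
    the scope of the calling database transaction.
    """

    def __init__(self):
        self.queues = {}        # queue name -> deque of (dest_transaction, info)
        self.last_item = None   # item retrieved by the previous GET_ITEM

    def put_item(self, queue, destination_transaction, info, fixup_trans=None):
        # Inject a queue item; it becomes part of the transaction queue
        # when the surrounding transaction is committed (not modelled here).
        self.queues.setdefault(queue, deque()).append((destination_transaction, info))
        return "OK"

    def get_item(self, queue):
        # Retrieve and mark the current item; the following item becomes current.
        q = self.queues.get(queue)
        if not q:
            return "EMPTY", None
        self.last_item = (queue, q.popleft())
        return "OK", self.last_item[1]

    def reinstate_item(self):
        # Undo the previous GET_ITEM: the item becomes current and unprocessed again.
        if self.last_item is None:
            return "NO_PREVIOUS_GET"
        queue, item = self.last_item
        self.queues[queue].appendleft(item)
        self.last_item = None
        return "OK"

    def peek_item(self, queue):
        # Read the current item without changing its state or queue position.
        q = self.queues.get(queue)
        return ("OK", q[0]) if q else ("EMPTY", None)

    def skip_item(self, queue):
        # Move the queue position past the current item.
        q = self.queues.get(queue)
        if not q:
            return "EMPTY"
        q.rotate(-1)
        return "OK"
```

After a PUT_ITEM of two items, PEEK_ITEM shows the first, GET_ITEM retrieves it, REINSTATE_ITEM makes it current again, and SKIP_ITEM moves the position to the second.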
The Queue Database QDB
The Queue Database QDB is used to store queue items and different types of information comprising:
- Administrative information about transaction queues and associated nodes, e.g. specifications of the message validity period MVP (explained below) or default data state values;
- Control information on active entries, e.g. TQM data state, where an active entry e.g. is a queue item queuing to be processed or a TQM Message;
- Application information related to active entries;
- Operational information on the queues, e.g. performance specifications.
The above mentioned data state values for a queue item are described below, in conjunction with a description of the TQM Message that uses and transfers the information given by the data state. The queue database QDB is maintained by the transaction queue manager TQM but is also accessed by the deferred process initiator DPI and other TQM unit components. The queue database QDB is utilized for the handling of control information used in different processes of the transaction queue management unit and for the handling of queue items, comprising the following types of information and purposes:
1. Information needed when the validity of a call from the queue programming interface QPI is verified by said QPI or by the transaction queue manager TQM. E.g. status information.
2. Queue specific information needed when TQM Messages are constructed and processed by the transaction queue manager TQM.
3. Detailed information about transmission between node units, e.g. about existing data communication means and the method for using them, needed by the transaction queue manager TQM for communication with another node unit.
4. Information needed by the transaction queue manager TQM when application data is transferred to another node unit, e.g. information on available transport protocols, on whether data is transmitted separately or as a sequence of messages and on the state of transmission of this data.
5. Activation of a queue item, i.e. as part of the processing of a PUT_ITEM call or when receiving a new queue item from another node, a new queue item is inserted into the queue database by the transaction queue manager TQM.
6. Monitoring and resending queue items that have been sent but not yet acknowledged as received by a destination system, i.e. said queue items in the queue database QDB are monitored by the deferred process initiator DPI in an originating system.
7. Finding and retrieving queue items that are to be processed, i.e. the deferred process initiator DPI in a destination TQM unit finds the queue items that are to be processed by the destination application program DAP through GET_ITEM calls.

8. Deactivation of no longer current queue items that are to be deleted from the queue database QDB by the deferred process initiator DPI.
9. Miscellaneous activities such as maintenance of queue specifications, error handling, performance monitoring and start and stop of queues.
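The information categories above suggest a queue database with at least a queue specification table and an active-entry table. A minimal sketch using SQLite follows; all table and column names are assumptions, not taken from the patent:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE queue_spec (           -- administrative information (category 1)
    queue_name   TEXT PRIMARY KEY,
    mvp_seconds  INTEGER,           -- message validity period MVP
    default_comm TEXT,              -- default communication element value
    default_proc TEXT               -- default process element value
);
CREATE TABLE queue_item (           -- control and application information
    item_id      TEXT PRIMARY KEY,  -- the unique identification item
    queue_name   TEXT REFERENCES queue_spec(queue_name),
    comm_state   TEXT,              -- communication element (A-E, Z-W)
    proc_state   TEXT,              -- process element (U, N, P)
    resend_count INTEGER DEFAULT 0,
    app_info     TEXT               -- application information
);
""")
con.execute("INSERT INTO queue_spec VALUES ('Q1', 3600, 'A', 'U')")
con.execute(
    "INSERT INTO queue_item VALUES ('19940301.120000', 'Q1', 'B', 'U', 0, 'payload')"
)
# Purpose 7 (finding items to be processed at a destination) would then be a
# SELECT over proc_state = 'N'; purpose 6 (monitoring unacknowledged items at
# an origin) a SELECT over comm_state = 'B'.
```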
TQM Trigger

A TQM Trigger is used to control the initiation of the processing of a queue item within the TQM unit of a node unit. It is generated by the transaction queue manager TQM or the deferred process initiator DPI and has the form of digital control signals or program calls that cause the operating components of a node to initiate processing of a given transaction directed to a queue item in a given transaction queue. A TQM Trigger is internal to a node unit (TQM unit) and may e.g. be the result of a timed event or a program call. A TQM Message, by contrast, initiates a transaction at a TQM node and is caused by the receipt of a TQM Message from another component in the same or in another node unit.
TQM Message
A TQM Message is used to control the initiation of the processing of a given queue item. A TQM Message is generated by the queue programming interface QPI or the transaction queue manager TQM, whereby control items are compiled and transferred to other TQM unit components either within a node unit or to the TQM unit components of another node unit. A TQM Message is itself processed in the transaction queue manager TQM of the receiving unit. When sent and received within the same TQM unit, said processing is executed within the scope of the current transaction, i.e. the transaction started with the generation of the message, and when sent to and received by the TQM in another node unit the TQM Message is formed as an internode message, said processing being executed as a transaction in the destination unit. The processing initiated by the control item comprised in a TQM Message is not executed by the destination component until or unless the transaction that has originated the TQM Message has been committed. If the originating transaction fails to commit successfully, the current TQM Message is discarded by the sending system. In order to keep count of a possible need to resend a message or to require a message to be resent, a resend count element is provided in each component able to send and receive a message. The content of the resend count element can be set to zero, which indicates that no resend is needed, and is incremented by one for each needed resend request. If the content of the resend count element exceeds a preset value, referred to as the resend limit, a systematic error in the transfer mechanism exists. The resend count is kept in each queue item.
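The resend count rule amounts to a bounded retry loop: increment on every resend request, and treat exceeding the resend limit as a systematic transfer error. A sketch (the limit value and error type are assumptions):

```python
RESEND_LIMIT = 5   # preset value; in practice configured per transaction queue

def handle_resend_request(item):
    """Increment the resend count of a queue item and detect a systematic
    error in the transfer mechanism once the preset limit is exceeded."""
    item["resend_count"] += 1
    if item["resend_count"] > RESEND_LIMIT:
        raise RuntimeError("systematic error in the transfer mechanism")
    return item["resend_count"]
```

A successful acknowledgement would reset `resend_count` to zero, as the transitions OK1 and OK3 described later do.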
For each transaction queue there is a defined message validity period MVP during which a TQM Message is valid. A TQM Message comprises a message creation time item MCT which is defined by the generating system as the time at which the sending component or sending system created the message. If a message is received at a time that falls outside the message validity period MVP, the message will be ignored by the receiving system and discarded without further processing. In one embodiment of the invention the time is defined as the date and time of day TOD based on e.g. Greenwich Mean Time. The rule for deciding message validity is then preferably expressed as:
IF TOD < MCT + MVP THEN process ELSE discard. The message validity period MVP for a given transaction queue is preferably dimensioned based on the maximum time it will take for a message to travel between the originating system and the destination system. If the MVP is too short there will be a high probability that valid messages will be unnecessarily discarded and, on the other hand, if the MVP is too long the queue items will be stored for an unnecessarily long time in the queue database. The time during which a queue item is stored in the queue database is referred to as the retention period. In order to compensate for minor errors in the time measuring units, i.e. the system clocks, comprised in the node units, a value TDELTA is used in the preferred embodiment of the invention to increase the robustness of the inventive method. The value of TDELTA is chosen to reflect the maximum time difference that reasonably can be expected to exist between two node units connected by a transaction queue.
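The validity rule translates directly into code. Treating all times as seconds, and applying TDELTA as a widening allowance on the receiving side (one plausible way to use it; the patent only says TDELTA compensates for clock differences):

```python
def message_is_valid(tod, mct, mvp, tdelta=0):
    """IF TOD < MCT + MVP THEN process ELSE discard,
    widened by TDELTA to absorb clock skew between the node units.

    tod    -- current time of day at the receiver
    mct    -- message creation time stamped by the sender
    mvp    -- message validity period of the transaction queue
    tdelta -- maximum expected clock difference between the two nodes
    """
    return tod < mct + mvp + tdelta
```

For a message created at time 100 on a queue with MVP 50, receipt at 149 is processed and receipt at 150 is discarded; with TDELTA 5 the cutoff moves to 155.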
A TQM Message comprises items containing the following control information that is used to control the processing of a TQM Message in a transaction queue manager TQM:
- Queue Item Identification - which uniquely defines the given queue item to which the TQM Message relates.
- Current State for Sender - which specifies a value that the sender has stored in the queue database for the given queue item and which is committed as a part of the transaction producing the message.
- Assumed Current State for Receiver - which specifies a value that the sender assumes that the receiver has stored in its queue database QDB for the given queue item; this state is specified as one value or as a set of values.
- Requested State - which specifies a value that the sender requests the receiver to store in the receiver queue database QDB for the given queue item; this value is e.g. specified as a set of values, from which the receiving node may choose the most suitable. An example of this is clear from Appendix B, explained below: message type 7 is sent by a destination TQM unit to the originating TQM unit. The sender cannot know whether the retention time period has elapsed or not at the originating system, so it phrases this uncertainty by giving "D,E" as requested states. Note, however, that the "Receiver Curr?" and "Receiver Req'd" states are for information only, i.e. not used as part of the TQM protocols.
- Application Information - which contains or refers to the application data that is communicated from the originating transaction to the destination transaction. According to a first, preferred embodiment of the invention the size of the application dependent information is restricted to fit within one TQM Message. In a second embodiment the application data is transported in a sequence of TQM Messages, and in a third embodiment said data is transported using existing transport means such as file transfer means.
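The control items listed above map naturally onto a record type. Field names in the following sketch are assumptions; the "D,E" example from message type 7 is used to populate it:

```python
from dataclasses import dataclass

@dataclass
class TQMMessage:
    """Sketch of the control items carried by a TQM Message."""
    queue_item_id: str             # Queue Item Identification
    sender_current: str            # Current State for Sender, e.g. "D"
    receiver_assumed: frozenset    # Assumed Current State for Receiver (one or more values)
    receiver_requested: frozenset  # Requested State (receiver may pick the most suitable)
    app_info: bytes = b""          # Application Information
    mct: float = 0.0               # Message Creation Time, checked against the MVP

# A message type 7 analogue: the destination cannot know whether the origin's
# retention period has elapsed, so it requests the set {D, E}.
msg = TQMMessage("19940301.120000", "X", frozenset({"C"}), frozenset({"D", "E"}))
```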
Data state of a queue item
Much of the processing done by the deferred process initiator DPI and the transaction queue manager TQM depends on the data state of an active queue item, in short called the TQM data state, stored in the queue database QDB. A queue item is defined as active for a given transaction queue if the identification of the queue item is registered in the QDB. Deactivation of an active queue item entails removal of the item from the QDB.
The TQM data state is composed of state elements, comprising a communication element and a process element. In one embodiment of the invention the data state elements are each stored as a one character value in each active queue item in the QDB. The state elements may also be given a default value for a given transaction queue.
The Communication Element
The communication element is relevant for queue items both at the originating and destination sides of a transaction queue.
For a given queue item, defined by a unique queue item identification and a transaction queue name, the following values are defined for the communication element:
A. The queue item may be activated by the originating TQM unit, i.e., it may be accepted that an originating application program OAP issues a PUT_ITEM call to create a new value for the queue item. This is the default value for the originating side of a transaction queue.

B. The queue item is active and has been created in the originating TQM unit but has not yet been acknowledged as received by the destination TQM unit. The originating TQM unit will continue sending messages about the queue item until it is acknowledged as received by the destination system or until the resend threshold is reached.
C. The queue item is active and has been created in the originating TQM unit and has been acknowledged as received by the destination TQM unit but has not yet been acknowledged as queuing by the destination TQM unit. The originating system will continue sending messages about the queue item until acknowledged as queuing by the destination TQM unit or until the resend threshold is reached.
D. The queue item is active and has been created at the originating TQM unit and has been acknowledged as received by the destination TQM unit and has been acknowledged as queuing by the destination TQM unit but has not yet reached the end of its retention time period.
E. The queue item is active and has been created at the originating TQM unit and has been acknowledged as received by the destination TQM unit and has been acknowledged as queuing by the destination TQM unit and has reached the end of its retention time period. This item may be deactivated by the originating TQM unit at any time.
Z. The queue item may be activated by the destination TQM unit, i.e., a TQM Message from an originating TQM unit may create a new value for the queue item. This is the default value for the destination side of a transaction queue.
Y. The queue item is active and has been created at the destination TQM unit but an acknowledgement response has not yet been received from the originating TQM unit.
X. The queue item is active and has been created at the destination TQM unit and an acknowledgement response has been received from the originating TQM unit but the item has not yet reached the end of its retention time period.
W. The queue item is active and has been created at the destination TQM unit and an acknowledgement response has been received from the originating TQM unit and the item has reached the end of its retention time period. If processed by the DAP, the TQM process element has the value "P" and the queue item may be deactivated by this destination system at any time.
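Reading the definitions above together, the originating side progresses through A, B, C, D, E and the destination side through Z, Y, X, W. This split can be captured directly (an illustrative reading, not normative):

```python
# Originating side: A -> B -> C -> D -> E (activatable through end of retention)
ORIGINATING_STATES = "ABCDE"
# Destination side: Z -> Y -> X -> W
DESTINATION_STATES = "ZYXW"

def side_of(comm_state):
    """Classify a communication element value by the queue side it belongs to."""
    if comm_state in ORIGINATING_STATES:
        return "originating"
    if comm_state in DESTINATION_STATES:
        return "destination"
    raise ValueError(f"unknown communication element value: {comm_state!r}")
```

This side classification is what the error transitions ER3 and ER4, described later, enforce: a requested state from the wrong side signals faulty program logic or a corrupt queue database.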
The TQM Process Element
The TQM process element is relevant only for the destination side of a transaction queue. All queue items at the originating side have the same value.
For a given queue item, defined by a unique queue item identification and queue name, the following values are defined for the TQM process element:

U. The queue item resides at the originating side of a transaction queue. This is also the default value for inactive entries at the originating side of a transaction queue.
N. The queue item is active and resides at the destination side of a transaction queue but has not yet been processed successfully by the destination application program DAP to which it is related.
P. The queue item is active and resides at the destination side of a transaction queue and has been processed successfully by the destination application program to which it is related. This is also the default value for inactive entries at the destination side of a transaction queue.
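The three process element values, and the rule stated later that GET_ITEM calls are only processed against items in state N, can be summarized as follows (the helper name is an assumption):

```python
PROCESS_ELEMENT = {
    "U": "resides at the originating side (default for inactive origin entries)",
    "N": "active at the destination, not yet processed by the DAP",
    "P": "active at the destination, processed (default for inactive destination entries)",
}

def may_get_item(proc_state):
    """GET_ITEM can only be processed against items whose process element is N,
    so at most one successful GET_ITEM per active queue item."""
    return proc_state == "N"
```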
TQM Messages and Data State Values
The table in Appendix B shows an example of message types 1-13 appearing during communication between a first, originating system, in the table labelled "Orig", and a second, destination system labelled "Dest". System "Syst" and component "Comp" are specified under the headings "Sender" and "Receiver", respectively. The components able to send messages in this embodiment are the transaction queue manager TQM, the queue programming interface QPI and the deferred process initiator DPI. The state elements can assume the data state values A, B, C, D, E, Z, Y, X and W. The Current State for Sender value is indicated under the heading "Sender Curr!", the Assumed Current State for Receiver under the heading "Receiver Curr?" and the Requested State under "Receiver Req'd". As an example, a message type 1 is characterized in that the sender is the component QPI in the originating system, the receiver is the component TQM in the originating system, the Current State for Sender value is A, the Assumed Current State for Receiver value is A and the Requested State value is B.
To illustrate the changes in data states that occur as a result of different TQM Messages being sent and received, a protocol for the transition of communication elements representing the data states in the queue database QDB, according to one embodiment of the invention, is described in conjunction with the table in Appendix C. The table shows different communication element transitions that occur when the Current State for Sender value "Curr. Value" is changed to the Requested State value "Requested Value", assuming the above explained values A-W, combinations of which constitute the message types listed in Appendix B. The different cases of communication element transitions according to the table in Appendix C are as follows, wherein the transition letters OK stand for "okay, i.e. process this message", IG stands for "ignore this message" and is a result of messages that arrive out of sequence or too late, and ER stands for "error" and signals abnormal system operation.
OK1 - is caused by the processing of a Type 1 message, cf. Appendix B, i.e. a successful execution of a PUT_ITEM call. Associated application information is stored and, depending on the specifications for the transaction queue, an output Type 2 message may be generated. The resend count is set to zero.

OK2 - is caused by the processing of a Type 3 message, i.e. a resend request initiated by the deferred process initiator DPI. An output Type 2 message is generated and the resend count is increased.
OK3 - is caused by the processing of a Type 4 message from a destination system acknowledging that a sent queue item has been successfully received. The current Time of Day value TOD is stored as a value called TOK3, which thus represents the time when the OK3 transition took place. An output Type 5 message containing TOK3 is generated and the resend count is set to zero.
OK4 - is caused by the processing of a Type 6 message, i.e. a resend initiated by the deferred process initiator DPI, or by the processing of a faulty Type 4 message from the destination TQM unit acknowledging that a sent item has been received. An output Type 5 message containing TOK3 is generated and the resend count is increased.
OK5 - is caused by the processing of a Type 7 message from the destination TQM unit acknowledging that a transferred queue item is queuing. The message contains the time value TOK10 from the destination system. The time value TOK5 = MAX(TOD + TDELTA, TOK10) is calculated and is stored in the queue database QDB.
OK6 - is caused by the processing of a Type 8 message, i.e. a message initiated by the deferred process initiator DPI having found that the retention period of a queue item has expired. The retention period of a queue item expires at a time calculated as TOK6 >= TOK5 + 2*MVP.
OK7 - is caused by the processing of a Type 9 message issued when the current TQM unit has decided to deactivate a queue item. This may occur at the same time as OK6 or at a later time.

OK8 - activates an item in the destination TQM unit and is caused by the processing of a Type 2 message from the originating TQM unit containing a new queue item. The transition OK8 is coupled with the transition OK16 described below. An output Type 4 message acknowledging the receipt of the new queue item is generated.
OK9 - is caused by the processing of a faulty Type 2 message from the originating TQM unit. An output Type 4 message acknowledging the receipt is generated.
OK10 - is caused by the processing of a Type 5 message from the originating TQM unit responding to the receipt of an acknowledgement message. The Type 5 message contains the time value TOK3 from the originating TQM unit. The time value TOK10 = MAX(TOD + TDELTA, TOK3) is calculated and is stored in the queue database QDB. An output Type 7 message containing TOK10 is generated.
OK11 - is caused by the processing of a Type 5 message from the originating TQM unit responding to the receipt of a faulty Type 4 message from the destination unit. An output Type 7 message containing TOK10 is generated.
OK12 - is caused by the processing of a Type 10 message sent by the Deferred Process Initiator when it has detected that the retention period has expired for a queue item. The item retention period expires at the time TOK12 >= TOK10 + 2*MVP.
OK13 - is caused by the processing of a Type 11 message sent when the system decides to deactivate a queue item. This may occur at the same time as transition OK12 or at a later time.
IG1 - is requested by a late TQM Message. The request is ignored and no output message is generated.
IG2 - is requested by a Type 5 message from an originating TQM unit. The request is ignored and an output Type 12 message is generated.

ER1 - is initiated by a local event in a TQM system and is requested based on information held in the local queue database. This transition should never occur and, if requested, indicates faulty program logic in the TQM unit.
ER2 - is caused by a message that should have been discarded because its message validity period MVP has expired, or by this or a related TQM system having been erroneously recovered following a database failure.
ER3 - is invalid. The current state indicates that the node is an originating system and the requested state is valid only for a destination TQM unit. This request is caused by faulty program logic or by corrupt queue database content.
ER4 - is invalid. The current state indicates that the node is a destination TQM unit and the requested state is valid only for an originating TQM unit. This request is caused by faulty program logic or by corrupt queue database content.
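The timestamp rules (OK5, OK6, OK10, OK12) and the side-mismatch errors (ER3, ER4) can be rendered directly in code. The OK-transition table below contains only the (current, requested) pairs that can be read off the state definitions and message descriptions above; it is illustrative and not a reconstruction of the full Appendix C table:

```python
def stored_time(tod, tdelta, peer_time):
    """OK5/OK10: store MAX(TOD + TDELTA, time value received from the peer)."""
    return max(tod + tdelta, peer_time)

def retention_expired(tod, stored, mvp):
    """OK6/OK12: the retention period expires once TOD >= stored + 2*MVP."""
    return tod >= stored + 2 * mvp

ORIG, DEST = set("ABCDE"), set("ZYXW")

# Illustrative subset of OK transitions, read off the value definitions above.
KNOWN_OK = {
    ("A", "B"): "OK1",   # Type 1: successful PUT_ITEM
    ("B", "C"): "OK3",   # Type 4: receipt acknowledged by destination
    ("C", "D"): "OK5",   # Type 7: item acknowledged as queuing
    ("D", "E"): "OK6",   # Type 8: origin retention period expired
    ("Z", "Y"): "OK8",   # Type 2: new item activated at destination
    ("Y", "X"): "OK10",  # Type 5: acknowledgement response received
    ("X", "W"): "OK12",  # Type 10: destination retention period expired
}

def classify(current, requested):
    """Rough transition classifier in the spirit of Appendix C (illustrative)."""
    if current in ORIG and requested in DEST:
        return "ER3"   # destination-only state requested for an originating item
    if current in DEST and requested in ORIG:
        return "ER4"   # originating-only state requested for a destination item
    return KNOWN_OK.get((current, requested), "IG1")
```

Taking the two MAX rules together, each side stores a time that is at least as late as the other side's, so both retention clocks start no earlier than the slower of the two node clocks.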
Each queue item is, as mentioned above, provided with at least one process element describing the processing status of said queue item. A process element can, as described above, assume a value U, N or P. A table showing different cases of transitions in an example of a process element is presented in Appendix D. In the table the current value of the process element "Curr. Value" and the requested value "Requested Value" can assume the mentioned values U, N and P. The different cases of process element transitions are as follows, wherein transition:
OK14 - is caused by the processing of any TQM Message in an originating TQM unit.
OK15 - is caused by the processing of any TQM Message in a destination TQM unit, except for the first Type 2 message or a Type 3 message concerning a particular queue item.

OK16 - is caused by the processing of a Type 2 message in a destination TQM unit, i.e. the receipt of a new queue item from the originating TQM unit. The transition is coupled with the communication element transition OK8 described above and cannot occur at any other time. The value N is an indicator to the deferred process initiator DPI to initiate the destination application program DAP to process the queue item attached to the current process element.
OK17 - is caused by the processing of a Type 13 message in a destination TQM unit, i.e. the destination application program DAP has issued a successful GET_ITEM call through the queue programming interface QPI. GET_ITEM calls can only be processed against queue items with the processing state value N stored in the process element; therefore only one Type 13 message, i.e. a GET_ITEM call, can be processed for each active queue element.
ER5 - is invalid since it can only be performed in a destination TQM unit and the current state indicates that this is an originating TQM unit.
ER6 - is invalid since it can only be performed in an originating TQM unit and the current state indicates that this is a destination TQM unit.
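The process element transitions OK14 through OK17 and the errors ER5 and ER6 can be sketched as one dispatch function. The function signature and the use of a numeric message type are assumptions; the rules follow the descriptions above:

```python
def process_transition(side, current, requested, message_type):
    """Sketch of the Appendix D process element rules described above.

    side         -- "originating" or "destination"
    current      -- current process element value: "U", "N" or "P"
    requested    -- requested process element value
    message_type -- TQM Message type number (1-13)
    """
    if side == "originating":
        if requested != "U":
            return "ER5"     # N/P are valid only at a destination TQM unit
        return "OK14"        # any TQM Message at the originating side keeps U
    # destination side
    if requested == "U":
        return "ER6"         # U is valid only at an originating TQM unit
    if message_type == 2 and current == "U" and requested == "N":
        return "OK16"        # new item received; coupled with OK8
    if message_type == 13 and current == "N" and requested == "P":
        return "OK17"        # successful GET_ITEM
    return "OK15"            # any other TQM Message at the destination
```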
Appendix A
List of definitions and expressions used in the present application:
Atomic An atomic operation is indivisible. A transaction should be atomic, which means that it either is executed in its entirety or is totally cancelled. A sequence of operations that is fundamentally not atomic can be made to look as if it really were atomic from an external point of view.
COMMIT A COMMIT operation signals successful end- of-transaction. It tells the transaction manager that a logical unit of work has been successfully completed, and that all of the updates made by that unit of work can now be "committed" or made permanent. C.f. ROLLBACK.
DBMS Database management system.
Inconsistency Suppose that a data item is represented by two distinct copies in the memory of a database and the DBMS is not aware of this duplication (i.e. redundancy is not controlled). Then there will be some occasions on which the two entries will not agree, namely when one and only one of the two entries is updated. At such times the database is said to be inconsistent, and it is capable of supplying incorrect or contradictory information.
Destination node unit A destination node unit is a node unit in which a queue item is received from an originating node unit and in which a destination transaction is executed initiated by said queue item.
Destination transaction A destination transaction is a second transaction caused to be executed by a queue item compiled in a first, originating transaction.

Integrity The problem of integrity is the problem of ensuring that the data in the database is accurate. Inconsistency is an example of lack of integrity.
Originating node unit An originating node unit is a first node unit in which a first, originating transaction is executed and from which a queue item is transferred to a second, destination node unit.
Originating transaction An originating transaction is a first transaction that compiles a queue item that will cause a second, destination transaction to be executed.
Redundancy The same data is stored more than once, in different memory areas. Redundancy cannot be totally eliminated but should be controlled and reduced.
ROLLBACK A ROLLBACK operation signals unsuccessful end-of-transaction. It tells the transaction manager that something has gone wrong, and that all of the updates made by the logical unit of work so far must be "rolled back" or undone. Cf. COMMIT.
Transaction A transaction is a logical unit of work that comprises at least one operation, and in general a sequence of several operations. For example, a transaction in a database may consist of a sequence of several database operations that transforms a consistent state of the database into another consistent state, without necessarily preserving consistency in all intermediate states.
Transaction manager A system component that provides atomicity, or the semblance of atomicity, in a transaction processing system.
Transaction processing A system that supports transaction processing guarantees that if a transaction executes some updates and a failure then occurs before the transaction reaches its normal termination, those updates will be undone.
Note: Many of the definitions given in this list are taken from C.J. Date: An Introduction to Database Systems.
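The COMMIT/ROLLBACK and atomicity definitions above can be illustrated with a minimal in-memory sketch. This is a didactic example only, not the transaction manager described in the patent; the dictionary-backed store and undo log are illustrative assumptions.

```python
class Transaction:
    """Minimal illustration of atomicity via an undo log (cf. ROLLBACK/COMMIT)."""

    def __init__(self, store):
        self.store = store
        self.undo_log = []  # records (key, old_value) so updates can be undone

    def update(self, key, value):
        self.undo_log.append((key, self.store.get(key)))
        self.store[key] = value

    def commit(self):
        # successful end-of-transaction: updates become permanent
        self.undo_log.clear()

    def rollback(self):
        # unsuccessful end-of-transaction: undo updates in reverse order,
        # restoring the previous consistent state
        for key, old in reversed(self.undo_log):
            if old is None:
                self.store.pop(key, None)
            else:
                self.store[key] = old
        self.undo_log.clear()


db = {"balance": 100}
t = Transaction(db)
t.update("balance", 50)
t.rollback()             # undone: balance is 100 again
t.update("balance", 80)
t.commit()               # made permanent: balance stays 80
print(db["balance"])
```

Because every update is either committed in its entirety or rolled back in its entirety, the unit of work looks atomic from the outside, even though intermediate states are not consistent.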

Claims
1. Method for performing transactions and for communicating control data and application data between applications in a distributed data processing system comprising node units (20, 22, 24) connected in a network, data being communicable between node units, each node unit comprising at least one data processing system of computer hardware components, computer software components and stored digital data, whereby at least one application program is executed in each node unit, c h a r a c t e r i z e d in
- that a first transaction (T) is initiated in a first node unit (20, 22), wherein the transaction (T) involves processing application data stored and managed in at least one node unit (20, 22, 24), and in
- that the first transaction (T) is divisible into a first subtransaction (T1) and at least one second subtransaction (T2) such that the first subtransaction (T1) is executable given access solely to local data stored in the first node unit (22) and the second subtransaction (T2) is executable given access to data stored in a second node unit (24) and possibly in further node units (20).
2. Method according to claim 1, c h a r a c t e r i z e d in that the first subtransaction (T1) is executed in the first node unit (22), wherein said subtransaction (T1) may comprise generation of a queue item (40, 41) comprising control data and/or application data and wherein said queue item (40, 41) is stored in a memory device in the first node unit (22).
3. Method according to claim 2, c h a r a c t e r i z e d in that
- said queue item (40, 41) is transferred to the second node unit (24), the transfer being caused by a first transaction queue manager (TQM1) in dependence of said control data, said first transaction queue manager (TQM1) being located in said first node unit (22), and in that
- said queue item (40, 41) is stored in a memory device in the second node unit (24).
4. Method according to claim 3, c h a r a c t e r i z e d in that the second subtransaction (T2) is initiated and at least partially executed in the second node unit (24), said initiation and said execution being caused by a second transaction queue manager (TQM2) in dependence of said control data in said queue item (40, 41), said second transaction queue manager (TQM2) being located in said second node unit (24).
5. Method according to claim 1, c h a r a c t e r i z e d in that said second subtransaction (T2) may itself be a first transaction (T) divisible into further subtransactions.
6. Arrangement for performing transactions and for communicating control data and application data between applications in a distributed data processing system comprising node units (20, 22, 24) connected in a network, each node unit comprising at least one data processing system of computer hardware components, computer software components and stored digital data, whereby at least one application program (OAP1, OAP2, DAP1, DAP2) is executed in each node unit, c h a r a c t e r i z e d in that it comprises the following functional components: a queue programming interface (QPI1, QPI2), a transaction queue manager (TQM1, TQM2), a deferred process initiator (DPI1, DPI2), a queue database (QDB1, QDB2), a timed initiator (TI1, TI2), an administrative interface handler (AIH1, AIH2) and an operations interface handler (OIH1, OIH2), said functional components being arranged to exchange control signals and application data.
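The method of claims 1-4 can be sketched as follows, under loudly stated assumptions: the class names (`NodeUnit`, `TQM`), method names (`put_item`, `transfer`, `receive`) and the string-based subtransaction labels are invented for illustration; the claims specify no such API. Only the roles come from the claims: T1 runs against local data and compiles a queue item, TQM1 forwards it in dependence of its control data, and TQM2 causes T2 to be initiated in the second node unit.

```python
class NodeUnit:
    """A node unit with a queue database (QDB) and locally executed work."""

    def __init__(self, name):
        self.name = name
        self.queue_db = []   # stored queue items (claims 2 and 3)
        self.executed = []   # record of subtransactions run in this node

    def run_local(self, subtransaction):
        self.executed.append(subtransaction)


class TQM:
    """Transaction queue manager: moves queue items between node units."""

    def __init__(self, node):
        self.node = node

    def put_item(self, control, data):
        # claim 2: T1 generates a queue item of control data and application data
        self.node.queue_db.append({"control": control, "data": data})

    def transfer(self, dest_tqm):
        # claim 3: forward stored queue items to the destination TQM
        while self.node.queue_db:
            dest_tqm.receive(self.node.queue_db.pop(0))

    def receive(self, item):
        self.node.queue_db.append(item)  # claim 3: stored in the second node
        # claim 4: initiation of T2 caused by the control data in the item
        self.node.run_local("T2:" + item["control"])


node1, node2 = NodeUnit("first"), NodeUnit("second")
tqm1, tqm2 = TQM(node1), TQM(node2)

node1.run_local("T1")                 # first subtransaction, local data only
tqm1.put_item("start-T2", {"x": 1})   # queue item compiled by T1
tqm1.transfer(tqm2)                   # TQM1 forwards the item to node 2
print(node2.executed)
```

The point of the split is that T1 can commit against purely local data while the queue item carries enough control data for T2 to be initiated later and elsewhere, which is what decouples the two node units.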
PCT/SE1994/000172 1993-03-01 1994-03-01 Transaction queue management WO1994020903A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE9300671A SE9300671D0 (en) 1993-03-01 1993-03-01 WORK FLOW MANAGEMENT
SE9300671-6 1993-03-01

Publications (1)

Publication Number Publication Date
WO1994020903A1 true WO1994020903A1 (en) 1994-09-15

Family

ID=20389066

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/SE1994/000173 WO1994020910A1 (en) 1993-03-01 1994-03-01 Distributed work flow management
PCT/SE1994/000172 WO1994020903A1 (en) 1993-03-01 1994-03-01 Transaction queue management

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/SE1994/000173 WO1994020910A1 (en) 1993-03-01 1994-03-01 Distributed work flow management

Country Status (5)

Country Link
US (1) US5893128A (en)
EP (1) EP0788631B1 (en)
DE (1) DE69429787T2 (en)
SE (1) SE9300671D0 (en)
WO (2) WO1994020910A1 (en)

Families Citing this family (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5878403A (en) 1995-09-12 1999-03-02 Cmsi Computer implemented automated credit application analysis and decision routing system
US7181427B1 (en) * 1995-09-12 2007-02-20 Jp Morgan Chase Bank, N.A. Automated credit application system
GB2318030B (en) * 1996-10-04 2001-03-14 Ibm Communication system with exchange of capability information
US7089332B2 (en) 1996-07-01 2006-08-08 Sun Microsystems, Inc. Method for transferring selected display output from a computer to a portable computer over a wireless communication link
EP0854423A1 (en) * 1997-01-20 1998-07-22 TELEFONAKTIEBOLAGET L M ERICSSON (publ) Data partitioning and duplication in a distributed data processing system
JP3497342B2 (en) * 1997-02-27 2004-02-16 株式会社日立製作所 Client / server system, server, client processing method, and server processing method
US6119117A (en) * 1997-07-15 2000-09-12 Kabushiki Kaisha Toshiba Document management method, document retrieval method, and document retrieval apparatus
JP3465543B2 (en) * 1997-08-01 2003-11-10 富士ゼロックス株式会社 Workflow support system and method
US20040138992A1 (en) * 1997-09-03 2004-07-15 Defrancesco James Computer implemented automated credit application analysis and decision routing system
JPH11213082A (en) * 1997-11-19 1999-08-06 Fujitsu Ltd Work flow managing device, its system and computer readable storing medium storing program of these
JPH11154189A (en) * 1997-11-21 1999-06-08 Hitachi Ltd Method and system for controlling state monitor type work flow
US6115646A (en) * 1997-12-18 2000-09-05 Nortel Networks Limited Dynamic and generic process automation system
US6202056B1 (en) * 1998-04-03 2001-03-13 Audiosoft, Inc. Method for computer network operation providing basis for usage fees
US7051004B2 (en) * 1998-04-03 2006-05-23 Macrovision Corporation System and methods providing secure delivery of licenses and content
WO1999063460A2 (en) * 1998-04-30 1999-12-09 Ec Cubed, Inc. System and method for managing documents available to a plurality of users
US6505176B2 (en) * 1998-06-12 2003-01-07 First American Credit Management Solutions, Inc. Workflow management system for an automated credit application system
US6507845B1 (en) 1998-09-14 2003-01-14 International Business Machines Corporation Method and software for supporting improved awareness of and collaboration among users involved in a task
US6937993B1 (en) * 1998-09-16 2005-08-30 Mci, Inc. System and method for processing and tracking telecommunications service orders
US6871220B1 (en) 1998-10-28 2005-03-22 Yodlee, Inc. System and method for distributed storage and retrieval of personal information
ATE242511T1 (en) 1998-10-28 2003-06-15 Verticalone Corp APPARATUS AND METHOD FOR AUTOMATICALLY COMPOSING AND TRANSMITTING TRANSACTIONS CONTAINING PERSONAL ELECTRONIC INFORMATION OR DATA
US7200804B1 (en) * 1998-12-08 2007-04-03 Yodlee.Com, Inc. Method and apparatus for providing automation to an internet navigation application
US7672879B1 (en) 1998-12-08 2010-03-02 Yodlee.Com, Inc. Interactive activity interface for managing personal data and performing transactions over a data packet network
US8069407B1 (en) 1998-12-08 2011-11-29 Yodlee.Com, Inc. Method and apparatus for detecting changes in websites and reporting results to web developers for navigation template repair purposes
US7085997B1 (en) 1998-12-08 2006-08-01 Yodlee.Com Network-based bookmark management and web-summary system
US6517587B2 (en) * 1998-12-08 2003-02-11 Yodlee.Com, Inc. Networked architecture for enabling automated gathering of information from Web servers
US7752535B2 (en) 1999-06-01 2010-07-06 Yodlec.com, Inc. Categorization of summarized information
US6725445B1 (en) * 1999-07-08 2004-04-20 International Business Machines Corporation System for minimizing notifications in workflow management system
US7882426B1 (en) * 1999-08-09 2011-02-01 Cognex Corporation Conditional cell execution in electronic spreadsheets
US6490600B1 (en) 1999-08-09 2002-12-03 Cognex Technology And Investment Corporation Processing continuous data streams in electronic spreadsheets
US6859907B1 (en) 1999-08-09 2005-02-22 Cognex Technology And Investment Corporation Large data set storage and display for electronic spreadsheets applied to machine vision
US7337389B1 (en) 1999-12-07 2008-02-26 Microsoft Corporation System and method for annotating an electronic document independently of its content
US7407175B2 (en) * 2000-03-01 2008-08-05 Deka Products Limited Partnership Multiple-passenger transporter
US6687834B1 (en) * 2000-04-14 2004-02-03 International Business Machines Corporation Data processing system, method and program for generating a job within an automated test environment
US6959433B1 (en) 2000-04-14 2005-10-25 International Business Machines Corporation Data processing system, method, and program for automatically testing software applications
JP2001337825A (en) * 2000-05-25 2001-12-07 Hitachi Ltd Storage system provided with on-line display method for manual
US6874124B2 (en) * 2000-05-31 2005-03-29 Fujitsu Limited Electronic document processing system and electronic document processors
WO2002019224A1 (en) * 2000-09-01 2002-03-07 Togethersoft Corporation Methods and systems for integrating process modeling and project planning
US6993528B1 (en) * 2000-10-04 2006-01-31 Microsoft Corporation Methods and systems for allowing third party client applications to influence implementation of high-level document commands
CA2333342A1 (en) * 2001-01-31 2002-07-31 Curomax Corporation Automotive finance portal
US6829369B2 (en) 2001-05-18 2004-12-07 Lockheed Martin Corporation Coding depth file and method of postal address processing using a coding depth file
GB2376311B (en) * 2001-06-04 2005-06-08 Hewlett Packard Co A method of managing workflow in a computer-based system
US7174342B1 (en) * 2001-08-09 2007-02-06 Ncr Corp. Systems and methods for defining executable sequences to process information from a data collection
US20030074270A1 (en) * 2001-10-16 2003-04-17 Brown Otis F. Computerized method and system for managing and communicating information regarding an order of goods
US6931589B2 (en) 2001-11-29 2005-08-16 Orbograph Ltd. Distributed document processing
WO2005010731A2 (en) 2003-07-31 2005-02-03 Dealertrack, Inc. Integrated electronic credit application, contracting and securitization system and method
US20050182713A1 (en) * 2003-10-01 2005-08-18 Giancarlo Marchesi Methods and systems for the auto reconsideration of credit card applications
US20070011334A1 (en) * 2003-11-03 2007-01-11 Steven Higgins Methods and apparatuses to provide composite applications
US8726278B1 (en) 2004-07-21 2014-05-13 The Mathworks, Inc. Methods and system for registering callbacks and distributing tasks to technical computing works
US7502745B1 (en) * 2004-07-21 2009-03-10 The Mathworks, Inc. Interfaces to a job manager in distributed computing environments
US20050149342A1 (en) * 2003-12-24 2005-07-07 International Business Machines Corporation Method and apparatus for creating and customizing plug-in business collaboration protocols
US8458488B2 (en) * 2003-12-31 2013-06-04 International Business Machines Corporation Method and system for diagnosing operation of tamper-resistant software
US7346620B2 (en) * 2004-02-12 2008-03-18 International Business Machines Corporation Adjusting log size in a static logical volume
US20050231738A1 (en) * 2004-03-10 2005-10-20 Elynx, Ltd. Electronic document management system
US20050246212A1 (en) * 2004-04-29 2005-11-03 Shedd Nathanael P Process navigator
US7580867B2 (en) * 2004-05-04 2009-08-25 Paul Nykamp Methods for interactively displaying product information and for collaborative product design
US7499899B2 (en) * 2004-07-02 2009-03-03 Northrop Grumman Corporation Dynamic software integration architecture
US7908313B2 (en) * 2004-07-21 2011-03-15 The Mathworks, Inc. Instrument-based distributed computing systems
US8868660B2 (en) * 2006-03-22 2014-10-21 Cellco Partnership Electronic communication work flow manager system, method and computer program product
US7606752B2 (en) 2006-09-07 2009-10-20 Yodlee Inc. Host exchange in bill paying services
US8683490B2 (en) * 2007-02-15 2014-03-25 Microsoft Corporation Computer system events interface
US20090089698A1 (en) * 2007-09-28 2009-04-02 Bruce Gordon Fuller Automation visualization schema with zooming capacity
US8261334B2 (en) 2008-04-25 2012-09-04 Yodlee Inc. System for performing web authentication of a user by proxy
US20100057826A1 (en) * 2008-08-29 2010-03-04 Weihsiung William Chow Distributed Workflow Process Over a Network
US8555359B2 (en) 2009-02-26 2013-10-08 Yodlee, Inc. System and methods for automatically accessing a web site on behalf of a client
US8868506B1 (en) * 2010-06-17 2014-10-21 Evolphin Software, Inc. Method and apparatus for digital asset management
CN102609818A (en) * 2012-02-15 2012-07-25 苏州亚新丰信息技术有限公司 Circulating method of 3rd Generation (3G) mobile communication operating and maintaining flows based on sequence number
US9443269B2 (en) * 2012-02-16 2016-09-13 Novasparks, Inc. FPGA matrix architecture
GB2527798A (en) * 2014-07-02 2016-01-06 Ibm Synchronizing operations between regions when a network connection fails
US20170308836A1 (en) * 2016-04-22 2017-10-26 Accenture Global Solutions Limited Hierarchical visualization for decision review systems
US10303678B2 (en) * 2016-06-29 2019-05-28 International Business Machines Corporation Application resiliency management using a database driver
CN113434268A (en) * 2021-06-09 2021-09-24 北方工业大学 Workflow distributed scheduling management system and method

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5008853A (en) * 1987-12-02 1991-04-16 Xerox Corporation Representation of collaborative multi-user activities relative to shared structured data objects in a networked workstation environment
JPH01195568A (en) * 1988-01-29 1989-08-07 Hitachi Ltd Electronic document editing control system
US5142674A (en) * 1988-03-08 1992-08-25 International Business Machines Corporation Interchange object data base index which eliminates the need for private copies of interchange documents files by a plurality of application programs
DE69025935D1 (en) * 1990-08-28 1996-04-18 Hartford Fire Insurance Comp Computer system and labor administration method
US5161214A (en) * 1990-08-28 1992-11-03 International Business Machines Corporation Method and apparatus for document image management in a case processing system
GB2248052B (en) * 1990-08-31 1994-06-08 Instance Ltd David J Bag ties and manufacture thereof
US5132900A (en) * 1990-12-26 1992-07-21 International Business Machines Corporation Method and apparatus for limiting manipulation of documents within a multi-document relationship in a data processing system
US5287501A (en) * 1991-07-11 1994-02-15 Digital Equipment Corporation Multilevel transaction recovery in a database system which loss parent transaction undo operation upon commit of child transaction
US5337407A (en) * 1991-12-31 1994-08-09 International Business Machines Corporation Method and system for identifying users in a collaborative computer-based system
GB2263988B (en) * 1992-02-04 1996-05-22 Digital Equipment Corp Work flow management system and method
US5446880A (en) * 1992-08-31 1995-08-29 At&T Corp. Database communication system that provides automatic format translation and transmission of records when the owner identified for the record is changed
US5535322A (en) * 1992-10-27 1996-07-09 International Business Machines Corporation Data processing system with improved work flow system and method
US5649200A (en) * 1993-01-08 1997-07-15 Atria Software, Inc. Dynamic rule-based version control system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5170480A (en) * 1989-09-25 1992-12-08 International Business Machines Corporation Concurrently applying redo records to backup database in a log sequence using single queue server per queue at a time
EP0456492A2 (en) * 1990-05-10 1991-11-13 Kabushiki Kaisha Toshiba A distributed database system
US5247664A (en) * 1991-03-28 1993-09-21 Amoco Corporation Fault-tolerant distributed database system and method for the management of correctable subtransaction faults by the global transaction source node
GB2264796A (en) * 1992-03-02 1993-09-08 Ibm Distributed transaction processing
US5241675A (en) * 1992-04-09 1993-08-31 Bell Communications Research, Inc. Method for enforcing the serialization of global multidatabase transactions through committing only on consistent subtransaction serialization by the local database managers
EP0567999A2 (en) * 1992-04-30 1993-11-03 Oracle Corporation Method and apparatus for executing a distributed transaction in a distributed database

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Principles of Distributed Database Systems" (M.T. Özsu et al.), Prentice-Hall International Ed., Englewood Cliffs 1991, pages 269-275 and 317-319. *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0747812A2 (en) * 1995-06-07 1996-12-11 Tandem Computers Incorporated Customer information control system and method with API start and cancel transaction functions in a loosely coupled parallel processing environment
EP0747832A2 (en) * 1995-06-07 1996-12-11 Tandem Computers Incorporated Customer information control system and method in a loosely coupled parallel processing environment
EP0747812A3 (en) * 1995-06-07 1998-03-25 Tandem Computers Incorporated Customer information control system and method with API start and cancel transaction functions in a loosely coupled parallel processing environment
EP0747832A3 (en) * 1995-06-07 1998-04-01 Tandem Computers Incorporated Customer information control system and method in a loosely coupled parallel processing environment
EP0817019A2 (en) * 1996-07-02 1998-01-07 International Business Machines Corporation Method of stratified transaction processing
EP0817019A3 (en) * 1996-07-02 1998-01-14 International Business Machines Corporation Method of stratified transaction processing
US6012094A (en) * 1996-07-02 2000-01-04 International Business Machines Corporation Method of stratified transaction processing
EP1574953A1 (en) * 2003-10-22 2005-09-14 Sap Ag Asynchronous consumption of independent planned requirements
US7647295B2 (en) * 2004-11-05 2010-01-12 International Business Machines Corporation Method, apparatus, computer program, and computer program product for managing the durability of a plurality of transactions

Also Published As

Publication number Publication date
DE69429787T2 (en) 2002-08-29
US5893128A (en) 1999-04-06
EP0788631B1 (en) 2002-01-30
DE69429787D1 (en) 2002-03-14
WO1994020910A1 (en) 1994-09-15
SE9300671D0 (en) 1993-03-01
EP0788631A1 (en) 1997-08-13

Similar Documents

Publication Publication Date Title
WO1994020903A1 (en) Transaction queue management
EP0467546A2 (en) Distributed data processing systems
EP0772136B1 (en) Method of commitment in a distributed database transaction
US7149761B2 (en) System and method for managing the synchronization of replicated version-managed databases
US7937618B2 (en) Distributed, fault-tolerant and highly available computing system
Moss et al. Nested transactions: An approach to reliable distributed computing
US7962458B2 (en) Method for replicating explicit locks in a data replication engine
Oki et al. Viewstamped replication: A new primary copy method to support highly-available distributed systems
EP0950955B1 (en) Method and apparatus for correct and complete transactions in a fault tolerant distributed database system
CA2205725C (en) Preventing conflicts in distributed systems
US8301593B2 (en) Mixed mode synchronous and asynchronous replication system
Ellis A Robust Algorithm for Updating Duplicate Databases.
CA2413615C (en) Conflict resolution for collaborative work system
EP0595453A1 (en) Distributed data processing system
CN101512527B (en) Data processing system and method of handling requests
DuBourdieux Implementation of Distributed Transactions.
EP0834127B1 (en) Reduction of logging in distributed systems
CN112527759B (en) Log execution method and device, computer equipment and storage medium
EP0394019A2 (en) Computerised database system
US7533132B2 (en) Parallel replication mechanism for state information produced by serialized processing
EP1244015B1 (en) Parallel replication mechanism for state information produced by serialized processing
CN112069160B (en) CAP-based data cleaning synchronization method
Zhou et al. A system for managing remote procedure call transactions
Swanson et al. MVS/ESA coupled-systems considerations
KR100282779B1 (en) How to Load Redundant Relations in an Exchange Database Management System

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase