WO1999066406A1 - Processor bridge with posted write buffer - Google Patents
Processor bridge with posted write buffer
- Publication number
- WO1999066406A1 (PCT/US1999/012606)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- bus
- bridge
- processing
- sets
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1629—Error detection by comparing the output of redundant processing systems
- G06F11/1641—Error detection by comparing the output of redundant processing systems where the comparison is not performed by the redundant processing components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1629—Error detection by comparing the output of redundant processing systems
- G06F11/165—Error detection by comparing the output of redundant processing systems with continued operation after detection of the error
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1658—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4027—Coupling between buses using bus bridges
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1608—Error detection by comparing the output signals of redundant hardware
- G06F11/1625—Error detection by comparing the output signals of redundant hardware in communications, e.g. transmission, interfaces
Definitions
- This relates to fault tolerant computer systems and to mechanisms for enabling recovery from an error during operation of the computer system.
- the invention relates to a processor bridge providing a posted write buffer for use in such a fault tolerant computer system.
- the aim of the present invention is to provide a mechanism which facilitates the taking of action to limit the impact of an error, or to completely recover from an error, where pending I/O operations have already been initiated by a processor or processor set.
- a bridge for a multi-processor system.
- the bridge includes bus interfaces for connection to an I/O bus of a first processing set, an I/O bus of a second processing set, and a device bus.
- the bridge also includes a memory subsystem and a bridge control mechanism.
- the bridge control mechanism is operable to monitor operation of the first and second processing sets in a combined, lockstep, operating mode and to be responsive to detection of a lockstep error to cause the bridge to be operable in an error mode in which write accesses initiated by the processor sets are buffered in a bridge buffer pending resolution of the error mode.
- a bridge including a bridge control mechanism with a buffer for buffering write accesses initiated by processor sets enables write commands and the associated data, which are located in various queues within the system, to be stored pending resolution of an error. Accordingly, this stored information can be used at least to limit the impact of the error, and preferably to recover completely from that error. It can further be used to identify the originator of the error.
- the buffer (posted write buffer) provides storage for I/O write accesses which occur after an error condition is detected. To allow the accesses to be rerun, the address to which the transaction would have occurred, any associated data, the type of write command and also any byte enable information required to identify which parts of the data are valid can be stored for later use in the posted write buffer. It should be noted that the bus interfaces referenced above need not be separate components of the bridge, but may be incorporated in other components of the bridge.
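The information listed above as needing to be stored for a rerun (address, data, write command type and byte enables) can be pictured as one buffer entry. The following Python sketch is illustrative only; the class and method names are assumptions and do not appear in the patent:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PostedWriteEntry:
    """One buffered I/O write captured after an error is detected."""
    address: int       # address the transaction would have targeted
    data: bytes        # the associated data phase(s)
    command: int       # encoded type of write command (e.g. a PCI bus command)
    byte_enables: int  # bitmask identifying which bytes of `data` are valid

class PostedWriteBuffer:
    """Buffer of writes held pending resolution of the error mode."""
    def __init__(self) -> None:
        self.entries: List[PostedWriteEntry] = []

    def post(self, entry: PostedWriteEntry) -> None:
        self.entries.append(entry)

    def drain(self) -> List[PostedWriteEntry]:
        """Return the buffered writes (e.g. so they can be rerun) and empty the buffer."""
        drained, self.entries = self.entries, []
        return drained
```

A recovery routine could then replay each drained entry to its recorded address, applying only the bytes marked valid by the byte enables.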
- a respective buffer region is provided for each processing set.
- the bridge control mechanism of an embodiment of the invention is operable, in an initial error mode, to store in the posted write buffer any internal bridge write accesses issued by the processing sets and to allow and to arbitrate any internal bridge read accesses initiated by the processing sets, and to store in the posted write buffer any complete device write accesses initiated by the processing sets and to abort any device bus read accesses initiated by the processing sets.
- the bridge control mechanism is operable in the initial error mode, where an address part of a device write access has already been issued to the device bus, to buffer burst data parts of the device write access in one or more disconnect registers.
- the bridge control mechanism of an embodiment of the invention is operable, in a primary error mode in which a processing set asserts itself as a primary processing set, to allow and to arbitrate any internal bridge write accesses initiated by the primary processing set, to discard any internal bridge write accesses initiated by any other processing set, to allow and to arbitrate any internal bridge read accesses initiated by the processing sets, to discard any device bus write accesses initiated by the processing sets, and to abort any device bus read accesses initiated by the processing sets.
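The access-handling rules described above for the initial and primary error modes can be summarized as a small dispatch function. This Python sketch is a simplified illustration; the mode and action names are invented for clarity and are not the patent's terminology:

```python
from enum import Enum, auto

class Mode(Enum):
    INITIAL_ESTATE = auto()   # initial error mode
    PRIMARY_ESTATE = auto()   # a processing set has asserted itself as primary

class Action(Enum):
    ALLOW = auto()    # allow and arbitrate the access
    BUFFER = auto()   # store in the posted write buffer
    DISCARD = auto()  # acknowledge but drop
    ABORT = auto()    # abort the access

def route_access(mode, is_write, targets_device_bus, from_primary=False):
    """Decide what the bridge does with a processing-set access in each error mode."""
    if mode is Mode.INITIAL_ESTATE:
        if is_write:
            return Action.BUFFER  # internal bridge and device writes are buffered
        # internal bridge reads are allowed; device bus reads are aborted
        return Action.ABORT if targets_device_bus else Action.ALLOW
    # Primary error mode:
    if is_write:
        if targets_device_bus:
            return Action.DISCARD  # device bus writes are discarded
        return Action.ALLOW if from_primary else Action.DISCARD
    return Action.ABORT if targets_device_bus else Action.ALLOW
```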
- a controllable routing matrix connects the first processor bus interface, the second processor bus interface, the device bus interface and the memory sub-system.
- the bridge control mechanism is operable to control the routing matrix selectively to interconnect the first processor bus interface, the second processor bus interface, the device bus interface and the memory sub-system according to the current mode of operation.
- the bridge control mechanism of an embodiment of the invention further comprises an address decode mechanism coupled to the first processor bus interface, the second processor bus interface and the device bus interface, the address decode mechanism being operable to determine target addresses for write and read accesses.
- the initiator of the write and read accesses will be determined by an arbitration mechanism. Initiator and target controllers control the routing of a path from an input to an output of the routing matrix, the initiator and target controllers being responsive to the arbitration and address decode mechanisms, respectively.
- the bridge control mechanism of an embodiment of the invention comprises a comparator connected to the first and second processor bus interfaces. The comparator is operable in the combined mode to detect differences between signals on the I/O buses of the first and second processing sets as indicative of a lockstep error.
- a bridge controller is connected to an output of the comparator, the bridge controller being operable, in response to a signal indicative of a lockstep error, to cause the bridge to operate in the error mode.
- a posted write buffer is configured in random access memory connected to or forming part of the bridge. Disconnect registers are configured as separate hardware registers.
- the bridge could include more than two processor bus interfaces for connection to respective I/O buses of further processing sets.
- a multi-processor system comprising a first processing set having an I/O bus, a second processing set having an I/O bus, a device bus and a bridge as set out above.
- Each processing set can include at least one processor, memory and one or more processing set I/O bus controllers.
- a method of operating a multiprocessor system as set out above, the method comprising: monitoring operation of the first and second processing sets in a combined, lockstep, operating mode, and on detection of a lockstep error, buffering write accesses initiated by the processor sets in a buffer pending resolution of the error.
- Figure 1 is a schematic overview of a fault tolerant computer system incorporating an embodiment of the invention;
- Figure 2 is a schematic overview of a specific implementation of a system based on that of Figure 1;
- Figure 3 is a schematic representation of one implementation of a processing set;
- Figure 4 is a schematic representation of another example of a processing set;
- Figure 5 is a schematic representation of a further processing set;
- Figure 6 is a schematic block diagram of an embodiment of a bridge for the system of Figure 1;
- Figure 7 is a schematic block diagram of storage for the bridge of Figure 6;
- Figure 8 is a schematic block diagram of control logic of the bridge of Figure 6;
- Figure 9 is a schematic representation of a routing matrix of the bridge of Figure 6;
- Figure 10 is an example implementation of the bridge of Figure 6;
- Figure 11 is a state diagram illustrating operational states of the bridge of Figure 6;
- Figure 12 is a flow diagram illustrating stages in the operation of the bridge of Figure 6;
- Figure 13 is a detail of a stage of operation from Figure 12;
- Figure 14 illustrates the posting of I/O cycles in the system of Figure 1;
- Figure 15 illustrates the data stored in a posted write buffer;
- Figure 16 is a schematic representation of a slot response register;
- Figure 17 illustrates a dissimilar data write stage;
- Figure 18 illustrates a modification to Figure 17;
- Figure 19 illustrates a dissimilar data read stage;
- Figure 20 illustrates an alternative dissimilar data read stage;
- Figure 21 is a flow diagram summarising the operation of a dissimilar data write mechanism;
- Figure 22 is a schematic block diagram explaining arbitration within the system of Figure 1;
- Figure 23 is a state diagram illustrating the operation of a device bus arbiter;
- Figure 24 is a state diagram illustrating the operation of a bridge arbiter;
- Figure 25 is a timing diagram for PCI signals;
- Figure 26 is a schematic diagram illustrating the operation of the bridge of Figure 6 for direct memory access;
- Figure 27 is a flow diagram illustrating a direct memory access method in the bridge of Figure 6; and
- Figure 28 is a flow diagram of a re-integration process including the monitoring of a dirty RAM.
- Figure 1 is a schematic overview of a fault tolerant computing system 10 comprising a plurality of CPUsets (processing sets) 14 and 16 and a bridge 12.
- the bridge 12 forms an interface between the processing sets and I/O devices such as devices 28, 29, 30, 31 and 32.
- the term "processing set" is used to denote a group of one or more processors, possibly including memory, which output and receive common outputs and inputs.
- the term "CPUset" could be used instead, and these terms could be used interchangeably throughout this document.
- the term "bridge" is used to denote any device, apparatus or arrangement suitable for interconnecting two or more buses of the same or different types.
- the first processing set 14 is connected to the bridge 12 via a first processing set I/O bus (PA bus) 24, in the present instance a Peripheral Component Interconnect (PCI) bus.
- the second processing set 16 is connected to the bridge 12 via a second processing set I/O bus (PB bus) 26 of the same type as the PA bus 24 (i.e. here a PCI bus).
- the I/O devices are connected to the bridge 12 via a device I/O bus (D bus) 22, in the present instance also a PCI bus.
- although the buses 22, 24 and 26 are all PCI buses, this is merely by way of example; in other embodiments other bus protocols may be used, and the D bus 22 may have a different protocol from that of the PA bus and the PB bus (P buses) 24 and 26.
- the processing sets 14 and 16 and the bridge 12 are operable in synchronism under the control of a common clock 20, which is connected thereto by clock signal lines 21.
- E-NET: Ethernet
- SCSI: Small Computer System Interface
- the use of FETs enables an increase in the length of the D bus 22, as only those devices which are active are switched on, reducing the effective total bus length. It will be appreciated that the number of I/O devices which may be connected to the D bus 22, and the number of slots provided for them, can be adjusted according to a particular implementation in accordance with specific design requirements.
- Figure 2 is a schematic overview of a particular implementation of a fault tolerant computer employing a bridge structure of the type illustrated in Figure 1.
- the fault tolerant computer system includes a plurality (here four) of bridges 12 on first and second I/O motherboards (MB 40 and MB 42) in order to increase the number of I/O devices which may be connected and also to improve reliability and redundancy.
- two processing sets 14 and 16 are each provided on a respective processing set board 44 and 46, with the processing set boards 44 and 46 'bridging' the I/O motherboards MB 40 and MB 42.
- a first, master clock source 20A is mounted on the first motherboard 40 and a second, slave clock source 20B is mounted on the second motherboard 42.
- Clock signals are supplied to the processing set boards 44 and 46 via respective connections (not shown in Figure 2).
- First and second bridges 12.1 and 12.2 are mounted on the first I/O motherboard 40.
- the first bridge 12.1 is connected to the processing sets 14 and 16 by P buses 24.1 and 26.1, respectively.
- the second bridge 12.2 is connected to the processing sets 14 and 16 by P buses 24.2 and 26.2, respectively.
- the bridge 12.1 is connected to an I/O databus (D bus) 22.1 and the bridge 12.2 is connected to an I/O databus (D bus) 22.2.
- Third and fourth bridges 12.3 and 12.4 are mounted on the second I/O motherboard 42.
- the bridge 12.3 is connected to the processing sets 14 and 16 by P buses 24.3 and 26.3, respectively.
- the bridge 12.4 is connected to the processing sets 14 and 16 by P buses 24.4 and 26.4, respectively.
- the bridge 12.3 is connected to an I/O databus (D bus) 22.3 and the bridge 12.4 is connected to an I/O databus (D bus) 22.4.
- Figure 3 is a schematic overview of one possible configuration of a processing set, such as the processing set 14 of Figure 1.
- the processing set 16 could have the same configuration.
- a plurality of processors (here four) 52 are connected by one or more buses 54 to a processing set bus controller 50.
- one or more processing set output buses 24 are connected to the processing set bus controller 50, each processing set output bus 24 being connected to a respective bridge 12.
- Figure 4 is an alternative configuration of a processing set, such as the processing set 14 of Figure 1.
- a plurality of processor/memory groups 61 are connected to a common internal bus 64.
- Each processor/memory group 61 includes one or more processors 62 and associated memory 66 connected to an internal group bus 63.
- An interface 65 connects the internal group bus 63 to the common internal bus 64. Accordingly, in the arrangement shown in Figure 4, individual processing groups, each with its processors 62 and associated memory 66, are connected via the common internal bus 64 to a processing set bus controller 60.
- the interfaces 65 enable a processor 62 of one processing group to operate not only on the data in its local memory 66, but also on the data in the memory of another processing group 61 within the processing set 14.
- the processing set bus controller 60 provides a common interface between the common internal bus 64 and the processing set I/O bus(es) (P bus(es)) 24 connected to the bridge(s) 12. It should be noted that although only two processing groups 61 are shown in Figure 4, such a structure is not limited to this number of processing groups.
- Figure 5 illustrates an alternative configuration of a processing set, such as the processing set 14 of Figure 1.
- a simple processing set includes a single processor 72 and associated memory 76 connected via a common bus 74 to a processing set bus controller 70.
- the processing set bus controller 70 provides an interface between the internal bus 74 and the processing set I/O bus(es) (P bus(es)) 24 for connection to the bridge(s) 12.
- it will be appreciated that the processing set may have many different forms and that the particular choice of processing set structure can be made on the basis of the processing requirement of a particular application and the degree of redundancy required.
- it is assumed that the processing sets 14 and 16 referred to have a structure as shown in Figure 3, although it will be appreciated that another form of processing set could be provided.
- the bridge(s) 12 are operable in a number of operating modes. These modes of operation will be described in more detail later. However, to assist in a general understanding of the structure of the bridge, the two operating modes will be briefly summarized here.
- in a combined mode, a bridge 12 is operable to route addresses and data between the processing sets 14 and 16 (via the PA and PB buses 24 and 26, respectively) and the devices (via the D bus 22).
- in this combined mode, I/O cycles generated by the processing sets 14 and 16 are compared to ensure that both processing sets are operating correctly.
- in a split mode, the bridge 12 routes and arbitrates addresses and data from one of the processing sets 14 and 16 onto the D bus 22 and/or onto the other one of the processing sets 16 and 14, respectively.
- in the split mode, the processing sets 14 and 16 are not synchronized and no I/O comparisons are made.
- DMA operations are also permitted in both modes.
- the different modes of operation, including the combined and split modes, will be described in more detail later. However, there now follows a description of the basic structure of an example of the bridge 12.
- Figure 6 is a schematic functional overview of the bridge 12 of Figure 1.
- First and second processing set I/O bus interfaces, PA bus interface 84 and PB bus interface 86, are connected to the PA and PB buses 24 and 26, respectively.
- a device I/O bus interface, D bus interface 82, is connected to the D bus 22.
- the PA, PB and D bus interfaces need not be configured as separate elements but could be incorporated in other elements of the bridge. Accordingly, within the context of this document, where a reference is made to a bus interface, this does not require the presence of a specific separate component, but rather the capability of the bridge to connect to the bus concerned, for example by means of physical or logical bridge connections for the lines of the buses concerned.
- a routing function (hereinafter termed a routing matrix) 80 is connected via a first internal path 94 to the PA bus interface 84 and via a second internal path 96 to the PB bus interface 86.
- the routing matrix 80 is further connected via a third internal path 92 to the D bus interface 82.
- the routing matrix 80 is thereby able to provide I/O bus transaction routing in both directions between the PA and PB bus interfaces 84 and 86. It is also able to provide routing in both directions between one or both of the PA and PB bus interfaces and the D bus interface 82.
- the routing matrix 80 is connected via a further internal path 100 to storage control logic 90.
- the storage control logic 90 controls access to bridge registers 110 and to a static random access memory (SRAM) 126.
- the routing matrix 80 is therefore also operable to provide routing in both directions between the PA, PB and D bus interfaces 84, 86 and 82 and the storage control logic 90.
- the routing matrix 80 is controlled by bridge control logic 88 over control paths 98 and 99.
- the bridge control logic 88 is responsive to control signals, data and addresses on internal paths 93, 95 and 97, and also to clock signals on the clock line(s) 21.
- each of the P buses (PA bus 24 and PB bus 26) operates under a PCI protocol.
- the processing set bus controllers 50 also operate under the PCI protocol.
- the PA and PB bus interfaces 84 and 86 each provide all the functionality required for a compatible interface, providing both master and slave operation for data transferred to and from the D bus 22 or the internal memories and registers of the bridge in the storage subsystem 90.
- the bus interfaces 84 and 86 can provide diagnostic information to internal bridge status registers in the storage subsystem 90 on transition of the bridge to an error state (EState) or on detection of an I/O error.
- the device bus interface 82 performs all the functionality required for a PCI compliant master and slave interface for transferring data to and from one of the PA and PB buses 24 and 26.
- the D bus interface 82 is operable during direct memory access (DMA) transfers to provide diagnostic information to internal status registers in the storage subsystem 90 of the bridge on transition to an EState or on detection of an I/O error.
- Figure 7 illustrates in more detail the bridge registers 110 and the SRAM 126.
- the storage control logic 90 is connected via a path (e.g. a bus) 112 to a number of register components 114, 116, 118, 120.
- the storage control logic is also connected via a path (e.g. a bus) 128 to the SRAM 126, in which a posted write buffer component 122 and a dirty RAM component 124 are mapped.
- although a particular configuration of the components 114, 116, 118, 120, 122 and 124 is shown in Figure 7, these components may be configured in other ways, with other components defined as regions of a common memory (e.g. a random access memory such as the SRAM 126, with the path 112/128 being formed by the internal addressing of the regions of memory).
- the posted write buffer 122 and the dirty RAM 124 are mapped to different regions of the SRAM memory 126.
- the registers 114, 116, 118 and 120 are configured as separate from the SRAM memory.
- Control and status registers (CSRs) 114 form internal registers which allow the control of various operating modes of the bridge, allow the capture of diagnostic information for an EState and for I/O errors, and control processing set access to PCI slots and devices connected to the D bus 22. These registers are set by signals from the routing matrix 80.
- Dissimilar data registers (DDRs) 116 provide locations for containing dissimilar data for different processing sets to enable non-deterministic data events to be handled. These registers are set by signals from the PA and PB buses.
- Bridge decode logic enables a common write to disable a data comparator and allow writes to two DDRs 116, one for each processing set 14 and 16.
- a selected one of the DDRs can then be read in-sync by the processing sets 14 and 16.
- the DDRs thus provide a mechanism enabling a location to be reflected from one processing set (14/16) to another (16/14).
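The reflection mechanism described above (a common write to two DDRs with the comparator disabled, followed by an in-sync read of a selected register) might be modelled as follows. This is an illustrative sketch only; the class and method names are assumptions, not the patent's implementation:

```python
class DissimilarDataRegisters:
    """Sketch of the DDR mechanism: each processing set writes its own
    (possibly different) value, then both sets read the same selected
    register in lockstep, reflecting one set's value to the other."""
    def __init__(self) -> None:
        self.ddr = {"A": 0, "B": 0}  # one register per processing set

    def common_write(self, value_a: int, value_b: int) -> None:
        # The comparator is disabled for this write, so the two sets
        # may legitimately supply different (non-deterministic) data.
        self.ddr["A"] = value_a
        self.ddr["B"] = value_b

    def in_sync_read(self, selected: str) -> int:
        # Both processing sets read the same selected register, so they
        # receive identical data and remain in lockstep.
        return self.ddr[selected]
```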
- Slot response registers (SRRs) 118.
- Disconnect registers 120 are used for the storage of data phases of an I/O cycle which is aborted while data is in the bridge on the way to another bus.
- the disconnect registers 120 receive all data queued in the bridge when a target device disconnects a transaction, or as the EState is detected.
- These registers are connected to the routing matrix 80. The routing matrix can queue up to three data words and byte enables.
- Where the initial addresses are voted as being equal, address target controllers derive addresses which increment as data is exchanged between the bridge and the destination (or target).
- where a write (for example a processor I/O write, or a DVMA (D bus to P bus access)) is in progress, this data can be caught in the bridge when an error occurs.
- this data is stored in the disconnect registers 120 when an error occurs.
- the DDRs 116, the SRRs 118 and the disconnect registers 120 may form an integral part of the CSRs 114.
- EState and error CSRs 114 are provided for the capture of a failing cycle on the P buses 24 and 26, with an indication of the failing datum. Following a move to an EState, all of the writes initiated to the P buses are logged in the posted write buffer 122. These may be writes that have been posted in the processing set bus controllers 50, or writes initiated by software before an EState interrupt causes the processors to stop carrying out writes to the P buses 24 and 26.
- a dirty RAM 124 is used to indicate which pages of the main memory 56 of the processing sets 14 and 16 have been written to by DMA transfers.
- Each page (e.g. each 8K page) is marked by a single bit in the dirty RAM 124 which is set when a DMA write occurs, and can be cleared by a read and clear cycle initiated on the dirty RAM 124 by a processor 52 of a processing set 14 or 16.
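The page-granular marking described above can be sketched as a simple bitmap. The 8K page size follows the text, while the class and method names are illustrative assumptions:

```python
PAGE_SIZE = 8 * 1024  # each 8K page of main memory maps to one bit

class DirtyRAM:
    """One bit per 8K page: set on a DMA write, cleared by a
    read-and-clear cycle initiated by a processor."""
    def __init__(self, memory_bytes: int) -> None:
        num_pages = memory_bytes // PAGE_SIZE
        self.bits = bytearray((num_pages + 7) // 8)

    def mark_dma_write(self, address: int) -> None:
        page = address // PAGE_SIZE
        self.bits[page // 8] |= 1 << (page % 8)

    def read_and_clear(self, page: int) -> bool:
        """Return whether the page was dirty, clearing its bit as a side effect."""
        dirty = bool(self.bits[page // 8] & (1 << (page % 8)))
        self.bits[page // 8] &= ~(1 << (page % 8))
        return dirty
```

During re-integration, a processor could scan the bitmap with `read_and_clear` and copy only the pages reported dirty.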
- Figure 8 is a schematic functional overview of the bridge control logic 88 shown in Figure 6.
- the bridge carries out the decoding necessary to enable the isolating FETs for each slot before an access to those slots is initiated.
- the address decoding performed by the address decode logic 136 and 138 essentially permits four basic access types:
- slot 0 on motherboard A has the same address when referred to by processing set 14 or by processing set 16.
- a single device select signal can be provided for the switched PCI slots, as the FET signals can be used to enable a correct card.
- Separate FET switch lines are provided to each slot for separately switching the FETs for the slots.
- the SRRs 118, which could be incorporated in the CSR registers 114, are associated with the address decode functions.
- the SRRs 118 serve in a number of different roles which will be described in more detail later. However, some of the roles are summarized here.
- each slot may be disabled so that writes are simply acknowledged without any transaction occurring on the device bus 22, whereby the data is lost. Reads will return meaningless data, once again without causing a transaction on the device bus.
- each slot can be in one of three states.
- the states are:
- a slot that is not owned by a processing set 14 or 16 making an access (this includes not-owned or unowned slots) cannot be accessed. Accordingly, such an access is aborted.
- the ownership bits are accessible and settable while in the combined mode, but have no effect until a split state is entered. This allows the configuration of a split system to be determined while still in the combined mode.
- Each PCI device is allocated an area of the processing set address map. The top bits of the address are determined by the PCI slot. Where a device carries out DMA, the bridge is able to check that the device is using the correct address, because a D bus arbiter informs the bridge which device is using the bus at a particular time.
- if a device access is to a processing set address which is not valid for it, then the device access will be ignored. It should be noted that an address presented by a device will be a virtual address which would be translated by an I/O memory management unit in the processing set bus controller 50 to an actual memory address.
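The DMA address check described above might be modelled as follows, with a slot-to-region map standing in for the decode of the top address bits; the function and parameter names are illustrative assumptions:

```python
def dma_address_valid(address: int, granted_slot: int, slot_regions: dict) -> bool:
    """Return True if a DMA address falls inside the address area allocated
    to the slot currently granted the D bus (as reported by the arbiter);
    otherwise the bridge ignores the access.

    `slot_regions` maps a slot number to a (base, size) pair, standing in
    for the decode determined by the top address bits."""
    base, size = slot_regions[granted_slot]
    return base <= address < base + size
```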
- the addresses output by the address decoders are passed via the initiator and target controllers 138 and 140 to the routing matrix 80 via the lines 98, under control of a bridge controller 132 and an arbiter 134.
- An arbiter 134 is operable in various different modes to arbitrate for use of the bridge on a first-come-first-served basis using conventional PCI bus signals on the P and D buses.
- the arbiter 134 is operable to arbitrate between the in-sync processing sets 14 and 16 and any initiators on the device bus 22 for use of the bridge 12. Possible scenarios include processing set access to the device bus 22.
- both processing sets 14 and 16 must arbitrate for the use of the bridge, and thus access to the device bus 22 and internal bridge registers (e.g. CSR registers 114).
- the bridge 12 must also contend with initiators on the device bus 22 for use of that device bus 22.
- Each slot on the device bus has an arbitration enable bit associated with it.
- These arbitration enable bits are cleared after reset and must be set to allow a slot to request the bus.
- the arbitration enable bit for that device is automatically reset by the bridge.
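The life cycle of the per-slot arbitration enable bits described above (cleared at reset, set to permit bus requests, automatically reset for an offending device) can be sketched as follows; the class and method names are illustrative assumptions:

```python
class DeviceBusArbiter:
    """Per-slot arbitration enable bits for the device (D) bus."""
    def __init__(self, num_slots: int) -> None:
        self.enabled = [False] * num_slots  # cleared after reset

    def set_enable(self, slot: int) -> None:
        # must be set before the slot is allowed to request the bus
        self.enabled[slot] = True

    def may_request(self, slot: int) -> bool:
        return self.enabled[slot]

    def disable_on_error(self, slot: int) -> None:
        # the bridge automatically resets the bit for the offending device
        self.enabled[slot] = False
```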
- a PCI bus interface in the processing set bus controller(s) 50 expects to be the master bus controller for the P bus concerned, that is, it contains the PCI bus arbiter for the PA or PB bus to which it is connected.
- the bridge 12 cannot directly control access to the PA and PB buses 24 and 26.
- the bridge 12 competes for access to the PA or PB bus with the processing set on the bus concerned, under the control of the bus controller 50 on the bus concerned.
- Also shown in Figure 8 are a comparator 130 and a bridge controller 132.
- the comparator 130 is operable to compare I/O cycles from the processing sets 14 and 16 to determine any out-of-sync events. On determining an out-of-sync event, the comparator 130 is operable to cause the bridge controller 132 to activate an EState for analysis of the out-of-sync event and possible recovery therefrom.
- Figure 9 is a schematic functional overview of the routing matrix 80.
- the routing matrix 80 comprises a multiplexer 143 which is responsive to initiator control signals 98 from the initiator controller 138 of Figure 8 to select one of the PA bus path 94, PB bus path 96, D bus path 92 or internal bus path 100 as the current input to the routing matrix.
- Separate output buffers 144, 145, 146 and 147 are provided for output to each of the paths 94, 96, 92 and 100.
- Figure 10 is a schematic representation of a physical configuration of the bridge in which the bridge control logic 88, the storage control logic 90 and the bridge registers 110 are implemented in a first field programmable gate array (FPGA) 89, the routing matrix 80 is implemented in further FPGAs 80.1 and 80.2, and the SRAM 126 is implemented as one or more separate SRAMs addressed by address control lines 127.
- the bus interfaces 82, 84 and 86 shown in Figure 6 are not separate elements, but are integrated in the FPGAs 80.1, 80.2 and 89.
- Two FPGAs 80.1 and 80.2 are used for the upper 32 bits (32-63) of a 64-bit PCI bus and the lower 32 bits (0-31) of the 64-bit PCI bus. It will be appreciated that a single FPGA could be employed for the routing matrix 80 where the necessary logic can be accommodated within the device.
- the bridge control logic, storage control logic and the bridge registers could be incorporated in the same FPGA as the routing matrix. Indeed, many other configurations may be envisaged, and technology other than FPGAs, for example one or more Application Specific Integrated Circuits (ASICs), may be employed. As shown in Figure 10, the FPGAs 89, 80.1 and 80.2 and the SRAM 126 are connected via internal bus paths 85 and path control lines 87.
- Figure 11 is a transition diagram illustrating in more detail the various operating modes of the bridge.
- the bridge operation can be divided into three basic modes, namely an error state (EState) mode 150, a split state mode 156 and a combined state mode 158.
- the EState mode 150 can be further divided into two states. After initial resetting on powering up the bridge, or following an out-of-sync event, the bridge is in an initial EState 152. In this state, all writes are stored in the posted write buffer 122 and reads from the internal bridge registers (e.g. the CSR registers 114) are allowed, while all other reads are treated as errors (i.e. they are aborted).
- The individual processing sets 14 and 16 perform evaluations for determining a restart time.
- Each processing set 14 and 16 will determine its own restart timer timing.
- The timer setting depends on a "blame" factor for the transition to the EState.
- The bridge then moves (155) to the split state 156.
- In the split state 156, access to the device bus 22 is controlled by the SRR registers 118, while access to the bridge storage is simply arbitrated. The primary status of the processing sets 14 and 16 is ignored. Transition to a combined operation is achieved by means of a sync_reset (157).
- The bridge is then operable in the combined state 158, whereby all read and write accesses on the D bus 22 and the PA and PB buses 24 and 26 are allowed. All such accesses on the PA and PB buses 24 and 26 are compared in the comparator 130. Detection of a mismatch between any read and write cycles (with an exception of specific dissimilar data I/O cycles) causes a transition 151 to the EState 150.
- The various states described are controlled by the bridge controller 132.
- The role of the comparator 130 is to monitor and compare I/O operations on the PA and PB buses in the combined state 158 and, in response to a mismatch, to notify the bridge controller 132, whereby the bridge controller 132 causes the transition 151 to the error state 150.
- The I/O operations can include all I/O operations initiated by the processing sets, as well as DMA transfers in respect of DMA initiated by a device on the device bus.
- A system running in the combined mode 158 transitions to the EState 150 where a comparison failure is detected in this bridge, or alternatively a comparison failure is detected in another bridge in a multi-bridge system as shown, for example, in Figure 2.
- Transitions to an EState 150 can occur in other situations, for example in the case of a software controlled event forming part of a self test operation.
- On transition to the EState 150, an interrupt is signaled to all or a subset of the processors of the processing sets via an interrupt line 95. Following this, all I/O cycles generated on a P bus 24 or 26 result in reads being returned with an exception and writes being recorded in the posted write buffer.
- The operation of the comparator 130 will now be described in more detail.
- The comparator is connected to paths 94, 95, 96 and 97 for comparing address, data and selected control signals from the PA and PB bus interfaces 84 and 86.
- A failed comparison of in-sync accesses to device I/O bus 22 devices causes a move from the combined state 158 to the EState 150.
- The address, command, address parity, byte enables and parity error parameters are compared. If the comparison fails during the address phase, the bridge asserts a retry to the processing set bus controllers 50, which prevents data leaving the I/O bus controllers 50. No activity occurs in this case on the device I/O bus 22. On the processor(s) retrying, no error is returned.
- If the comparison fails during a data phase, an error is returned to the processors.
- The address, command, parity, byte enables and data parameters are compared.
- If the comparison fails during the address phase, the bridge asserts a retry to the processing set bus controllers 50, which results in the processing set bus controllers 50 retrying the cycle.
- The posted write buffer 122 is then active. No activity occurs on the device I/O bus 22. If the comparison fails during the data phase of a write operation, no data is passed to the D bus 22.
- The failing data and any other transfer attributes from both processing sets 14 and 16 are stored in the disconnect registers 120, and any subsequent posted write cycles are recorded in the posted write buffer 122.
- The data, control and parity are checked for each datum. If the data does not match, the bridge 12 terminates the transfer on the P bus.
- Control and parity error signals are checked for correctness.
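The per-field comparison performed by the comparator 130 can be pictured with the following sketch. This is an illustrative software model only (the comparator is hardware logic), and the field names are assumptions, not taken from the patent; it shows the key behavior: any mismatch between the PA and PB bus signals is a lockstep error, except while the comparator is disabled for a dissimilar data access.

```python
# Illustrative model of comparator 130: compare each bus signal driven by
# the PA and PB interfaces in the same clock; report mismatched fields.
FIELDS = ("address", "command", "parity", "byte_enables", "data")

def lockstep_compare(pa_cycle, pb_cycle, ddr_access=False):
    """Return the list of mismatched fields; empty means the buses agree.

    ddr_access: True during the data phase of a dissimilar data (DDR)
    write, when the disable signal 137 turns the comparator off.
    """
    if ddr_access:
        return []  # comparator disabled: differing data is tolerated
    return [f for f in FIELDS if pa_cycle.get(f) != pb_cycle.get(f)]

cycle_a = {"address": 0x1000, "command": "write", "parity": 0,
           "byte_enables": 0xF, "data": 0xDEADBEEF}
cycle_b = dict(cycle_a, data=0xDEADBEEA)  # one processing set diverges
```

In the real bridge a non-empty result corresponds to the mismatch signal that moves the state machine from the combined state 158 to the EState 150.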
- Errors fall roughly into two types: those which are made visible to the software by the processing set bus controller 50, and those which are not made visible by the processing set bus controller 50 and hence need to be made visible by an interrupt from the bridge 12.
- The bridge is operable to capture errors reported in connection with processing set read and write cycles, and DMA reads and writes.
- Clock control for the bridge is performed by the bridge controller 132 in response to the clock signals from the clock line 21. Individual control lines from the controller 132 to the various elements of the bridge are not shown in Figures 6 to 10.
- Figure 12 is a flow diagram illustrating a possible sequence of operating stages where lockstep errors are detected during a combined mode of operation.
- Stage S1 represents the combined mode of operation where lockstep error checking is performed by the comparator 130 shown in Figure 8.
- In Stage S2, a lockstep error is assumed to have been detected by the comparator 130.
- In Stage S3, the current state is saved in the CSR registers 114 and posted writes are saved in the posted write buffer 122 and/or in the disconnect registers 120.
- FIG. 13 illustrates Stage S3 in more detail. Accordingly, in Stage S31, the bridge controller 132 detects whether the lockstep error notified by the comparator 130 has occurred during a data phase in which it is possible to pass data to the device bus 22. In this case, in Stage S32, the bus cycle is terminated. Then, in Stage S33, the data phases are stored in the disconnect registers 120, and control then passes to Stage S35 where an evaluation is made as to whether a further I/O cycle needs to be stored.
- Stage S3 is performed at the initiation of the initial error state 152 shown in Figure 11.
- The first and second processing sets arbitrate for access to the bridge.
- The posted write address and data phases for each of the processing sets 14 and 16 are stored in separate portions of the posted write buffer 122, and/or in the single set of disconnect registers as described above.
- Figure 14 illustrates the source of the posted write I/O cycles which need to be stored in the posted write buffer 122.
- Output buffers 162 in the individual processors contain I/O cycles which have been posted for transfer via the processing set bus controllers 50 to the bridge 12 and eventually to the device bus 22.
- Buffers 160 in the processing set controllers 50 also contain posted I/O cycles for transfer over the buses 24 and 26 to the bridge 12 and eventually to the device bus 22.
- A write cycle 164 posted to the posted write buffer 122 can comprise an address field 165 including an address and an address type, and between one and 16 data fields 166, each including a byte enable field and the data itself.
- The data is written into the posted write buffer 122 in the EState unless the initiating processing set has been designated as a primary CPU set. At that time, non-primary writes in an EState still go to the posted write buffer even after one of the CPU sets has become a primary processing set.
- The value of the posted write buffer pointer can be cleared at reset, or by software using a write under the control of a primary processing set.
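The layout of a posted write cycle described above can be modeled as follows. This is a minimal sketch with names of our own choosing (the patent specifies only the fields, not a software structure): one address field carrying the address and address type, followed by between one and 16 data fields, each pairing a byte-enable mask with its data.

```python
# Illustrative model of a posted write cycle 164 as held in the posted
# write buffer 122: address field 165 plus up to 16 data fields 166.
class PostedWrite:
    MAX_DATA_FIELDS = 16

    def __init__(self, address, addr_type):
        self.address = address        # address field 165: the address...
        self.addr_type = addr_type    # ...and the address type
        self.data_fields = []         # data fields 166: (byte_enable, data)

    def add_datum(self, byte_enable, data):
        if len(self.data_fields) >= self.MAX_DATA_FIELDS:
            raise ValueError("posted write cycle holds at most 16 data fields")
        self.data_fields.append((byte_enable, data))

w = PostedWrite(address=0x2000, addr_type="I/O")
w.add_datum(0xF, 0x12345678)
```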
- The individual processing sets independently seek to evaluate the error state and to determine whether one of the processing sets is faulty. This determination is made by the individual processors in an error state in which they individually read status from the control state and EState registers 114. During this error mode, the arbiter 134 arbitrates for access to the bridge 12.
- In Stage S5, one of the processing sets 14 and 16 establishes itself as the primary processing set. This is determined by each of the processing sets identifying a time factor based on the estimated degree of responsibility for the error, whereby the first processing set to time out becomes the primary processing set. In Stage S5, the status is recovered for that processing set and is copied to the other processing set. The primary processing set is able to access the posted write buffer 122 and the disconnect registers 120.
- In Stage S6, the bridge is operable in a split mode. If it is possible to re-establish an equivalent status for the first and second processing sets, then a reset is issued at Stage S7 to put the processing sets in the combined mode at Stage S1. However, it may not be possible to re-establish an equivalent state until a faulty processing set is replaced. Accordingly, the system will stay in the split mode of Stage S6 in order to continue operation based on a single processing set. After replacing the faulty processing set, the system could then establish an equivalent state and move via Stage S7 to Stage S1.
- The comparator 130 is operable in the combined mode to compare the I/O operations output by the first and second processing sets 14 and 16. This is fine as long as all of the I/O operations of the first and second processing sets 14 and 16 are fully synchronized and deterministic. Any deviation from this will be interpreted by the comparator 130 as a loss of lockstep. This is in principle correct, as even a minor deviation from identical outputs, if not trapped by the comparator 130, could lead to the processing sets diverging further from each other as the individual processing sets act on the deviating outputs.
- However, a strict application of this puts significant constraints on the design of the individual processing sets. An example of this is that it would not be possible to have independent time-of-day clocks in the individual processing sets operating under their own clocks. This is because it is impossible to obtain two crystals which are 100% identical in operation. Even small differences in the phase of the clocks could be critical as to whether the same sample is taken at any one time.
- Figure 17 is a schematic representation of details of the bridge of Figures 6 to 10. It will be noted that details of the bridge not shown in Figures 6 to 8 are shown in Figure 17, whereas other details of the bridge shown in Figures 6 to 8 are not shown in Figure 17, for reasons of clarity.
- The DDRs 116 are provided in the bridge registers 110 of Figure 7, but could be provided elsewhere in the bridge in other embodiments.
- One DDR 116 is provided for each processing set. In the example of the multi-processor system of Figure 1 where two processing sets 14 and 16 are provided, two DDRs 116A and 116B are provided, one for each of the first and second processing sets 14 and 16, respectively.
- Figure 17 represents a dissimilar data write stage.
- The addressing logic 136 is shown schematically to comprise two decoder sections, one decoder section 136A for the first processing set 14 and one decoder section 136B for the second processing set 16. During an address phase of a dissimilar data I/O write operation, each of the processing sets 14 and 16 outputs the same predetermined address DDR-W, which is separately interpreted by the respective first and second decoding sections 136A and 136B as addressing the respective first and second DDRs 116A and 116B. As the same address is output by the first and second processing sets 14 and 16, this is not interpreted by the comparator 130 as a lockstep error.
- The decoding section 136A, or the decoding section 136B, or both, are arranged to further output a disable signal 137 in response to the predetermined write address supplied by the first and second processing sets 14 and 16.
- This disable signal is supplied to the comparator 130 and is operative during the data phase of the write operation to disable the comparator.
- As a result, the data output by the first processing set can be stored in the first DDR 116A and the data output by the second processing set can be stored in the second DDR 116B without the comparator being operative to detect a difference, even if the data from the first and second processing sets is different.
- The first decoding section is operable to cause the routing matrix to store the data from the first processing set 14 in the first DDR 116A.
- The second decoding section is operable to cause the routing matrix to store the data from the second processing set 16 in the second DDR 116B.
- At the end of the data phase, the comparator 130 is once again enabled to detect any differences between I/O operations.
- The processing sets are then operable to read the data from a selected one of the DDRs 116A/116B.
- Figure 18 illustrates an alternative arrangement where the disable signal 137 is negated and is used to control a gate 131 at the output of the comparator 130. When the disable signal is active, the output of the comparator is disabled, whereas when the disable signal is inactive, the output of the comparator is enabled.
- Figure 19 illustrates the reading of the first DDR 116A in a subsequent dissimilar data read stage.
- Each of the processing sets 14 and 16 outputs the same predetermined address DDR-RA, which is separately interpreted by the respective first and second decoding sections 136A and 136B as addressing the same DDR, namely the first DDR 116A.
- The content of the first DDR 116A is read by both of the processing sets 14 and 16, thereby enabling those processing sets to receive the same data.
- This enables the two processing sets 14 and 16 to achieve deterministic behavior, even if the source of the data written into the DDRs 116 by the processing sets 14 and 16 was not deterministic.
- Alternatively, the processing sets could each read the data from the second DDR 116B.
- Figure 20 illustrates the reading of the second DDR 116B in a dissimilar data read stage following the dissimilar data write stage of Figure 17.
- Each of the processing sets 14 and 16 outputs the same predetermined address DDR-RB, which is separately interpreted by the respective first and second decoding sections 136A and 136B as addressing the same DDR, namely the second DDR 116B.
- The content of the second DDR 116B is read by both of the processing sets 14 and 16, thereby enabling those processing sets to receive the same data.
- This enables the two processing sets 14 and 16 to achieve deterministic behavior, even if the source of the data written into the DDRs 116 by the processing sets 14 and 16 was not deterministic.
- The selection of which of the first and second DDRs 116A and 116B to read can be determined in any appropriate manner by the software operating on the processing modules. This could be done on the basis of a simple selection of one or the other of the DDRs, or on a statistical basis, or randomly, or in any other manner, as long as the same choice of DDR is made by both or all of the processing sets.
- FIG. 21 is a flow diagram summarizing the various stages of operation of the DDR mechanism described above.
- In stage S10, a DDR write address DDR-W is received and decoded by the address decoder sections 136A and 136B during the address phase of the DDR write operation.
- In stage S11, the comparator 130 is disabled.
- In stage S12, the data received from the processing sets 14 and 16 during the data phase of the DDR write operation is stored in the first and second DDRs 116A and 116B, respectively, as selected by the first and second decode sections 136A and 136B, respectively.
- In stage S13, a DDR read address is received from the first and second processing sets and is decoded by the decode sections 136A and 136B, respectively.
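The DDR protocol of stages S10 to S13 can be summarized with a small sketch. This is a software model under assumed names (the mechanism itself is implemented by the decoders 136A/136B, the disable signal 137 and the routing matrix): a write to the common DDR-W address stores each processing set's possibly different data in its own DDR with the comparator disabled, and a later read of a single selected DDR returns identical data to both processing sets.

```python
# Illustrative model of the dissimilar data register (DDR) mechanism.
class DdrMechanism:
    def __init__(self):
        self.ddr = {"A": None, "B": None}  # DDR 116A and DDR 116B

    def write_ddr(self, data_a, data_b):
        # Address phase: both sets emit the same DDR-W address, so no
        # lockstep error; data phase: comparator disabled, each DDR
        # captures its own processing set's data.
        self.ddr["A"] = data_a
        self.ddr["B"] = data_b

    def read_ddr(self, which):
        # Both processing sets issue the same DDR-RA/DDR-RB address and
        # both receive the content of the single selected DDR.
        return self.ddr[which]

m = DdrMechanism()
m.write_ddr(data_a=1234, data_b=5678)  # dissimilar data tolerated
```

After the read, both processing sets hold the same value again, which is how deterministic behavior is restored even from a non-deterministic source.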
- FIG. 22 is a schematic representation of the arbitration performed on the respective buses 22, 24 and 26.
- Each of the processing set bus controllers 50 in the respective processing sets 14 and 16 includes a conventional PCI master bus arbiter 180 for providing arbitration on the respective buses 24 and 26.
- Each of the master arbiters 180 is responsive to request signals from the associated processing set bus controller 50 and the bridge 12 on respective request (REQ) lines 181 and 182.
- The master arbiters 180 allocate access to the bus on a first-come-first-served basis, issuing a grant (GNT) signal to the winning party on an appropriate grant line 183 or 184.
- A conventional PCI bus arbiter 185 provides arbitration on the D bus 22.
- The D bus arbiter 185 can be configured as part of the D bus interface 82 of Figure 6, or could be separate therefrom.
- The D bus arbiter is responsive to request signals from the contending devices, including the bridge and the devices 30, 31, etc., connected to the device bus 22.
- Respective request lines 186, 187, 188, etc., for each of the entities competing for access to the D bus 22 are provided for the request signals (REQ).
- The D bus arbiter 185 allocates access to the D bus on a first-come-first-served basis, issuing a grant (GNT) signal to the winning entity via respective grant lines 189, 190, 192, etc.
- Figure 23 is a state diagram summarizing the operation of the D bus arbiter 185.
- Up to six request signals may be produced by respective D bus devices, and one by the bridge itself.
- These are sorted by a priority encoder, and the request signal (REQ#) with the highest priority is registered as the winner and gets a grant (GNT#) signal.
- Each winner which is selected modifies the priorities in the priority encoder so that, given the same REQ# signals, on the next move to grant a different device has the highest priority; hence each device has a "fair" chance of accessing the D bus.
- The bridge REQ# has a higher weighting than the D bus devices and will, under very busy conditions, get the bus for every second device.
- A BACKOFF state is required as, under PCI rules, a device may access the bus one cycle after GNT# is removed. Devices may only be granted access to the D bus if the bridge is not in the EState. A new GNT# is produced at times when the bus is idle.
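The fairness scheme just described can be sketched as follows. This is a simplified software model, not the patent's actual priority encoder: the highest-priority requester wins, the winner is demoted so that the same REQ# pattern produces a different winner next time, and the bridge's weighted request is granted every second turn under busy conditions.

```python
# Simplified model of the D bus arbiter 185's rotating-priority grants.
class DBusArbiter:
    def __init__(self, devices):
        self.order = list(devices)   # head of list = highest priority
        self.bridge_last_won = False

    def grant(self, requests, bridge_request=False):
        # The bridge request is weighted: it wins unless it just won,
        # so under load it gets the bus for every second grant.
        if bridge_request and not self.bridge_last_won:
            self.bridge_last_won = True
            return "bridge"
        self.bridge_last_won = False
        for dev in self.order:
            if dev in requests:
                # Demote the winner to the tail: the "fair" rotation.
                self.order.remove(dev)
                self.order.append(dev)
                return dev
        return None                  # no requests: bus stays idle

arb = DBusArbiter(["dev0", "dev1", "dev2"])
```

With dev0 and dev1 requesting continuously and the bridge always requesting, grants alternate bridge/dev0/bridge/dev1, matching the "every second device" behavior described above.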
- FIG. 24 is a state diagram summansmg the operation of the bridge arbiter 134
- a pnonty encoder can be provided to resolve access attempts which collide. In this case "a collision" the loser/losers are retried which forces them to give up the bus Under PCI rules retried devices must try repeatedly to access the bridge and this can be expected to happen.
- a PA or PB bus input selects which P bus interface will win a bndge access. Both are informed they won. Allowed selection enables latent fault checkmg during normal operation EState prevents the D bus from winning.
- The bridge arbiter 134 is responsive to standard PCI signals provided on standard PCI control lines 22, 24 and 25 to control access to the bridge 12.
- Figure 25 illustrates signals associated with an I/O operation cycle on the PCI bus.
- A PCI frame signal (FRAME#) is initially asserted.
- Address (A) signals will be available on the DATA BUS, and the appropriate command (write/read) signals (C) will be available on the command bus (CMD BUS).
- The initiator ready signal (IRDY#), the device select signal (DEVSEL#) and the target ready signal (TRDY#) are then asserted as the data transfer proceeds.
- The bridge is operable to allocate access to the bridge resources, and thereby to negotiate allocation of a target bus, in response to FRAME# being asserted low for the initiator bus concerned.
- The bridge arbiter 134 is operable to allocate access to the bridge resources and/or to a target bus on a first-come-first-served basis in response to FRAME# being asserted low.
- The arbiters may additionally be provided with a mechanism for logging the arbitration requests, and can apply a conflict resolution based on the request and allocation history where two requests are received at an identical time.
- Alternatively, a simple priority can be allocated to the various requesters, whereby, in the case of identically timed requests, a particular requester always wins the allocation process.
- Each of the slots on the device bus 22 has a slot response register (SRR) 118, as do other devices connected to the bus, such as a SCSI interface.
- Each of the SRRs 118 contains bits defining the ownership of the slots, or of the devices connected to the slots, on the direct memory access bus.
- Each SRR 118 comprises a four bit register.
- However, a larger register will be required to determine ownership between more than two processing sets. For example, if three processing sets are provided, then a five bit register will be required for each slot.
- Figure 16 illustrates schematically one such four bit register 600. As shown in Figure 16, a first bit 602 is identified as SRR[0], a second bit 604 is identified as SRR[1], a third bit 606 is identified as SRR[2] and a fourth bit 608 is identified as SRR[3].
- Bit SRR[0] is a bit which is set when writes for valid transactions are to be suppressed.
- Bit SRR[1] is set when the device slot is owned by the first processing set 14. This defines the access route between the first processing set 14 and the device slot. When this bit is set, the first processing set 14 can always be master of the device slot, while the ability for the device slot to be master depends on whether bit SRR[3] is set.
- Bit SRR[2] is set when the device slot is owned by the second processing set 16. This defines the access route between the second processing set 16 and the device slot.
- Bit SRR[3] is an arbitration bit which gives the device slot the ability to become master of the device bus 22, but only if it is owned by one of the processing sets 14 and 16, that is, if one of the bits SRR[1] and SRR[2] is set.
- When the fake bit (SRR[0]) of an SRR 118 is set, writes to the device for that slot are ignored and do not appear on the device bus 22. Reads return indeterminate data without causing a transaction on the device bus 22.
- The fake bit SRR[0] of the SRR 118 corresponding to the device which caused the error is set by the hardware configuration of the bridge to disable further access to the device slot concerned.
- An interrupt may also be generated by the bridge to inform the software which originated the access leading to the I/O error that the error has occurred.
- The fake bit has an effect whether the system is in the split or the combined mode of operation.
- In the split mode, each slot can be in three states: not-owned; owned by the first processing set; and owned by the second processing set.
- A slot which is not owned by the processing set making the access (this includes un-owned slots) cannot be accessed, and an attempted access results in an abort.
- A processing set can only claim an un-owned slot; it cannot wrest ownership away from another processing set. This can only be done by powering off the other processing set. When a processing set is powered off, all slots owned by it move to the un-owned state. Whilst it is not possible for a processing set to wrest ownership from another processing set, it is possible for a processing set to give ownership to another processing set.
- The owned bits can be altered when in the combined mode of operation, but they have no effect until the split mode is entered.
- Table 2 summarizes the access rights as determined by an SRR 118.
- The setting of SRR[2] logic high indicates that the device is owned by processing set B.
- SRR[3] is set logic low, so the device is not allowed access to the processing set. SRR[0] is set high so that any writes to the device are ignored and reads therefrom return indeterminate data.
- In this way, the malfunctioning device is effectively isolated from the processing set, and indeterminate data is provided to satisfy any device drivers, for example, that might be looking for a response from the device.
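The SRR bit assignments and the ownership rules above can be expressed compactly in code. This is an illustrative sketch (the helper names and the software form are ours; in the bridge these are hardware decode rules applied to the four-bit register 600):

```python
# Illustrative encoding of the four-bit slot response register (SRR).
SRR_FAKE  = 1 << 0  # SRR[0]: writes suppressed, reads return indeterminate data
SRR_OWN_A = 1 << 1  # SRR[1]: slot owned by the first processing set 14
SRR_OWN_B = 1 << 2  # SRR[2]: slot owned by the second processing set 16
SRR_ARB   = 1 << 3  # SRR[3]: slot may arbitrate to master the D bus

def slot_may_master(srr):
    # A device slot may become bus master only if its arbitration bit is
    # set AND it is owned by one of the processing sets.
    return bool(srr & SRR_ARB) and bool(srr & (SRR_OWN_A | SRR_OWN_B))

def set_access_allowed(srr, owner_bit):
    # A processing set may only access a slot that it owns.
    return bool(srr & owner_bit)

# The faulty-device example above: owned by set B, faked, arbitration off.
faulty = SRR_FAKE | SRR_OWN_B
```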
- Figure 26 illustrates the operation of the bridge 12 for direct memory access (DMA) by a device such as one of the devices 28, 29, 30, 31 and 32 to the memory 56 of the processing sets 14 and 16.
- The address decode logic 142 holds, or has access to, a geographic address map 196, which identifies the relationship between the processor address space and the slots as a result of the geographic addressing employed.
- This geographic address map 196 could be held as a table in the bridge memory 126, along with the posted write buffer 122 and the dirty RAM 124. Alternatively, it could be held as a table in a separate memory element, possibly forming part of the address decoder 142 itself.
- The map 196 could also be configured in a form other than a table.
- The address decode logic 142 is configured to verify the correctness of the DMA addresses supplied by the device 30. In one embodiment of the invention, this is achieved by comparing four significant address bits of the address supplied by the device 30 with the corresponding four address bits of the address held in the geographic address map 196 for the slot identified by the D bus grant signal for the DMA request. In this example, four address bits are sufficient to determine whether the address supplied is within the correct address range. In this specific example, 32 bit PCI bus addresses are used, with bits 31 and 30 always being set to 1, bit 29 being allocated to identify which of two bridges on a motherboard is being addressed (see Figure 2), and bits 28 to 26 identifying the slot concerned.
- Bits 25-0 define an offset from the base address of the address range for each slot. Accordingly, by comparing bits 29-26, it is possible to identify whether the address(es) supplied fall(s) within the appropriate address range for the slot concerned. It will be appreciated that in other embodiments a different number of bits may need to be compared to make this determination, depending upon the allocation of the addresses.
- The address decode logic 142 could be arranged to use the bus grant signal 184 for the slot concerned to identify a table entry for the slot concerned, and then to compare the address in that entry with the address(es) received with the DMA request as described above.
- Alternatively, the address decode logic 142 could be arranged to use the address(es) received with the DMA address to address a relational geographic address map and to determine a slot number therefrom, which could be compared to the slot for which the bus grant signal 194 is intended, thereby determining whether the addresses fall within the address range appropriate for the slot concerned.
- The address decode logic 142 is arranged to permit DMA to proceed if the DMA addresses fall within the expected address space for the slot concerned. Otherwise, the address decoder is arranged to ignore the slots and the physical addresses.
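The bit comparison described above can be sketched as follows. This is an illustrative model only: the map contents are hypothetical example values, not addresses from the patent, and in the bridge the check is performed in the address decode logic 142 rather than software. It compares bits 29-26 of the supplied 32-bit PCI address against the same bits of the geographic map entry for the granted slot.

```python
# Illustrative DMA address verification: bits 29-26 select the bridge
# (bit 29) and the slot/region (bits 28-26); bits 25-0 are an offset.
def dma_address_valid(dma_addr, geo_map, slot):
    """True if the DMA address falls in the granted slot's address range."""
    mask = 0xF << 26  # bits 29-26
    return (dma_addr & mask) == (geo_map[slot] & mask)

# Hypothetical geographic address map: base address per slot, with
# bits 31 and 30 set as in the text's example.
geo_map = {0: 0xC0000000, 1: 0xC4000000}
```

An address received with a grant for slot 1 is accepted only if its bits 29-26 match slot 1's entry; otherwise the request is ignored, as described above.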
- The address decode logic 142 is further operable to control the routing of the DMA request to the appropriate processing set(s) 14/16. If the bridge is in the combined mode, the DMA access will automatically be allocated to all of the in-sync processing sets 14/16. The address decode logic 142 will be aware that the bridge is in the combined mode as it is under the control of the bridge controller 132 (see Figure 8). However, where the bridge is in the split mode, a decision will need to be made as to which, if any, of the processing sets the DMA request is to be sent.
- When the system is in split mode, the access will be directed to the processing set 14 or 16 which owns the slot concerned. If the slot is un-owned, then the bridge does not respond to the DMA request.
- The address decode logic 142 is operable to determine the ownership of the device originating the DMA request by accessing the SRR 118 for the slot concerned. The appropriate slot can be identified by the D bus grant signal.
- The address decode logic 142 is operable to control the target controller 140 (see Figure 8) to pass the DMA request to the appropriate processing set(s) 14/16 based on the ownership bits SRR[1] and SRR[2]. If bit SRR[1] is set, the first processing set 14 is the owner and the DMA request is passed to the first processing set.
- If bit SRR[2] is set, the second processing set 16 is the owner and the DMA request is passed to the second processing set. If neither of the bits SRR[1] and SRR[2] is set, then the DMA request is ignored by the address decoder and is not passed to either of the processing sets 14 and 16.
- FIG. 27 is a flow diagram summarizing the DMA verification process as illustrated with reference to Figure 26.
- The D-bus arbiter 185 arbitrates for access to the D bus 22.
- In stage S21, the address decoder 142 verifies the DMA addresses supplied with the DMA request by accessing the geographic address map.
- In stage S22, the address decoder ignores the DMA access where the address falls outside the expected range for the slot concerned.
- Otherwise, the actions of the address decoder are dependent upon whether the bridge is in the combined or the split mode.
- If the bridge is in the combined mode, the address decoder controls the target controller 140 (see Figure 8) to cause the routing matrix 80 (see Figure 6) to pass the DMA request to both processing sets 14 and 16. If the bridge is in the split mode, the address decoder is operative to verify the ownership of the slot concerned by reference to the SRR 118 for that slot in stage S25.
- If the slot is allocated to the first processing set 14 (i.e. the SRR[1] bit is set), then in stage S26 the address decoder 142 controls the target controller 140 (see Figure 8) to cause the routing matrix 80 (see Figure 6) to pass the DMA request to the first processing set 14. If the slot is allocated to the second processing set 16 (i.e. the SRR[2] bit is set), then in stage S27 the address decoder 142 controls the target controller 140 (see Figure 8) to cause the routing matrix 80 (see Figure 6) to pass the DMA request to the second processing set 16.
- If neither bit is set, then in stage S28 the address decoder 142 ignores or discards the DMA request, and the DMA request is not passed to the processing sets 14 and 16.
- A DMA, or direct vector memory access (DVMA), request sent to one or more of the processing sets causes the necessary memory operations (read or write, as appropriate) to be effected on the processing set memory.
- The automatic recovery process includes reintegration of the state of the processing sets to a common status in order to attempt a restart in lockstep.
- The processing set which asserts itself as the primary processing set, as described above, copies its complete state to the other processing set. This involves ensuring that the content of the memory of both processing sets is the same before trying a restart in lockstep mode.
- A problem with the copying of the content of the memory from one processing set to the other is that during this copying process a device connected to the D bus 22 might attempt to make a direct memory access (DMA) request for access to the memory of the primary processing set.
- To address this, a dirty RAM 124 is provided in the bridge. As described earlier, the dirty RAM 124 is configured as part of the bridge SRAM memory 126.
- The dirty RAM 124 comprises a bit map having a dirty indicator, for example a dirty bit, for each block, or page, of memory.
- The bit for a page of memory is set when a write access to the area of memory concerned is made.
- One bit is provided for every 8K page of main processing set memory.
- The bit for a page of processing set memory is set automatically by the address decoder 142 when it decodes a DMA request for that page of memory for either of the processing sets 14 or 16 from a device connected to the D bus 22.
- The dirty RAM can be reset, or cleared, when it is read by a processing set, for example by means of read-and-clear instructions at the beginning of a copy pass, so that it can start to record pages which are dirtied since a given time.
- The dirty RAM 124 can be read word by word. If a large word size is chosen for reading the dirty RAM 124, this speeds the reading and clearing of the bit map.
- The bits in the dirty RAM 124 will indicate those pages of processing set memory which have been changed (or dirtied) by DMA writes during the period of the copy. A further copy pass can then be performed for only those pages of memory which have been dirtied. This will take less time than a full copy of the memory.
- The dirty RAM 124 is set and cleared in both the combined and split modes. This means that in split mode the dirty RAM 124 may be cleared by either processing set.
- The dirty RAM 124 address is decoded from bits 13 to 28 of the PCI address presented by the D bus device. Erroneous accesses which present illegal combinations of the address bits 29 to 31 are mapped into the dirty RAM 124, and a bit is dirtied on a write, even though the bridge will not pass these transactions to the processing sets.
- The bridge defines the whole area from 0x00008000 to 0x0000ffff as dirty RAM, and will clear the contents of any location in this range on a read.
- Figure 28 is a flow diagram summansmg the operation of the duty RAM 124.
- in stage S41, the primary processing set reads the dirty RAM 124, which has the effect of resetting the dirty RAM 124.
- in stage S42, the primary processor (e.g. processing set 14) copies the whole of its memory 56 to the memory 56 of the other processing set (e.g. processing set 16).
- in stage S43, the primary processing set reads the dirty RAM 124, which again has the effect of resetting the dirty RAM 124.
- in stage S44, the primary processor determines whether less than a predetermined number of bits have been written in the dirty RAM 124.
- if not, the processor in stage S45 copies those pages of its memory 56 which have been dirtied, as indicated by the dirty bits read from the dirty RAM 124 in stage S43, to the memory 56 of the other processing set. Control then passes back to stage S43.
- if, in stage S44, it is determined that less than the predetermined number of bits have been written in the dirty RAM 124, the primary processor causes the bridge to inhibit DMA requests from the devices connected to the D bus 22. This could, for example, be achieved by clearing the arbitration enable bit for each of the device slots, thereby denying the DMA devices access to the D bus 22.
- alternatively, the address decoder 142 could be configured to ignore DMA requests under instructions from the primary processor. During the period in which DMA accesses are prevented, the primary processor makes a final copy pass from its memory to the memory 56 of the other processor for those memory pages corresponding to the bits set in the dirty RAM 124.
- the primary processor can then issue a reset operation for initiating a combined mode.
- DMA accesses are then once more permitted.
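The stages above (S41 through S45 and the final pass) can be sketched as an iterative copy loop. This is an illustrative reconstruction under stated assumptions, not the patent's implementation: the dirty RAM read-and-clear is modelled as a hypothetical callback returning the list of dirtied page numbers, and DMA inhibit/permit as further callbacks.

```python
def reintegrate(src, dst, read_and_clear_dirty, inhibit_dma, permit_dma,
                page_size=8 * 1024, threshold=2):
    read_and_clear_dirty()                    # S41: reset the dirty RAM
    dst[:] = src                              # S42: full copy of memory
    while True:
        dirty_pages = read_and_clear_dirty()  # S43: read and clear
        if len(dirty_pages) < threshold:      # S44: few enough bits set?
            break
        for page in dirty_pages:              # S45: copy dirtied pages only
            lo = page * page_size
            dst[lo:lo + page_size] = src[lo:lo + page_size]
    inhibit_dma()                             # hold off further DMA writes
    for page in dirty_pages:                  # final pass with DMA inhibited
        lo = page * page_size
        dst[lo:lo + page_size] = src[lo:lo + page_size]
    permit_dma()                              # DMA permitted once more
```

Each partial pass covers only the pages dirtied during the previous pass, so successive passes shrink until the residue is small enough to copy with DMA briefly inhibited, which is what bounds the time the D bus devices are locked out.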
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP99930146A EP1088273B1 (en) | 1998-06-15 | 1999-06-04 | Processor bridge with posted write buffer |
JP2000555163A JP2002518738A (en) | 1998-06-15 | 1999-06-04 | Processor bridge with posted write buffer |
AT99930146T ATE216097T1 (en) | 1998-06-15 | 1999-06-04 | PROCESSOR BRIDGE WITH REWRITE CHECKER |
DE69901251T DE69901251T2 (en) | 1998-06-15 | 1999-06-04 | PROCESSOR BRIDGE WITH REPLAY BUFFER |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/094,844 US6148348A (en) | 1998-06-15 | 1998-06-15 | Bridge interfacing two processing sets operating in a lockstep mode and having a posted write buffer storing write operations upon detection of a lockstep error |
US09/094,844 | 1998-06-15 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO1999066406A1 true WO1999066406A1 (en) | 1999-12-23 |
WO1999066406A9 WO1999066406A9 (en) | 2000-04-06 |
Family
ID=22247506
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1999/012606 WO1999066406A1 (en) | 1998-06-15 | 1999-06-04 | Processor bridge with posted write buffer |
Country Status (6)
Country | Link |
---|---|
US (1) | US6148348A (en) |
EP (1) | EP1088273B1 (en) |
JP (1) | JP2002518738A (en) |
AT (1) | ATE216097T1 (en) |
DE (1) | DE69901251T2 (en) |
WO (1) | WO1999066406A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001080009A2 (en) * | 2000-04-13 | 2001-10-25 | Stratus Technologies Bermuda, Ltd. | Fault-tolerant computer system with voter delay buffer |
GB2369692A (en) * | 2000-11-29 | 2002-06-05 | Sun Microsystems Inc | A fault tolerant computer with a bridge allowing direct memory access (DMA) between main memories of duplicated processing sets |
GB2369691B (en) * | 2000-11-29 | 2003-06-04 | Sun Microsystems Inc | Control logic for memory modification tracking |
WO2011117155A1 (en) * | 2010-03-23 | 2011-09-29 | Continental Teves Ag & Co. Ohg | Redundant two-processor controller and control method |
WO2011117156A3 (en) * | 2010-03-23 | 2011-12-08 | Continental Teves Ag & Co. Ohg | Control computer system, method for controlling a control computer system, and use of a control computer system |
US8131951B2 (en) | 2008-05-30 | 2012-03-06 | Freescale Semiconductor, Inc. | Utilization of a store buffer for error recovery on a store allocation cache miss |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6018332A (en) * | 1997-11-21 | 2000-01-25 | Ark Interface Ii, Inc. | Overscan user interface |
US6625756B1 (en) * | 1997-12-19 | 2003-09-23 | Intel Corporation | Replay mechanism for soft error recovery |
US6587961B1 (en) * | 1998-06-15 | 2003-07-01 | Sun Microsystems, Inc. | Multi-processor system bridge with controlled access |
US6260159B1 (en) * | 1998-06-15 | 2001-07-10 | Sun Microsystems, Inc. | Tracking memory page modification in a bridge for a multi-processor system |
US6366968B1 (en) * | 1998-06-26 | 2002-04-02 | Intel Corporation | Physical write packets processing when posted write error queue is full, with posted write error queue storing physical write requests when posted write packet fails |
US6687851B1 (en) | 2000-04-13 | 2004-02-03 | Stratus Technologies Bermuda Ltd. | Method and system for upgrading fault-tolerant systems |
GB2369694B (en) * | 2000-11-29 | 2002-10-16 | Sun Microsystems Inc | Efficient memory modification tracking |
GB2369690B (en) * | 2000-11-29 | 2002-10-16 | Sun Microsystems Inc | Enhanced protection for memory modification tracking |
US6742136B2 (en) * | 2000-12-05 | 2004-05-25 | Fisher-Rosemount Systems Inc. | Redundant devices in a process control system |
DE10124027A1 (en) * | 2001-05-16 | 2002-11-21 | Continental Teves Ag & Co Ohg | Method for operation of a microprocessor system equipped for execution of safety critical functions and uncritical functions, e.g. for a motor vehicle, in which safety critical and uncritical operations can be distinguished |
US7165128B2 (en) * | 2001-05-23 | 2007-01-16 | Sony Corporation | Multifunctional I/O organizer unit for multiprocessor multimedia chips |
US6985975B1 (en) * | 2001-06-29 | 2006-01-10 | Sanera Systems, Inc. | Packet lockstep system and method |
US6954819B2 (en) * | 2002-01-09 | 2005-10-11 | Storcase Technology, Inc. | Peripheral bus switch to maintain continuous peripheral bus interconnect system operation |
US7155721B2 (en) * | 2002-06-28 | 2006-12-26 | Hewlett-Packard Development Company, L.P. | Method and apparatus for communicating information between lock stepped processors |
US7085959B2 (en) * | 2002-07-03 | 2006-08-01 | Hewlett-Packard Development Company, L.P. | Method and apparatus for recovery from loss of lock step |
US7168006B2 (en) * | 2003-06-30 | 2007-01-23 | International Business Machines Corporation | Method and system for saving the state of integrated circuits upon failure |
US7415551B2 (en) * | 2003-08-18 | 2008-08-19 | Dell Products L.P. | Multi-host virtual bridge input-output resource switch |
US7194663B2 (en) * | 2003-11-18 | 2007-03-20 | Honeywell International, Inc. | Protective bus interface and method |
US7404017B2 (en) | 2004-01-16 | 2008-07-22 | International Business Machines Corporation | Method for managing data flow through a processing system |
US7296181B2 (en) * | 2004-04-06 | 2007-11-13 | Hewlett-Packard Development Company, L.P. | Lockstep error signaling |
US7308566B2 (en) * | 2004-10-25 | 2007-12-11 | Hewlett-Packard Development Company, L.P. | System and method for configuring lockstep mode of a processor module |
US7516359B2 (en) * | 2004-10-25 | 2009-04-07 | Hewlett-Packard Development Company, L.P. | System and method for using information relating to a detected loss of lockstep for determining a responsive action |
US7818614B2 (en) * | 2004-10-25 | 2010-10-19 | Hewlett-Packard Development Company, L.P. | System and method for reintroducing a processor module to an operating system after lockstep recovery |
US7624302B2 (en) | 2004-10-25 | 2009-11-24 | Hewlett-Packard Development Company, L.P. | System and method for switching the role of boot processor to a spare processor responsive to detection of loss of lockstep in a boot processor |
US20060107116A1 (en) * | 2004-10-25 | 2006-05-18 | Michaelis Scott L | System and method for reestablishing lockstep for a processor module for which loss of lockstep is detected |
US7627781B2 (en) * | 2004-10-25 | 2009-12-01 | Hewlett-Packard Development Company, L.P. | System and method for establishing a spare processor for recovering from loss of lockstep in a boot processor |
US7356733B2 (en) * | 2004-10-25 | 2008-04-08 | Hewlett-Packard Development Company, L.P. | System and method for system firmware causing an operating system to idle a processor |
US7366948B2 (en) * | 2004-10-25 | 2008-04-29 | Hewlett-Packard Development Company, L.P. | System and method for maintaining in a multi-processor system a spare processor that is in lockstep for use in recovering from loss of lockstep for another processor |
US7502958B2 (en) * | 2004-10-25 | 2009-03-10 | Hewlett-Packard Development Company, L.P. | System and method for providing firmware recoverable lockstep protection |
US7272681B2 (en) * | 2005-08-05 | 2007-09-18 | Raytheon Company | System having parallel data processors which generate redundant effector date to detect errors |
US7669073B2 (en) * | 2005-08-19 | 2010-02-23 | Stratus Technologies Bermuda Ltd. | Systems and methods for split mode operation of fault-tolerant computer systems |
US20080123522A1 (en) * | 2006-07-28 | 2008-05-29 | David Charles Elliott | Redundancy coupler for industrial communications networks |
US9146835B2 (en) | 2012-01-05 | 2015-09-29 | International Business Machines Corporation | Methods and systems with delayed execution of multiple processors |
US11494087B2 (en) * | 2018-10-31 | 2022-11-08 | Advanced Micro Devices, Inc. | Tolerating memory stack failures in multi-stack systems |
US11645155B2 (en) | 2021-02-22 | 2023-05-09 | Nxp B.V. | Safe-stating a system interconnect within a data processing system |
DE102021116389A1 (en) | 2021-06-24 | 2022-12-29 | ebm-papst neo GmbH & Co. KG | Master-slave network and method of operating a master-slave network |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0747831A2 (en) * | 1995-06-07 | 1996-12-11 | International Business Machines Corporation | Data processing system including buffering mechanism for inbound and outbound reads and posted writes |
WO1997043712A2 (en) * | 1996-05-16 | 1997-11-20 | Resilience Corporation | Triple modular redundant computer system |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4965717A (en) * | 1988-12-09 | 1990-10-23 | Tandem Computers Incorporated | Multiple processor system having shared memory with private-write capability |
US5251227A (en) * | 1989-08-01 | 1993-10-05 | Digital Equipment Corporation | Targeted resets in a data processor including a trace memory to store transactions |
GB2268817B (en) * | 1992-07-17 | 1996-05-01 | Integrated Micro Products Ltd | A fault-tolerant computer system |
US6233702B1 (en) * | 1992-12-17 | 2001-05-15 | Compaq Computer Corporation | Self-checked, lock step processor pairs |
US5838899A (en) * | 1994-09-20 | 1998-11-17 | Stratus Computer | Digital data processing methods and apparatus for fault isolation |
US5784599A (en) * | 1995-12-15 | 1998-07-21 | Compaq Computer Corporation | Method and apparatus for establishing host bus clock frequency and processor core clock ratios in a multi-processor computer system |
US5953742A (en) * | 1996-07-01 | 1999-09-14 | Sun Microsystems, Inc. | Memory management in fault tolerant computer systems utilizing a first and second recording mechanism and a reintegration mechanism |
US5881253A (en) * | 1996-12-31 | 1999-03-09 | Compaq Computer Corporation | Computer system using posted memory write buffers in a bridge to implement system management mode |
- 1998
- 1998-06-15 US US09/094,844 patent/US6148348A/en not_active Expired - Fee Related
- 1999
- 1999-06-04 DE DE69901251T patent/DE69901251T2/en not_active Expired - Fee Related
- 1999-06-04 WO PCT/US1999/012606 patent/WO1999066406A1/en active IP Right Grant
- 1999-06-04 EP EP99930146A patent/EP1088273B1/en not_active Expired - Lifetime
- 1999-06-04 AT AT99930146T patent/ATE216097T1/en active
- 1999-06-04 JP JP2000555163A patent/JP2002518738A/en active Pending
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001080009A2 (en) * | 2000-04-13 | 2001-10-25 | Stratus Technologies Bermuda, Ltd. | Fault-tolerant computer system with voter delay buffer |
WO2001080009A3 (en) * | 2000-04-13 | 2002-03-21 | Stratus Technologies Internati | Fault-tolerant computer system with voter delay buffer |
GB2369692A (en) * | 2000-11-29 | 2002-06-05 | Sun Microsystems Inc | A fault tolerant computer with a bridge allowing direct memory access (DMA) between main memories of duplicated processing sets |
GB2369692B (en) * | 2000-11-29 | 2002-10-16 | Sun Microsystems Inc | Processor state reintegration |
GB2369691B (en) * | 2000-11-29 | 2003-06-04 | Sun Microsystems Inc | Control logic for memory modification tracking |
US6961826B2 (en) | 2000-11-29 | 2005-11-01 | Sun Microsystems, Inc. | Processor state reintegration using bridge direct memory access controller |
US8131951B2 (en) | 2008-05-30 | 2012-03-06 | Freescale Semiconductor, Inc. | Utilization of a store buffer for error recovery on a store allocation cache miss |
WO2011117155A1 (en) * | 2010-03-23 | 2011-09-29 | Continental Teves Ag & Co. Ohg | Redundant two-processor controller and control method |
WO2011117156A3 (en) * | 2010-03-23 | 2011-12-08 | Continental Teves Ag & Co. Ohg | Control computer system, method for controlling a control computer system, and use of a control computer system |
CN102822807A (en) * | 2010-03-23 | 2012-12-12 | 大陆-特韦斯贸易合伙股份公司及两合公司 | Control computer system, method for controlling a control computer system, and use of a control computer system |
US8935569B2 (en) | 2010-03-23 | 2015-01-13 | Continental Teves Ag & Co. Ohg | Control computer system, method for controlling a control computer system, and use of a control computer system |
CN102822807B (en) * | 2010-03-23 | 2015-09-02 | 大陆-特韦斯贸易合伙股份公司及两合公司 | Computer for controlling system and control method thereof and use |
Also Published As
Publication number | Publication date |
---|---|
EP1088273A1 (en) | 2001-04-04 |
EP1088273B1 (en) | 2002-04-10 |
WO1999066406A9 (en) | 2000-04-06 |
DE69901251T2 (en) | 2002-10-31 |
ATE216097T1 (en) | 2002-04-15 |
US6148348A (en) | 2000-11-14 |
DE69901251D1 (en) | 2002-05-16 |
JP2002518738A (en) | 2002-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6148348A (en) | Bridge interfacing two processing sets operating in a lockstep mode and having a posted write buffer storing write operations upon detection of a lockstep error | |
EP1090349B1 (en) | Processor bridge with dissimilar data registers | |
US5991900A (en) | Bus controller | |
EP1086425B1 (en) | Direct memory access in a bridge for a multi-processor system | |
EP1090350B1 (en) | Multi-processor system bridge with controlled access | |
EP1088272B1 (en) | Multi-processor system bridge | |
EP1088271B1 (en) | Processor bridge with dissimilar data access | |
US6167477A (en) | Computer system bridge employing a resource control mechanism with programmable registers to control resource allocation | |
US6950907B2 (en) | Enhanced protection for memory modification tracking with redundant dirty indicators | |
US20020065987A1 (en) | Control logic for memory modification tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): JP KR |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
AK | Designated states |
Kind code of ref document: C2 Designated state(s): JP KR |
|
AL | Designated countries for regional patents |
Kind code of ref document: C2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
COP | Corrected version of pamphlet |
Free format text: PAGES 1/23-23/23, DRAWINGS, REPLACED BY NEW PAGES 1/23-23/23; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE |
|
ENP | Entry into the national phase |
Ref country code: JP Ref document number: 2000 555163 Kind code of ref document: A Format of ref document f/p: F |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1999930146 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1999930146 Country of ref document: EP |
|
WWG | Wipo information: grant in national office |
Ref document number: 1999930146 Country of ref document: EP |