US20030196076A1 - Communications system using rings architecture - Google Patents

Communications system using rings architecture

Info

Publication number
US20030196076A1
US20030196076A1 (application US10/064,338)
Authority
US
United States
Prior art keywords
message
data
ring
instruction
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/064,338
Inventor
Boris Zabarski
Moshe Tarrab
Oded Norman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Globespan Virata Inc
Original Assignee
Globespan Virata Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Globespan Virata Inc filed Critical Globespan Virata Inc
Priority to US10/064,338
Assigned to GLOBESPANVIRATA INCORPORATED. Assignment of assignors interest (see document for details). Assignors: NORMAN, ODED; TARRAB, MOSHE; ZABARSKI, BORIS
Publication of US20030196076A1
Current legal status: Abandoned

Classifications

    • H04L 49/25: Routing or path finding in a switch fabric
    • G06F 15/78: Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F 15/7807: System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F 15/7825: Globally asynchronous, locally synchronous, e.g. network on chip
    • H04L 12/42: Loop networks
    • H04L 12/422: Synchronisation for ring networks
    • H04L 12/427: Loop networks with decentralised control
    • H04L 12/433: Loop networks with decentralised control with asynchronous transmission, e.g. token ring, register insertion
    • H04L 12/437: Ring fault isolation or reconfiguration
    • H04L 12/4625: Single bridge functionality, e.g. connection of two networks over a single bridge
    • H04L 12/4637: Interconnected ring systems
    • H04L 12/5601: Transfer mode dependent, e.g. ATM
    • H04L 45/04: Interdomain routing, e.g. hierarchical routing
    • H04L 45/18: Loop-free operations
    • H04L 45/22: Alternate routing
    • H04L 45/302: Route determination based on requested QoS
    • H04L 45/306: Route determination based on the nature of the carried application
    • H04L 45/50: Routing or path finding of packets using label swapping, e.g. multi-protocol label switch [MPLS]
    • H04L 49/102: Packet switching elements using shared medium, e.g. bus or ring
    • H04L 49/1553: Interconnection of ATM switching modules, e.g. ATM switching fabrics
    • H04L 49/30: Peripheral units, e.g. input or output ports
    • H04L 49/3063: Pipelined operation
    • H04L 49/3081: ATM peripheral units, e.g. policing, insertion or extraction
    • H04L 49/40: Constructional details, e.g. power supply, mechanical construction or backplane
    • H04L 69/12: Protocol engines
    • H04L 2012/5612: Ring topology
    • H04L 2012/5618: Bridges, gateways [GW] or interworking units [IWU]
    • H04L 2012/5627: Fault tolerance and recovery
    • H04L 2012/5638: Services, e.g. multimedia, GOS, QOS
    • H04L 2012/5646: Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L 2012/5652: Cell construction, e.g. including header, packetisation, depacketisation, assembly, reassembly
    • H04L 2012/5653: Cell construction using the ATM adaptation layer [AAL]
    • H04L 2012/5656: Cell construction using the AAL2
    • H04L 2012/5665: Interaction of ATM with other protocols
    • H04W 28/06: Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H04W 40/00: Communication routing or communication path finding
    • H04W 80/00: Wireless network protocols or protocol adaptations to wireless operation
    • H04W 84/00: Network topologies
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention relates generally to data communication networks and, more particularly, to receiving and transmitting systems, including ATM and other types of communications platforms and including such components as communications processors, packet processors, network processors, DMAs, FPGAs and other devices and peripheral devices.
  • ATM Asynchronous Transfer Mode
  • CCITT International Telegraph and Telephone Consultative Committee
  • ITU-T Telecommunications Standardization Sector of the International Telecommunication Union
  • ATM is a technology capable of high speed transfer of voice, video, and other types of data across public and private networks.
  • ATM utilizes very large-scale integration (VLSI) technology to segment data into individual packets (also referred to as cells).
  • VLSI very large-scale integration
  • B-ISDN calls for packets having a fixed size of fifty-three bytes (i.e., octets).
  • each ATM cell includes a header portion comprising the first five bytes and a payload portion comprising the remaining forty-eight bytes.
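For illustration only (the patent itself contains no source code), the fixed fifty-three byte cell described above can be modeled in C as a simple structure; the type and macro names are assumptions:

```c
#include <stdint.h>

#define ATM_HEADER_BYTES  5   /* first five bytes: header          */
#define ATM_PAYLOAD_BYTES 48  /* remaining forty-eight bytes: data */

/* One 53-byte ATM cell, per the layout described above. */
typedef struct {
    uint8_t header[ATM_HEADER_BYTES];
    uint8_t payload[ATM_PAYLOAD_BYTES];
} atm_cell_t;
```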
  • ATM cells are routed across the various networks by passing through ATM switches, which read addressing information included in the cell header and deliver the cell to the destination referenced therein.
  • ATM does not rely upon Time Division Multiplexing (TDM) to establish the identification of each cell. Rather, ATM cells are identified solely based upon information contained within the cell header.
  • TDM Time Division Multiplexing
  • ATM differs from systems based upon conventional network architectures such as Ethernet or Token Ring in that rather than broadcasting data packets on a shared wire for all network members to receive, ATM cells dictate the successive recipient of the cell through information contained within the cell header.
  • a specific routing path through the network called a virtual path (VP) or virtual circuit (VC), is set up between two end nodes before any data is transmitted. Cells identified with a particular virtual circuit are delivered to only those nodes on that virtual circuit. In this manner, only the destination identified in the cell header receives the transmitted cell.
  • VP virtual path
  • VC virtual circuit
  • the cell header includes, among other information, addressing information that essentially describes the source of the cell or where the cell is coming from and its assigned destination.
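The document does not spell out the header layout, but the standard UNI cell header packs a 4-bit GFC, an 8-bit VPI, a 16-bit VCI, a 3-bit payload type, a CLP bit and an HEC byte. A sketch of unpacking those addressing fields, assuming that standard layout:

```c
#include <stdint.h>

typedef struct {
    uint8_t  gfc;  /* Generic Flow Control (UNI only), 4 bits */
    uint8_t  vpi;  /* Virtual Path Identifier, 8 bits at UNI  */
    uint16_t vci;  /* Virtual Channel Identifier, 16 bits     */
    uint8_t  pt;   /* Payload Type, 3 bits                    */
    uint8_t  clp;  /* Cell Loss Priority, 1 bit               */
    uint8_t  hec;  /* Header Error Control, 8 bits            */
} atm_uni_header_t;

/* Unpack the 5 header bytes of a UNI cell into named fields. */
static atm_uni_header_t atm_parse_uni_header(const uint8_t h[5])
{
    atm_uni_header_t f;
    f.gfc = h[0] >> 4;
    f.vpi = (uint8_t)(((h[0] & 0x0F) << 4) | (h[1] >> 4));
    f.vci = (uint16_t)(((uint16_t)(h[1] & 0x0F) << 12) |
                       ((uint16_t)h[2] << 4) | (h[3] >> 4));
    f.pt  = (h[3] >> 1) & 0x07;
    f.clp = h[3] & 0x01;
    f.hec = h[4];
    return f;
}
```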
  • although ATM evolved from TDM concepts, cells from multiple sources are statistically multiplexed into a single transmission facility. Cells are identified by the contents of their headers rather than by their time position in the multiplexed stream.
  • a single ATM transmission facility may carry hundreds of thousands of ATM cells per second originating from a multiplicity of sources and traveling to a multiplicity of destinations.
  • the backbone of an ATM network generally consists of switching devices capable of handling the high-speed ATM cell streams.
  • the switching components of these devices, commonly referred to as the switch fabric, perform the switching function required to implement a virtual circuit by receiving ATM cells from an input port, analyzing the information in the header of the incoming cells in real-time, and routing them to the appropriate destination port. Millions of cells per second often need to be switched by a single device.
  • this connection-oriented scheme permits an ATM network to guarantee the minimum amount of bandwidth required by each connection. Such guarantees are made when the connection is set up.
  • an analysis of existing connections is performed to determine if enough total bandwidth remains within the network to service the new connection at its requested capacity. If the necessary bandwidth is not available, the connection is refused.
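A minimal sketch of this admission check, with hypothetical names (the document describes only the behavior, not an implementation):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-link state for connection admission control. */
typedef struct {
    uint64_t capacity_bps;  /* total link bandwidth                */
    uint64_t reserved_bps;  /* bandwidth promised to existing VCs  */
} link_state_t;

/* Admit the new virtual circuit only if enough bandwidth remains. */
static bool admit_connection(link_state_t *link, uint64_t requested_bps)
{
    if (link->reserved_bps + requested_bps > link->capacity_bps)
        return false;               /* connection is refused       */
    link->reserved_bps += requested_bps;
    return true;                    /* guarantee made at set-up    */
}
```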
  • FIG. 1 shows a typical SOC 10 , such as a communications processor, having a variety of modules, such as CPUs 14 , 22 , RAM 16 , Ethernet interface 18 , i/o interface 20 , and DMA 24 , interconnected via a switch fabric 12 .
  • SOC system on a chip
  • a communications processor is one example of a communications system commonly designed using the traditional bus approach.
  • a robust SOC communications processor may find a myriad of applications, such as for modems, bridges, routers, gateways, multi-service gateways and access equipment, and so forth.
  • Such a communications processor may be PHY [Physical layer]-independent, in which case it will be coupled with an appropriate PHY product, or it may be PHY-integrated, in order to provide the connectivity to the PHY layer of the ATM (or OSI [Open Systems Interconnection]) layered protocol model.
  • an SOC communications processor must be able to process packets of information belonging to a wide variety of different protocols, such as ATM, FR (Frame Relay), IP (Internet Protocol), TDM, and so forth. Therefore, in such an SOC communications processor, a packet processor for processing the packets of information that may be of a variety of protocols may be implemented.
  • the processing of packets or cells performed by the packet processor may include the following tasks: packet header analysis (OSI Layer2, Layer3); frame validity—CRC (Cyclic Redundancy Code) check; forwarding decision—look up; header modification/conversion; segmentation and reassembly; data conversion (e.g., encryption); statistics gathering; and so on.
  • CRC Cyclic Redundancy Code
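One of the packet-processor tasks listed above is the frame-validity CRC check. The document does not fix a polynomial; as an illustration, a bitwise CRC-32 with the IEEE 802.3 (Ethernet) polynomial could be computed as:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 (IEEE 802.3, reflected polynomial 0xEDB88320). */
uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;  /* final complement */
}
```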
  • programmable packet processors generally have an advantage over ASIC solutions.
  • a programmable packet processor can be viewed as a platform to be quickly deployed (in consideration of TTM [time to market]) and then later one can add/modify system functionality by changing/adding code to the packet processor.
  • the trade-off system vendors would face at the very high end (core rates of OC-48 or OC-192 [Optical Carrier], for example) would be power and performance of programmable packet processors as compared to fixed ASIC solutions.
  • a programmable packet processor (also referred to as a network processor) would preferably provide a solution in the access space where the expected aggregate bandwidth is in the range of OC-3 to OC-12.
  • the access market requirements are different from the network edge, and the core.
  • systems would need to deal with lots of subscribers (ports), low speed links (T1, xDSL [x Digital Subscriber Line]) and with different access methods (ATM, IP, FR, TDM, etc.), whereas the edge and the core of the network generally would use one framing solution (MPLS, IP or ATM).
  • Access systems typically would be characterized by: a large number of subscribers (ports, flows), high density; requirements for Inter Working Functions (IWFs), such as voice (TDM) to packets (ATM or IP) (e.g., Voice gateways), MAN (Metropolitan Area Network) to WAN (Wide Area Network), Ethernet to ATM or PoS [Packet Over SONET]; data grooms—asymmetric behavior large pipe to many small pipes; and the like.
  • IWFs Inter Working Functions
  • MAN Metropolitan Area Network
  • WAN Wide Area Network
  • access systems need lots of packet manipulation, especially on media conversions and IWF. Therefore, a programmable (and therefore flexible) packet processor often is a preferred solution.
  • Such a programmable packet processor could be developed using a standard general purpose microprocessor core.
  • processor cores are commercially available, including those that are licensed by Advanced RISC Machines, Ltd., ARC International, MIPS Computer Systems, Inc., and Lexra, Inc.
  • the above cores are general purpose cores that would need to be optimized for packet processing. Such optimization typically would include: additional instructions; DMA support; task switch with low overhead; specific bit manipulation instructions; etc.
  • the disadvantages of using such general purpose cores in packet processing applications include: costs incurred from license fees and royalties; limited customization (a special license is usually required to modify the core); dependency on the core provider's roadmap and technical support; over-featured cores (FPUs [Floating Point Units], MMUs [Memory Management Units]); etc.
  • the present invention overcomes the problems noted above, and realizes additional advantages, by providing a number of advantages over prior systems.
  • rings architecture for communications and data handling systems
  • Enumeration process for automatically configuring the ring topology
  • automatic routing of messages through bridges; automatic routing of exception messages;
  • extending a ring topology to external devices and providing a flexible and re-configurable system; read return address; write-ahead functionality to promote efficiency; wait-till-reset operation resumption; in-vivo scan through rings topology; staggered clocking arrangement; and stray message detection and eradication.
  • inventive elements discussed below also include: an architectural overview of a flexible packet processor; a programming model for a flexible packet processor; an instruction pipeline for a flexible packet processor; an internal memory to be used with the flexible packet processor; the use of a flexible packet processor as a module on a rings-based architecture; and the core of the flexible packet processor and associated compounds (agents and non-agents) on the packet processor.
  • Additional inventive elements conveyed through the embodiments and details discussed below include, among other things: an architectural overview of a communications processor; a programming model for a communications processor; a data path protocol support model for a communications processor; an exemplary network processor employed as the core packet processor for the communications processor; an exemplary rings-based SOC interconnect fabric architecture employed in the communications processor; a variety of quality of service (QoS) features that are implemented in the communications processor; a series of beneficial applications of the communications processor; the various approaches for the software that can be implemented to power the communications processor; specific exemplary strategies for the software in the high performance communications processor; and a performance estimate for RFC 1483 bridging.
  • QoS quality of service
  • FIG. 1 is a block diagram illustrating a typical system on a chip.
  • FIG. 2 is a schematic diagram illustrating a ring architecture in accordance with at least one embodiment of the present invention.
  • FIG. 3 is a flow diagram illustrating an exemplary enumeration process in accordance with at least one embodiment of the present invention.
  • FIGS. 4 - 8 are schematic diagrams illustrating timing issues in a clocked system in accordance with at least one embodiment of the present invention.
  • FIG. 9 is a schematic diagram illustrating a mechanism for providing a clock signal in an opposing direction to data flow in a rings network in accordance with at least one embodiment of the present invention.
  • FIG. 10 is a schematic diagram illustrating a mechanism for providing a clock signal in a same direction as a data flow in a rings network in accordance with at least one embodiment of the present invention.
  • FIG. 11 is a schematic diagram illustrating an exemplary implementation of a timing interface of a rings interface in a rings network in accordance with at least one embodiment of the present invention.
  • FIG. 12 is a schematic diagram illustrating latency issues in a ring network in accordance with at least one embodiment of the present invention.
  • FIGS. 13 and 14 are schematic diagrams illustrating exemplary implementations of bridges in ring networks in accordance with at least one embodiment of the present invention.
  • FIG. 15 is a schematic diagram illustrating an exemplary enumeration process in a ring network having a bridge in accordance with at least one embodiment of the present invention.
  • FIG. 16 is a schematic diagram illustrating an exemplary priority scheme for messages received simultaneously at a same interface of a bridge in a ring network in accordance with at least one embodiment of the present invention.
  • FIG. 17 is a schematic diagram illustrating an exemplary implementation of a bridge in accordance with at least one embodiment of the present invention.
  • FIGS. 18 and 19 are schematic diagrams illustrating an exemplary process for the elimination of stray messages in a ring network in accordance with at least one embodiment of the present invention.
  • FIGS. 20 - 22 are schematic diagrams illustrating exemplary ring networks having multiple bridges in accordance with at least one embodiment of the present invention.
  • FIGS. 23 - 25 are schematic diagrams illustrating exemplary implementations of a scan interface in a ring network in accordance with at least one embodiment of the present invention.
  • FIG. 26 is a schematic diagram illustrating exemplary interface signals between two members of a ring network in accordance with at least one embodiment of the present invention.
  • FIGS. 27 and 28 are schematic diagrams illustrating an exemplary implementation of a ring interface in accordance with at least one embodiment of the present invention.
  • FIG. 29 is a flow diagram illustrating an exemplary process for determining an intended recipient of a message in a ring network in accordance with at least one embodiment of the present invention.
  • FIGS. 30 - 33 are schematic diagrams illustrating exemplary signaling within a ring interface in a ring network in accordance with at least one embodiment of the present invention.
  • FIG. 34 is a schematic diagram illustrating an exemplary use of bridges in a ring network to minimize latency in accordance with at least one embodiment of the present invention.
  • FIG. 35 is a schematic diagram illustrating an external ring interface in accordance with at least one embodiment of the present invention.
  • FIG. 36 is a block diagram illustrating an exemplary system on a chip utilizing a ring architecture in accordance with at least one embodiment of the present invention.
  • FIG. 37 is a schematic diagram illustrating the exemplary network processor of the system on a chip of FIG. 36 in accordance with at least one embodiment of the present invention.
  • FIG. 38 is a flow diagram illustrating a low overhead task switch in a network processor in accordance with at least one embodiment of the present invention.
  • FIG. 39 is a flow diagram illustrating exemplary data paths in a network processor in accordance with at least one embodiment of the present invention.
  • FIG. 40 is a block diagram illustrating exemplary state resources of a network processor in accordance with at least one embodiment of the present invention.
  • FIG. 41 is a block diagram illustrating an exemplary implementation of register r1 of a general purpose register of a network processor in accordance with at least one embodiment of the present invention.
  • FIG. 42 is a block diagram illustrating various registers of a general purpose register of a network processor in accordance with at least one embodiment of the present invention.
  • FIG. 43 is a block diagram illustrating an exemplary software model for a network processor in accordance with at least one embodiment of the present invention.
  • FIG. 44 is a flow diagram illustrating an exemplary network processor pipeline in accordance with at least one embodiment of the present invention.
  • FIG. 45 is a flow diagram illustrating an exemplary network processor pipeline timing in accordance with at least one embodiment of the present invention.
  • FIG. 46 is a schematic diagram illustrating an exemplary internal memory for implementation in a network processor in accordance with at least one embodiment of the present invention.
  • FIG. 47 is a schematic diagram of an exemplary network processor in accordance with at least one embodiment of the present invention.
  • FIG. 48 is a schematic diagram illustrating an exemplary multireader agent in accordance with at least one embodiment of the present invention.
  • FIG. 49 is a flow diagram illustrating an exemplary data alignment and packing process in accordance with at least one embodiment of the present invention.
  • FIG. 50 is a flow diagram illustrating a mapping of data from a multireader agent bus to a multireader operation in accordance with at least one embodiment of the present invention.
  • FIG. 51 is a schematic diagram illustrating an exemplary message sender of a network processor in accordance with at least one embodiment of the present invention.
  • FIG. 52 is a flow diagram illustrating an exemplary mapping of an agent write command to a message in accordance with at least one embodiment of the present invention.
  • FIG. 53 is a schematic diagram illustrating an exemplary direct memory access agent module in accordance with at least one embodiment of the present invention.
  • FIG. 54 is a flow diagram illustrating an exemplary mapping of data on an agent bus to a direct memory access command.
  • FIG. 55 is a schematic diagram illustrating an exemplary cyclical redundancy code agent in accordance with at least one embodiment of the present invention.
  • FIG. 56 is a flow diagram illustrating a mapping of data on an agent bus to cyclical redundancy code data in accordance with at least one embodiment of the present invention.
  • FIG. 57 is a schematic diagram illustrating an exemplary timer agent in accordance with at least one embodiment of the present invention.
  • FIG. 58 is a flow diagram illustrating a mapping of data on an agent bus to timer data in accordance with at least one embodiment of the present invention.
  • FIG. 59 is a schematic diagram of an exemplary doorbell agent in accordance with at least one embodiment of the present invention.
  • FIG. 60 is a flow diagram illustrating an exemplary encoding of task data for use by a doorbell agent in accordance with at least one embodiment of the present invention.
  • FIG. 61 is a block diagram illustrating an exemplary communications processor implementing a ring architecture in accordance with at least one embodiment of the present invention.
  • FIG. 62 is a schematic diagram illustrating the exemplary communications processor of FIG. 61 in accordance with at least one embodiment of the present invention.
  • FIGS. 63 - 69 are schematic diagrams illustrating various implementations of an external ring interface in a communications processor in accordance with at least one embodiment of the present invention.
  • FIG. 70 is a block diagram illustrating an exemplary programming module for a communications processor in accordance with at least one embodiment of the present invention.
  • FIG. 71 is a block diagram illustrating an exemplary data path and protocol path of a communications processor in accordance with at least one embodiment of the present invention.
  • FIG. 72 is a schematic diagram illustrating an exemplary network processor utilized in a communications processor in accordance with at least one embodiment of the present invention.
  • FIG. 73 is a flow diagram illustrating an exemplary processing pipeline of a network processor utilized in a communications processor in accordance with at least one embodiment of the present invention.
  • FIGS. 74 and 75 are flow diagrams illustrating exemplary pacing processes utilized in a communications processor in accordance with at least one embodiment of the present invention.
  • FIGS. 76 - 80 are schematic diagrams illustrating various exemplary implementations of a communications processor in communications systems in accordance with at least one embodiment of the present invention.
  • FIG. 81 is a flow diagram illustrating an exemplary flow manager functionality of a communications processor in accordance with at least one embodiment of the present invention.
  • FIG. 82 is a block diagram illustrating an exemplary data plane development for use in software development for a communications processor in accordance with at least one embodiment of the present invention.
  • FIG. 83 is a block diagram illustrating an exemplary software development model in accordance with at least one embodiment of the present invention.
  • FIG. 84 is a block diagram illustrating an exemplary software design approach in accordance with at least one embodiment of the present invention.
  • FIG. 85 is a block diagram illustrating an exemplary partitioning of software and interfaces in a communications processor in accordance with at least one embodiment of the present invention.
  • FIG. 86 is a block diagram illustrating an exemplary partitioning of software in a network processor in accordance with at least one embodiment of the present invention.
  • FIG. 87 is a flow diagram illustrating a typical process for executing program instructions using a known multiple-branch technique.
  • FIG. 88 is a schematic diagram illustrating an exemplary processing environment in accordance with at least one embodiment of the present invention.
  • FIG. 89 is a schematic diagram illustrating an exemplary architecture of a processing unit of the processing environment of FIG. 88 in accordance with at least one embodiment of the present invention.
  • a number of acronyms are used herein to describe various embodiments of the invention.
  • a table of acronyms and definitions therefor is provided as Table 1 below:

    TABLE 1
    Acronym   Definition
    AAL       ATM Adaptation Layer
    ABI       Application Binary Interface
    ABR       Available Bit Rate
    ADPCM     Adaptive Differential Pulse Code Modulation
    ADSL      Asymmetric Digital Subscriber Line
    ALU       Arithmetic Logic Unit
    API       Application Programming Interface
    ARC       ARC Cores
    ARM       Advanced RISC Machines
    ARP       Address Resolution Protocol
    ASIC      Application Specific Integrated Circuit
    ATIC      ATM Interconnect
    ATM       Asynchronous Transfer Mode
    ATMOS     ATM Operating System
    BGP       Border Gateway Protocol (see FIG.
  • One inventive aspect of the present invention is to provide a rings architecture to build a system on a chip (SOC) and allow for ease in configuration, expandability and external interface.
  • This rings architecture involves: (1) the use of transactions instead of signals; and (2) the use of a single switch fabric to carry the transactions instead of the many connections typically implemented in bus-based systems.
  • a transaction, in at least one embodiment, includes an instruction generated by a certain module for directing, in a structured way, another module to perform some operation. Transactions are mapped onto a single physical connection.
  • a transaction may direct a module to, for example, set a mode flip-flop to one, clear register X, or add value Y to counter Z. Transactions also can be used to provide time sequencing.
  • a rings-based system on a chip comprises a plurality of ring members on a ring that communicate using point-to-point connectivity, a plurality of ring interfaces for interfacing the ring members with the ring, a message traversing the ring, wherein the message travels one ring member per clock cycle.
  • the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member.
  • subsequent ring members can be adapted to supply backpressure signals to prior ring members.
  • the message is applicable to the given ring member based on at least one of an identifier identifying the given ring member and an identifier indicating that the message applies to multiple ring members.
  • the identifier identifying the given ring member can comprise an address for the given ring member.
  • the identifier indicating that the message applies to multiple ring members may, in one implementation, comprise message data designating the message as a supervisory message.
  • the message may comprise a type field, an address field, and a data field.
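Combining the type/address/data fields listed here with the connection widths given later in the document (8 bits type/20 bits address/32-64 bits data), a ring message might be modeled as follows; the exact packing is an assumption:

```c
#include <stdint.h>

/* One ring message: type, address and data fields (sketch only;
 * widths follow the 8/20/32-64 bit connection described herein). */
typedef struct {
    uint8_t  type;          /* 8-bit message type, e.g. work_write    */
    uint32_t address : 20;  /* 20-bit destination address on the ring */
    uint64_t data;          /* 32- or 64-bit payload                  */
} ring_message_t;
```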
  • the message may also comprise an enumeration message, wherein the enumeration message is processed by the ring members in order to assign address space consumed by each ring member.
  • a subsequent supervisory message can cause the results of the enumeration message to be returned, thereby allowing a central member comprising a CPU to infer the topology of the system.
  • the message can comprise a reset message that is processed by the plurality of ring members in order to reset the system.
  • the message may comprise an activate message that is processed by the plurality of ring members in order to activate the system.
  • the message also may include a request from a CPU ring member that causes the other ring members to report out their address information.
  • the message may also comprise a write message that is processed by one of the plurality of ring members to write data thereto, a read message that is processed by one of the plurality of ring members to read data therefrom, and/or a stray message indicator so that the system can identify stray messages.
  • the ring members of the rings-based SOC comprise a CPU and a plurality of peripherals, and wherein the peripherals are adapted to write ahead changes in peripheral status, thereby reducing the quantity of read messages that are issued by the CPU.
  • the ring of the SOC also may include an external ring interface allowing the ring to communicate with modules that are not part of the ring.
  • the rings-based SOC further comprises a land bridge that allows the message to proceed from one side of the ring to the other side of the ring without traversing some of the intermediate ring members.
  • the logic of the land bridge may be configured based on the results of an enumeration message.
  • the plurality of ring members and plurality of ring interfaces of the rings-based SOC may comprise a first ring with the SOC further comprising a plurality of second ring members and a plurality of second ring interfaces defining a second ring, both the first ring and the second ring implemented as a system on a chip, and wherein the first ring and the second ring are coupled using a sea bridge.
  • the logic of the sea bridge is configured based on the results of an enumeration message.
  • the exemplary ring network 30 includes two rings 32 , 34 connected via a bridge 36 , each ring including a plurality of modules 38 - 48 .
  • the modules can include any of a variety of modules implemented in SOCs for processing and/or handling data, such as a DMA, an external interface, a timer, a CPU, an I/O, a peripheral, and the like.
  • the rings 32 , 34 and the bridge 36 represent an implementation of the switch fabric 12 of FIG. 1 in accordance with at least one embodiment of the present invention.
  • a module determines if it is the intended recipient of the message. If the module is the recipient, the module removes the message from the ring and processes it accordingly. Otherwise, the module passes the message on to the next module (e.g., from module 44 to module 46 ) during the next clock cycle. If a module has a message to send, the module waits until there is a free slot and passes the message to the module's left-hand neighbor. In this case, each message is one clock long and the messages travel around the ring 32 , one hop per clock.
  • Anchor: the host interface. Through this interface, the host resets, configures and controls the setup functions of the ring.
  • the Anchor also can be adapted to determine if it is the primary Anchor.
  • Bridge (e.g., bridge 36): a combination of two devices, an upstream link and a downstream link.
  • the bridge flips the network ID and acts as an Anchor for the upstream ring.
  • the host, after the learning stage, programs the bridge about what switching to perform.
  • the bridge snoops on the ring and, if a hit is detected, consumes the message and carries it to the other side. If the message is not a hit, it is sent on downstream as usual.
  • the bridge typically has two address/mask registers per link direction.
  • Module: a collective name for components of a ring, such as a CPU, a bridge, a TDM interface, a Utopia interface, an xDSL PHY, a timer, a UART, an FCC, an MCC, a scratch RAM, a CRC calculator, and the like.
  • Packet Processor (also referred to herein as Vobla): a network-optimized CPU for managing communication logical links.
  • the packet processor in at least one embodiment, is used to control and terminate streams that are beyond internal functionality of the device.
  • the network side is handled through the rings; the other side includes, for example, an external RAM interface.
  • the rings architecture has many advantages over traditional bus designs and is an effective way to connect many different modules, whether on the same chip or on several chips. Instead of using signals and busses, communication between modules (data and commands) is mapped onto transactions, which in turn are transmitted over the ring infrastructure. Ring topology allows predictable delays and easy scalability. Each ring member adds a delay of, for example, one clock. The ring clock frequency can be made as fast as needed because of the geographical proximity of its members. Rings can be further connected through bridges, such as bridge 36 . These bridges are similar to network switching devices in the sense that they are programmed to direct selected portions of the traffic to the other side (e.g., from ring 32 to ring 34 ). Inside one exemplary embodiment chip, the members of the ring are connected to one another using a standard connection [e.g., 8 bits type/20 bits address/32-64 bits data]. When going outside the chip, a smaller/slower interface may be defined.
  • the ring carries two kinds of messages.
  • Setup/Config messages and Work read and write messages.
  • the Setup messages can be used to learn the network topology, assign addresses and to program the members (i.e., the elements of a ring).
  • Setup messages are initiated by a host through a special anchor member.
  • Regular members, in one embodiment, reply to setup messages by providing the host their functionality ID, ring ID and their starting address. The host software can infer from that data the exact topology of the network and the functionality of its members.
  • Work messages in one embodiment, are initiated by members based on their programming and functionality.
  • On each clock a ring member examines its in-port. If the in-port has a valid message, then the member determines if the message is addressed to the member. If so, the member removes the message from the ring and processes the message accordingly. If not (i.e., the message is intended for another member), on the next clock the member transmits it downstream on the out-port when the out-port becomes available.
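A behavioral sketch of this per-clock rule, with all structure and helper names assumed (the document specifies only the behavior):

```c
#include <stdbool.h>
#include <stdint.h>

/* All names below are assumptions; only the behavior follows the text. */
typedef struct {
    uint8_t  type;
    uint32_t address;
    uint64_t data;
} msg_t;

typedef struct {
    uint32_t base, size;       /* address block claimed at Enumeration */
    bool  in_valid, out_free;  /* in-port state / out-port slot free   */
    msg_t in_msg, out_msg;
    bool  pending_send;        /* this member has a message to send    */
    msg_t pending_msg;
} ring_member_t;

static bool is_addressed_to(const ring_member_t *m, const msg_t *msg)
{
    return msg->address >= m->base && msg->address < m->base + m->size;
}

static void process_message(ring_member_t *m, const msg_t *msg)
{
    (void)m; (void)msg;        /* module-specific handling goes here */
}

/* Called once per ring clock: consume messages addressed to this member,
 * forward others one hop downstream, and inject this member's own
 * message only when a slot is free. */
void ring_member_clock(ring_member_t *m)
{
    if (m->in_valid) {
        if (is_addressed_to(m, &m->in_msg)) {
            process_message(m, &m->in_msg);   /* removed from the ring */
            m->in_valid = false;
        } else if (m->out_free) {
            m->out_msg  = m->in_msg;          /* one hop per clock     */
            m->out_free = false;
            m->in_valid = false;
        }                                     /* else: backpressure    */
    } else if (m->pending_send && m->out_free) {
        m->out_msg      = m->pending_msg;     /* take the free slot    */
        m->out_free     = false;
        m->pending_send = false;
    }
}
```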
  • Idle: the connection is idle, i.e., no message.
  • Reset: reset and propagate to reset the entire network.
  • Activate: includes the address of a specific ring member. A subset of the data bits is written into the RIF (ring interface) unit control register; the first bit is the activate bit (hence the name). After reset this bit is inactive, which prevents any work activity of the peripheral from taking place. Setting this bit to one enables normal work.
  • Other bits include: scan_mode_enable, stop_clock, in_vivo_scan_test, ring_loopback_enable, (soft reset), as well as other user-defined bits (discussed below). These bits may be reset to zero.
  • Table 2 sets forth a listing of message types with a proposed encoding structure and description of the encoding.
  • W: width of the data message (64/32 for return).
  • F: enable address modification to indicate first data of frame.
  • I: increment destination.
  • work_write, encoded 10SMLZZZ (0x80). S: snoop this message.
  • M: TBD.
  • L: last data transfer in the message.
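Reading the work_write pattern above (10SMLZZZ, base value 0x80) as bit 7 = 1, bit 6 = 0, and S, M, L in bits 5-3 is an inference from the layout; under that assumption the flags could be tested as:

```c
#include <stdbool.h>
#include <stdint.h>

/* work_write type byte: 1 0 S M L Z Z Z (base value 0x80).
 * Bit positions are inferred from the pattern and are an assumption. */
#define WORK_WRITE_BASE 0x80u
#define FLAG_S (1u << 5)  /* S: snoop this message            */
#define FLAG_M (1u << 4)  /* M: TBD in the document           */
#define FLAG_L (1u << 3)  /* L: last data transfer in message */

static bool is_work_write(uint8_t type)    { return (type & 0xC0u) == WORK_WRITE_BASE; }
static bool wants_snoop(uint8_t type)      { return (type & FLAG_S) != 0; }
static bool is_last_transfer(uint8_t type) { return (type & FLAG_L) != 0; }
```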
  • the modules assign address space for themselves. As the modules are members of at least one ring, each module can take a block of address space and tell the next module its starting address (herein referred to as Enumeration). In many systems, this assignment often gives the same results, so it may not be necessary to actually reprogram the modules, but it reduces the need to change hardware registers every time ring configuration is changed.
  • This self-addressing also serves as a self-test.
  • rings-based integrated circuit such as a SOC communications processor
  • peripherals appear to a CPU as a starting address. Each offset from this starting address is assigned to a different command for the peripheral. Note that assigning different peripherals to different CPUs can simply be a matter of programming a location in RAM. Accordingly, several CPUs can be put on an IC without worrying about arbitration.
  • each member of the ring network has predefined address space. In one embodiment, this is limited to some power of 2. For example, if a UART (Universal Asynchronous Receiver/Transmitter—used for serial communications and having a transmitter and a receiver) needs 5 registers, it allocates 8 addresses for itself. It also should first align the address to a border of 8.
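The allocation rule described here (round the register count up to a power of two, then align the base address to that size) reduces to simple arithmetic. A sketch, with assumed names:

```c
#include <stdint.h>

/* Round v up to the next power of two (e.g. 5 registers -> 8). */
static uint32_t next_pow2(uint32_t v)
{
    uint32_t p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

/* Enumeration step for one member: claim an aligned block of
 * addresses starting at or after next_free, and return the new
 * next-free address to pass downstream in the Enum message. */
static uint32_t enum_claim(uint32_t next_free, uint32_t regs_needed,
                           uint32_t *base_out)
{
    uint32_t size = next_pow2(regs_needed);
    uint32_t base = (next_free + size - 1) & ~(size - 1); /* align to size */
    *base_out = base;
    return base + size;
}
```

For the UART above, enum_claim rounds the 5 registers up to a block of 8 and aligns its base to a border of 8 before passing the next free address on.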
  • the UART then tells the next module the next available address (e.g., 256).
  • this same enumeration process is repeated for each member of the ring network, except for bridges, which are discussed in more detail below.
  • bridges first allocate their own space and then send the in-port Enum message to the other side of the bridge.
  • the bridge in one embodiment, is adapted to flip the zero data. Accordingly, when the Enum message is returned to the bridge on the other side, the bridge passes it back on this side.
  • bridges can program the routing themselves. If there are no loops, each bridge may need a maximum of two ranges to look at. It is expected that no loops exist for the Enumeration protocol, so eventually the Enum message will get back to the Anchor. This signifies the end of the Enum process.
  • a communication system using a ring network architecture comprises a plurality of ring members connected in point-to-point fashion along the ring network, a transaction based connectivity for communicating a message among the ring members, and wherein the message is a configuration message that causes ring members to assign address space in the ring network.
  • the configuration message is processed by each ring member to cause that ring member to assign address space for that ring member, and wherein the configuration message is then passed to the next ring member.
  • the configuration message includes an address that defines a starting address.
  • the configuration message in one implementation, is originated by an anchor member, which may include a CPU.
  • each member processing the configuration message can revise the starting address before passing the configuration message to the next ring member.
  • each member processing the configuration message can assign the address space of the member using the starting address and address space sufficient for that member.
  • a CPU on the ring network of the system recognizes other ring members using starting addresses assigned to those ring members based on the configuration message.
  • offsets to the starting addresses of the ring members may be used for different commands for the ring members.
  • the system further comprises a second configuration message which causes ring members to respond with descriptive data, wherein the descriptive data can include address space data for the ring members.
  • a method of assigning address space in a ring network architecture system including a plurality of ring members comprises issuing a configuration message, processing the configuration message at each ring member to assign address space for that ring member in the ring network, modifying the configuration message based on the assigned address space, and passing the configuration message to the next ring member.
  • the configuration message is assigned by an anchor on the ring network, wherein the anchor can include a CPU member.
  • the configuration message includes a starting address and the address space is assigned based on the starting address and the address needs of that ring member.
  • the method step of modifying comprises modifying the starting address before the step of passing.
  • the plurality of ring members includes a bridge, wherein the bridge responds to the configuration message by configuring logic that provides for a subsequent message to be passed across or by the bridge depending on an address associated with the subsequent message.
  • the ring network can be adapted to process a first category of message and a second category of message, and wherein the bridge logic is operative only for the second category.
  • the first category is a supervisory message and the second category is a work message.
  • the activation register, in one embodiment, is part of every ring interface (RIF). It is sent as a reply to the Who_Am_I message. It concatenates several key parameters of each ring member. It can be used by the Anchor to learn the topology of the network. It can include the following fields: user_controls; module ID; user_ID; soft_reset; invivo; scan_mode; stop_clock; activated; and the like.
  • Module ID is a hardwired unique ID for each kind of member on the network. Ring ID is, for example, one-bit used to identify where bridges are inserted. Each time the Enumerate message crosses a bridge, this bit is flipped. Active bit is set/reset by activate (or activate all) message types to allow normal operation of the modules. While this bit is reset, the module should not operate.
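A bit-level sketch of the activation register fields named above; the document gives the field names but not their widths, so every width below is an assumption:

```c
#include <stdint.h>

/* Sketch of the per-RIF activation register returned in the
 * Who_Am_I response. Field widths are assumptions; the document
 * names the fields but not their sizes. */
typedef struct {
    uint32_t activated     : 1;  /* set by Activate; module may run        */
    uint32_t stop_clock    : 1;
    uint32_t scan_mode     : 1;
    uint32_t invivo        : 1;  /* in-vivo scan test                      */
    uint32_t soft_reset    : 1;
    uint32_t ring_id       : 1;  /* flipped each time Enum crosses a bridge */
    uint32_t user_id       : 4;
    uint32_t module_id     : 8;  /* hardwired unique ID per member kind    */
    uint32_t user_controls : 8;
} activation_reg_t;
```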
  • Hardware connectivity: this is when the actual hardware is connected and the topology of the Rings is built. Several rings-compliant chips can be interconnected through the external ring interface. The unused interfaces can be shorted out.
  • Reset: the first message the Anchor typically propagates is a Reset message. It is flooded without clocking. The Host should wait sufficient time for the reset message to flood the whole network.
  • WhoAmI response: each module, after getting the WhoAmI request, sends the contents of its Activation register as part of the WhoAmI response message.
  • the Anchor should present all these messages to the host. It typically is the host's responsibility to infer the network topology from this data.
  • ProgramWr: after learning the network topology via Who_Am_I response messages, the host can start configuring the members. Since it knows each member's starting address, the host can send requests to write to any register. The last stage is to activate the network by writing the active bit, for example bit 1 in the zero offset register. If during later stages the Host needs to get the value of any register, it can do so by issuing a ProgramRd request and waiting for the ProgramRd response. Bridges are a special case for ProgramWr. Bridges need to be programmed first, before trying to pass data across them.
  • Activation: after the programming stage, the SOC is ready to perform processing and data handling tasks. To start all modules and enable them to work, the Activate message is flooded throughout the ring network.
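Putting the stages together (Reset, Enumerate, WhoAmI, ProgramWr with bridges first, Activate), the host-side bring-up might be sketched as follows; every function and type name here is hypothetical:

```c
#include <stdint.h>

/* Hypothetical host-side helpers; only the sequence follows the text. */
typedef struct anchor anchor_t;
typedef struct { int member_count; } topology_t;

void       anchor_send(anchor_t *a, uint8_t msg_type);
void       wait_for_reset_flood(anchor_t *a);
topology_t collect_whoami_replies(anchor_t *a);
void       program_bridges_first(anchor_t *a, const topology_t *t);
void       program_members(anchor_t *a, const topology_t *t);

enum { MSG_RESET, MSG_ENUMERATE, MSG_WHO_AM_I, MSG_ACTIVATE_ALL };

/* Bring-up: Reset -> Enumerate -> WhoAmI -> ProgramWr -> Activate. */
void bring_up_ring(anchor_t *anchor)
{
    anchor_send(anchor, MSG_RESET);           /* flooded without clocking */
    wait_for_reset_flood(anchor);             /* wait "sufficient time"   */

    anchor_send(anchor, MSG_ENUMERATE);       /* members claim addresses  */
    anchor_send(anchor, MSG_WHO_AM_I);
    topology_t topo = collect_whoami_replies(anchor);

    program_bridges_first(anchor, &topo);     /* before data crosses them */
    program_members(anchor, &topo);

    anchor_send(anchor, MSG_ACTIVATE_ALL);    /* enable normal work       */
}
```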
  • Mode to kill stray messages: it is foreseeable that, because of a bug in design or programming, a message could be sent that is not addressed to any member of the ring. Either its address is above the highest assigned address or it is addressed to empty space between consecutive members. If the address of the stray message is above the high limit, it can be routed to the Anchor and consumed or discarded by the Anchor. However, if the stray address is pointing to empty space, this message could circle the ring forever.
  • a process used to prevent this endless loop follows: messages can have an additional bit running along with them. If a bridge is passing a message through (not across) it can set this bit on the message. If message arrives to a bridge with this bit set, the bride discards it.
  • the bridge For each ring, only one bridge should execute the above discard process. Otherwise legitimate messages could be discarded.
  • the solution to this problem is as follows: during the Enumeration process, the bridge initializes its sides as a close side and a distant side. The close side is where the Enum message appears from. The distant size is the other side. In this case, the distant side can be selected to perform the monitoring of stray messages. On the primary ring (where Anchor is located) the job of killing stray messages is done by Anchor.
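  • By way of illustration, the stray-kill rule at a bridge's distant (far) side might be modeled as below; the field name passed_me echoes the rif_u_passed_me pin described later, while the method names are assumptions.

```python
def far_side_forward(bridge, msg):
    """Sketch of the stray-message kill executed only at the far side."""
    if msg.passed_me:
        return None                   # bit already set: the message has
                                      # circled the ring once, so discard it
    if bridge.routes_across(msg.addr):
        return bridge.cross(msg)      # passing across: bit left untouched
    msg.passed_me = True              # passing through (not across): mark it
    return msg                        # continue around the ring as usual
```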
  • a solution in one section typically implies a solution for the whole system.
  • FIG. 5 illustrates a possible solution to the race problem.
  • the clock signal path 64, running in the same direction as the data path 66, is separated into a number of similar compounds (e.g., compounds 70, 72).
  • by placing logic 74, 76 on each flip-flop leaving a compound, it can be ensured that the delay between flip-flops is at least long enough to prevent a race condition. This also can be verified after layout.
  • the clock signal is propagated in the opposite direction of the data, as illustrated with reference to FIG. 6.
  • the clock signal 78 is provided in the opposite direction of the data signal 80 , the potential for race between compounds 70 , 72 is significantly reduced or eliminated.
  • there is at least one signal that goes against the usual flow of data (signal 80): the OK signal 82, which is utilized to enable backpressure, as illustrated with reference to FIG. 7.
  • the OK signal 82 generally needs special treatment because its sampling clock lags behind the sourcing clock (signal 78). However, this can be solved by ensuring that the return path is longer than the clock delay.
  • a latch 86 may be implemented to ensure that data provided to flipflop 62 changes only after the rising edge of the clock 78 (clkb).
  • FIG. 9 illustrates a complication resulting from the propagation of the clock 90 in a direction opposing the propagation of data in a ring network having a bridge 94 .
  • data_a leaving the bridge 94 goes to member 96 and should be sampled by the rising edge of clkb.
  • clkb lags considerably behind clka of the bridge 94 .
  • a race condition is imminent.
  • latches should be used on the OK signal to prevent race.
  • FIG. 10 illustrates a complication resulting from the propagation of the clock 90 in a same direction of the propagation of data 102 in a ring network having a bridge 94 .
  • data_b leaves member 96 to be sampled by the bridge 94 using clk_a.
  • clkb now lags considerably behind clka.
  • this may be advantageous if the lag is considerably smaller than the clock cycle since the data can be delayed beyond the danger zone of clock delay.
  • the OK signal is covered and the last leg of data is covered.
  • the only signal that typically must be considered is the OK signal from the bridge 94 to member 96 .
  • a latch can be used at member 96 to prevent race in the OK signal.
  • a rings-based system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, where the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, and where the system further comprises a system clock signal for controlling timing on the ring network wherein the system clock signal is aligned between groups of ring members instead of among all of the ring members.
  • the system clock signal runs in the same direction as the message, while in another embodiment, the system clock signal runs in the opposing direction to the message.
  • the alignment can be implemented to substantially remove skew among the clock signals. Furthermore, the alignment can prevent a flip-flop at a ring member from sampling data a clock cycle too early.
  • the system clock signal alignment preferably is performed among adjacent ring members, wherein the alignment for a ring member can be performed with respect to the ring member's upstream and downstream ring member.
  • the alignment can be performed by inserting logic at the ring members that ensures that the delay between adjacent clock signals does not exceed the delay between the adjacent members.
  • the alignment can be performed using latches that are clocked by clock signals at individual members.
  • the rings-based system further comprises a backpressure signal that runs in the opposing direction to the message, wherein the alignment is performed by inserting logic at the ring members to ensure that the return path for the backpressure signal exceeds the clock delay between adjacent members.
  • the ring topology in accordance with the present invention arranges modules in a logical ring. All data and control are transmitted over this ring infrastructure sequentially around the ring.
  • considerable ring latency may be introduced.
  • when module 116 sends a message to module 118, there is little latency.
  • when member 120 is to pass data to member 122, however, the data must pass through four modules (i.e., four clock cycles), resulting in considerably more latency.
  • Another problem is peak latency. To illustrate, suppose that member 116 transmits mainly to member 122 and member 118 transmits data mainly to member 120 . In this case, the communication between members 118 and 120 suffers degradation due to the traffic from member 116 to member 122 .
  • a bridge may be used to minimize the latency between members of a ring.
  • a bridge 130 may be used to connect two rings 132 , 134 .
  • This bridge is analogous to a sea bridge since it connects two rings together just as a sea bridge connects two islands.
  • the sea bridge determines what messages to cross over between rings and what messages to keep on the current ring. So referring to the above latency problems, the sea bridge may be utilized to minimize peak latency issues. To illustrate, if member 134 communicates mainly with member 136 , communications between member 138 and member 140 are not affected.
  • Intraring latency resulting from a relatively large number of members of the ring between the transmitting member and the intended recipient member may be reduced by a land bridge, as illustrated with reference to FIG. 14.
  • the land bridge 146 is utilized within a ring 148 to minimize the number of hops for data/clock signals. To illustrate, without the land bridge 146, data from member 150 to member 152 would have to go through five members. However, the land bridge 146 reduces the number of members in the data path between member 150 and member 152 to three members (with two of those members being the bridge's two interfaces 154, 156).
  • the bridge is adapted to analyze a message received at one of its interfaces and to pass the message through to its other interface or pass on to the next member depending on the intended recipient of the message. For example, when member 150 sends a message to member 158 , the land bridge 146 receives the message at bridge interface 154 and determines that the shortest path is to pass the message from the bridge interface 154 directly to the member 158 . However, when member 150 sends a message to member 160 , the land bridge 146 receives the message at bridge interface 154 and determines that the shortest path is to pass the message through the bridge to the bridge interface 156 and then from bridge interface 156 to the member 160 .
  • Referring to FIG. 15, an exemplary routing process by the bridge 146 is illustrated in accordance with one embodiment of the present invention.
  • the land bridge 146 appears as two ring members (interface 154 being one member and interface 156 being the second).
  • a message arriving at the near end is passed on if the destination address of the message is greater than 3 and less than 6. Otherwise, the message is passed through the bridge 146 to the far end (interface 156). On the far end, a message arriving at the interface 156 from the direction of member 152 will be passed through to the near end (interface 154) if its destination address is less than 6 but greater than 3. Otherwise the message is passed on to member 160.
  • the address values by which a bridge 146 determines the routing of a message are determined during the enumeration process described herein; the near-end decision is sketched below.
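  • By way of illustration, the near-end decision in the FIG. 15 example might be expressed as below, assuming the bounds 3 and 6 learned at enumeration are exclusive.

```python
def route_at_near_end(dest_addr, low=3, high=6):
    # Addresses strictly between the bounds stay on the near segment;
    # anything else takes the shortcut across the bridge to the far end.
    if low < dest_addr < high:
        return "pass_on"
    return "cross_bridge"

# route_at_near_end(4) -> 'pass_on'; route_at_near_end(7) -> 'cross_bridge'
```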
  • FIG. 16 illustrates a situation whereby two messages are received at an interface 154 of a bridge 146 at a same time.
  • msg 1 and msg 2 are received at the same interface 154 at the same time.
  • messages transferred between interfaces of the bridge 146 are given priority, whereas in other embodiments, messages received at the bridge interface from members of the ring are given priority.
  • Referring to FIG. 17, an exemplary implementation of a bridge 170 is illustrated.
  • the bridge 170 includes control logic 172 adapted to control the upstream and downstream muxes 174-180 to pass incoming messages through the fifos 182-188 between the downstream input and the upstream output, from the upstream input to the upstream output, from the downstream input to the downstream output, or from the upstream input to the downstream output.
  • a rings-based system on a chip comprises a plurality of ring members on a ring that communicate using point-to-point connectivity, a message traversing the ring from member to member, the system being adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, and wherein at least one of the ring members comprises a bridge.
  • the bridge of the rings-based system is adapted to allow messages to travel from one side to another side of the bridge without passing through intermediate ring members.
  • the bridge can be configured so that the message arriving at the bridge is routed according to whether an address associated with the message corresponds to one side of the bridge or the other side of the bridge.
  • the message in one embodiment, is passed across the bridge when the address is associated with the one side of the bridge, and wherein the message is passed through the bridge when the address is associated with the other side of the bridge.
  • the bridge can include logic with a range of addresses, such that the message is routed to one side of the bridge or the other side of the bridge depending on whether the address is within the range.
  • the logic may be established based on a configuration message that causes the ring members to assign their address spaces, and the configuration message may include an enumeration message.
  • the plurality of ring members of the rings-based system are a first plurality of ring members comprising a first ring network and the system further comprises a second plurality of ring members comprising a second ring network, wherein the bridge comprises a bridge between the two ring networks.
  • the bridge can be adapted to determine which messages to pass to the second ring network and which messages to keep on the first ring network.
  • the bridge may be configured so that the message arriving at the bridge is routed according to whether an address associated with the message corresponds to one side of the bridge or the other side of the bridge.
  • the bridge can include logic with a range of addresses, such that the message is routed to the first ring network or the second ring network depending on whether the address is within the range.
  • This logic can be established based on a configuration message that causes the ring members to assign their address spaces.
  • the configuration message in this instance, may include an enumeration message.
  • the message can be passed across the bridge when the address is associated with the first ring network, and wherein the message is passed through the bridge when the address is associated with the second ring network.
  • the bridge is adapted to process a first category of message and a second category of message.
  • the first category of message can include a supervisory message and the second category of message can include a work message.
  • the bridge then can be adapted to make a routing determination based on the second category of message.
  • the bridge can be adapted to identify the category of message by examining a message type included in the message.
  • a stray message is a message addressed to an unused address of a ring network.
  • the enumeration process typically leaves gaps of unused address space between active modules when the modules align themselves to starting addresses being, for example, a power of two, as sketched below.
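  • By way of illustration, one enumeration step with power-of-two alignment might look as follows; the block size is assumed to be 2**ADDRESS_SPACE.

```python
def claim_address_space(next_free_addr, address_space_bits):
    """Sketch of one member claiming an aligned address block; the
    rounding up is what leaves the unused gaps mentioned above."""
    block = 1 << address_space_bits
    start = (next_free_addr + block - 1) & ~(block - 1)  # round up to a block
    return start, start + block     # (member's base, next free address)

# A member needing 8 address bits after the ring is filled to 0x130:
# claim_address_space(0x130, 8) -> (0x200, 0x300), a gap at 0x130-0x1FF.
```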
  • a stray message usually is the result of a software bug. Unchecked, stray messages may slowly choke the ring network, and such messages are difficult to detect and/or debug. However, not every member of the ring is required to know about, much less have the capability to detect or remove, stray messages. In one embodiment, this responsibility falls to the Anchor and/or bridges.
  • one bit of a message is used as a marker to determine if a message is a stray.
  • the bit normally is set to zero, but when a message passes through an Anchor 192 or bridge 194 , the bit is set to one. If the message arrives at the Anchor 192 or bridge 194 again, the Anchor/bridge notes the set bit and discards the stray message, thereby removing the stray from the ring.
  • the far end 200 of the bridge 198 (i.e., the interface of the bridge furthest away from the Anchor)
  • the stray message marker bit of messages received at the near end 202 remains unchanged, while the stray message marker bit is set at the far end 200 of the bridge.
  • FIGS. 20, 21, and 22 illustrate exemplary ring networks having more than one bridge per ring.
  • FIG. 20 includes a ring having two parallel bridges 208 , 210
  • FIG. 21 has a ring 212 with bridges 214 , 216 that cross
  • FIG. 22 includes a ring network having both a land bridge 222 and a sea bridge 224 .
  • Other bridge combinations may be utilized in accordance with the present invention.
  • a scan may be enabled by introducing a new scan_insert member 230, which is not a regular member.
  • the scan_insert member 230 can be adapted such that it does not introduce a one-clock delay.
  • For ring signals it is a mux 232 between regular ring data and scan input signals. During test modes this mux 232 inserts scan input signals instead of regular ring data.
  • in normal mode, this mux 232 connects the ring infrastructure as usual. In scan mode, the ring is effectively cut off. Insert-scan signals come directly from input pads 234, 236 on the chip. The tapped result signals drive the output pads. The insert-scan signals form three major groups: Message type, Message address, and Message data.
  • the ring should be programmed to scan mode. This can be done by forcing a sequence of supervisor messages onto the ring. This sequence first resets the ring, then Enumerates it. The last stage is activating for scan one specific member. After the scan mode is programmed to the member, the actual scan can be done. The scan mux signal is part of the ring. It is programmed via, for example, the external pad to create the shift-in sequence. Then for one clock it is negated; during this cycle the scan capture occurs. Then the scan mux is asserted again and clocking advances the scan-out data. The scan-out data is tapped off the wires entering the scan_insert module. Referring to FIG. 25, exemplary signals 240-250 used as scan chains are illustrated. During scan, several message data signals are used as scan chains. The number of data lines depends on how many parallel scan chains are necessary.
  • a typical silicon debug scenario is as follows: a chip is run for one billion clocks and a bug is discovered. The test is rerun for half the clocks and then stopped. By examining all flip-flop values at the stopped state, the source of the problem or error is hopefully determined.
  • in-vivo scan may be utilized.
  • the chip is started as usual.
  • the software is run for the specified number of clocks (note: optionally, a special counter may be used to freeze the rings.)
  • the ring modules are then deactivated by, for example, a message from a certain module.
  • One specified ring module is re-activated in in-vivo scan mode. This mode causes the module to run shift-out of all its flip-flops.
  • the module's ring interface is responsible for managing the scan-out. It counts blocks of, for example, 32 scan-out bits, packages them in one message, and ships the message to the Anchor.
  • the Anchor or another module needs to retrieve these messages out of the Anchor and pass them to the debug software.
  • the message type typically is the Program Read Response message, which is designed to get to the Anchor.
  • the address is the module's self-address.
  • the data of this message is, for example, 32 bits of scan-out data. Each activation of this mode causes a certain number of such messages to be generated. If the module has more flip-flops than the total bit count of the messages, the designated module can do this activation again and again. This packaging is sketched below.
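  • By way of illustration, the RIF packaging of in-vivo scan-out data into Program Read Response messages might be sketched as below; the send signature is an assumption.

```python
def ship_scan_out(rif, scan_bits):
    """Sketch: package scan-out bits, 32 at a time, into messages that
    are routed to the Anchor and tagged with the module's self-address."""
    for i in range(0, len(scan_bits), 32):
        rif.send(msg_type="ProgramRdResponse",  # this type flows to the Anchor
                 addr=rif.self_address,         # identifies the scanned module
                 data=scan_bits[i:i + 32])      # 32 bits of scan-out data
```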
  • a special supervisor message, the Freeze message, is defined to run quickly around the rings and freeze the state of each module.
  • a rings-based system on a chip comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, where the system is adapted so that, during normal operation, upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, and wherein the system is further adapted for a scan testing mode in which one of the ring members is enabled for a scan output and the other ring members are deactivated.
  • the deactivated members can be adapted to pass messages without consuming the messages.
  • the scan output can be packaged into one or more messages that are transmitted by the one ring member.
  • the one or more messages may be transmitted to a processor, wherein the processor can include a ring member operating as a supervisor that consumes supervisory response messages.
  • the processor can be adapted to make the data from the one or more messages available to debugging software.
  • a second of the ring members of the rings-based system comprises a processor that issues at least one message that operates to deactivate the other ring members and to enable the one ring member for the scan output.
  • the operation of the system in the scan testing mode causes the one ring member to shift out flip-flops associated with the one ring member into one or more messages sent on the ring.
  • the scan testing mode can be initiated by resetting the ring network and enabling the one member for the scan mode, where initiation of the scan testing mode may include enumerating the ring network.
  • the scan testing mode allows a user of the system to debug the system without adding additional hardware.
  • the plurality of ring members are coupled to the ring network using a plurality of ring interfaces having registers, wherein the registers preferably include bits that can be set to deactivate the ring member associated with that ring interface.
  • the registers also may include bits that can be set to enable the ring member associated with that ring interface for the scan output.
  • a method of scanning in a ring network having a plurality of ring members comprises observing a defect or anomaly during normal operation of the ring network, issuing at least one message that causes one ring member to enter a scan output mode and other ring members to be deactivated, resuming operation of the ring network, and outputting scan data from the one ring member onto the ring network as messages.
  • the method, in one embodiment, further comprises causing a different ring member to enter the scan output mode in order to isolate the defect or anomaly.
  • the at least one message can comprise at least one supervisory message that configures bits in ring interfaces associated with the ring members.
  • the step of observing takes place at a point in time during the normal operation, and wherein the step of resuming is carried out just prior to the point in time.
  • the one ring member packages its scan output as messages to be transmitted to a processor ring member.
  • the processor ring member can be adapted to make the scan output available to debugging software.
  • FIGS. 26, 27 and 28 illustrate an exemplary implementation of ring signaling between modules of a ring network.
  • the OK signal 266 provides backpressure.
  • the OK signal 266 flows in the reverse direction to inform member 268 that on the next rising clock 272 it may force a new message on the type/addr/data lines 274-278.
  • the OK signal 266 is generated by the receiving member 270.
  • normally the OK signal 266 is active; the only time it goes down is when the message type is non-idle and there is no room in the correct fifo of member 270.
  • the correct fifo is either fifo 280 for through traffic in member 270 or the input fifo for messages addressed to member 270.
  • the OK signal 266 is generated from signals coming from member 268 to member 270 and is sent back round trip during the same clock.
  • the generation of the OK signal 266 can be done from flip-flops resident in member 270 and the type lines of the message coming from member 268. For example, if the fifo 280 is full, the OK signal 266 is negated, even though the next OK down the ring is active and is freeing an entry in the fifo 280. A sketch of this decision follows.
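  • By way of illustration, the combinational generation of the OK signal might be modeled as follows; consumes, input_fifo, and through_fifo are hypothetical names for the structures described above.

```python
def ok_signal(member, msg_type, msg_addr):
    """Sketch of OK generation at the receiving member: OK is normally
    active and is negated only when a non-idle message targets a FIFO
    with no room. Fullness comes from local flip-flops, so a freeing
    entry further down the ring cannot rescue the current cycle."""
    if msg_type == "idle":
        return True
    if member.consumes(msg_addr):
        return not member.input_fifo.full()    # message ends here
    return not member.through_fifo.full()      # message passes through
```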
  • the same basic OK protocol is used four times in each RIF (ring interface) unit (FIG. 27). The same OK protocol is valid for the four exemplary RIF interfaces.
  • a rings-based system on a chip comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, where the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, and the system is further adapted so that downstream adjacent ring members provide a signal to their upstream adjacent ring members that indicates whether a slot is available for the upstream ring member to pass the message to the downstream ring member on a given clock cycle.
  • the receipt of the signal indicating that a slot is not available causes the upstream ring member not to pass the message on that clock cycle.
  • each ring member provides the signal to the immediately prior ring member each clock cycle.
  • each ring member couples to the ring network by a ring interface, where the signals regarding slot availability are passed between adjacent ring interfaces.
  • the ring interface can include an input FIFO and a through FIFO.
  • the signal can be generated by the downstream ring member and passed to an immediately upstream ring member holding the message, where the signal is generated according to the FIFO for the downstream ring member that pertains to the message.
  • the downstream ring member can be adapted to determine that the input FIFO pertains to the message if the message is to be consumed by the downstream ring member and that the through FIFO pertains to the message if the message is not to be consumed by the downstream ring member.
  • the determination can be made by the downstream ring member examining information descriptive of the message before the message in its entirety is sent from the upstream ring member to the downstream ring member, where the information preferably comprises data from a type field and an address field for the message.
  • the signal can indicate that a slot is available when the input FIFO pertains to the message and the input FIFO can accept a message and/or when the through FIFO pertains to the message and the through FIFO can accept a message.
  • the signal generated by the downstream adjacent ring members is a backpressure signal that is generated based on data sent from the upstream ring member to the downstream ring member and then back to the upstream ring member in a round-trip fashion during a single clock cycle.
  • each ring member has a ring interface, wherein each ring interface has four interfaces using or providing the signal which comprises a backpressure signal.
  • a method of controlling the transmission of messages on a ring network comprising a plurality of ring members comprises providing a message at a first upstream ring member that is available for output to a second adjacent downstream ring member, receiving a signal at the upstream ring member from the downstream ring member that indicates whether a slot is available for outputting the message on a clock cycle, and outputting the message from the upstream ring member to the downstream ring member if a slot is available and holding the message if a slot is not available.
  • the signal is generated based on the content of the message.
  • the signal can be generated based on whether the message will be consumed by the downstream ring member or pass through to a further downstream ring member.
  • the content of the message preferably includes at least a portion of the message type and/or at least a portion of the message address.
  • the downstream ring member is coupled to an input FIFO and a through FIFO, wherein the downstream ring member determines which FIFO pertains to the message.
  • the downstream ring member also can determine whether the pertinent FIFO is capable of accepting the message.
  • the Imessage path carries the messages intended for this member.
  • each message bus on the diagram above is actually a collection of three fields: type/8, addr/20, data/64. This is true for three out of the four interfaces.
  • the type can in most cases be reduced to work/program and read/write. Also, several other bits of type might be needed, like last and size.
  • for the address field, only the low-order bits are needed. The address bits needed are the bits that cover the internal module address space.
  • the data field might be reduced in some cases to 32 bits or even less, for example for an 8-bit UART.
  • the Imessage fifo may be a very reduced version of other fifos.
  • the Omessage fifo 282 transmits messages originating locally to the outside ring. It has to support full fields, because many kinds of messages can be produced.
  • the OK signal logic 284 originates in the sending member 268. It starts with creating the message type and address. The type and address fields travel to member 270, where, using these two fields, a decision is made as to whether the message is a through message or it ends at and is consumed by member 270. In each case, the status of the corresponding fifo is transmitted back as the OK signal. The next rising clock samples this OK to mux either the previous message, a new one, or idle. As presented, all four interfaces of the RIF have similar turnarounds with their OK signals.
  • incoming messages to a module are examined first to determine if the message is a supervisor or work/program message.
  • from the address field 290, the intended address of the message can be determined. Since, in one embodiment, the address of the module is aligned to a power of two, an address mask 292 (referred to as the split mask) may be used to compare only a subset of the bits of the address. The lower part 294 of the address is passed into the module as an internal address.
  • the subset of bits is compared against a self-address register 296 containing the addresses associated with the module (obtained during the enumeration process). If the subset matches the self-address register 296, the module can consider the message to be addressed to the module. A sketch of this compare follows.
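  • By way of illustration, the split-mask compare might be written as below, assuming the 20-bit message address described elsewhere in this document.

```python
def message_is_mine(msg_addr, self_address, address_space_bits, addr_width=20):
    """Sketch of the split-mask address decode: the bits above the
    member's internal address space are matched against the self-address
    register; the low bits pass into the module as the internal address."""
    low_mask = (1 << address_space_bits) - 1
    split_mask = ((1 << addr_width) - 1) & ~low_mask  # e.g., 0xFFF00 for 8 bits
    internal_addr = msg_addr & low_mask
    mine = (msg_addr & split_mask) == (self_address & split_mask)
    return mine, internal_addr
```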
  • DOK (down OK)
  • the above discussion ignores the supervisor messages. Some of the supervisor messages make different use of the address field when they apply to all members (e.g., Enumerate). Some of the supervisor messages are responses from members; these messages carry the address of the sender.
  • Referring to FIGS. 30-33, exemplary implementations of the RIF 300 are illustrated in greater detail.
  • the main RIF registers include:
  • self_address: the value of the self address. This register typically is 20 bits, although fewer bits may be used, as the lower part of this register typically is zero;
  • idnumber: a constant parameter used to identify the associated member;
  • ADDRESS_SPACE: the number of bits used by the internal address space. It is used to calculate the address space claimed by the ring member;
  • activated bit: this bit is reset at hardware reset and modified further by activate messages. If this bit is active, the ring interface is in work mode and will process work messages. If this bit is inactive, the ring member should wait for programming or activation; scan_enabled bit in activation register: turns the module into scan mode. Reset by hardware reset, further modifiable by activation messages;
  • in_vivo: scan and related scan-out of all registers during interruption of normal work. This is done on a per-module basis.
  • input refers to a signal entering a ring interface
  • output refers to a signal driven by the ring interface
  • the pins to a subsequent ring member/from a previous ring member include:
  • rif_d_ok: output, backpressure, goes back to the previous member;
  • rif_d_clock: input, clock-in signal;
  • rif_d_scan: scan mode enable (the actual muxing signal, not test mode).
  • Pins for messages entering the ring member include:
  • rif_i_write: output, this message is a valid write and can come from a program or work write.
  • the RIF module modifies the options bits (see below) in case of program write.
  • rif_i_ok: input, tells the RIF that the message is accepted by the member. On the next clock, a new message may be sent.
  • Control pins entering the RIF include:
  • rif_activated: output, reflects the activated bit in the activation register; if not enabled, this bit prevents work messages entering/exiting the member. Also, peripherals should not start transmit/receive operations with this bit disabled;
  • rif_reset: output, either hard reset or soft reset;
  • rif_scan_mode: output, reflects the scan bit in the activation register; if enabled, this member is under scan test;
  • rif_scan: output, scan muxing signal; if enabled, in the shift phase of the scan operation; if disabled with mode, means capture;
  • rif_clock: clock for local flipflops;
  • rif_user_id[1:0]: input, user-defined modifier of module ID.
  • Pins for messages going to the next member of the ring include:
  • rif_u_ok: input, backpressure from the next member;
  • rif_u_clock: output, clock-out signal;
  • rif_u_scan: output, scan mode enable (the actual muxing signal, not test mode); rif_u_reset: output, hardware reset;
  • rif_u_passed_me: output, indicates that the message already passed through a bridge or Anchor. Pins for messages exiting the member include:
  • rif_o_replace: input, request to replace the relevant part of datal with self-address bits;
  • rif_o_ok: output, tells the member that the message is accepted by the RIF.
  • the Anchor RIF interface, in one embodiment, is a variation on the RIF interface used by regular ring members. It has one more state variable: active/passive Anchor. If the Enumerate message comes through the dmessage inputs, then an Anchor declares itself passive. If the Enumeration message comes from the omessage input, then the Anchor declares itself an active Anchor. An active Anchor consumes all supervisor messages, whereas in regular RIFs, supervisor messages are ignored by passing them all to the imessage output. For work messages there is another difference: Anchors have a self-address space like any other ring member, and work messages addressed to the Anchor address space are consumed. Anchors also participate in stray message kills (as discussed above): if a message is addressed above (or below) the Enumerated address space, it will be caught and discarded by the Anchor.
  • the Bridge learns all it has to know about the topology.
  • Signal interfaces of a bridge are identical to two sets of regular RIF. The only exception is clock, which has a tree-topology. Other tug-along signals, like scan, take the longest (crossover) route.
  • From a hardware point of view, a bridge can be viewed as two RIFs connected back to back.
  • the bridge provides additional functionality. For one, the bridge records the first input to receive the Enumeration message. The end lucky enough to get hit first by Enumeration is labeled near, because it is closer to the Anchor. The other end is labeled far. Also, the incoming Enumeration address is recorded as the low range. The Enumeration message is then sent out the far side. When it returns on the far side dmessage input, the address is recorded again as the high address. At this point the bridge is ready to work; its routing rules are sketched after the following list.
  • Supervisor request messages are crossed to the other side.
  • Supervisor response messages are moved to near umessage output.
  • Program write messages and Program read requests are treated as work messages.
  • Program read responses are moved to the near umessage output.
  • Work messages are routed based on the low/high bounds. If the message address is between the low/high bounds, it is moved to the far umessage output. Otherwise the near side gets it. The far side also participates in detecting and removing stray messages.
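  • By way of illustration, the routing rules above might be collected as follows; the message-type names and the inclusive/exclusive treatment of the bounds are assumptions.

```python
def bridge_route(bridge, msg):
    """Sketch of per-message routing at an enumerated bridge."""
    if msg.type == "SupervisorRequest":
        return "cross"                 # flooded across to the other side
    if msg.type in ("SupervisorResponse", "ProgramRdResponse"):
        return "near_umessage"         # floats back toward the Anchor
    # ProgramWr, ProgramRd requests, and work messages route by address:
    if bridge.low <= msg.addr < bridge.high:
        return "far_umessage"
    return "near_umessage"
```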
  • messages appear to the member module through the rif_i_* signals.
  • rif_i_write: changes just after the rising edge of the clock; if active, it means a valid write message arrived. Valid means correct type and context; the user does not have to worry about decoding message types and such;
  • rif_i_read: changes the same way; means a valid read message arrived;
  • rif_i_ok: the member generates a positive acknowledge to the ring interface. This signal should be valid (or negated) shortly after rif_i_read or rif_i_write becomes valid. If OK is negated during this cycle, on the next cycle the same message data will be driven. Members should make every effort to keep this signal very active;
  • rif_reset: reset;
  • rif_activated: the member received the OK to operate. This signal is useful for Rx peripherals, so as not to start bothering anyone without activation.
  • Constant controls exiting a member and entering ring_control include:
  • module_id[7:0]: these bits can be used by members to tell the system something specific about themselves. For example, Ethernet MACs can use one of these signals to tell the world whether they are 10 or 100 Mbit connected;
  • rif_o_type[7:0]: the type of the outgoing message;
  • rif_o_addr and rif_o_datal/datah: the rest of the message bits;
  • rif_o_ok: if in the current cycle this signal is inactive (low), don't change the message on the next positive edge.
  • Ring_control parameters include:
  • the ring_interface_unit (also called ring_control) has two parameters, which should be set at Verilog instantiation time.
  • ADDRESS_SPACE: this number signifies the number of internal address lines that should enter the member. For example, if a member has an internal memory map of 256 bytes, it needs 8 address lines to address this space, so its ADDRESS_SPACE should be set to 8. It also means that the 12 most significant bits of the message address are used to recognize a message to this member (see the sketch following this list).
  • MODULE_ID: each hardware ring member gets, for example, 8 bits for a unique ID. This ID is unique to all instances of the same hardware; for example, all Ethernet MACs have the same ID. To distinguish between different MACs, the self_address and user_id bits can be used. The Module ID can be examined by the Anchor using Who_Am_I messages. The Module ID typically is part of the response by any module.
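  • By way of illustration, the relationship between a member's memory map, its ADDRESS_SPACE parameter, and the matching bits might be computed as below, assuming the 20-bit message address used in this document.

```python
import math

def address_space_param(memory_map_bytes, total_addr_bits=20):
    # A 256-byte memory map needs 8 internal address lines, leaving the
    # 12 most significant of the 20 address bits for member matching.
    address_space = math.ceil(math.log2(memory_map_bytes))
    match_bits = total_addr_bits - address_space
    return address_space, match_bits

# address_space_param(256) -> (8, 12), matching the example above.
```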
  • Each ring-based SOC typically has only one Anchor.
  • the hardware reset starts at this Anchor.
  • the Anchor has a hw_reset input pin. From this pin, reset is sent in two directions. One direction is down the ring. The other direction is to the module that hosts the Anchor, for example, a packet processor.
  • the reset propagates through the ring in the logical ring order. It is the same path all supervisor messages take, although the reset is a signal rather than a message. However, it is unconditionally flip-flopped at each ring member. It is also possible to force a soft reset on ring members using Activate messages.
  • a rings-based system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, wherein the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, and where the message causes a reset, such as a soft reset, of the given ring member if the message is applicable to that ring member.
  • the message preferably includes address information corresponding to the given ring member.
  • the message can include an activate message that includes at least one bit for causing a reset.
  • the message in one embodiment, causes a reset by writing at least one bit from the message into a ring interface for the given member.
  • the ring interface can include a bit that is reset by the message, where the bit preferably includes an activated bit or a reset bit.
  • the ring interface can be adapted to provide an output to the given ring member for causing the reset, wherein the output preferably includes a control pin coupled to the given ring member.
  • a rings-based system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, wherein the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member; and wherein the system further comprises a reset control signal that causes multiple members of the ring network to be reset (such as a hard reset).
  • the reset control signal can include a hardware signal that is sent independent of the message. Furthermore, the reset control signal can be sent on a different line from the message.
  • the reset control signal can be adapted to cause all ring members except for the member from which the reset signal originates to be reset.
  • the reset control signal in one embodiment, causes a reset by causing the reset of bits in ring interfaces corresponding to the multiple members. In this case, the ring interfaces can provide an output to their corresponding ring members to cause the resets, where the outputs can include control pins coupled to the corresponding ring members.
  • Supervisor requests include reset, Enumerate, Who_Am_I requests, activate, freeze. These messages are generated by Anchor and are flooded through the network.
  • Supervisor responses include Exception and WhoAmI_response. These supervisor messages are generated by regular members and float to the Anchor for its attention.
  • the Enumerate message is initiated by the active Anchor.
  • the Anchor decides it is active if it is told to start the Enumeration through its omessage inputs.
  • the message can include a header field, a data field, a next available address field, a ring ID, and the like.
  • the ring ID bit is flipped every time the message crosses a bridge. It is recorded in the activate register in every ring interface. This bit can later be used by software to determine the exact ring topology.
  • Activate message is issued through the Anchor. It carries the address of a specific member and a few bits in the data field used to write the activation register. The bits in the activation register control the state and behavior of the members.
  • Freeze message: The freeze message unclogs the rings and deactivates all members.
  • as an example, suppose the CPU needs to know how many free entries there are in a Utopia fifo. Instead of a read operation initiated by the CPU, the fifo, each time this number significantly changes, will write it to some agreed location in the CPU's RAM. The CPU now only needs to read its local memory, as sketched below.
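  • By way of illustration, the write-ahead might be sketched as below; the agreed RAM location, the change threshold, and the ring.write call are assumptions for the example.

```python
AGREED_LOCATION = 0x1F00   # hypothetical offset in the CPU's RAM

def on_fifo_level_change(fifo, ring, last_reported, threshold=4):
    """Sketch: the peripheral pushes its free-entry count to the agreed
    location whenever it changes significantly, so the CPU polls local
    memory instead of issuing ring read operations."""
    free = fifo.free_entries()
    if abs(free - last_reported) >= threshold:
        ring.write(addr=AGREED_LOCATION, data=free)  # write-ahead message
        return free
    return last_reported
```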
  • the rings-based system comprises a plurality of ring members on a ring that communicate using point-to-point connectivity, a message traversing the ring from member to member, where the system is adapted so that upon the message arriving at a given ring member the message is processed by that member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member.
  • the system also is adapted to process both read messages and write messages.
  • the plurality of ring members includes a CPU and at least one peripheral that exchanges data with the CPU, wherein the peripheral includes at least one status memory that stores data describing the status of the peripheral, and where the system is configured to write ahead status changes that are accessible by the CPU.
  • the system also can be adapted to perform write ahead status changes that would otherwise be initiated by the CPU as read operations. Likewise, the write ahead operations can be programmed to occur based on read operations that would otherwise be initiated by the CPU on a regular basis.
  • the system can be adapted to write ahead status changes to a RAM on the CPU or a RAM that is accessible by the CPU.
  • the CPU can comprise a control protocol processor in a communications chip or network processor in a communications chip.
  • the status memory may comprise at least one status register.
  • the write ahead operations are performed for some peripheral status changes but not other peripheral status changes. Additionally, the write ahead operation is performed or not performed depending on the nature of the status change. Alternatively, the write ahead operation is performed or not performed based on the magnitude or the quantity of the status change.
  • a write-ahead method in a rings-based communication system such as a communications processor or a network processor.
  • the method comprises identifying at least one module in a ring network that includes status registers that store status information of regular interest to a processor in the ring network, identifying which status information can be transmitted to the processor as a write ahead operation initiated by the at least one module instead of a read operation initiated by the processor, and programming the at least one module to transmit the identified status information as a write ahead operation.
  • the step of programming causes the average number of read operations initiated by the processor to decrease.
  • the identification comprises identifying which status changes are of critical importance or of regular interest to the processor.
  • the identification can include identifying what magnitude or level of status change will cause the write ahead operation.
  • Land Bridges: Most members on a ring typically communicate in an asymmetric way.
  • EnetRx (Ethernet receiving)
  • EnetTx (Ethernet transmitting)
  • a pair of members is asymmetric if one is mainly the sender and the other is mainly the receiver in their relationship. In this case it makes sense to put the sender upstream from the receiver. But some pairs are almost symmetric.
  • a packet processor paired with a DMA is such an example. As such, no matter how they are placed on a ring, one direction is bound to suffer. In this case, one or more land bridges generally will provide the solution.
  • a single land bridge can be added to minimize latency between two members of a ring.
  • two or more bridges 332 , 334 may be added to a ring 336 to further minimize the number of modules between any two ring members.
  • although each bridge 332, 334 adds two interfaces (members) to the ring network, this generally will not affect the latency significantly, since a message is unlikely to travel the entire perimeter of the ring network due to the bridges.
  • Ring connections between two members can include more than 100 signals.
  • Each message can include, for example, at least 104 signals. Therefore, it may be unreasonable to add this amount of pins (twice) to implement the external ring interface.
  • it may be preferable to implement a dual-purpose peripheral interface 340, such as Utopia.
  • Utopia: The normal mode of operation for a Utopia interface is sending/receiving ATM cells.
  • two ring networks, such as two network processors, can be connected with Utopia interfaces back to back. In this mode, instead of cells, the Utopia pins will convey messages.
  • while any of a variety of CPUs may be implemented as a module of the ring network topology described herein, ring networks are particularly well-suited for packet processors, various embodiments of which are described in detail below.
  • the packet processor of the present invention may on occasion be referred to herein as the Vobla, the network processor, and similar variations.
  • the network processor of the present invention may be implemented as part of a communications processor having multiple modules that are interconnected using the rings architecture described above.
  • the modules in such an arrangement for a communications processor may include the network processor of the present invention (for data plane processing of packets), a control packet processor (for control plane processing as a flow manager), various peripheral modules, and so forth.
  • a rings-based system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, where the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member; and the system further comprising means for providing an external ring interface that enables communication with at least one external peripheral device.
  • the means can comprise a field programmable gate array and/or a memory port ring member on the ring network.
  • the at least one external peripheral device can include one or more of a DSP, encryption engine, external bus, external memory, a second ring network, and the like.
  • the means is adapted to perform handshaking between the protocols of the ring network and the at least one external peripheral device, wherein the handshaking preferably includes converting message data from the ring network into transaction data.
  • the means also can be adapted to allow the ring network to write out messages to the at least one external peripheral and the at least one external peripheral to generate transactions converted into messages for the ring network.
  • the means in one embodiment, operates as a shared memory between the ring network and the at least one external peripheral.
  • the means may include a memory that operates as a RAM for messages received from the ring network and as a FIFO for transactions received from the at least one external peripheral device.
  • the means also may include a memory, wherein the ring network can write data to an address in the memory to cause an interrupt in the at least one external peripheral device.
  • the ring network is a first ring network on a first chip, where the rings-based system further comprises a second ring network on a second chip, and wherein the first ring network and the second ring network interface through the means to the at least one external peripheral device.
  • the ring network can include a first communications processor including a first protocol processor and a first network processor, and the system can further comprise a second communications processor including a second protocol processor and a second network processor, wherein the first communications processor and the second communications processor interface through the means to the at least one external peripheral device.
  • a network processor implemented on a chip comprises means for processing a plurality of protocols including ATM, frame relay, Ethernet, and IP, said means being programmable using a set of library commands to process additional protocols, and wherein said means comprises an arithmetic logic unit (ALU), a load/store unit (LSU), a preload/bump unit (PBU), a register file unit (RFU), an agent interface, and an internal memory.
  • the network processor in one embodiment, further comprises a fetch unit and a program sequencer.
  • the ALU can be adapted to perform arithmetic and logic operations on data operands.
  • the LSU can be adapted to perform address calculations in order to address data operands in the internal memory.
  • the LSU calculates an effective address according to one of five available options, including: (1) the effective address is the content of a register from the RFU; (2) the effective address is the sum of the content of a first register from the RFU and the content of a second register from the RFU; (3) the effective address is the sum of the content of a first register from the RFU and the content of a second register from the RFU after the second register is shifted by a specified number of bits; (4) the effective address is the sum of the content of a register from the RFU and a displacement that occupies a specified number of bits in an instruction word; and (5) the effective address is an absolute address included in the instruction word. These options are sketched below.
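  • By way of illustration, the five effective-address options might be modeled as below; the instruction-field names are assumptions.

```python
def effective_address(mode, rf, inst):
    """Sketch of the five LSU addressing options enumerated above."""
    if mode == 1:
        return rf[inst.ra]                               # (1) register content
    if mode == 2:
        return rf[inst.ra] + rf[inst.rb]                 # (2) register + register
    if mode == 3:
        return rf[inst.ra] + (rf[inst.rb] << inst.shift) # (3) reg + shifted reg
    if mode == 4:
        return rf[inst.ra] + inst.displacement           # (4) reg + displacement
    return inst.absolute                                 # (5) absolute address
```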
  • the PSU in one embodiment, performs decoding of instructions received from the internal memory.
  • the fetch unit can be adapted to control what instructions are fetched from memory for decoding by the PSU.
  • the RFU in one embodiment, comprises a first register file for a current task and a second register file for preloading register values for a next task.
  • data may be read from or written to the first register file based on a comparison between the current task ID and a task ID associated with the first register file.
  • the RFU also can comprise a third register file for storing register values for the current task that are not stored in the first register file. In this case, data may be read from or written to the third register file when the current task ID and the task ID associated with the first register file are not the same.
  • a task switch is performed by the network processor by making the next task the current task and preloading a further next task. The performance of a task switch can include treating the second register file as the third register file after the task switch, as sketched below.
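  • By way of illustration, the register-file arrangement and the task switch might be modeled as below; all names are assumptions, and the dictionaries stand in for hardware register files.

```python
class RFU:
    """Sketch: file1 is tagged with a task ID, file2 preloads the next
    task, and file3 serves the current task whenever the current task ID
    differs from file1's tag."""
    def __init__(self, task_id, registers):
        self.file1_task, self.file1 = task_id, dict(registers)
        self.file2, self.file3 = {}, {}
        self.current_task = task_id

    def read(self, reg):
        # compare the current task ID against the ID tagged on file1
        if self.current_task == self.file1_task:
            return self.file1[reg]
        return self.file3[reg]

    def preload(self, registers):
        self.file2 = dict(registers)      # next task's values fill file2

    def task_switch(self, next_task):
        # the next task becomes current; file2, holding its preloaded
        # values, is treated as file3 after the switch (per the text above)
        self.file3 = self.file2
        self.current_task = next_task
        self.file2 = {}                   # begin preloading a further task
```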
  • the agent interface allows the network processor to interface to external modules for executing instructions, where the external modules can include one or more of a CRC module, encryption module, hashing module, and table lookup module.
  • a communications processor implemented on a chip comprising a network processor including means for processing a plurality of protocols including ATM, frame relay, Ethernet, and IP, said means being programmable using a set of library commands to process additional protocols, wherein said means comprises an arithmetic logic unit (ALU), a load/store unit (LSU), a preload/bump unit (PBU), a register file unit (RFU), an agent interface, and an internal memory.
  • the communications processor further comprises a protocol processor for controlling the network processor, wherein the protocol processor performs control plane processing and the network processor performs data plane processing.
  • the network processor can be adapted to process instructions by performing fetch, decode, address, execute, and write stages.
  • the network processor and the protocol processor are ring members on a ring network, and further comprising a plurality of other ring members on the ring network.
  • the network processor includes a plurality of compounds that share a single ring interface to the ring network, wherein the compounds can include, for example, a doorbell agent for controlling the execution sequence of tasks for the network processor.
  • the compounds also may include a multireader agent for servicing requests to read data from the internal memory, a message sender agent for sending messages onto the ring network, a DMA agent for sending messages to initiate a DMA controller on the ring network, a CRC agent for performing CRC calculations, and/or a debug module.
  • a packet processor includes the following capabilities that are typically not found in general purpose microprocessors:
  • each interface (I/F) port would require at least two tasks (RX [receive], TX [transmit]) to handle the datapath processing.
  • a system that includes several ports would therefore require about two or more active tasks for each port.
  • the packet processor should be able to switch tasks with minimum overhead.
  • the packet processor may allocate shadow memory (4-8 tasks) to store registers and task status.
  • the priority scheme to choose the next_task_to_run is hardware (HW) based and is not performed by software (SW) as in a RISC (Reduced Instruction Set Computer) model.
  • Parallel engines: Processing of packets can use parallel machines to accelerate performance. Examples of this capability include DMA, CRC, Lookup engine, and Peripheral Transfer Machine.
  • a well-built packet processor would have the mechanism in place to issue and receive transactions to and from parallel machines synchronously, without stalling the packet processor.
  • Scalability: One way to scale the throughput of a packet processor is by instantiating several engines. Hence, it is desirable that the programming model and the system architecture be flexible enough to accommodate scalability.
  • Special instructions: Packet processing uses special operations that are not common for a general-purpose processor. Instructions like Compare-immediate-under-mask (to match specific bits), activation of parallel engines using instructions like CRC, DMA, HASH, and LIST SEARCH, and mechanisms such as sticky bits for compare and jump, are derived from the needs of packet processing.
  • Inter-task communication is supported by the architecture. Traditional RISC machines generally use SW for this communication.
  • Efficient linked-list operation: Data structures like linked lists, queues, and buffers are common in communication systems.
  • a flexible packet processor should be able to manage a large number of different queue types in an efficient and quick way.
  • the flexible packet processor should support processing of the following: ATM, Frame Relay (FR), IP/Ethernet, IWF (TDM to Packets), AAL2 for wireless base stations, IP, and MPLS.
  • ATM is by far the largest access method in the access space.
  • a packet processor in this space should be able to terminate ATM virtual circuits (VCs) at the Customer Premises Equipment (CPE) and should be able to switch ATM.
  • ATM is of particular interest because a vast majority of the DSL approaches use ATM as the carrier technology.
  • Frame Relay is of interest because it is commonly used in corporate access (e.g., using T1s or NxT1).
  • Ethernet is becoming a cost effective technology for the Metropolitan Area Network (MAN). This simplifies the need for a costly router (no ATM) at the corporate edge. This is a new approach that ISPs (CLECs [Competitive Local Exchange Carrier]) use as a way to replace the old Telco access (leased lines).
  • Ethernet access does not solve the issue of how to deal with corporate voice. Typical requirements for IP/Ethernet would be IP routing and Ethernet bridging at 100 Mbps and approaching 1 G-Enet.
  • Packet processing for inter-working functions is typically found in Voice Gateways (VG) and in Wireless Base Stations (WBS).
  • the VG interfaces the POTS (plain old telephone system) network on one side and the packet network on the other side. Voice calls are modified (compressed and packetized, or uncompressed and circuitized) between the networks.
  • typical processing requirements at the VG include: termination of AAL2 streams; support for CES (Circuit Emulation Services) (AAL1) to emulate T1 services; termination of RTP (Real Time Protocol) (VoIP) packets; and the like.
  • AAL2 processing may find useful application for Wireless Base Stations.
  • New generation WBSs use ATM as their backbone network.
  • AAL2 may be chosen to carry both voice and data.
  • AAL2 Termination at the BTS (Base Transceiver Station).
  • AAL2 Switching at the BTS and at the MSC (Mobile Switching Center)/BSC (Base Station Controller).
  • AAL2 Termination is done at the MSC/BSC (OC-3 and IP is routed to ISP)
  • the flexible packet processor should handle IP because IP processing can be found in various applications in the access space, such as the following: ISP aggregation router; DSLAM for handling frames; cable modem head end; and wireless base station. MPLS (Multiprotocol Label Switching) is a newcomer to the access space. It is being used for traffic management and for Quality of Service (QoS) control. It is desirable that access equipment support edge-device LSR (Label Switched Router) functionality for MPLS.
  • a flexible packet processor according to the invention can form the basis of an access platform that is capable of addressing multiple applications in this space.
  • the flexible packet processor in accordance with various embodiments of the present invention is a general-purpose network processor core, allowing it to support many system-on-chip (SOC) configurations.
  • a library of modules containing memories, peripherals, accelerators, and other processor cores makes it possible for a variety of highly integrated and cost-effective SOC communication devices to be built around the packet processor.
  • the figure shows a block diagram of an exemplary SOC chip 350 made up of the network processor core 354 and associated SOC components (described below) according to an embodiment of the invention. Although not indicated in this configuration, a typical SOC can contain more than one network processor core 354 .
  • Internal Memory Expansion Area (Internal Memory 352): On-chip memories operating at full core frequency are connected to the network processor core 354 through this component.
  • the internal memory is unified and can be used for both program and data storage. Different technologies such as SRAM or ROM can be used to implement the internal memory.
  • the network processor core is the processor in which the network data path application code is executed, and which may include: a program sequencer unit (PSU); a load store unit (LSU); a fetch unit (FTU); a data arithmetic logic unit (DALU); a register file (RFU) including support of fast task switching; a preload and bump unit (PBU) for efficient task switching and context save and restore; and the like.
  • a companion (sometimes called a compound) that is tightly coupled to the network processor core is the doorbell scoreboard module (doorbell) shown in FIG. 36.
  • the doorbell receives requests for service from peripherals, accelerators and DMAs, and then determines a next task ID once a task switch occurs in the network processor.
  • Peripheral Expansion Area 356 includes the functional units that interface between the network processor core and the application, including the functions that send and receive data from external input/output sources.
  • these components include accelerators 358 that execute portions of the application in order to boost performance and decrease power consumption.
  • such components may include: a host interface (e.g., SDRAM controller); a serial interface (USB, UART, SSI [Synchronous Serial Interface], Timers); a communications interface (Utopia, MII); a CRC accelerator; a table look up coprocessor; a Smart FIFO; a data pump; a direct memory access (DMA) controller; as well as other CPU cores, such as packet processors (PPs).
  • the following ports may be implemented: data memory ports (address, data read and data write) used for data transfers between the core and memory; program memory port (address and data read) for fetching code from the memory to the core; agent port to support tightly-coupled external user-definable functional units such as peripherals, accelerators, DMA's, smart FIFOs, and so forth; and a context memory port (address, data read and data write) used for the preload and bump of registers for fast task switching.
  • the network processor core 354 is illustrated in greater detail in accordance with at least one embodiment of the present invention.
  • the network processor core includes the following:
  • Data Arithmetic Logic Unit (DALU or ALU) 370 : The DALU 370 (also referred to as the ALU below) performs the arithmetic and logical operations on data operands in the network processor core.
  • the data registers can be read from or written to memory over, for example, a 32-bit wide data bus as 8-bit, 16-bit, or 32-bit operands.
  • the source operands for the ALU 370 are 32 bits wide and originate either from data registers or from immediate data (Imm). The results of ALU operations are stored in the data registers.
  • ALU operations are performed in one clock cycle.
  • the destination of each arithmetic operation can be used as a source operand for the operation immediately following the arithmetic operation without any time penalty.
  • the components of the ALU 370 are as follows: an integer arithmetic unit for 32-bit non-saturated three-operand arithmetic operations; a logic unit for 32-bit logic operations; a bit field unit (BFU) for multi-bit shift, rotate, swap and bit-field insert and extract operations; and a condition code generation unit.
  • the ALU 370 may read two operands from the register file via the dual source bus (src 1 and src 2 in FIG. 37), or one operand from a register via the source bus and a second immediate operand via the immediate bus (Imm input to DALU on FIG. 37).
  • the ALU 370 generates a result into a destination register via the destination bus (dest on FIG. 37).
  • condition codes are optionally generated in the condition code register (part of the R 1 register, discussed further below) depending on the instruction type.
  • the ALU 370 may support both signed and unsigned arithmetic. Most of the unsigned arithmetic instructions are performed the same as the signed instructions. However, some operations may require special hardware and may be implemented as separate instructions. When performing an unsigned comparison, for example, the condition code computation is different from signed comparisons. The most significant bit of the unsigned operand has a positive weight, while in signed representation it has a negative weight. Special condition codes and instructions may be implemented to support both signed and unsigned comparisons.
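  • The difference between the signed and unsigned condition code computation can be illustrated with a short C sketch (illustrative only):

        #include <stdint.h>
        #include <stdbool.h>

        /* The same 32-bit pattern compares differently depending on whether
           the most significant bit is given a negative (signed) or positive
           (unsigned) weight. */
        static bool lt_signed(uint32_t a, uint32_t b)   { return (int32_t)a < (int32_t)b; }
        static bool lt_unsigned(uint32_t a, uint32_t b) { return a < b; }

        /* For a = 0xFFFFFFFF and b = 1:
           lt_signed(a, b)   is true  (a is -1 in signed representation);
           lt_unsigned(a, b) is false (a is 4294967295). */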
  • the LSU 372 performs address calculations using integer arithmetic needed to address data operands in memory. In addition, the LSU 372 generates change-of-flow program addresses. The LSU 372 operates in parallel with other network processor core resources to minimize address generation overhead.
  • the effective address (EA) used to point to a memory location for a load or a store is calculated according to one of the following options. According to one embodiment, only the 16 least significant bits (LSBs) of the calculation result are considered.
  • the options for calculating the EA include:
  • Register indirect, No update (Rn): The EA is the content of a register Rn from the register file.
  • Absolute address: The EA is the absolute address expressed in the instruction.
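  • A minimal C sketch of the two EA options above, assuming the 16-LSB truncation described in this embodiment (function names are invented for illustration):

        #include <stdint.h>

        /* Register indirect, no update: the EA is the content of Rn. */
        static uint16_t ea_reg_indirect(uint32_t rn)
        {
            return (uint16_t)rn;          /* only the 16 LSBs are considered */
        }

        /* Absolute address: the EA is expressed directly in the instruction. */
        static uint16_t ea_absolute(uint32_t abs_addr)
        {
            return (uint16_t)abs_addr;    /* only the 16 LSBs are considered */
        }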
  • the network processor registers are classified into three types: General Purpose Registers (GPR); Special Purpose Registers (SPR); and Hidden registers (HR).
  • the general purpose registers may be used by the programmer to load data from memory, execute arithmetic or logic operations, and store the data back into memory.
  • the special purpose registers are registers that have an associated functionality, such as a task SPR, and so forth. Generally, SPRs may not be loaded or stored directly from/to memory. According to one approach, a dedicated move instruction can move data between general purpose registers and special purpose registers.
  • Hidden registers are registers which are not exposed to the programmer, but reside in the hardware as part of the machine control (e.g., a current PC [Program Counter] register).
  • the network processor of the present invention includes a special register file architecture and a memory block that are capable of managing a large number of tasks (threads) with substantially no cycle penalty.
  • the memory block has the capacity to store the register context of the tasks.
  • the register file architecture performs a reduced number of context save and restore operations and provides each active task with its own context registers.
  • the programming model of the network processor core has 32 general purpose registers. These registers can be read from or written to over the memory data buses (e.g., referring to FIG. 37, the src 1 , src 2 , and dest buses). Source operands for ALU instructions originate from these registers.
  • the destination of an ALU instruction is a register, and such a destination can also be used as a source operand for the ALU instruction immediately following, without any time penalty.
  • each register of the active register file has a 32-bit data field and a 6-bit tag field. The tag field holds the task ID, which identifies the task for which the data register value is valid.
  • the network processor core 354 includes a boundary register which specifies for each of the registers whether it is considered a global register or a general register.
  • the global registers may store global values that can be shared among multiple tasks, or they may store temporal values that are not preserved when the task yields and resumes processing.
  • Shadow register files are not part of the programming model, i.e., they are not exposed to the programmer.
  • Each of the Shadow 1 and Shadow 2 register files includes, for example, 32 registers of 32 bits.
  • task switches do not require an explicit save/restore of the general registers. Saves and restores of the general registers are done implicitly by hardware according to the following mechanisms.
  • the task ID associated with the register of the active register file is first compared to the current task ID. If the result is equality, this means that the register is maintained by the current task, and, therefore, the register is overwritten with the new value and the current task ID is marked in its tag field.
  • a non-equal result means that the register contains valid data for a different task.
  • the old register content is first sent to a write queue buffer to be saved in memory in a task ID context table, and then the new value is overwritten to the register and the current task ID is marked in its tag field.
  • the task ID associated with the register is first compared to the current task ID.
  • An equal result means that the register contains valid data for the current running task, and thus the data is read directly from the register.
  • a non-equal result means that the register contains valid data for a different task.
  • the valid data for the current task for that register resides in the Shadow 1 register file, as it was preloaded into Shadow 2 concurrently with the execution of the previous task (Shadow 2 becoming Shadow 1 at the task switch).
  • the register value is read from the Shadow 1 register file, and the register of the active register file remains unchanged.
  • a read or write access to a global register accesses the active register file directly without changing the register's tag. Concurrent with the execution flow of the current task, a special machine (the PBU 376 of FIG. 37) preloads the register values of the next task ID into the Shadow 2 register file.
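  • The tag-compare behavior described above can be summarized in a short C sketch (the types, array sizes, and helper function are assumptions for illustration):

        #include <stdint.h>

        typedef struct { uint32_t data; uint8_t tag; } areg_t;  /* 32-bit data + 6-bit task ID tag */

        extern areg_t   active[32];    /* active register file                 */
        extern uint32_t shadow1[32];   /* preloaded shadow of the current task */
        extern void bump_save(uint8_t task, int r, uint32_t v); /* write queue to context table */

        static void reg_write(uint8_t cur_task, int r, uint32_t v)
        {
            if (active[r].tag != cur_task)                   /* valid for another task:  */
                bump_save(active[r].tag, r, active[r].data); /* save old content first   */
            active[r].data = v;                              /* overwrite with new value */
            active[r].tag  = cur_task;                       /* mark current task ID     */
        }

        static uint32_t reg_read(uint8_t cur_task, int r)
        {
            if (active[r].tag == cur_task)                   /* valid for current task   */
                return active[r].data;
            return shadow1[r];  /* preloaded value; the active register is unchanged */
        }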
  • when a task switch occurs, the following take place: the preload of the register values of the next task should be completed; the bump buffer is emptied (all data which was sent to the bump unit is saved in the context table); the next task becomes the current active task; the Shadow 2 register file becomes the shadow for the current task (Shadow 1 ); and a new next task is sampled and a new preload procedure is initiated onto Shadow 2 .
  • Special care should be taken (and special logic may be implemented) to prevent hazard cases. For example, a mismatch in the register value occurs if a register in the active register file is tagged for a task ID which is identical to the next task ID, and that register is accessed as a destination in the current task.
  • the register value should first be saved in memory in its context location and then overwritten with the new value of the current task. However, since the task tagged in the register is identical to the next task, the register value may already have been preloaded into the next task shadow register file (Shadow 2 ). In this case, the value preloaded into Shadow 2 is no longer valid.
  • FIG. 38 illustrates the register files structure and a mechanism for low overhead task switch according to an embodiment of the invention in accordance with the discussion above.
  • the current task ID is Task_X
  • the next task ID is Task_Y.
  • the current task ID becomes Task_Y and the next task ID becomes Task_Z.
  • a method for efficient processing of tasks in a communications system comprises sampling a current task identifier and a next task identifier, providing a first register file for storing values for a current task, and providing a second register file for storing values for the current task that are not in the first register file.
  • the method further comprises providing a third register file for preloading values for the next task, and performing a task switch by making the next task identifier the current task identifier and sampling a further next task identifier.
  • the method can further comprise the step of completing the preload of the register values for the next task identifier which after the task switch is the current task identifier.
  • the method may also comprise using the third register file as the second register file after the task switch.
  • the first register file in one embodiment, comprises registers with a data field and a task identifier field.
  • the first register file has 32 registers, each register having a 32 bit data field and a 6 bit task identifier field.
  • the first register file may be exposed to a programmer of the communications processor and the second register file and the third register file are hidden from the programmer.
  • task switches are performed without an explicit save/restore of the register files.
  • the method can further comprise performing a write during execution of the current task by: comparing the current task identifier to a task identifier in the first register file; writing a value to the first register file when the current task identifier is the same as the task identifier in the first register file; and writing a value to the first register file when the current task identifier is not the same as the task identifier in the first register file after the content in the first register file is saved to a memory.
  • the content in the first register file can be saved to a task identifier context table.
  • the method may also comprise performing a read during execution of the current task by: comparing the current task identifier to a task identifier in the first register file; reading a value from the first register file when the current task identifier is the same as the task identifier in the first register file; and reading a value from the second register file when the current task identifier is not the same as the task identifier in the first register file.
  • the content of the first register file may not be changed as a result of the read.
  • a system for efficient processing of tasks in a communications system comprises means for sampling a current task identifier and a next task identifier, a first register file for storing values for a current task, a second register file for storing values for the current task that are not in the first register file, a third register file for preloading values for the next task, and means for performing a task switch by making the next task identifier the current task identifier and sampling a further next task identifier.
  • the means for performing a task switch completes the preload of the register values for the next task identifier which after the task switch is the current task identifier. Similarly, the means for performing a task switch uses the third register file as the second register file after the task switch.
  • the first register file comprises registers with a data field and a task identifier field, wherein the first register file can have 32 registers, each register having a 32 bit data field and a 6 bit task identifier field, and further wherein the second register file and the third register file each have 32 registers.
  • the system may further comprise a processor which performs a write during execution of the current task by: comparing the current task identifier to a task identifier in the first register file; writing a value to the first register file when the current task identifier is the same as the task identifier in the first register file; and writing a value to the first register file when the current task identifier is not the same as the task identifier in the first register file after the content in the first register file is saved to a memory.
  • the content in the first register file can be saved to a task identifier context table.
  • the processor may comprise an ALU.
  • the system may also comprise a processor which performs a read during execution of the current task by: comparing the current task identifier to a task identifier in the first register file; reading a value from the first register file when the current task identifier is the same as the task identifier in the first register file; and reading a value from the second register file when the current task identifier is not the same as the task identifier in the first register file. In this case, the content of the first register file is not changed as a result of the read.
  • the means for performing a task switch comprises a preload and bump unit.
  • the processor may comprise an ALU.
  • Preload and Bump Unit (PBU)
  • the PBU 376 controls the access of data memory for the automatic save and restore of registers in their context table in memory.
  • a save of a register content in its location in the table context is performed whenever the register in the active register file is addressed as a destination and the register contains valid data for a task different from the current running task.
  • only one request for a save can be captured in the PBU 376 for a single instruction because only one destination can appear in an instruction.
  • the PBU 376 includes a write queue with a number of entries in order to minimize the interference with the main program flow, thus optimizing the total execution time. Whenever a register addressed as a source does not contain valid data for the current running task, the data is read from the Shadow 1 register file where it was previously preloaded.
  • the PBU 376 is also responsible for controlling the preload of the next task registers into the Shadow 2 register file.
  • the PBU 376 generates the data memory accesses for save (write) and preload (read) using the context address and data busses.
  • the load store cycles of the active flow have highest priority, followed by the preload cycles, and, at the lowest priority, are the save cycles from the write buffer.
  • the PSU 378 performs the instruction decoding and generates the controls for the other core units.
  • the PSU 378 controls the program flow including all scenarios involving the change of flow.
  • the FTU 380 is responsible for controlling the program counter (PC) for instruction fetch operations.
  • the PC may be derived from one of the following sources: sequential increment; jump to an absolute address; jump to an address specified by a register; task switch to a next task entry point; relative change of flow; exception control (e.g., reset, breakpoint, patch, etc.); and return from trap.
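  • A compact C sketch of this PC selection (the enum and parameter names are invented; a 16-bit PC is assumed to match the 16-bit program addresses used elsewhere in this description):

        #include <stdint.h>

        typedef enum {
            PC_SEQUENTIAL, PC_JUMP_ABSOLUTE, PC_JUMP_REGISTER, PC_TASK_SWITCH,
            PC_RELATIVE, PC_EXCEPTION, PC_RETURN_FROM_TRAP
        } pc_source_t;

        static uint16_t next_pc(pc_source_t src, uint16_t pc, uint16_t target,
                                uint16_t entry, int16_t offset, uint16_t vector,
                                uint16_t refetch)
        {
            switch (src) {
            case PC_SEQUENTIAL:       return pc + 1;
            case PC_JUMP_ABSOLUTE:                       /* absolute address       */
            case PC_JUMP_REGISTER:    return target;     /* address in a register  */
            case PC_TASK_SWITCH:      return entry;      /* next task entry point  */
            case PC_RELATIVE:         return pc + offset;
            case PC_EXCEPTION:        return vector;     /* reset, breakpoint, patch */
            case PC_RETURN_FROM_TRAP: return refetch;    /* refetch SPR, bits 15:0 */
            }
            return pc + 1;
        }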
  • a few instructions are executed in an external module (e.g., DMA, accelerators, etc.) connected to the network processor core.
  • such external modules are accessed via a messaging bus (Agent Interface, or AGI).
  • the network processor core uses a unified memory space wherein each address can contain either program information or data.
  • This memory space is typically based on on-chip RAM and ROM.
  • the memory module should have separate ports for program, data and context accesses. Also, this memory module may have additional ports for accesses from the external world, such as the ring interface.
  • the programming model describes the rules for writing network processor programs. After a brief introduction that explains in general terms the organization of the network processor code and the flow of data through the system, the programming model (e.g., state resources, interfaces and instruction groups) is outlined in high level terms. Then, the execution flow and performance issues are discussed. And last, the programming model is detailed.
  • the network processor comprises a 32-bit single issue RISC processor tailored for real-time communication processing goals.
  • the network processor has 32 general purpose registers, built-in support for multi-tasking, communication peripherals, on-chip SRAM, a DMA interface to external SDRAM, and a built-in interface to an on-chip control processor (referred to as the host processor, the Packet processor [PP], or the Control Packet processor [CPP]).
  • the network processor has hardware support for up to 62 tasks.
  • the hardware support includes generation of task activation triggers, automatic task scheduling, save and restore of registers to and from the shadow register area in internal SRAM, special instructions for yielding the CPU, and support for passing messages between tasks.
  • Each network processor task has a dedicated register set.
  • the task registers are preserved across the periods in which the task is not running.
  • a network processor task can access internal memory with load and store instructions, and can copy data from internal to external memory and vice-versa using special DMA instructions.
  • on the receive side, this data is copied, using a special instruction, from the peripheral's FIFO into internal memory (arrow 406 ). On the transmit side, this data is copied, using a special instruction, from internal memory into the peripheral FIFO.
  • This type of data, which is in transit through the device, can be referred to as stream data.
  • Stream data exchanged with the host processor (arrow 408 ): This data is passed by a network processor task to the host processor for further processing, usually through external memory. On the transmit side, the host processor passes this data to a network processor task for transmit-related tasks (such as encapsulation, shaping, scheduling, and so forth) and for transmission through a peripheral. Stream data is also handed over between network processor tasks. There are cases when the stream data is not touched by the host processor.
  • Configuration data: This data resides in internal memory and is set at initialization time by the host processor or by initialization procedures on the network processor (e.g., buffer size). Configuration data is consumed, but not produced, by the task.
  • Flow state data: This data is kept in internal or external memory, and describes, for example, the state of each ATM connection or the state of the current Ethernet frame. Part of this data is used and updated by the task (e.g., the cell count for a connection).
  • Task state data: This data is kept in internal memory (or registers), and is used by the task to keep information in case the task does not complete the work intended to be accomplished during a single period of possession of the CPU.
  • the programming model for the flexible packet processor includes the following elements.
  • state resources, the hardware memory entities which hold the state of the program; interfaces, the ways in which the program interacts with hardware resources which are external to the processor; and instruction set, the description of the basic tools with which the program performs its operations.
  • FIG. 40 provides an overview 420 of the state resources for the network processor according to an embodiment of the invention.
  • the DMA interface controls the DMA machines, which copy data from the NP SRAM to external DRAM and vice versa.
  • the DMA interface is set up by the PP at initialization time, and accepts action commands from the NP via special instructions.
  • the DMA interface connects to the doorbells and the task scheduling mechanism.
  • Peripheral FIFO interface: The peripheral FIFOs are set up by the PP at initialization time, and are instructed by special NP instructions to copy a data unit to internal memory (from internal memory in the case of a TX).
  • the peripheral FIFOs are connected to the doorbells and the task scheduling mechanism.
  • Accelerators/Coprocessors interface: In general, there may be two kinds of accelerators/coprocessors: (1) accelerators/coprocessors that are tightly connected to the network processor core and that are accessed via a special agent instruction (e.g., CRC, multireader, message sender, etc.); these reside within the network processor compound entity; and (2) accelerators/coprocessors that are ring members and can be accessed by any other ring member interposed on the ring (via messages over the ring).
  • Host (PP) processor interface: In general, the PP will be able to initialize NP configuration registers, to share data with the NP in internal and external memories, to request services from an NP task, and to receive interrupts and messages from the NP.
  • Instruction set: Instructions perform the various types of actions, such as the following: arithmetic, logic, and register manipulation (modify data in registers); load/store (move data between SRAM and registers); flow control (changes in the program counter); task management (control of inter-task changes in the program counter); agent interface instructions, including DMA (move data between the SRAM and the SDRAM), access to serial ports (move data between the SRAM and communication peripherals), and accelerators (specialized communication processing functions such as a CRC calculation on a block of data); and special purpose register moves (and activation of coprocessors), which move data between GPRs and SPRs.
  • the CPU executes instructions sequentially until it encounters an instruction which changes the program flow.
  • this instruction can be a conditional or unconditional branch or jump within the task, which checks a condition bit in one of the general purpose condition registers, or an instruction which terminates the current task and starts execution of another task. Instructions which cause a non-incremental change to the program counter take more than one cycle and are optionally followed by a one-instruction delay slot.
  • Other instructions which influence the program flow are: arithmetic and compare instructions which modify the condition code bits, and instructions which modify the task entry point (the address from which the task will resume execution in its next execution round).
  • Tasks can be in one of three states: running, pending and dormant. At any given time there is one running task executing on the CPU. When something requests the service of a task, the task becomes pending. Each time the running task voluntarily yields the CPU, the highest priority task is selected from the pending tasks. Tasks for which nothing has requested their service are dormant, and they will not be enabled for execution and will not run. According to one embodiment of the invention, the number of tasks is determined at initialization time and there is no dynamic creation/elimination of tasks.
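  • The three task states map naturally onto a small C model (a sketch with invented names, not the hardware implementation):

        typedef enum { TASK_DORMANT, TASK_PENDING, TASK_RUNNING } task_state_t;

        /* A service request makes a dormant task pending; it stays pending
           until the running task yields and the scheduler selects the
           highest priority pending task to run. */
        static void request_service(task_state_t *st)
        {
            if (*st == TASK_DORMANT)
                *st = TASK_PENDING;
        }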
  • Tasks can be classified by the reason (trigger) that causes a task to become enabled for execution.
  • tasks can be classified by the entity which they serve:
  • Peripheral: a task which serves a communication peripheral. Each time an RX peripheral receives a unit of data (e.g., 64 bytes of an Ethernet frame) in its FIFO, or when a TX peripheral has space for a unit of data available in its FIFO, that peripheral sends a service request to its servant task.
  • Timer: A timer can be preprogrammed with a period cycle count. Each time it expires, the timer sends a service request to its servant task.
  • Inter-task messages: Data (usually stream data) can be exchanged or handed over between tasks.
  • One approach for this is to send a message (e.g., containing the data pointer) to the other task, accompanied by a service request.
  • a task serves only one master (the master being the source of service requests). This means that peripherals, timers and inter-task messages can all request service in the same manner.
  • DMA: A task is permitted to yield the CPU during a DMA request (in this way the DMA will work in parallel with the CPU, and the CPU will not be stalled). The task usually wants to resume execution when the DMA action is completed. Upon completion, the DMA will send a service request to the originating task.
  • Self-request: There is a limit to the execution period (the time between two sequential task switch events) of tasks.
  • the execution of the current task usually may not be preempted by an external event, so it is the programmer's responsibility to provide for yielding the CPU before reaching the time limit per task.
  • the task can issue the self-request service request before yielding in order to schedule itself for future execution.
  • Task doorbell bits are the place where the service requests are registered.
  • a network processor task can be enabled for execution by several request sources: an ordinary priority request from a serial module (e.g., a data fragment is ready in the receive FIFO and was copied to a predefined SRAM location, or the transmit FIFO finished the transmission of the previous data fragment).
  • High priority request from a serial module (e.g., the RX FIFO over a threshold or the TX FIFO under a threshold).
  • Timer uses the same doorbell bit as the ordinary priority request from a serial module.
  • for each doorbell bit there is a mask bit.
  • the exceptions are the first two doorbell bits, which have a common mask bit, and the self request bit, which cannot be masked. If the mask bit is set, the task will be enabled for execution by the matching request; otherwise, the request is blocked.
  • some tasks are dedicated to serial channels (e.g., 6 for receive and 6 for transmit). These tasks will usually be activated by requests from serial channels. The rest of the tasks are expected to be activated by timers, messages from other tasks, or the host (e.g., doorbell bits 1 and 2 ).
  • a task which has more work to do than the maximum allowable latency should yield and use the self-request (doorbell bit 5 ) to be scheduled again (e.g., a timer handler task).
  • Any task can be activated by a completion of a DMA request that the task originated.
  • Mask bits can be set by software, and, in some cases, they are set automatically by hardware. A mask bit, together with the associated request bit, is cleared by hardware when the request is served by the task (the task becomes running). Mask bits can be set with special instructions and can optionally be specified in DMA and YIELD instructions. When a task issues a DMA request and this DMA is not the last action in the task, the programmer should set the DMA doorbell mask bit and clear all other mask bits (this task should not return to execution because of any other request, for example a serial request). When the task returns to execution after completion of the DMA, all mask bits will be clear.
  • the auto set in DMA and YIELD instructions instructs the hardware upon DMA completion to set the mask bits to the default state.
  • when a task issues its last DMA request, it sets the auto set indication.
  • the last YIELD instruction of a task should also set the mask bits to the default state.
  • the network processor DMA is able to serve two external busses (it can be a single DMA machine in some implementations).
  • An immediate DMA ID field is specified in DMA instructions. Its value is an index into a translation table (the table may be programmed by the CPU or by writing to special purpose registers on the network processor). The translation result contains information like: big/little endian, and so forth.
  • the hardware scheduler selects from the pending tasks the one with the highest priority, and starts execution of that task.
  • Various approaches could be taken to task scheduling.
  • the algorithm for selecting the next task for execution is as follows.
  • the tasks which participate in the selection of the next task for execution are the tasks for which their corresponding mask bit in the Task Global Mask Register (TGMR) is cleared.
  • Tasks which participate in the selection of the next task and have unmasked requests are divided into four groups and served in the following order (see the sketch after this list):
  • Highest priority group includes urgent requests of task numbers 0-31.
  • Second priority group includes regular requests of task numbers 0-31.
  • Third priority group includes urgent requests of task numbers 32-63.
  • Lowest priority group includes regular requests of task numbers 32-63.
  • within each group, the requests are serviced according to the task number: lower task number requests are served before higher task number requests.
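  • A hedged C sketch of this selection order (the 64-bit request bitmaps and the TGMR treatment follow the description above; the names are invented):

        #include <stdint.h>

        /* urgent/regular: bit i set means task i has an unmasked request of
           that kind; tgmr: bit i set means task i is excluded from scheduling. */
        static int select_next_task(uint64_t urgent, uint64_t regular, uint64_t tgmr)
        {
            uint64_t u = urgent  & ~tgmr;
            uint64_t r = regular & ~tgmr;
            const uint64_t LOW32 = 0xFFFFFFFFull;          /* tasks 0-31 */

            uint64_t group[4] = { u & LOW32,    /* urgent  0-31  (highest) */
                                  r & LOW32,    /* regular 0-31            */
                                  u & ~LOW32,   /* urgent  32-63           */
                                  r & ~LOW32 }; /* regular 32-63 (lowest)  */

            for (int g = 0; g < 4; g++)
                for (int t = 0; t < 64; t++)    /* lower task number served first */
                    if (group[g] & (1ull << t))
                        return t;
            return -1;                          /* no pending task */
        }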
  • the task resides in the higher priority class, starting from the time the urgent doorbell bit was set, until the time its doorbell mask is set to default by an option of the yield instruction, or until its doorbell mask is explicitly cleared by an instruction.
  • the tasks are in an urgent state as long as the handling of all pending urgent events is not completed (including when the task yields while doing a DMA during such a period).
  • when a task becomes running, the doorbell request bit which caused it to run and the matching mask bit are cleared.
  • the other request bits are not modified.
  • the regular and the urgent request bits are considered to be two levels of the same request and have a common mask bit. They are both cleared when the request is serviced.
  • a task can explicitly raise its priority to urgent, and return its priority to natural (normal priority, unless there is an urgent request pending) by using an agent instruction that writes to the doorbell register. This can be used to increase task priority for the period spent in a critical section or in an urgent code fragment.
  • instructions that yield the CPU take 2 cycles (they have a delay slot).
  • the other performance issue is the time it takes to restore the registers of the new task.
  • the registers of the next task are pre-loaded during the execution of the current task.
  • a global register is a general purpose register that is shared between all network processor tasks, and which can be safely used and modified by each task. (A task has to make sure that it completes the whole sequence, which includes the shared register use/update, needed for the action performed, before yielding the CPU.)
  • Inter-task messages: Sending messages between tasks is done using queues. Additional information is provided in the discussion regarding data structures.
  • Host to Network Processor task messages: The host is able to post a message to the input message queue of any task. The host also sets the doorbell bit of the target task. The host should not post messages to an input message queue to which a network processor task posts messages.
  • the network processor, either with a hardware mechanism or a software task, should notify the host when the host message queue changes its position relative to a close-to-full threshold. Using such a threshold permits less time-constrained handling of messages on the network processor side and eliminates the need for a check-if-not-full inquiry on the host side (see the sketch below).
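  • A simple C sketch of the threshold notification (the hook name is hypothetical):

        /* Notify the host only when the queue depth crosses the close-to-full
           threshold, instead of having the host check "not full?" per message. */
        extern void notify_host(int close_to_full);   /* hypothetical hook */

        static void on_queue_depth_change(unsigned prev_depth, unsigned depth,
                                          unsigned threshold)
        {
            int was = prev_depth >= threshold;
            int now = depth      >= threshold;
            if (was != now)
                notify_host(now);
        }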
  • Host to Network Processor commands: There is a command register that is written to so that the host can control network processor execution.
  • commands may include a reset, an activate task N, a deactivate task N (without aborting its current execution), and a start execution of task N (i.e., give task N a request without aborting the currently executing task).
  • Host-network processor parameters: For each task, an area is allocated at compilation time to hold the parameters that are initialized by the host and used by the task. The addresses of these areas are maintained together with the frame pointers and the entry points, and are loaded by the boot initialization routine (into R 6 , discussed further below) of each task. These parameters are also read by the host, and are used in the initialization drivers.
  • N is a global value, preferably programmed at initialization time. According to one approach, N (which should be odd) is 15, although other values of N may be used depending on design considerations.
  • the programmer should allocate the correct shadow area for the registers, which should be the number of tasks multiplied by the number of private registers. The programmer should use registers contiguously, starting from r 31 downwards.
  • some of the registers have special hardware support, as follows:
  • r 0 is interpreted as constant 0; writes are ignored.
  • FIG. 41 illustrates register r 1 ( 430 ) in greater detail in accordance with at least one embodiment of the present invention.
  • r 1 condition codes: sticky condition (1 bit); arithmetic conditions (equal/zero [1 bit], less than/negative [1 bit], greater than/positive [1 bit], carry [1 bit], overflow [1 bit]); doorbell bits (6 bits); and user defined condition bits (16 bits). (A sketch of these fields follows this list.)
  • r 31 : user defined condition codes (32 bits).
  • r 30 : entry point address of the task.
  • r 28 : link address 1 (function return address).
  • r 29 : link address 2 .
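  • The r 1 fields can be captured as C bit masks. The field widths below follow the list above, but the bit positions are assumptions chosen for illustration:

        #define R1_STICKY          (1u << 0)   /* sticky condition        */
        #define R1_EQ              (1u << 1)   /* equal / zero            */
        #define R1_LT              (1u << 2)   /* less than / negative    */
        #define R1_GT              (1u << 3)   /* greater than / positive */
        #define R1_CARRY           (1u << 4)
        #define R1_OVERFLOW        (1u << 5)
        #define R1_DOORBELL_SHIFT  6           /* 6 doorbell bits         */
        #define R1_DOORBELL_MASK   (0x3Fu   << R1_DOORBELL_SHIFT)
        #define R1_USER_SHIFT      12          /* 16 user defined bits    */
        #define R1_USER_MASK       (0xFFFFu << R1_USER_SHIFT)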
  • register allocation is similar to the approach taken for application binary interfaces, or ABI.
  • ABI is a standard that allows object code interoperability of functions compiled by different compilers or written in different languages. Register allocation according to this approach is as follows:
  • r 27 and other r2X registers are allocated to fixed meanings. Registers which are allocated to some meaning by convention are expected to maintain that meaning over function calls. They can be modified within functions, but only according to their meaning. Each task might have different registers allocated to fixed meanings.
  • r 27 : parameter area pointer and stack pointer of the task.
  • the compiler or the programmer statically allocates up to three stack frames per each task.
  • the compiler computes the area used by level0 code (first frame), and the maximum area needed for automatic variables of level1 functions of the task (second frame) and of level2 functions of the task (third frame).
  • the maximal stack frame will be allocated. All accesses to local variables will be translated by the compiler to offsets on r 27 , and there is no need for a stack pointer register for dynamically allocating frames on the stack and for modifying the stack pointer during function calls and returns.
  • the compiler limits the function call depth to two.
  • the compiler may also identify those functions which do not yield and do not call other functions, allocate their frame in an area common to all tasks, and use absolute addresses to access local variables (this may save memory per task in this case).
  • Other registers can also be allocated by convention to: data unit address in internal memory, data unit pointer in external memory, connection table base address, and so forth. Registers which are allocated to some meaning by convention are expected to maintain the meaning over function calls. Such registers can be modified within functions, but only according to their meaning.
  • r 16 , r 17 : These registers do not preserve their value over any function call. They can be used without saving in level2 functions, and in level1 functions which do not expect the value to be preserved over a level2 function call.
  • the r 16 and r 17 registers are used to pass parameters and get results to/from level1 and level2 functions. Even in the case when there are no parameters passed, these registers do not preserve their value over any function call.
  • the compiler forbids functions of more than two parameters.
  • the compiler and the assembly programmer may use the r 16 , r 17 order for level1 functions and the r 17 , r 16 order for level2 functions. This may eliminate saving and restoring of r 16 when both level1 and level2 functions have a single parameter. Also, r 16 and r 17 are the only private registers which can be modified in level2 functions.
  • r 18 -r 19 : These registers should not be modified within level2 functions. They can be used without saving in level1 functions, and they do not preserve their value over level1 function calls.
  • r 20 -r 26 : These registers should not be modified within level1 and level2 functions. These registers can be used without saving in level0 code. Some of these registers can be assigned to a fixed meaning, in which case they can be modified within functions according to their fixed meaning.
  • r 0 -r 15 are scratch or global registers that are common to all the tasks, and which are not changed by the hardware task switching.
  • r 2 -r 5 hold information that is frequently used and shared between tasks, such as the buffer array base address (r 2 ) and the free buffer pool address (current) (r 3 ). These registers can hold popular (often used) constants, such as a table base address or an arithmetic constant.
  • r 8 -r 15 are used to hold information which does not need to be preserved across yields, such as intermediate results of an arithmetic computation.
  • r 12 , r 13 : These registers preserve their values over calls to level2 functions which do not yield.
  • r 14 , r 15 : These registers preserve their values across calls to level1 and level2 functions which do not yield.
  • Table 3 summarizes the register conventions discussed above.

        TABLE 3
        Register   Private or  Special HW        Fixed    Modified by       Used as
                   common      handling          meaning  functions         parameter
        r0         Common      constant 0        NA       Yes               No
        r1         Common      conditions        No       Yes               No
        r2-r5      Common      No                Part     within fixed      No
                                                          meaning
        r6-r11     Common      No                No       level 1 & 2       No
                                                          & yield
        r12, r13   Common      No                No       level 1 & yield   No
        r14, r15   Common      No                No       No                No
        r16, r17   Private     No                No       level 1 & 2       Yes
        r18, r19   Private     No                No       level 1           No
        r20-r27    Private     No                Part     No                No
        r28        Private     level 1 return    No       No                No
                               address
        r29        Private     level 2 return    No       level 2           No
                               address
        r30        Private     entry point       NA       Yes (TBD)         No
        r31        Private     conditions        No       Yes (TBD)         No
  • r 8 -r 9 : level2 function code which does not contain a yield; level1 function code which does not contain a yield or a call to a level2 function; and level0 code which does not contain a yield or a function call.
  • r 10 -r 11 : level1 function code which does not contain a yield or a call to a level2 function which yields.
  • r 12 -r 15 : level0 code which does not contain a yield or a call to a function which yields.
  • r 16 , r 17 : any level2 function code; level0/1 function code which does not contain a function call.
  • r 18 , r 19 : any level1 function code; level0 code which does not contain a function call.
  • r 20 -r2X: any level0 code.
  • registers r 1 and r 31 contain indications which can be used in branch conditional instructions. They can be explicitly updated by any instruction, but some of the bits in r 1 are implicitly updated by compare instructions and by arithmetic/load instructions. The carry bit is also implicitly updated by some arithmetic instructions.
  • R 1 is a global register; its value is not preserved after task switching.
  • R 31 has a copy per task.
  • the doorbell and mask fields in r 1 contain a copy of the doorbell bits of the current task.
  • the mask bits are a copy of the task's mask bits. Writes to these fields are ignored.
  • Compare instructions modify the three condition code bits, LT, EQ, and GT.
  • the compare instructions can also update the sticky bit.
  • These instructions specify a condition, such as one of NONE, LT (less than), LE (less than or equal to), EQ (equal to), NE (not equal), GT (greater than), or GE (greater than or equal to). If the condition is satisfied by the compare, the sticky bit is set; otherwise, the sticky bit is not altered. This feature is useful to efficiently implement several tests of error cases as well as other AND/OR conditions. Compare instructions also have an option to overwrite the sticky bit.
  • FIGS. 87 - 90 illustrate various mechanisms for using the accumulative condition flag, i.e., the sticky bit, to execute branch instructions in processing systems, such as a network processor or communications processor.
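  • The accumulate-by-sticky-bit behavior amounts to an OR of several tests with a single final branch, as in this C sketch (names are illustrative):

        #include <stdint.h>
        #include <stdbool.h>

        static bool sticky;               /* models the sticky condition bit */

        /* A satisfied compare sets the sticky bit; an unsatisfied one
           leaves it unaltered. */
        static void cmp_eq_sticky(uint32_t a, uint32_t b)
        {
            if (a == b)
                sticky = true;
        }

        /* Several error tests, one branch:
             sticky = false;              (compare option to overwrite it)
             cmp_eq_sticky(len, 0);
             cmp_eq_sticky(crc_err, 1);
             cmp_eq_sticky(hdr_bad, 1);
             if (sticky) handle_error();  (branch tests the single sticky bit) */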
  • Serial status: The serial status indications (e.g., error, over-run/under-run, and last), optionally together with the data fragment size, should be loaded by the programmer from a fixed memory location into r 1 or r 31 .
  • User defined indications: The user can keep state information in the user-defined part of r 1 or r 31 . It may be desirable for an indication to be created once and used several times. The user can also load into r 1 or r 31 a part of an array of indications.
  • Arithmetic instructions modify the condition codes: they can modify the zero, negative, and positive condition code bits. The following arithmetic instructions also modify the carry condition code bit: ADD, SUB, ADD1, SUB1, SRR, SLR, SL1, SR1, and CLB.
  • Conditional branch/jump and yield instructions test a single condition bit, which can be any bit in r 1 or r 31 , and compare that bit to either 0 or 1.
  • Conditional branch/jump instructions take three cycles when taken and 1-2 cycles when not taken, while unconditional branch/jump instructions take two cycles; in both cases they have an optional delay slot.
  • Conditional instructions: In most of the instructions, the 3-bit conditional execution field is used to specify whether the instruction is unconditional or is conditional upon the sticky condition bit being true or false. One of the three bits is reserved for future use.
  • Branch/jump instructions can be used to call subroutines. They have an opcode bit which specifies whether the return address is to be saved, and another opcode bit which specifies whether the return address should be saved in r 28 or r 29 .
  • the return address is either PC+1 or PC+2, depending if the delayed branch option is used.
  • the function call depth is limited to two, and the depth of each call/return is specified in the instruction. Functions which do not call other functions should be defined and called as depth 2 .
  • R 30 contains the address at which the task will resume execution after a yield. It is modified by any instruction which modifies r 30 and is optionally modified by the YIELD instruction. It can optionally be modified by DMA instructions which yield.
  • Program counter: According to one approach, there is a single program counter in the system (not per-task), and it is not directly accessible by the software in any manner.
  • SPRs are network processor core registers that are not defined as one of the General Purpose Registers (GPRs). Special instructions (SPRL and SPRS) are defined to enable the movement of data between SPRs and GPRs. Special Purpose Registers in the network processor include the Refetch SPR 440 , the Task SPR 442 , the Trap SPR 444 , and the Mindex SPR 446 , as shown in FIG. 42.
  • Refetch SPR 440 : The refetch SPR is a 32-bit register that holds the first and second program memory addresses of the instructions to be refetched when getting out of a trap. Bits 15:0 hold the first instruction address (called refetch) and bits 31:16 hold the second instruction address (called next_refetch).
  • when the network processor receives a break request and is not already in the trap mode, it continues instruction execution from the program location pointed to by the break vector, and the trap mode bit is set (in the task SPR).
  • the address of the instruction that would have been executed but for the occurrence of the breakpoint is saved in bits 15:0 of the refetch SPR.
  • the following instruction that was supposed to be executed but for the occurrence of the breakpoint is saved in bits 31:16 of the refetch SPR.
  • Leaving the trap mode is performed by executing the RFT instruction.
  • This instruction causes a program jump to the program location specified by the refetch SPR bits 15:0, followed by the program location specified by the refetch SPR bits 31:16. This also clears the trap mode bit.
  • the refetch SPR is a read/write register that can be accessed through the SPRL and SPRS instructions.
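  • Packing and unpacking the refetch SPR fields in C (a direct transcription of the bit assignments above):

        #include <stdint.h>

        static uint32_t pack_refetch(uint16_t refetch, uint16_t next_refetch)
        {
            return ((uint32_t)next_refetch << 16) | refetch;  /* 31:16 | 15:0 */
        }

        static uint16_t refetch_addr(uint32_t spr)      { return (uint16_t)spr; }
        static uint16_t next_refetch_addr(uint32_t spr) { return (uint16_t)(spr >> 16); }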
  • Task SPR 442 : The task SPR is a 32-bit read-only register.
  • the task SPR contains information on the current executing task and on the next task to be executed:
  • DOORBELL REQ reflects the doorbell request bits of the current task.
  • CTID reflects the Current Task ID.
  • NTID reflects the Next Task ID.
  • NTV reflects Next Task Valid bit.
  • MASK reflects the doorbell mask bits of the current task.
  • COUNT reflects the doorbell counter value of the current task.
  • the network processor switches to the next task.
  • the NTID is loaded into the CTID and the next task ID together with the next task valid bit from the doorbell are sampled into the NTID and into the NTV, respectively.
  • if the NTV bit is set, then the NTID is locked and there will not be further sampling. If the NTV bit is cleared, then the doorbell next task ID will continue to be sampled on each cycle until the valid bit is set.
  • the new valid next task ID is used by the pre-load logic to pre-load the next task's context.
  • the task SPR can be read by using the SPRL instruction. All other bits of the task SPR are reserved and will be read as zero.
  • the CTID, NTID and NTV bits are cleared by reset. The default state (and the reset state) of the mask of each task is 0b100.
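  • The task SPR fields can be read out with shift-and-mask accessors. The field widths follow the description (6-bit task IDs, a valid bit, a 3-bit mask whose default is 0b100), but the bit positions used here are assumptions for illustration only (the COUNT field is omitted, as its width is not specified):

        #define TSPR_CTID(x)  (((x) >> 0)  & 0x3Fu)  /* current task ID               */
        #define TSPR_NTID(x)  (((x) >> 6)  & 0x3Fu)  /* next task ID                  */
        #define TSPR_NTV(x)   (((x) >> 12) & 0x1u)   /* next task valid bit           */
        #define TSPR_MASK(x)  (((x) >> 13) & 0x7u)   /* doorbell mask (default 0b100) */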
  • Trap SPR 444 : The trap SPR is a 32-bit register.
  • the trap SPR includes the trap mode bit, the illegal instruction status bit, and the breakpoint status bits:
  • Bit 0 , Illegal Instruction (IL): When there is an illegal instruction, the IL bit is set. The IL bit can be cleared only by reset.
  • Bit 1 , Trap Mode (TRAP): When the TRAP bit is set, the network processor is in the trap mode. A breakpoint event causes the program flow to jump to a program location (pointed to by a given vector) and to enter the trap mode of execution by setting the trap mode bit. When in trap mode, no breakpoint and/or patch events will be accepted. The trap mode bit will be cleared by an RFT (Return From Trap) instruction or by writing zero to the trap mode bit. When the trap bit is cleared, further breakpoints and/or patches will be accepted.
  • Bit 2 , Program Address Break (PAB): This is a breakpoint status bit which, when set, indicates that a program address breakpoint occurred. This bit is cleared by an RFT instruction or by writing zero to it.
  • Bit 3 , Data Address Break (DAB): This is a breakpoint status bit which, when set, indicates that a data address breakpoint occurred. This bit is cleared by an RFT instruction or by writing zero to it.
  • Bit 4 : This is a breakpoint status bit which, when set, indicates that a task ID breakpoint occurred. This bit is cleared by an RFT instruction or by writing zero to it.
  • Bit 5 , Yield Break (YB): This is a breakpoint status bit which, when set, indicates that a yield breakpoint occurred. This bit is cleared by an RFT instruction or by writing zero to it.
  • Semaphores are commonly used when a section of code that contains yields should not be executed by more than one task at a time. This happens when the code is handling some data structure resource that is shared between tasks.
  • Current examples which might entail the use of semaphores are: adding and removing from a linked list queue whose descriptor is in external memory; releasing a multicast buffer (update of the reference count); emulation of a task's message queue in external memory; and a task that tries to put an inter-task message into a full message queue can use the hardware mechanism to wait until the queue is not full.
  • Network processor software semaphores in accordance with the present invention are implemented over a hardware mechanism which makes it possible to prevent the scheduling of tasks specified in a bitmap (the TGMR register).
  • the number of semaphores is limited only by size of the memory space allocated for semaphore support. Every semaphore requires a one byte indication of free/busy state plus a 64-bit mask of tasks registered for the particular semaphore. While performing the critical section protected by a semaphore, the task's priority should be raised and also all issued DMAs should be treated as urgent in order to minimize semaphore holding time.
  • a semaphore ID (number) is chosen based on a simple arithmetic operation (e.g., a MOD of significant bits) on the resource address, as in the sketch below.
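  • For instance, assuming a power-of-two number of semaphores, the semaphore ID can be computed as in this C sketch (the pool size and shift amount are illustrative):

        #include <stdint.h>

        #define NUM_SEMAPHORES 16u           /* assumed pool size */

        static unsigned sem_id(uint32_t resource_addr)
        {
            /* MOD of significant bits: drop byte-offset bits, then wrap. */
            return (resource_addr >> 2) % NUM_SEMAPHORES;
        }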
  • the network processor scheduler hardware includes a bitmap in an SPR register (SPR bitmap). Each bit in the bitmap, when set, prevents the scheduling of the task whose ID corresponds to the bit index.
  • the network processor software can add to or remove from the above SPR bitmap a list of tasks specified in a software bitmap.
  • the software registers in the SPR bitmap those tasks which are prevented from execution because they are waiting for one of the currently occupied semaphores (see bad_list below).
  • the software holds an indication in internal memory for each semaphore that indicates whether that semaphore is currently in use/occupied (see semX_indic below.)
  • the software also holds for each semaphore a 64 bit bitmap corresponding to the tasks that are currently awaiting access to the semaphore (see semX_mask below). For each task awaiting the semaphore, this bit, which corresponds to that task's ID, is set.
  • the software also holds the task ID of each task in the form of a 64 bit mask (where only the bit corresponding to the task ID is set in this mask).
  • Exemplary code for acquiring and releasing the semaphore:

        ld      semX_indic            ; load the "semaphore is busy" indication - a byte or a bit
        bc.neq  sem_occupied          ; and test it
        ; Do the critical section code and release the semaphore.
        sti     0xff, semX_indic      ; if it was not occupied, grab it and do the critical section
        seturg  on
        ; CRITICAL SECTION X
        seturg  off
        sti     0, semX_indic         ; release the semaphore
        clear semX_mask bits in bad_list  ; agentw. Let all in, the highest priority task will be selected.
        . . .                         ; rest of the task code and yield
  • Exemplary code for a task that finds the semaphore occupied:

        sem_occupied:                     ; register myself on the semaphore, and prevent myself from running
        ld.d    r2,r3, semX_mask          ; get the 64-bit mask of tasks waiting for this semaphore
        set bit of current task in r2,r3  ; "Optimization": the current task_id is prepared in a doubleword mask in the init routine.
        st.d    r2,r3, semX_mask          ; save the mask for common use
        set semX_mask bits in bad_list    ; agentw. Prevent everyone (and myself) who is waiting on semX from being scheduled in.
  • the task sets the indication and enters the critical section. After completion of the critical section (e.g., which contains external memory accesses and task switches), the task clears the semaphore indication. It is possible that while the task was in the critical section other tasks may have registered themselves as awaiting access to the semaphore and prevented themselves from being scheduled in by the hardware scheduler. So the current task will enable these other tasks, which are registered as awaiting scheduling for the semaphore, by removing their list from the hardware bitmap.
  • the task branches to semX_occupied, registers itself in the list of tasks awaiting the semaphore, and disables those tasks by adding the list to the hardware bitmap. Task switching is then initiated after setting the resume point to the semX_released label. When the task resumes execution, the task deregisters itself from the list of tasks that are awaiting the semaphore, and prevents other tasks on the list from being scheduled by adding them to the hardware bitmap. The task then executes the code, which checks the semaphore indication.
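  • As a rough illustration of this flow, the following C-style sketch mirrors the assembly sequence above. The helper names (tgmr_set_bits, tgmr_clear_bits, yield_until_scheduled) are hypothetical stand-ins for the hardware bitmap operations and the task switch, and are not part of the specification.

      #include <stdint.h>

      /* Hypothetical stand-ins for the hardware scheduler bitmap operations. */
      extern void tgmr_set_bits(uint64_t mask);   /* prevent tasks from being scheduled */
      extern void tgmr_clear_bits(uint64_t mask); /* allow tasks to be scheduled again  */
      extern void yield_until_scheduled(void);    /* task switch; resume when enabled   */

      struct semaphore {
          volatile uint8_t  busy;  /* free/busy indication (semX_indic)         */
          volatile uint64_t mask;  /* tasks awaiting this semaphore (semX_mask) */
      };

      void sem_enter(struct semaphore *s, uint64_t my_task_mask,
                     void (*critical_section)(void))
      {
          while (s->busy) {
              /* Occupied: register on the semaphore and block every waiter,
               * including this task, from being scheduled in. */
              s->mask |= my_task_mask;
              tgmr_set_bits(s->mask);
              yield_until_scheduled();       /* resumes at "semX_released"       */
              /* Deregister, keep the remaining waiters blocked, and retry.      */
              s->mask &= ~my_task_mask;
              tgmr_set_bits(s->mask);
          }
          s->busy = 0xff;                    /* free: grab the semaphore         */
          critical_section();                /* may yield and access ext. memory */
          s->busy = 0;                       /* release                          */
          tgmr_clear_bits(s->mask);          /* let all registered waiters in    */
      }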
  • a method of employing semaphores to limit access to a shared resource used by a multi-tasking processor comprises the steps of providing a first bitmap in a register that prevents specified tasks from running because the specified tasks are awaiting access to an occupied semaphore, storing an indication in memory that indicates whether the semaphore is occupied, storing a second bitmap in memory that identifies tasks that are awaiting access to the semaphore, and attempting to access the semaphore based on checking the indication in memory.
  • the method can further comprise the steps of setting the indication to indicate that the semaphore is occupied and performing the processing for the task, wherein performing the processing for the task includes critical section execution.
  • the critical section can include at least one of external memory accesses and task switches.
  • the method can further comprise the step of resetting the indication to indicate that the semaphore is available after the step of performing the processing for the task. Furthermore, the method additionally can comprise the step of removing from the first bitmap those tasks now included in the second bitmap in memory that identifies tasks that are awaiting access to the semaphore, thereby allowing those tasks to be scheduled for access to the semaphore.
  • when a task checking the indication in memory determines that the semaphore is occupied, the method can further comprise the steps of including the task in the second bitmap and revising the first bitmap to reflect the tasks from the list in the second bitmap.
  • the method further can include the steps of removing the task from the second bitmap when the indication reflects that the semaphore is available and revising the first bitmap to reflect the tasks from the list in the second bitmap, thereby allowing the task to access the semaphore and perform the task processing.
  • a system employing semaphores to limit access to a shared resource used by a multi-tasking processor comprises a first bitmap in a register that prevents specified tasks from running because the specified tasks are awaiting access to an occupied semaphore, an indication in memory that indicates whether the semaphore is occupied, a second bitmap in memory that identifies tasks that are awaiting access to the semaphore, and means for attempting to access the semaphore based on checking the indication in memory.
  • the means for attempting can be a processor executing a task, wherein the task can be enabled to access the semaphore when the indication reflects that the semaphore is available.
  • the task can be enabled to register itself with the second bitmap and to update the first bitmap when the indication reflects that the semaphore is occupied.
  • the task execution can include processing a critical section including at least one of external memory accesses and task switching, wherein the indication in memory is reset to indicate that the semaphore is available after processing the critical section.
  • Referring to FIG. 43, an exemplary software data model 450 is illustrated in accordance with at least one embodiment of the present invention.
  • data allocated in internal memory is divided into global data and task/function data.
  • Global data has a global name scope and can be symbolically referenced from anywhere in the code. References are translated to absolute addressing.
  • Local data definitions have a local name scope (detailed below) and references are translated by the assembler to r 27 + immediate offset.
  • Functions can be defined either within a task definition or outside of any task definition. Function names, which are defined outside of any task definition, have global name scope and can be called from any place in the code. They can access their local data and the global data. Function names which are defined within a task definition have a scope of the task definition. They can be called only by level0 code of that task type. They can access the common data of the task (detailed below).
  • For each task type, the assembler creates two data sections, level0 data and level1 data. Their sizes will be used by the PP software to allocate memory for the static frame of each task instance of this task type, and to initialize r 27 of the task instance.
  • a task definition can appear several times for the same task type. Such a definition shall be referred to as a task fragment.
  • the data definitions in each of the fragments are in union with the data definitions in each of the other fragments (overlap, occupy the same memory location).
  • an optional common keyword can be used, in which case the data definitions will overlap with any other data definitions, and the scope of the data names will be all the fragments of the same task type.
  • the non-common fragments of a task can be used to implement the different functions (referred to as handlers), which the generic task does.
  • the pointer to the handler is passed in the inter-task message. All the handlers will return to a label in the common part of the task. The common part of the task will only handle the input message queue and dispatch to the handlers.
  • the size of the level0 frame for a task type is the size of the data definitions in the common part plus the maximum of the sizes of the data definitions in non-common fragments of the task type.
  • Level1 functions can be called only explicitly (i.e., they cannot be called using a pointer). The assembler will find all the calls to level1 functions and will compute the level1 frame size for this task type as the maximum of the sizes of the data definitions of level1 functions called by this task type.
  • Level2 functions can be called via a pointer. The assembler will check that the data allocated in each level2 function is not more than a system level constant (80 bytes) and will add this constant to the offsets of data definitions of level1 functions.
  • The scope of labels is local in functions and task fragments, and global to all fragments of that task type when in the common task fragment. Labels in task fragments and level2 function names can be passed to the PP software (flow manager) in the object file using the directive: .export label_name.
  • the assembler will produce a single code section, which will contain the code of all the tasks and functions.
  • Other function types might be considered, such as ones which do not have local data in memory or which receive as a parameter a pointer to a scratchpad area for their use.
  • there may also be code which is not associated with tasks and functions. (All the labels in this code will have global scope. It might be used for additional types of functions.) In cases where the caller's frame is no longer needed (an error condition, for example), a function might call a function of the same level, which will use the caller's frame.
  • Instruction addressing: All instruction addresses are word addresses; they are shifted left 2 bits to generate the memory address.
  • PC relative: Branch to an offset from the current program counter, specified in the 12-bit signed immediate instruction field.
  • Implicit task entry point: During a task switch, jump to the entry point of the next enabled task (held in r 30 of that task).
  • Data addressing: Data addresses are byte addresses that are taken as is, regardless of the access size.
  • Register with offset: The address is the sum of the value contained in the register and the sign-extended 8-bit immediate instruction field.
  • Register with index register: The address is the sum of the value contained in the register and the value contained in the index register.
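  • For illustration, the two register-based data addressing modes reduce to the following address arithmetic (a sketch; the 8-bit immediate is sign extended as described above):

      #include <stdint.h>

      /* Register with offset: base register plus a sign-extended 8-bit immediate. */
      static inline uint32_t addr_reg_offset(uint32_t base_reg, int8_t imm8)
      {
          return base_reg + (uint32_t)(int32_t)imm8;   /* sign extension */
      }

      /* Register with index register: base register plus an index register. */
      static inline uint32_t addr_reg_index(uint32_t base_reg, uint32_t index_reg)
      {
          return base_reg + index_reg;
      }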
  • the following instruction groups are supported: arithmetic and logic operations; register data manipulation; load/store (to internal memory); program flow; task yielding; and agent instructions (DMA, communication peripherals, CRC, CAM, etc.).
  • the network processor pipeline 460 consists of five stages: fetch, decode, address, execute and write.
  • the network processor pipeline 460 enables a standard design flow and standard memories.
  • the network processor can perform an instruction together with a data load or store from/to a unified internal memory in each cycle.
  • the network processor pipeline 460 enables an arithmetic instruction to use as its source operands data that was loaded by the previous instruction without any bubble.
  • Conditional jump and branch instructions incur no penalty when the condition is not taken, while a penalty of 2 cycles occurs if the condition is taken and there is a change of flow. To reduce this penalty, delayed jump and branch instructions are provided.
  • the network processor general purpose registers (r 0 -r 31 ) are updated during the write stage without distinction as to whether they are updated from a load operation or from a data ALU operation.
  • during the fetch stage, the network processor core places the next instruction fetch address on the fetch address bus.
  • This next fetch address can originate from the Program Counter (PC) in the normal sequential flow or can come from the address ALU when there is a jump or branch instruction.
  • a 32-bit new fetched instruction is assumed to be ready during the next clock cycle after a specific access time from the specific internal memory. Since the network processor internal SRAM is unified for both data and programs, and since it should support 64-bit access for data, the network processor initiates a fetch of 2 instructions (64 bits).
  • the Fetch Unit (FTU) contains a fetch buffer to hold fetched instructions that have not yet been processed.
  • during the decode stage, the new instruction fetch is completed and the decoding of the new instruction is performed.
  • the decode logic determines the type of the incoming instruction and the operations that should be performed at each pipeline stage for the execution of the instruction.
  • the data address for a load from memory or for a store to memory is calculated by the address ALU.
  • the address ALU gets its source operands, which can originate from one or two of the GPR registers, an immediate address offset, or an absolute address.
  • the destination address is also calculated by the address ALU.
  • One of the address ALU inputs is the PC itself for branch address calculation.
  • the core places the new data address on the Data Address Bus (DAB) or the new program address (for change of flow) on the Fetch Address Bus (FAB). If the instruction is a store, data to be stored into memory is placed on the Store Data Bus (SDB) during this stage.
  • the data ALU execution is done at the execute stage.
  • Source operands are read from the register file to the Data ALU, and data arithmetic is performed. For example, if the instruction is an ADD of r 1 with r 2 , then r 1 and r 2 are mux-ed into the data ALU and arithmetic addition is performed during the execute stage.
  • Condition Codes (CC) are also calculated at this stage. By the end of the execute stage, the data arithmetic execution result together with the CC are ready.
  • the register file is updated.
  • the update can come from various sources: a destination of an arithmetic result, loaded data from memory, a move from a Special Purpose Register (SPR), or a move of an immediate value into the register file.
  • the PC is also latched into one of the two LINK registers inside of the register file.
  • the CC register is also updated at this stage.
  • the network processor pipeline is designed to enable a standard design flow with standard memory interfaces. It is a five stage pipeline which is optimized for sequences that are frequently used and sequences that have a large effect on performance. By optimizing some of the sequences, there may be other sequences that might be problematic. These may be solved by inserting software restrictions. Table 5 below lists some of the sequence restrictions according to one embodiment of the invention.
  • TABLE 5
    1. Register update followed by a store: Any instruction which updates an r register (for example: move instructions, ALU instructions, load instructions, etc.) may not be followed immediately by a store instruction of that same r register. This includes an instruction that updates the CC flags in r1 followed by a store of r1.
    Instruction inside the delay slot of a change of flow: Change of flow instructions include: jump or branch instructions, yield instructions, the case instruction, the RFT instruction, and DMA instructions with the yield option set.
    5. Instruction inside the delay slot of a "yield": The only instructions that are allowed in the delay slot of a yield instruction are: store instructions, agent write instructions, and DMA instructions (only when the yield option is not set).
    6. Change of the sticky bit before a conditional store or agent access: Any instruction which updates the conditional sticky bit may not be followed immediately by a conditional store instruction, a conditional agent write instruction, or a conditional agent read instruction.
    7. SPRS to nrefetch SPR followed by an RFT: An SPRS instruction with the nrefetch SPR as its destination may not be followed immediately by an RFT instruction.
    r31 register update followed by a conditional change of flow: Any instruction which updates the r31 register may not be followed immediately by a conditional change of flow instruction which uses one of the r31 bits as a condition.
  • the flow illustrated in FIG. 45 starts with the update of the Program Counter (PC) with the address of the next instruction.
  • the Fetch Address Bus (FAB) gets its content from the PC and starts a memory fetch access.
  • a new instruction is available on the Fetch Data Bus (FDB) during the decode cycle and is passed directly to the decode logic.
  • the address ALU operates during the address stage and sends a new data address to the data memory. If the operation is a load then the loaded data is available on the Load Data Bus (LDB) during the execute stage.
  • if the operation is a store, then the stored data is placed on the Store Data Bus (SDB) during the address stage.
  • the Data ALU gets its source operands and executes the data arithmetic at the execute stage. By the end of the execute stage, the data arithmetic result and the Condition Codes (CC) are ready to be latched into the destination register on the next clock edge of the write cycle. If it is a load instruction, then the loaded data is also latched into the destination register on the positive clock edge of the write cycle. All register update operations go through the rf_in_mux, and the actual update is on the write cycle. An update to any one of the Special Purpose Registers (SPRs) is also done at the write stage.
  • the Vobla (network processor [NP]) Memory (VMEM) 500 is a small and fast memory located near the network processor NP core.
  • the VMEM 500 serves the NP with three separate ports and the rest of the system with two ports.
  • the main features of the VMEM according to one embodiment of the invention include: operates with the NP clock; supports multiple ports (e.g., five ports); maximum bandwidth of, for example, about 8 Gbytes/second (5 accesses × 200 MHz × 8 bytes); 64 Kbytes of SRAM—first area between 0 to 48 KB and second area between 64 to 80 KB.
  • the SRAM in one embodiment, is divided into three sub areas: 0 to 8 K—data and tasks context; 8 to 48 K—data and program; and 64 to 80K—program.
  • the above 64 KB memory space can be accessed by the ring for writes and by the multireader for reads.
  • the priority in each one of the memory areas is according to the following rule: (1) ring interface—highest priority; (2) program; (3) data (load/store); (4) context; and (5) multi reader—lowest priority.
  • the VMEM supports the NP through three ports: data (load/store), program, and context.
  • the VMEM supports the ring interface and the NP compound through two ports: multireader and ring writer.
  • Each access of this bus is for aligned double words (64 bits): a 15-bit address bus, A(17:3), allows access to 32K double words or 256 Kbytes (A(2:0) are don't care bits in this case); and a 64-bit data out bus.
  • the data size can be a byte (8 bits), half-word (16 bits), word (32 bits), or double word (64 bits).
  • the access has to be aligned to the data size (a half word on a half word boundary, etc.). All the accesses are right aligned: byte in bits 0 to 7, half-word in bits 0 to 15, and word in bits 0 to 31.
  • a special data aligner for this port will arrange the incoming and outgoing data according to the address and size of the transaction.
  • the interface will generate the byte enable signals to the VMEM according to address bits A(2:0) and the size of the transaction, where: 16-bit address bus—A(15:0)—allows access to the first 64 Kbytes of the VMEM address space; A(2:0) and the data size control the enable signals; 48 Kbytes of SRAM in the current implementation; 64-bit data out bus for read access; and 64-bit data in bus for write access.
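  • A sketch of how such byte enables can be derived from A(2:0) and the transaction size is shown below; the lane numbering convention is an assumption for illustration, not the exact hardware equation.

      #include <stdint.h>

      /* Generate 8 byte-enable bits for a 64-bit wide SRAM from the low address
       * bits A(2:0) and the access size in bytes (1, 2, 4 or 8). The access is
       * assumed to be aligned to its size, as the text requires. */
      static inline uint8_t byte_enables(uint32_t addr, unsigned size_bytes)
      {
          unsigned lane = addr & 0x7u;                        /* A(2:0)          */
          uint8_t  mask = (uint8_t)((1u << size_bytes) - 1u); /* size_bytes ones */
          return (uint8_t)(mask << lane);                     /* shift into lane */
      }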
  • the data size is a word (32 bits) for write access and a double word (64 bits) for read access.
  • the interface will generate the byte enable signals to the VMEM according to address bit A(2). No data aligner is needed for this interface, where: 11-bit address bus—A(12:2)—allows access to the first 2K words (8 Kbytes) of the memory space (A(1:0) and A(15:13) are don't care bits in this case); 64-bit data out bus for read access; and 32-bit data in bus for write access.
  • the data size is a double word (64 bits).
  • the data size can be from 1 to 8 bytes, and the data should fall within one aligned double word so that only one access to the memory is needed.
  • the data is left aligned (big endian) and a special data aligner for this port will arrange the incoming data according to the VMEM address.
  • the interface will generate the byte enable signals to the VMEM according to address bits A( 2 : 0 ) and the size of the transaction, where 18 bits Address bus—A( 17 : 0 )—allows access to all the VMEM address space; and 64 bits data in bus for write access.
  • the VMEM uses two kinds of SRAM modules: a single port SRAM organized as 512 words of 64 bits (4 KB) and a single port SRAM organized as 2048 words of 64 bits (16 KB). Each SRAM gets 8 Byte Enable (BEs) control signals.
  • the SRAM array is divided into 13 SRAM modules and the overall size is 64 Kbytes.
  • the first group is between 0 to 48K bytes. In terms of address space, each pair of SRAMs occupies 8 Kbytes.
  • the odd SRAM contains the first, third, etc., 8 bytes (0-7, 16-23, etc.), while the even SRAM contains the second, fourth, etc., 8 bytes (8-15, 24-31, etc.).
  • the second group is between 64 to 80K bytes. This group includes a single 16K byte SRAM.
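  • As an illustration of this interleaving, the sketch below maps a byte address in the first group to its SRAM pair and to the odd or even macro; the numbering is one plausible reading of the description, not a statement of the actual decode logic.

      #include <stdint.h>

      struct sram_select {
          unsigned pair;  /* which 8 Kbyte SRAM pair (0..5) in the 0-48K group */
          unsigned even;  /* 0 = odd macro (bytes 0-7, 16-23, ...),
                             1 = even macro (bytes 8-15, 24-31, ...)           */
      };

      static inline struct sram_select map_addr(uint32_t byte_addr)
      {
          struct sram_select s;
          s.pair = (byte_addr >> 13) & 0x7u; /* 8 Kbytes per pair              */
          s.even = (byte_addr >> 3)  & 0x1u; /* macros alternate every 8 bytes */
          return s;
      }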
  • the control is responsible for supporting the SRAM macros with addresses and data, and for routing the data from the SRAMs to the right bus.
  • a contention occurs when there are two or more accesses to the same SRAM macro. In that case, a priority mechanism is needed for avoiding starvation.
  • the VMEM sends a stall signal and the delayed transaction is kept by the VMEM until receiving service.
  • the write access from the ring Interface port has the highest priority.
  • Data In aligners: There are two data aligners in the Data In path. The data aligner for the NP data bus: the input to this aligner is aligned to the right, with a size of 1, 2, 4 or 8 bytes.
  • The data aligner for the Ring write bus: the input to this aligner is aligned to the left (big endian), with a length of 1 to 8 bytes, which is part of one double word (64-bit) entry in the SRAM.
  • the NP context address port connects to the two muxes that support the two SRAM macros occupying addresses 0 to 8K bytes.
  • the NP program address bus connects to the ten muxes that support the ten SRAM macros at addresses 8K to 48K bytes.
  • the NP data address bus is connected to the 12 address-in muxes (the last SRAM is not connected to the data bus).
  • Data Out muxes: There are four 64-bit data out muxes.
  • one of these muxes is connected to the 10 SRAM macros that reside at addresses 8K to 48K bytes and to the one SRAM macro that resides at addresses 64K to 80K bytes.
  • a 2 to 1 mux for the NP context data out bus: this mux is connected to the 2 SRAM macros that reside at addresses 0 to 8K bytes.
  • Data Out aligner There is a data aligner for the NP data out bus. The output of this aligner is right aligned according to the access size (1, 2, 4 and 8 bytes) and the access address.
  • A block diagram of the network processor core according to one embodiment of the invention was provided in FIG. 37.
  • the network processor compounds are those modules of the ring network implemented by the network processor that are tightly connected to the network processor core.
  • Network processor compounds share a single ring interface and address space with the network processor core.
  • the network processor core and the network processor compounds are all elements of a single ring member.
  • Network processor compounds include agents and non-agents. Agents are programmed by network processor commands through the network processor agent interface, discussed below. Non-agents are programmed by internal agents or through the ring interface by external members.
  • FIG. 47 is a schematic diagram of the network processor 500 according to an embodiment of the invention.
  • FIG. 47 illustrates the ring interface 512 (dotted box at the bottom) and the network processor, which includes the network processor core 514 and the various compounds.
  • the compounds include agents such as the doorbell agent 516 , CRC/snoop agent 520 , multireader agent 524 , timer agent 526 , message_sender agent 528 , and DMA agent 530 .
  • the multireader module is an engine that serves requests to read portions of data from the network processor memory and sends the received data back to the destination.
  • the destination is most likely to be located external to the network processor compound (the only internal modules that might use this data are the CRC snooper or the memory in a mode when portions of the memory are copied from one location to another location).
  • the multireader is connected to the ring write interface, and to the agent interface, from which it could get requests to read data from the memory.
  • the multireader agent and the network processor memory share the same address space. Hence the multireader responds only to messages of work read type. The memory will respond only to messages of work write type.
  • the multireader can get requests for data from the following modules: 1) local network processor (via the agent interface); 2) the three DMA controllers; 3) remote (external to the compound) network processor; and 4) the host (PP).
  • All the external requests for memory reads are stored in a request FIFO.
  • the local network processor requests are stored in a special request entry. There are two reasons why two different queues are used for the requests. The first reason is to have the ability to stall the local network processor if it asks for a new multiread request before the previous one was served. The second reason is to have the ability to know when the local network processor multiread has finished. These features cannot be implemented in hardware for the other request sources, since the other requests are generated by members connected to the ring.
  • the network processor request entry is written from the agent interface and the request FIFO is written from the ring. All the requests are stored in the request entry or FIFO until they are serviced.
  • the order of serving the multiread requests is as follows: if the network processor entry has a valid multiread request, it will be served before any other request in the request FIFO. If the network processor request entry is empty, other requests will be served on a first-in-first-out basis.
  • the multireader in one embodiment, has the ability to stall data sent to the ring.
  • a stall of data delivery could occur if the output FIFO of the ring is full, or if there is a higher priority message that should be sent to the ring (for example, DMA or message sender messages).
  • the multireader request FIFO preferably is 8 entries deep (which should be sufficient to avoid the overrun case).
  • FIG. 48 is a schematic diagram of the multireader agent 524 according to an embodiment of the invention.
  • the network processor memory uses a 64-bit data port.
  • the multireader takes advantage of this fact, so every memory read is of eight bytes. In this system there is a need to allow byte-size data transfers over the ring from any memory location to any destination address.
  • Another goal is to minimize data transfers over the ring and enable straightforward writing to FIFOs. This goal is satisfied using data packing logic, which means that all the transferred messages except the last one will contain 8 valid bytes. The last message might contain less than 8 bytes, in which case the message type will indicate how many valid bytes there are.
  • the multireader starts to issue memory read cycles if there is at least one multiread request pending in the multireader request FIFO or request entry. Every read cycle that the multireader issues to the memory is an 8-byte request (in order to reduce the number of requests).
  • the memory read cycle starts when the multireader generates the address and read strobe for the memory. The memory detects this request and, if not busy with other requests, it drives the data to the multireader on the following cycle. If the memory is busy and cannot drive the data to the multireader, it stalls the multireader. The multireader waits for the data from the memory as long as the stall signal is asserted.
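  • A simplified software model of this read-and-pack loop is sketched below. mem_read64 and ring_send are hypothetical helpers, the source is assumed 8-byte aligned, the stall is reduced to a blocking read, and a memory-to-memory transfer is assumed (for a FIFO destination the address would not advance).

      #include <stdint.h>
      #include <string.h>

      extern uint64_t mem_read64(uint32_t addr);                 /* hypothetical */
      extern void     ring_send(uint32_t dest, const uint8_t *data,
                                unsigned valid_bytes, int last); /* hypothetical */

      /* Serve one multiread request: read 8 bytes per memory cycle and pack the
       * data so that every message except the last carries 8 valid bytes; the
       * last message's type encodes how many of its bytes are valid. */
      void multiread(uint32_t src, uint32_t dest, unsigned nbytes)
      {
          while (nbytes > 0) {
              uint64_t dword = mem_read64(src);          /* 8-byte read cycle */
              unsigned valid = nbytes >= 8 ? 8 : nbytes; /* last may be short */
              uint8_t  buf[8];
              memcpy(buf, &dword, 8);
              ring_send(dest, buf, valid, nbytes <= 8);
              src    += 8;
              dest   += 8;
              nbytes -= valid;
          }
      }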
  • the originator of the multiread request will have the ability to know that the multiread operation is complete. In particular, if the originator of the multiread request is the local network processor, it will have the ability to know whether the multiread operation has finished.
  • the multireader will send the network processor a busy signal indicating that it has not yet finished the local network processor's multiread transfer.
  • the multireader busy indication will be asserted when the multiread request is registered in the network processor entry and negated after the last message containing data of this request is sent to the ring.
  • the indication of multiread transfer end is controlled by software.
  • the software control is achieved by preparing a special data word at the end of the transferred block.
  • the destination of the multiread operation snoops this data. When this data is detected, the multiread operation is finished. Note that only one transfer can be active during the time of the snoop (otherwise it would not be possible to detect which operation is finished).
  • the multireader looks in the type field of the incoming message (multiread request) or in the options bits of the network processor multiread request, and, if the bit F is set, the first message in the multiread process will be sent with a destination address which indicates the first byte in the frame.
  • the multireader also looks in the type field of the incoming message or in the options bits of the network processor multiread request, and, if the bit L is set, the last message in the multiread process will be sent with a destination address which indicates the last byte in the frame. (Every FIFO in the system should have three addresses which, when written to, indicate first or last data in the frame.) The multireader will modify bits 2, 3 of the destination address according to the F, L bits.
  • a general multireader message will have the following format, as set forth in Table 6, for multireader input and output message format.
  • Table 6 Field Description type[7:0]
  • FIG. 50 describes how the multireader maps the data on the agent bus 556 to the multireader operation 558 .
  • the options are:
  • the priority of serving them will be: (1) serving local network processor requests if there are pending requests; and (2) serving all other requests on a FIFO basis.
  • the multireader decodes the message (or the agent command) and initializes its operation.
  • the multireader initiates memory read cycles and data from the memory is sent to the multireader.
  • the multireader packs the data, generates the output message, and sends it to the ring if the ring is vacant.
  • the destination is the transmit FIFO in the peripheral.
  • Example B Send data to DMA write (transmit) buffer: (1) The DMA controller issues a multireader message. This message asks for a data transfer from the memory to the DMA controller write buffer. (The message will contain the destination address, the number of bytes that are required, and the starting location in the network processor memory.)
  • the multireader initiates memory read cycles and data from the memory is sent to the multireader.
  • the multireader packs the data, generates the output message, and sends it to the ring if the ring is vacant.
  • the destination is the write buffer in the DMA controller.
  • the following restrictions may apply: do not activate more than one multireader at a time from each source (except the DMA, which can send two) in order not to cause overflow in the FIFO; and if the destination of the multiread request is one of the NP memories, only aligned transactions are supported, because the memory does not support overflow of a memory entry during a write (splitting one write command into two).
  • the message sender agent 528 is a module which translates a network processor AGENT command to a message to be sent to a destination on the ring.
  • the message sender is connected to the network processor agent interface.
  • the message sender is a powerful module since it can generate messages of all the different message types that are available in the system. This means that the network processor can send messages to all the modules that are connected to the ring, and can even replace the host in sending supervisor messages. This feature can be very beneficial while debugging the system.
  • the block diagram of the message sender 528 is shown as FIG. 51.
  • There are three instructions dedicated to agent commands: AGENTW, AGENTWI, and AGENTR.
  • the message sender ignores the AGENTR command.
  • the AGENTW/I commands drive the value of three registers, or two registers and an immediate value, on the agent bus. Those registers are marked RA, RAP, and RB (or imm 8 ).
  • the message sender will interpret the content of those registers in the following way (shown in FIG. 52):
  • RAP[23:0]: The destination address or the 32 LS (least significant) bits of the data. This is a 24-bit address of a module (destination) that is connected to the ring, or the 4 LS bytes of the data that is sent to the ring when using the 64-bit data mode.
  • RA[31:0]: The data that will be sent to the destination (typically, in work read messages it will include the return address for the data that was read from the module and the number of bytes to read).
  • RB[7:0]: The message type that will be sent to the destination (only the LSB of RB will be used). In a 64-bit data message, RB is the address of the message destination.
  • the AGENTWI command drives the value of two registers and an eight-bit immediate value (imm 8 ) on the agent bus.
  • the registers are marked RA and RAP.
  • the message sender will use the content of those registers in the following way:
  • imm 8 : the message type that will be sent to the destination.
  • FIG. 52 illustrates the mapping of an agent write command 560 to a message 562.
  • if the network processor sends new requests for message sending while the message sender is busy serving previous requests, those requests will stall the network processor.
  • the message sender will have an internal queue of 2 entries so it can store 2 requests for sending messages before stalling the network processor.
  • Table 8 illustrates the message sender output message format according to an embodiment of the invention.
  • TABLE 8
    Field: Description
    type[7:0]: The type field describes the outgoing message type. The following types are valid (see the message type table for more details): 010XXLFI: work read; 100FLZZZ: work write.
    address[23:0]: The address of the destination. This is the content of RAP or RB according to the mode used (option bit 6). If option[6] is one, the address is taken from RB.
    data[63:0]/[31:0]: The message data. The content of RA, or of RA and RAP, according to the mode used (option bit 6). If option[6] is one, RA and RAP are used.
  • the message sender can send a 64-bit data message. Sending a 64-bit message is done by setting option bit[6] of the AGENTW command to one (this option is not available for the AGENTWI command). If this option is used, the message sender uses the content of RA and RAP as the source for the raw data, and RB as the source for the raw address. In this mode the message type is always work write, with 8 valid data bytes. There is no provision for sending fewer than 8 bytes.
  • the message sender uses six option bits that are driven by the network processor in order to modify the value of the raw_data and raw_address. This feature is useful when the values in the registers are used as constants and are modified as required. For example, when writing to a FIFO, the content of RAP will be the FIFO address, and when the system seeks to write to the first-in-frame or last-in-frame locations, the address will be modified using the option bits. Data modification is useful when sending a doorbell request: the data for a doorbell request is only 3 bits, hence the raw data can be modified to generate the data for the doorbell request.
  • the address and data modification may be performed as follows: (1) the content of RAP[4:2] or RB (in 64-bit data mode) is OR'd with the option[2:0] bits to generate the message destination address; and (2) if the value of option bits [5:3] is not zero, the content of RA[2:0] or RAP (in 64-bit data mode) is replaced with option [5:3] bits to generate the message data. Address and data modification are active regardless of the message sender operation mode.
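  • The modification described above amounts to the following bit manipulation, shown here as a C sketch for the normal 32-bit data mode; the helper name is hypothetical.

      #include <stdint.h>

      /* Apply the message sender option bits: opt[2:0] is OR'd into address
       * bits 4:2, and a non-zero opt[5:3] replaces data bits 2:0. */
      static inline void apply_options(uint32_t *addr, uint32_t *data, uint8_t opt)
      {
          *addr |= (uint32_t)(opt & 0x7u) << 2;     /* OR into RAP[4:2] */
          uint32_t repl = (opt >> 3) & 0x7u;
          if (repl != 0)
              *data = (*data & ~0x7u) | repl;       /* replace RA[2:0]  */
      }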
  • Software/hardware restrictions include the following in one embodiment of the invention: (1) the 64-bit data mode is available only when using AGENTW command; and (2) in 64 bit mode the message type is always work write.
  • this challenge is met by providing a DMA agent module as a peripheral to each processor in the system.
  • a DMA agent may be implemented as one of the tightly linked compounds on the overall network processor.
  • the DMA agent is a compound that shares the same ring interface as the overall network processor existing as a ring member.
  • the DMA agent operates to control the DMA transfer requests that are sent by the processor as follows:
  • Each DMA controller has a dynamic pool of tokens that the DMA controllers allocate for use by the DMA agents linked to the various processors. In other words, each DMA controller has a pool of tokens that the DMA controller can distribute among the various DMA agents.
  • Each valid token allows a DMA agent to send one DMA request to the DMA controller that owns the token. If there are no valid tokens, no DMA requests can be issued by the DMA agent and the processor will stall.
  • the DMA agent periodically queries the DMA controllers for tokens whenever the number of valid tokens in the DMA agent's pool is less than a number prespecified by software. The maximum number set by software can change.
  • the DMA agent module 530 (illustrated in FIG. 53) translates network processor DMA commands to ring messages used to initialize the DMA controller.
  • each network processor has one DMA agent.
  • Each DMA agent has the ability to control each and every one of the DMA controllers that are available in the system, using the context table (e.g., in the implementation there are 3 DMA controllers, and each DMA agent can control up to 4 DMA controllers).
  • the fourth DMA controller is provided for future system expansion.
  • the DMA agent is connected to the network processor agent interface and to the ring write interface.
  • the DMA agent registers can be written by the host only via the write bus using ring messages.
  • the context table is initialized by the PP once, and it is not changed during regular work.
  • the token registers should be written only by the DMA controllers.
  • the DMA agent can receive requests to initialize a DMA channel only via the agent interface using special network processor DMA commands.
  • the DMA agent has a small request queue of two entries in order to minimize the need to stall the network processor if the DMA request could not be serviced (e.g., this could happen if for example there are no available tokens, or if the DMA is unable to send the messages to the DMA controller because the ring is busy).
  • Request priority: There are two priority levels for DMA requests in the DMA controller. The lower priority level is regular and the higher priority level is urgent. By default all DMA requests are regular. A DMA request can become urgent if the processor defines it as urgent. Requests that have urgent priority have the urg bit in the message set, and will get a higher priority in the DMA controller queue. The DMA agent ignores the urg bit (it passes it on to the DMA controller), and serves the requests in the order they arrive.
  • the DMA agent context table maps a network processor DMA command to the actual request that will be sent to the DMA controller that was selected.
  • the actual request defines the parameters for the current DMA transfer.
  • the context table has four entries.
  • the table entry to be used is determined by a two bit pointer encoded out of the 4 MSB (most significant bits) of the DRAM address in the DMA command. (The reason that 4 bits are used is because the DRAM address space is divided into 16 parts and only 4 could be accessed by the DMA).
  • the entry allocation is hard coded.
  • the context table could be written using write messages.
  • the table should be initialized before starting any DMA access.
  • the context table could be read using read messages.
  • ADDR: DMA_AGENT_BASE to DMA_AGENT_BASE+$F.
  • the maximum number of tokens which could be allocated for one channel is 15.
  • Table 10 provides a description of the DMA context table.
  • TABLE 10
    address[13:0]: The physical base address of the DMA controller to be used.
    visitor[2:0]: The number of the request and mask bits to set for the current DMA transfer. This field is common to all the contexts.
    max_tokens[3:0]: This field describes the maximum number of tokens that could be used by this DMA channel.
  • DMA agent token control: In order to manage DMA transfers from different sources with different contexts, a free token transfer based approach is used. According to this approach, the DMA agent has a pool of tokens. The service of a DMA request can start only if there are available valid tokens allocated for this DMA channel in the DMA agent. If there are valid tokens, the processing of the DMA request can start as previously described. If there are no available tokens to execute the DMA request, it will be registered in the DMA agent queue, and will wait for execution until the DMA agent gets a token from the DMA controller (note that if the DMA agent queue is full, the request will stall the network processor).
  • Token distribution is performed using messages.
  • the DMA agent issues a request for a token to the DMA controller each time the number of valid tokens is less than the maximum allowed tokens (which is specified in the context table).
  • the DMA controller sends the token back to the agent and marks this token as used in its token list.
  • the DMA controller will free the token again when the DMA transfer is finished (i.e., before sending the message to the doorbell). If the DMA controller has no free tokens then it sends the DMA agent an invalid token (i.e., all the bits in the token response are zero).
  • the DMA controller sends the DMA agent a valid token to the address of the token that was used (the DMA agent sends this address in the token request message).
  • each DMA controller has a pool of a maximum of 16 tokens for each DMA channel.
  • the DMA agent token registers contain the token numbers that the DMA controllers allocated for use (the valid tokens are marked by setting the appropriate bit to one).
  • the token registers can be written only by the DMA controllers. There are four token registers in the DMA agent. Table 11 illustrates the DMA agent channel[i] token register.
  • TABLE 11: ADDR: DMA_AGENT_BASE+$10 to DMA_AGENT_BASE+$1F. Register fields, from the most significant bit down: novt, req, token[15:0], with the remaining low-order bits zero.
  • the DMA agent searches the appropriate token register to see if there are valid tokens. If there are valid tokens, the DMA agent uses one of them (e.g., the first one it finds) and marks that token as invalid. Then, the DMA agent starts the data transfer for channel initialization. The DMA agent also sends the DMA controller a message to replace the used token with a new one (this will be work read type message). The indication that the DMA agent issued a token replacement request is made by setting the req bit of the relevant token register.
  • if the DMA controller has a free token available, it will send it to the DMA agent, and the agent will replace the used token with the new one (i.e., the request bit is cleared). If the DMA controller does not have a free token available, it will send the DMA agent an invalid token (i.e., all the token bits are cleared and the req bit is cleared). The DMA agent issues a new token replacement request after a maximum of 4 cycles.
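  • An informal model of this token flow is sketched below; the token register is reduced to a plain structure and send_token_request is a hypothetical stand-in for the work read message to the DMA controller.

      #include <stdint.h>

      extern void send_token_request(unsigned token_index);   /* hypothetical */

      struct token_reg {
          uint16_t token;  /* token[15:0]: a set bit marks a valid token */
          uint8_t  req;    /* a token replacement request is outstanding */
      };

      /* Try to consume a valid token for a DMA request. Returns the token
       * index, or -1 if none is valid (the request then waits in the queue). */
      int take_token(struct token_reg *r)
      {
          for (unsigned i = 0; i < 16; i++) {
              if (r->token & (1u << i)) {
                  r->token &= (uint16_t)~(1u << i); /* mark the token invalid */
                  r->req    = 1;                    /* replacement requested  */
                  send_token_request(i);            /* ask for a new token    */
                  return (int)i;
              }
          }
          return -1;
      }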
  • the DMA agent has the ability to recognize if the DMA transfer is made to an illegal external address for each of the external DMA channels.
  • when the DMA agent identifies such an access, it sends a special error message to the PP, informing the PP of the illegal access parameters.
  • Address error calculation is performed on the SDRAM address written by the network processor using the DMA command.
  • the SDRAM address is split into two parts. The first part is bits [ 31 : 28 ] of the address and the second part is bits [ 27 : 20 ] of the address.
  • the address error logic compares the first part of the SDRAM address to each one of the values (0x0, 0x2, 0x4, 0xf), which correspond to the 4 MS bits of the SDRAM areas. If a match is not found, an address error occurs and a special error message is generated by the DMA agent. If there is a match, the bits of the second part are compared, according to a programmed mask, to zero. If the result is not equal to zero, an address error is generated and an error message is sent.
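  • The check described above can be modeled as follows; the 8-bit mask is the software-programmed parameter mentioned in the text, passed here as an argument.

      #include <stdbool.h>
      #include <stdint.h>

      /* Return true if the SDRAM address is illegal: bits [31:28] must match
       * one of the area prefixes (0x0, 0x2, 0x4, 0xf), and on a match the
       * masked bits [27:20] must be zero. */
      bool sdram_addr_error(uint32_t addr, uint8_t mask)
      {
          static const uint8_t areas[4] = { 0x0, 0x2, 0x4, 0xf };
          uint8_t hi  = (addr >> 28) & 0xfu;   /* first part:  bits [31:28]   */
          uint8_t mid = (addr >> 20) & 0xffu;  /* second part: bits [27:20]   */

          for (unsigned i = 0; i < 4; i++)
              if (areas[i] == hi)
                  return (mid & mask) != 0;    /* masked compare against zero */
          return true;                         /* no area prefix matched      */
      }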
  • a general DMA agent message will have the format as shown in Table 15.
  • the field describes the starting address space of the DMA agent.
  • the DMA agent register address is from DMA_AGENT_BASE_ADD to DMA_AGENT_BASE_ADD+$1F. data[31:0]: the data to be written to the registers.
  • DMA controller message data: the DMA agent will send the DMA controller two messages for each DMA transfer that was initiated by the network processor.
  • Table 17 illustrates the DMA controller message number 1.
  • the first message that will be sent from the DMA agent to the DMA controller contains the return address for the DMA request doorbell and the internal SRAM address.
  • the doorbell and the SRAM address are 24 bits wide:
  • the 6 LSB bits of this address are the task ID number at the time the DMA command was initiated.
  • the second message contains the external DRAM address and control information for the DMA transfer.
  • the control information includes:
  • ack: 1 bit of doorbell acknowledgement enable. This bit will tell the DMA whether it should send a doorbell at the end of the transfer. This information is found in the DMA command.
  • vst[2:0]: 3 bits of visitor code. These bits indicate which request bit the DMA controller should set in the doorbell request register.
  • Token request and token reply messages Tables 19 and 20 illustrate a token request and token reply message, respectively.
  • the data part of the token request contains the address in the token register that should be written with a new token.
  • the DMA agent calculates the message destination address as follows.
  • the messages that the DMA agent sends to the DMA controller are sent to three different destinations. The first two of these message destinations are:
  • DESTINATION_ADDRESS1={DMA_BASE_ADDRESS[13:0], 0, 1, token_number[3:0], 0, 0, 0, 0}
  • DESTINATION_ADDRESS2={DMA_BASE_ADDRESS[13:0], 0, 1, token_number[3:0], 1, 0, 0, 0}.
  • the destination address of the token request is:
  • DESTINATION_ADDRESS3={DMA_BASE_ADDRESS[13:0], 10'b0}.
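  • In C, these Verilog-style concatenations can be modeled as below; the packing follows the bit widths written above (14 + 1 + 1 + 4 + 4 = 24 bits) and is shown only for illustration.

      #include <stdint.h>

      /* {DMA_BASE_ADDRESS[13:0], 0, 1, token_number[3:0], m, 0, 0, 0}, where
       * m is 0 for DESTINATION_ADDRESS1 and 1 for DESTINATION_ADDRESS2. */
      static inline uint32_t dest_addr(uint32_t base14, uint32_t token4,
                                       uint32_t second_msg)
      {
          return ((base14 & 0x3fffu) << 10)
               | (1u << 8)
               | ((token4 & 0xfu) << 4)
               | ((second_msg & 0x1u) << 3);
      }

      /* Token request destination: {DMA_BASE_ADDRESS[13:0], 10'b0}. */
      static inline uint32_t dest_token_req(uint32_t base14)
      {
          return (base14 & 0x3fffu) << 10;
      }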
  • the doorbell address is the address to which a doorbell should have been sent at the end of the DMA transfer if an address error had not occurred. (This address contains the task ID information in the six LSB bits and, in bits 23-6, the base address of the network processor from which the error message was sent.)
  • FIG. 54 describes how the DMA agent maps the data on the agent bus 576 to the DMA request 578 .
  • A: set the auto set bit in the doorbell mask register.
  • M: modify address. Setting this bit enables the modification of the SRAM address and the DRAM address.
  • the DMA agent will have two request entries for storing network processor DMA requests. If both entries are full and the network processor issues a new request, the network processor will be stalled until one of the requests is served.
  • Destination addresses can be calculated, for example, according to several modes:
  • Register A+Register B: the destination address is the sum of the values of Register A and Register B.
  • Register+Offset: the destination address is the sum of the value of Register A and an immediate offset value.
  • one beneficial aspect of the present invention provides for adding a special address computation mode to the network processor data structure access commands.
  • this special mode When activated, this special mode causes the destination address to be automatically computed using a base address, offset, and an address modifier.
  • the destination address in this special mode is computed from a base address, an offset, and an address modifier, as described below.
  • when agent option bit 9 is set in one of the DMA commands, the DMA agent will modify the value of the SRAM address and the DRAM address (that were written by the network processor) before sending the control message to the DMA controller.
  • Address modification is accomplished in the following fashion. DRAM address bits 1 , 2 , 3 are OR'd with count bits 2 , 3 , 4 (respectively), and SRAM address bits 1 , 2 , 3 are OR'd with count bits 5 , 6 , 7 (respectively).
  • the DMA transfer size is limited to one of the four options listed in Table 22 below.
  • TABLE 22
    count[1:0]: transfer size
    00: 2 bytes
    01: 4 bytes
    10: 8 bytes
    11: 16 bytes
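  • Combining the OR'ing rule with the transfer sizes of Table 22, the address adjustment can be sketched as follows; the helper name is hypothetical.

      #include <stdint.h>

      /* When the modify-address option is set: DRAM address bits 1,2,3 are
       * OR'd with count bits 2,3,4, and SRAM address bits 1,2,3 are OR'd
       * with count bits 5,6,7. */
      static inline void modify_dma_addrs(uint32_t *dram, uint32_t *sram,
                                          uint32_t count)
      {
          *dram |= ((count >> 2) & 0x7u) << 1;  /* count[4:2] into dram[3:1] */
          *sram |= ((count >> 5) & 0x7u) << 1;  /* count[7:5] into sram[3:1] */
      }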
  • a method for performing address computation for a data structure address command in a communications processor comprises providing a library of read commands and write commands for a network processor in a rings based architecture, including an option bit in the read commands and write commands for an address calculation modification mode, providing an agent module for forwarding read requests and write requests to a DMA controller in response to requests including an address issued by the network processor, and modifying the value of the address when the option bit is set before forwarding the read requests and write requests to the DMA controller.
  • the method in one embodiment, permits repeated accesses to an external data structure without recomputing the destination address in its entirety each time.
  • Modifying the value of an address comprises automatically computing a destination address using a base address, an offset, and an address modifier.
  • modifying the value of an address allows computation of the destination address using a single read command or write command.
  • the DMA agent is responsible for setting the DMA mask bit in the doorbell agent each time a DMA command is issued.
  • the DMA mask bit will be set only if the NA bit is cleared (if acknowledgement is not needed for the DMA transfer there is no need to set the mask). If the auto set option bit is set and the NA bit is cleared, then two mask bits will be set at the same time in the doorbell.
  • the index of the bit that should be set is determined according to the visitor bits in the context table (the auto set code is fixed).
  • DMA Agent Operation Scenario Examples
  • Example A The network processor asks for write DMA access:
  • the network processor issues a DMA command on the agent bus.
  • the DMA agent registers the request in the request queue and extracts parameters.
  • the DMA agent checks whether there is an available token from the DMA controller to start processing the request. If there is no token available the request waits in the queue for execution until there is an available token. If the request queue is also full, the network processor will be stalled.
  • the DMA controller issues a multireader message.
  • the multireader message requests a data transfer from the network processor memory to the DMA write buffer.
  • Example B The network processor asks for read DMA access:
  • the host has to initialize the DMA context table with all the channel configurations. This should be done once for all the possible configurations.
  • the network processor issues a DMA command on the agent bus.
  • the DMA agent registers the request in the request entries and extracts parameters.
  • the DMA agent checks whether there is an available token from the DMA controller to start processing the request. If there is no token available, the processing is stalled until a token becomes available.
  • a communications processor is implemented on at least one ring network.
  • the communications processor comprises a plurality of processors comprising ring members on the at least one ring network and a plurality of DMA controllers on the at least one ring network, the DMA controllers controlling servicing of DMA requests by the plurality of processors.
  • the communications processor further comprises a plurality of DMA agents coupled to the plurality of processors, each DMA agent being part of a ring member including a processor, wherein each DMA agent is adapted to service processor DMA requests by determining whether a valid token exists from a pool of tokens reflecting available DMA controllers.
  • the tokens may be DMA controller specific tokens issued by the DMA controllers to the DMA agents to indicate when specific DMA controller access is available.
  • the associated DMA agent determines whether a valid token exists and, if a valid token exists, services that DMA request using the DMA controller associated with that token.
  • the token can be marked as used or invalid when the token is used to service a DMA request. If no valid token exists the DMA agent queues the DMA request until a valid token exists.
  • the associated DMA agent can be adapted to automatically request a new valid token after an existing valid token is used to service the DMA request.
  • Each DMA agent in one embodiment, is adapted to request additional valid tokens when the number of valid tokens in the pool falls below a maximum number.
  • the processors comprise, in one embodiment, a plurality of network processors and the at least one ring network comprises a plurality of ring networks.
  • the pool of tokens is stored in a register written to by the DMA controllers.
  • the DMA agents can be adapted to service processor DMA requests by converting them to messages transmitted onto the at least one ring network.
  • the DMA controllers can distribute valid tokens by transmitting messages on the ring network that are received by specific DMA agents.
  • Each DMA controller further may be adapted to maintain a list of tokens including those tokens that have been distributed as valid tokens.
  • the DMA controllers can be adapted to respond to requests from the DMA agents for additional tokens with an invalid token when no valid tokens are available.
  • Each DMA controller can have a pool of up to, for example, 16 tokens for each DMA channel.
  • the DMA controllers in one embodiment, are capable of reading registers having the pools of tokens for the DMA agents by issuing read messages traveling on the at least one ring network.
  • FIG. 55 is a schematic diagram of the CRC agent 520 according to one embodiment of the present invention.
  • the Cyclic Redundancy Check (CRC) agent is a network processor compound module which implements logic to perform CRC calculations.
  • the CRC agent supports different types of CRC calculations like CRC 32 , CRC 16 , CRC 10 , and so forth, for different data sizes (1 to 8 bytes).
  • the CRC agent works in two major operational modes: the first mode is a snoop mode and the second mode is an on-demand mode. In the snoop mode, the CRC agent snoops for messages in which the S bit is set. The CRC agent will detect those messages and will calculate the selected CRC on the message data. In the on-demand mode, the network processor writes data to the CRC agent, and the CRC agent uses this data for its calculations.
  • the network processor can write the CRC registers via the agent bus using AGENTW/I commands.
  • the network processor can read the CRC residue via the agent bus using an AGENTR command.
  • the CRC agent can stall the network processor if the network processor reads the CRC results and the results are not yet ready.
  • the CRC module may also be able to generate a 32 bit random number.
  • the CRC Agent has two modes of operation:
  • On-demand mode, performed for any data transferred (e.g., CRC5, hashing function); and snoop mode, performed for a continuous data sequence transferred from/to the serial interfaces.
  • the CRC agent can be adapted to calculate CRC for 8, 16, 24 or 32 bits of data in a single cycle. If CRC is enabled for snooping, a network processor agent read instruction from a CRC residue register stalls until the last indication arrives with the last data word. Special control enables the CRC residue to be calculated on partial data (e.g., 22 bits in CRC10, or 0 bits in CRC32); the CRC residue is then combined with the partial data to form the 32-bit last word of the frame, and this is exposed to the multireader block for transmission. In CRC5, the CRC module is capable of calculating the 5-bit CRC out of 19-bit data for transmit, or out of 24-bit data for the CRC check in receive (on-demand mode).
  • the CRC agent, in one embodiment, is adapted to interface to: the transmit bus, for snooping TX data and calculating CRC; and the agent bus, for configuration, on-demand activation and residue read/write. A software model of the residue update is sketched below.
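  • For illustration, the following is a software model of a CRC residue update of the kind the agent computes in hardware (the hardware processes up to 32 data bits per cycle). The polynomial, bit ordering and init/final-XOR conventions shown are assumptions for the CRC32 case; the agent supports several CRC types.

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 residue update (reflected polynomial 0xEDB88320).
 * Conventionally the residue starts at 0xFFFFFFFF and is XORed with
 * 0xFFFFFFFF at the end; conventions vary by protocol. */
uint32_t crc32_update(uint32_t residue, const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        residue ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            residue = (residue >> 1) ^ (0xEDB88320u & -(residue & 1u));
    }
    return residue;
}
```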
  • the network processor 514 can write to the CRC agent 520 using AGENTW commands.
  • the mapping of the AGENT command 590 to CRC data 592 is described in FIG. 56.
  • the options include:
  • TYPE[2:0]: 3-bit CRC type.
  • the types are: 000, CRC32; 001, CRC10; 010, CRC5; 011, checksum; 100, CRC16; 111, BIP16 (only for writing the BIP16 residue register).
  • SIZE[2:0]: the number of valid bytes in the data (1 to 8) starting at the LSB of RA (size 0 means 8 valid bytes in the message).
  • G: this bit indicates whether the CRC agent works in the generate-CRC or the check-CRC mode.
  • the data for the CRC calculation and the residue are written by the network processor. Since the data in memory is stored in big-endian format, and the data in the network processor register file is stored in little-endian format, the CRC module may perform some manipulation of the message data before the CRC calculation (especially if the data size is not 32 or 64 bits).
  • the CRC module contains two residue registers.
  • the first residue register is a 64 bit register containing the residue for the CRC and checksum calculations.
  • the second residue register is a 32 bit register containing the residue for the BIP16 calculation.
  • the network processor can read the results of the CRC calculations using the AGENTR command.
  • the result of the CRC machine that will be read is determined according to the operational mode that was selected.
  • the BIP16 machine calculation result is held in a different register that can be read by the network processor (i.e., the two residue registers have two different addresses). If the network processor reads one of the CRC registers and the result is not ready, the network processor will be stalled.
  • in snoop mode, the CRC calculation is considered to be complete after all the data has arrived (last indication in the message). In on-demand mode the result of the CRC calculation will be available for reading one cycle after it was written if the data size is smaller than four bytes, and two cycles after it was written for larger data sizes.
  • Example A: calculating CRC in on-demand mode:
  • the network processor writes to the CRC agent using an AGENTW command.
  • the data that is written to the CRC agent contains: the CRC type; the data on which the CRC is to be calculated; the size of the data (number of valid bytes); and a new residue if the current residue is to be overwritten. The operational mode is set to on-demand mode, and in the CRC5 mode the G bit should also be written. A hypothetical packing of these control fields is sketched below.
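  • As a sketch only: the actual mapping of the AGENTW command to CRC data is defined by FIG. 56. The bit offsets below are placeholder assumptions, used purely to illustrate packing the TYPE, SIZE and G fields named in the options list above.

```c
#include <stdint.h>

/* CRC type encodings from the options list above. */
#define CRC_TYPE_CRC32 0x0   /* 000 */
#define CRC_TYPE_CRC10 0x1   /* 001 */
#define CRC_TYPE_CRC5  0x2   /* 010 */
#define CRC_TYPE_CSUM  0x3   /* 011 */
#define CRC_TYPE_CRC16 0x4   /* 100 */

/* Pack TYPE[2:0], SIZE[2:0] (0 means 8 valid bytes) and G into a
 * control word. Bit positions are assumed, not taken from FIG. 56. */
static uint32_t crc_agentw_control(uint8_t type3, uint8_t size3, uint8_t g)
{
    return ((uint32_t)(type3 & 0x7) << 0) |
           ((uint32_t)(size3 & 0x7) << 3) |
           ((uint32_t)(g & 0x1)     << 6);
}
```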
  • Example B: calculating CRC on transmit data (multireader data out):
  • the CRC machine can calculate the CRC of the transmit data by snooping the S and L bits of the multireader output messages.
  • the network processor initializes the CRC agent in the following manner:
  • Example C: calculating CRC of receive data:
  • the CRC machine can calculate the CRC of the receive data by snooping the S and L bits of the agent write bus messages.
  • the network processor initializes the CRC agent as follows: (1) CRC type.
  • timer agent 526 is illustrated in accordance with one embodiment of the present invention.
  • the timer module is designed to allow the assignment of time stamps to various events within network processor tasks.
  • the timer contains a 32 bit count-up free running counter. The counter counts at a frequency equal to its input clock frequency divided by the prescale value (f_counter = f_clk / prescale), per the prescaler described below.
  • the counter frequency will be set to 1 MHz (which corresponds to a 1 microsecond period).
  • the prescale counter is a 10 bit down-counter, which divides its input clock frequency by the prescale value. If the prescale value is equal to zero, the prescaler will be bypassed. A sketch of choosing the prescale value follows.
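  • A minimal sketch of choosing the 10-bit prescale value for a target counter frequency, assuming the divide-by-prescale relationship above; the function name and the clamping policy are illustrative.

```c
#include <stdint.h>

/* Returns the prescale value for a desired counter frequency.
 * A value of zero bypasses the prescaler (per the text above). */
static uint16_t timer_prescale(uint32_t f_clk_hz, uint32_t f_counter_hz)
{
    uint32_t prescale = f_clk_hz / f_counter_hz;  /* caller ensures nonzero */
    if (prescale <= 1)
        return 0;              /* no division needed: bypass */
    if (prescale > 0x3FF)
        prescale = 0x3FF;      /* 10-bit down-counter limit */
    return (uint16_t)prescale;
}
/* Example: a 100 MHz input clock and the 1 MHz counter frequency
 * mentioned above give a prescale value of 100. */
```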
  • the time stamp value could be read by the network processor from the time stamp register using the agent interface.
  • the network processor can write to the timer using the AGENTW/AGENTWI commands. In order to enable timer operation only two values are required. The first value is the control information, which resides in register RB or the imm8 value (according to one approach, only one bit is used). The second value is the prescale value, which determines the counting frequency of the timer. The prescale value is taken from the 10 LSB of RA. The value of RAP is ignored.
  • FIG. 58 illustrates the mapping of the AGENTW command 602 to the timer data 604 .
  • the timer control register is used to store the prescale value and to enable/disable the timer count operation.
  • the timer control register is written using AGENTW/I commands and read using the AGENTR command.
  • the timestamp register contains the value of the timer counter at the time of an agent read operation.
  • the register is read by the network processor using the AGENTR command.
  • Time stamp value: the value of the timer counter at the time of the read operation.
  • FIG. 59 is a schematic diagram of the doorbell agent 516 according to one embodiment of the invention.
  • the doorbell agent is the scheduler module which handles the execution sequence of the tasks.
  • the doorbell is connected to the network processor agent interface and to the ring write interface.
  • the doorbell registers can be accessed by the network processor using one of the special AGENT commands, or via the write bus using ring messages (e.g., by the serials and the host). All the possible service requests from the different sources go into the doorbell agent via the write bus.
  • when the doorbell detects a request message, it registers the request in the doorbell logic.
  • the doorbell agent can handle requests of up to 64 different tasks.
  • the doorbell chooses the highest priority pending request (out of all the un-masked tasks), and sends its task ID to the network processor as the next task ID.
  • the network processor sends back to the doorbell the current task ID that it is executing.
  • the network processor uses the task ID information to perform the prefetch, bump and task switching, as previously described.
  • the sources for doorbell requests include: Regular serial, timer, or software request (e.g., a message from another task): this request indicates that a data fragment has been received in the RX FIFO, that there is room to write more data into the TX FIFO for transmission, or that a timer finished its count.
  • DMA request: the DMA has finished its data transfer.
  • Self-request: when a task yields itself (i.e., when the task execution time exceeds the maximum allowed execution time), the software can resume its execution by setting the self-request bit.
  • the starting point of the task will depend on what is written in the EP (entry point) register.
  • the EP register can be updated by hardware or by software.
  • every request bit has its own mask bit (except the self-request). When the mask bit is cleared, the request is ignored and the task cannot trigger task switching.
  • the self-request is the only request bit that cannot be masked.
  • the algorithm for selecting the next task for execution is as follows.
  • the tasks which participate in the selection of the next task for execution are the tasks whose corresponding mask bit in the Task Global Mask Register (TGMR) is cleared.
  • Tasks which participate in the selection of the next task and have unmasked requests are divided into four groups and served in the following order:
  • Highest priority group: urgent requests of task numbers 0-31.
  • Second priority group: regular requests of task numbers 0-31.
  • Third priority group: urgent requests of task numbers 32-63.
  • Lowest priority group: regular requests of task numbers 32-63.
  • within each group, the requests are served according to the task number: lower task number requests are served before higher task number requests. A sketch of this selection follows.
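  • An illustrative software model of the four-group selection algorithm above, assuming 64-bit pending/urgent/TGMR bitmaps (one bit per task). The bitmap representation and function name are assumptions; the group ordering follows the text.

```c
#include <stdint.h>

/* Returns the next task ID (0-63), or -1 when no request is pending.
 * pending: tasks with at least one unmasked request (assumption);
 * urgent:  tasks whose requests are urgent;
 * tgmr:    set bits exclude a task from selection (per the TGMR). */
static int next_task(uint64_t pending, uint64_t urgent, uint64_t tgmr)
{
    uint64_t candidates = pending & ~tgmr;      /* TGMR-masked tasks out */
    const uint64_t lo = 0x00000000FFFFFFFFull;  /* tasks 0-31  */
    const uint64_t hi = 0xFFFFFFFF00000000ull;  /* tasks 32-63 */
    uint64_t groups[4] = {
        candidates & urgent  & lo,   /* highest: urgent,  tasks 0-31  */
        candidates & ~urgent & lo,   /* second:  regular, tasks 0-31  */
        candidates & urgent  & hi,   /* third:   urgent,  tasks 32-63 */
        candidates & ~urgent & hi,   /* lowest:  regular, tasks 32-63 */
    };
    for (int g = 0; g < 4; g++)
        for (int t = 0; t < 64; t++)            /* lower task number first */
            if (groups[g] & (1ull << t))
                return t;
    return -1;
}
```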
  • the network processor can access the doorbell registers via the agent interface using one of special AGENT commands.
  • the network processor can directly modify only the register bits of the current task (the request, mask, counter bits value), or the global mask register (TGMR). Modifying other task register bits can be done via the ring write bus by sending a message from the message sender agent to the doorbell.
  • the data 612 for modifying the mask, request and the counter bits 614 of the current task is encoded in the RB/imm8 part of the agent command as illustrated in FIG. 60.
  • the doorbell logic decodes the 8 LSB of RB/imm8 and sets the appropriate bits in the current task register, counter, urgent or TGMR.
  • Setting a request or mask bit is performed by writing 5 bits of the command index in the RB/imm8 part of the AGENT command, followed by 3 bits of the index of the request bit or mask bit that is to be set. Note: only one mask bit at a time can be set by the network processor using a single agent command (if other mask bits were set, they will be cleared by the agent write command, except for the auto set bit; writing the auto set bit will not clear other mask bits). Writing to the request bits will not clear other request bits if they were already set. If the index value is zero, the write to that part of the register is ignored.
  • Table 27 describes the decoding of the RB/imm8 part of the message and the operations that take place. TABLE 27:
  • RB/imm8 = (0,0,0,0,0,mask_bit_index[2:0]) or (0,0,0,0,1,request_bit_index[2:0]): write task register mask and request bits, according to the 3-bit index:
  • index 000: don't change mask bits; don't change request bits.
  • index 001: set the aset bit, don't change other mask bits; set the preq bit (self request), other request bits are not changed.
  • index 011: set the mdma bit, clear all other mask bits; decrement the DMA request counter by 1.
  • index 100: set the mpreq bit, clear all other mask bits; set the preq bit, other request bits are not changed.
  • RB/imm8 = (1,0,0,0,0,counter value): write the request counter.
  • RB/imm8 = (0,1,0,0,0,0,0,0): write the TGMR.
  • RB/imm8 = (1,1,0,0,0,0,0,0,urgent value): write the urgent bit.
  • CM: clear mask. Setting this bit will clear all of the current task's mask bits (including the auto set bit).
  • SG: set global. This bit determines whether the task global mask register (TGMR) bits will be set or cleared according to the data in the RA/RAP part of the agent command. If the SG bit is set, then the TGMR bits will be set at the locations corresponding to the set bits in the RA,RAP data.
  • Clearing the mask register bits is accomplished by writing 1 to the clear mask (CM) bit in the command. If the CM option is used at the same time another mask bit is written, the set operation overwrites the CM operation.
  • the peripheral request mask bit could also be set by the network processor when there is a YIELD command and the set default mask option is used. Other request bits will be cleared.
  • the network processor can initialize the DMA requests counter of the current task by setting the RB/imm8 part of the agent bus to {1,0,0,0,0,count_value[2:0]}.
  • the doorbell bits of the current task (i.e., the request bits, mask bits and the counter value) and the TGMR could be read using the agent read command (AGENTR).
  • Another option for setting the DMA mask (mdma) bit and the auto set (aset) bit is by using the network processor DMA commands.
  • the DMA commands have an option to set the DMA mask bit and the auto set bit.
  • when the DMA agent detects a DMA command, it can set the appropriate mask bit in the doorbell using the DMA context table (the context table stores the information as to which bit to set). The mask setting will be done if the NA bit in the DMA command is cleared. The auto set bit will be set if the A option bit in the DMA command is set.
  • the doorbell registers could be accessed by the peripherals, the network processor and the host using ring messages. Every time a peripheral wants to set a request bit, the peripheral sends a write message with a destination address of the doorbell entry it wants to set. The doorbell will set the appropriate request bit in the doorbell registers according to the content in the data field of the message.
  • Table 28 shows the encoding for the input message format.
  • the doorbell register space ranges from DOORBELL_BASE_ADD to DOORBELL_BASE_ADD + $3F.
  • the doorbell register file contains 64 registers.
  • each possible task has its own doorbell register.
  • ADDR: DOORBELL_BASE to DOORBELL_BASE + $3F. (Note: current task register bits are reflected in the network processor status register.) (Note: all of the request and mask bits [not including the auto set bit] are automatically cleared when the task enters execution.)
  • Table 30 provides a description of the doorbell register according to an embodiment of the invention. TABLE 30:
  • urg: the urg (urgent) bit is used to allow the software to control the priority level of a task (as opposed to the urgent request status, which is generated automatically and cannot be controlled by software). If the bit is set, the task has high priority. This bit is written only by the Vobla.
  • count[2:0]: these bits represent the number of DMA requests that should be acknowledged.
  • Every DMA activation that requires acknowledgement at the end of the DMA transfer will cause the DMA agent to increment the counter value by 1. Every acknowledgement that is written to the dma bit in the doorbell register decrements the counter value by 1. If the counter value is equal to zero and the current task was yielded, the dma bit will be set (only if the counter was incremented at least once during the current task). If the dma mask (mdma) bit is set then a task switch will be triggered. Those bits can be written by the Vobla using the AGENT command.
  • This bit can be set from the write bus or by the Vobla, and can be cleared by Vobla. In case the bit is set and cleared at the same time, the set will overwrite the reset.
  • This bit can be set from the write bus or the Vobla, and can be cleared by the Vobla. In case the bit is set and cleared at the same time, the set will overwrite the reset.
  • This bit can be set by the Vobla and the DMA agent and can be cleared by the Vobla. In case the bit is set and cleared at the same time, the set will overwrite the reset.
  • mdma: the DMA request mask bit can be set from the write bus or by the Vobla, and can be cleared by the Vobla. In case the bit is set and cleared at the same time, the set will overwrite the reset.
  • This bit can be set by the Vobla and DMA agent, and can be cleared by the Vobla. In case the bit is set and cleared at the same time, the set will overwrite the reset.
  • This bit can be set by the Vobla and DMA agent and can be cleared by the Vobla. In case the bit is set and cleared at the same time, the set will overwrite the reset.
  • rsrvd: reserved bits are read as zero and cannot be written.
  • Task Global Mask Register (TGMR).
  • the task global mask register (TGMR) is a 64 bit register (one bit per task), which could be accessed by the network processor using the AGENT commands.
  • the TGMR is used to determine which tasks are taken into consideration when calculating the next task for execution. Every set bit will prevent the corresponding task from being selected as the next task for execution, even if that task has valid requests to serve (at least one corresponding mask and request bits are set).
  • the AGENT write command must contain the value 01000000 in the LSB of RB or the imm8 field. Based on the value of the SG option bit and the value of RA,RAP, the TGMR bits are set or cleared. Only bits which have the corresponding RA,RAP bits set are affected.
  • the TGMR could be read using AGENTR commands.
  • the 32 LSB of TGMR are located at address 0 of the doorbell, and the 32 MSB are located at address 1.
  • the user can read all 64 bits using the read double option of the AGENTR command. If only 32 bits are read, the other part of the data will be zeroed. A sketch of assembling the two halves is given below.
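  • A minimal sketch of reading the 64-bit TGMR as two 32-bit halves, assuming a hypothetical doorbell_read() helper standing in for the AGENTR access; only the address 0/address 1 split is taken from the text.

```c
#include <stdint.h>

/* Hypothetical stand-in for an AGENTR read of a doorbell address. */
extern uint32_t doorbell_read(uint32_t addr);

static uint64_t read_tgmr(void)
{
    uint64_t lsb = doorbell_read(0);   /* 32 LSB at doorbell address 0 */
    uint64_t msb = doorbell_read(1);   /* 32 MSB at doorbell address 1 */
    return (msb << 32) | lsb;
}
```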
  • one challenge is knowing at certain points in time whether all of the DMA requests issued by a specific task running on a processor are finished.
  • the challenge can be significant because DMA requests may be issued by different tasks running on a processor to different DMA controllers. Also, the DMA requests may finish out of the order in which they were issued.
  • the invention provides that a DMA agent (previously discussed) be associated with each of the processors in the system.
  • the role, in this instance, of the DMA agent is to control the DMA transfer requests made by the associated processor.
  • each time the associated processor issues a DMA request, the DMA agent sends an indication to a book-keeping unit.
  • the book-keeping unit is a request counter in the doorbell task register for each processor. The book-keeping unit receives this indication and increments the request counter. Because the preferred system performs multi-tasking, the request counter may include a separate entry (or separate request counter) for each task performed by the processor.
  • when a DMA transfer finishes, the DMA controller issues a decrement counter message to the book-keeping unit.
  • the relevant entry (or relevant request counter) is then decremented by one.
  • when the relevant entry (or relevant request counter) reaches zero, the system knows that all DMA transfers for that task have been completed.
  • each doorbell task register has its own request counter.
  • the request counter is incremented every time it gets an increment counter indication.
  • the increment counter indication is an option in the network processor DMA commands (this is the NA bit). Every time a DMA command is issued and the NA bit is cleared, the counter is incremented by 1, as sketched below.
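  • The per-task book-keeping described above, as a minimal software sketch: the counter is incremented when a DMA command is issued with the NA bit cleared, and decremented by each acknowledgement. The array representation and function names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_TASKS 64                 /* the doorbell supports 64 tasks */

static uint8_t dma_req_count[NUM_TASKS];

/* DMA command issued with the NA bit cleared: count one more request. */
void on_dma_command_issued(int task)
{
    dma_req_count[task]++;
}

/* Acknowledgement (decrement counter message) from a DMA controller. */
void on_dma_ack_message(int task)
{
    dma_req_count[task]--;
}

/* Zero means all DMA transfers issued for this task have completed. */
bool all_dma_done(int task)
{
    return dma_req_count[task] == 0;
}
```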
  • a communications processor implemented on at least one ring network.
  • the communications processor comprises a plurality of processors comprising ring members on the at least one ring network, a plurality of DMA controllers on the at least one ring network, the DMA controllers controlling servicing of DMA requests by the plurality of processors, and a plurality of DMA agents coupled to the plurality of processors.
  • each DMA agent being part of a ring member including a processor, wherein each DMA agent is adapted to issue an indicator to a request counter coupled to the DMA agent for each DMA request issued by the DMA agent to a DMA controller, thereby allowing each DMA agent to maintain a count of the outstanding DMA requests that have been issued on behalf of the processor associated with the DMA agent.
  • the request counter maintains a separate count for each task being executed by the processor, wherein the request counter is contained in a doorbell register supporting up to 64 tasks.
  • the target DMA controller can be adapted to issue a response that causes the request counter to decrement the count by one.
  • the DMA requests issued by the DMA agent to the DMA controller and the response issued by the target DMA controller can be transmitted as messages on the at least one ring network.
  • the processor can be enabled to switch to other tasks because all DMA requests for a given task have been satisfied. In this case a new DMA request for a different task can be deferred until the counter has returned to zero for the given task.
  • a method of controlling access to DMA controllers in a multi-tasking communications processor implemented on at least one ring network comprises issuing DMA requests to a target DMA controller, maintaining a count of DMA requests on a per-task basis, and issuing an acknowledgement that a DMA request has been satisfied by the target DMA controller.
  • the method further comprises reducing the count based on the acknowledgement and enabling a processor responsible for issuing the DMA requests to perform new activity when the count has returned to zero.
  • the DMA requests are issued as messages on the at least one ring network.
  • the acknowledgement can be issued as a message on the at least one ring network.
  • the auto set functionality is defined as follows.
  • the mask bits will be set to their default value after the desired request has occurred, without triggering a request to the network processor or a task switch.
  • the auto set bit can be written by the network processor using the agent interface, or by using the DMA command (this is one of the options of the DMA command).
  • the default mask is: the peripheral request mask bit (mpreq) is set and all the other mask bits are cleared (see Table 28).
  • the doorbell module supports this requirement in two ways.
  • the first way is software control using the urg bit in the doorbell task register (not the task SPR).
  • Each doorbell task has an urgent priority bit in its task register (urg). When this bit is set the task becomes urgent and all of its requests are considered as urgent requests. The urgent bit remains set as long as it is not cleared by the network processor.
  • a second way to control the request priority level is by sending messages to the doorbell with the urgent status indicating the request priority level. If the overwrite current status is also set then the request priority status bit in the doorbell is also updated. If the task urgent status bit is set the task requests are also considered urgent. This bit is mainly controlled by hardware.
  • a serial sends a message with the destination address of its task requests register in the doorbell register file.
  • the data part of the message specifies which bit to set.
  • the doorbell samples the task number of the highest priority pending request every time a yield is executed. If there are no pending tasks, the doorbell waits until the first time there is a pending task (except if the next task is the current task, in which case the network processor waits until the yield indication, because there will be no task switch), and then samples the next task ID.
  • after the next task ID is sampled by the network processor, the network processor performs the prefetch of the next task registers.
  • the doorbell logic clears the request bit and the mask register of the task which caused the task switch.
  • the doorbell calculates a new next task ID.
  • the handling of a DMA request is very similar to the handling of a serial request. The only difference is the process of setting the DMA request and the mask bits. At the time a DMA command is issued there is no information as to which request mask bit should be set. The doorbell logic will get this information from the DMA agent. This will be done using the DMA context table and a special option in the network processor DMA command (the NA bit in the DMA command). When the DMA request is registered with the DMA agent, the DMA agent will set the DMA mask bit in the doorbell register. The DMA agent will also tell the DMA controller to which request bit it should send the acknowledgement when the DMA transfer is finished, in order to decrement the request counter. When the counter reaches zero and if the appropriate mask bit is set, a valid task switch request will be issued to the doorbell logic.
  • Example C: DMA request with auto set:
  • the doorbell logic will set the mask to the default mask value after the current task is finished without asserting a request for task switching.
  • the following restriction is imposed: Only eight pending DMA requests (DMA requests that were issued by the DMA agent for which acknowledgement has not reached the doorbell) per task are handled by the doorbell.
  • the network processor compound includes a debug module.
  • the debug module supports various breakpoints and enables program code patching.
  • the debug module can be programmed through the ring interface.
  • the debug module contains two breakpoint channels and eight patch channels. Each one of the patch channels can be configured to be used as a patch channel or as an additional program address breakpoint channel. A single step program trace is supported.
  • the network processor core supports two kinds of program breaks: a breakpoint and a patch.
  • a breakpoint event causes the program flow to jump to a program location pointed to by a given vector and to enter the trap mode of execution by setting the trap mode bit located in the network processor task SPR. When in trap mode, no further breakpoint will be accepted.
  • the trap mode bit will be cleared by executing an RFT (Return From Trap) instruction or by writing a zero to the trap mode bit. When the trap bit is cleared, the network processor returns to the normal execution mode where further breakpoints are accepted.
  • a patch event causes the program flow to jump to a program location pointed to by a given vector. In a patch event the trap mode bit will not be set, thus remaining in the normal execution mode.
  • a patch event is useful for program patching of code written in ROM.
  • each of the patch channels can be configured to operate as a patch channel or as an additional program address breakpoint channel. If a patch channel is enabled and is configured as a patch, a patch event will occur whenever there is a fetch from a program location equal to the catch address (discussed below). If a patch channel is enabled and is configured as a break, a breakpoint event will occur whenever there is a fetch from a program location equal to the catch address. Each one of the patch channels will cause the network processor program to jump to a different vector location according to a vector table (see the discussion on the vector table and Table 37 below).
  • Each of the patch channels includes a patch register as shown in Table 31. TABLE 31: each of the eight patch registers (patch 0 through patch 7) has the same layout: bits 15:0 hold the catch address, bit 16 holds the B/P bit, bit 17 holds the EN (channel enable) bit, and bits 31:18 are reserved, per the field descriptions below.
  • Patch Register: this is a 32 bit read/write register (through the ring). This register is cleared by a hardware reset.
  • Bits 15:0, Catch Address: this is the 16 bit program address which causes a patch event or a breakpoint event.
  • Bit 16, Break or Patch (B/P): when the B/P bit is cleared, the patch channel operates as a patch channel. When the B/P is set, the patch channel operates as an additional program address breakpoint channel.
  • Bit 17, EN: this is the channel enable bit. When EN is set, the channel is enabled. When EN is cleared, the channel is disabled.
  • Bits 31:18, reserved: these bits are reserved. Reserved bits are read as zero. A sketch of evaluating a patch channel follows.
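  • An illustrative evaluation of one patch channel against a fetched program address, using the EN, B/P and catch address fields just described; the function and macro names are assumptions, and the vector jump itself is left to the caller.

```c
#include <stdbool.h>
#include <stdint.h>

#define PATCH_EN   (1u << 17)    /* bit 17: channel enable            */
#define PATCH_BP   (1u << 16)    /* bit 16: 0 = patch, 1 = breakpoint */
#define CATCH_MASK 0xFFFFu       /* bits 15:0: catch address          */

/* Returns true when the enabled channel matches the fetch address;
 * *is_breakpoint tells the caller which kind of event to raise. */
static bool patch_channel_hit(uint32_t patch_reg, uint16_t fetch_addr,
                              bool *is_breakpoint)
{
    if (!(patch_reg & PATCH_EN))
        return false;                       /* channel disabled */
    if ((patch_reg & CATCH_MASK) != fetch_addr)
        return false;                       /* no address match */
    *is_breakpoint = (patch_reg & PATCH_BP) != 0;
    return true;
}
```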
  • the debug unit includes two address breakpoint channels.
  • Address breakpoint channels can be configured to cause a breakpoint when there is a program or data memory access to specific locations.
  • Each of the address breakpoint channels is configured by its address register and by the address breakpoint control register.
  • Each of the two address breakpoint channels includes an Address Register. See Table 32 and Table 33, which show the channel 0 address register and the channel 1 address register, respectively. These are 32 bit read/write registers which are cleared by a hardware reset. Bits 15:0 hold the break address and bits 31:16 hold the break mask.
  • the break address is the program location that causes a breakpoint event. A breakpoint event occurs only if the address breakpoint is enabled and there is a match between the memory address accessed and the break address.
  • the break mask is used to specify which address bits to compare. For example, if all the mask bits are set, then the address comparison will be done on all address bits. A sketch of the masked comparison follows.
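  • A minimal sketch of the masked address comparison implied above: only address bits whose mask bit is set participate in the match. The function name is illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

static bool break_address_match(uint16_t access_addr, uint16_t break_addr,
                                uint16_t break_mask)
{
    /* XOR exposes differing bits; the mask selects which bits matter. */
    return ((access_addr ^ break_addr) & break_mask) == 0;
}
```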
  • the address breakpoint control register is a 32 bit read/write register. This register is used to configure the operation of each one of the address breakpoint channels.
  • Bits 1:0, MODE0: these two bits specify for channel 0 on which event to cause an address breakpoint, as specified in Table 35.
  • Table 35 illustrates the Address Mode (AMODE) corresponding to bits 1:0. TABLE 35: mode 00, breakpoint on program fetch; mode 01, breakpoint on data read; mode 10, breakpoint on data write; mode 11, breakpoint on data read or write.

Abstract

Systems and methods are provided for implementing: a rings architecture for communications and data handling systems; an enumeration process for automatically configuring the ring topology; automatic routing of messages through bridges; extending a ring topology to external devices; write-ahead functionality to promote efficiency; wait-till-reset operation resumption; in-vivo scan through rings topology; staggered clocking arrangement; and stray message detection and eradication. Other inventive elements conveyed include: an architectural overview of a packet processor; a programming model for a packet processor; an instruction pipeline for a packet processor; and use of a packet processor as a module on a rings-based architecture. Additional inventive elements conveyed include: an architectural overview of a communications processor; a data path protocol support model for a communications processor; an exemplary network processor employed as the core packet processor for the communications processor; an exemplary rings-based SOC switch fabric architecture; and a variety of quality of support features.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • Priority is claimed based on U.S. Provisional Application No. 60/301,843 entitled Communication System Using Rings Architecture, filed Jul. 2, 2001, U.S. Provisional Application No. 60/333,516 entitled Flexible Packet Processor For Use in Communications System, filed Nov. 28, 2001, and U.S. Provisional Application No. 60/347,235 entitled High Performance Communications Processor Supporting Multiple Communications Applications, filed Jan. 14, 2002.[0001]
  • BACKGROUND OF THE INVENTION
  • The present invention relates generally to data communication networks and, more particularly, to receiving and transmitting systems, including ATM and other types of communications platforms and including such components as communications processors, packet processors, network processors, DMAs, FPGAs and other devices and peripheral devices. [0002]
  • The number of business and private home users of computers continues to grow rapidly, with these users typically being connected to local area networks (LANs), wide area networks (WANs), intranets, extranets, direct subscriber line (DSL) networks, etc. With growing demand from such users for increasingly large amounts of data across such networks, bandwidth and data processing and handling speed are ever-present concerns facing service and equipment providers to this vast audience of users. Hubs, routers, modems and switches have been the predominant mechanisms for providing the interconnectivity for many users to access networks. Switches made up of expensive VLSI (very large scale integration) circuits are often used to build out networks. In addition to the drawbacks presented by the expense of implementing such circuits, clock synchronization is of continuing concern in switched networks. [0003]
  • With the proliferation of the digital age, a significant demand has arisen for versatile networking technology capable of efficiently transmitting multiple types of information at high speeds across different network environments. One increasingly popular platform is Asynchronous Transfer Mode, commonly referred to as ATM, which was developed by the International Telegraph and Telephone Consultative Committee (CCITT), and its successor organization, the Telecommunications Standardization Sector of the International Telecommunication Union (ITU-T). ATM is a technology capable of high speed transfer of voice, video, and other types of data across public and private networks. Although widely implemented, ATM is just one example of many platforms used in handling communications and data across networks. [0004]
  • ATM utilizes very large-scale integration (VLSI) technology to segment data into individual packets (also referred to as cells). For example, B-ISDN calls for packets having a fixed size of fifty-three bytes (i.e., octets). Using the B-ISDN 53-byte packet for purposes of illustration, each ATM cell includes a header portion comprising the first five bytes and a payload portion comprising the remaining forty-eight bytes. ATM cells are routed across the various networks by passing through ATM switches, which read addressing information included in the cell header and deliver the cell to the destination referenced therein. Unlike other types of networking protocols, ATM does not rely upon Time Division Multiplexing (TDM) to establish the identification of each cell. Rather, ATM cells are identified solely based upon information contained within the cell header. [0005]
  • Further, ATM differs from systems based upon conventional network architectures such as Ethernet or Token Ring in that rather than broadcasting data packets on a shared wire for all network members to receive, ATM cells dictate the successive recipient of the cell through information contained within the cell header. A specific routing path through the network, called a virtual path (VP) or virtual circuit (VC), is set up between two end nodes before any data is transmitted. Cells identified with a particular virtual circuit are delivered to only those nodes on that virtual circuit. In this manner, only the destination identified in the cell header receives the transmitted cell. [0006]
  • The cell header includes, among other information, addressing information that essentially describes the source of the cell or where the cell is coming from and its assigned destination. Although ATM evolved from TDM concepts, cells from multiple sources are statistically multiplexed into a single transmission facility. Cells are identified by the contents of their headers rather than by their time position in the multiplexed stream. A single ATM transmission facility may carry hundreds of thousands of ATM cells per second originating from a multiplicity of sources and traveling to a multiplicity of destinations. [0007]
  • The backbone of an ATM network generally consists of switching devices capable of handling the high-speed ATM cell streams. The switching components of these devices, commonly referred to as the switch fabric, perform the switching function required to implement a virtual circuit by receiving ATM cells from an input port, analyzing the information in the header of the incoming cells in real-time, and routing them to the appropriate destination port. Millions of cells per second often need to be switched by a single device. [0008]
  • This connection-oriented scheme permits an ATM network to guarantee the minimum amount of bandwidth required by each connection. Such guarantees are made when the connection is set-up. When a connection is requested, an analysis of existing connections is performed to determine if enough total bandwidth remains within the network to service the new connection at its requested capacity. If the necessary bandwidth is not available, the connection is refused. [0009]
  • The design of conventional ATM switching systems involves a compromise between which operations should be performed in hardware and which in software. [0010]
  • Generally, but not without exception, hardware gives optimal performance but reduces flexibility, while software allows greater flexibility and control over scheduling and buffering and makes it practical to have more sophisticated cell processing (e.g., OAM cell extraction, etc.). [0011]
  • The various protocols associated with platforms such as ATM, Ethernet and others are distinct and require special handling, which is essentially transparent to the user. One approach to packaging the hardware and software necessary to handle the protocol processing and general communications and data processing is system on a chip (SOC), which typically is made up of several modules, often dedicated to specific tasks, working together. A number of these modules typically are interfaces to the external environment, such as Ethernet or Utopia. Other modules can include processors or memories. To illustrate, FIG. 1 shows a typical SOC 10, such as a communications processor, having a variety of modules, such as CPUs 14, 22, RAM 16, Ethernet interface 18, i/o interface 20, and DMA 24, interconnected via a switch fabric 12. [0012]
  • The challenge currently faced by system designers is to integrate the modules into a cohesive system. The usual approach is to define busses, connect the modules on the busses, run signals between the modules via the busses, add bridges to connect busses, and so on. Other challenges to designing a SOC, among others, include: heterogeneous peripheral devices; several active modules (CPU, DMA); performance bottlenecks; performance organization of connectivity and busses; customer reality changes over the life of a project; design verification bottleneck, both intra-module and inter-module; and application verification. As demonstrated, these challenges result in a considerable number of mechanisms needing to be debugged during the design of a SOC. [0013]
  • Although the traditional bus oriented approach is extensively utilized, such an approach typically has the following problems: a number of interfaces to debug for both timing and logic; architectural decisions typically need to be made early in design; busses often create unpredictable timing and loadings; changing anything, like adding a peripheral or deleting a CPU, requires considerable revamping of the system; and so on. [0014]
  • A communications processor is one example of a communications system commonly designed using the traditional bus approach. A robust SOC communications processor may find a myriad of applications, such as for modems, bridges, routers, gateways, multi-service gateways and access equipment, and so forth. Such a communications processor may be PHY [Physical layer]-independent, in which case it will be coupled with an appropriate PHY product, or it may be PHY-integrated, in order to provide the connectivity to the PHY layer of the ATM (or OSI [Open Systems Interconnection]) layered protocol model. It can be readily appreciated that if such a SOC communications processor is to be robust in terms of the applications it can support, it must be able to process a wide variety of different protocols, such as ATM, FR (Frame Relay), IP (Internet Protocol), TDM, and so forth. Therefore, in such a SOC communications processor, a packet processor for processing the packets of information that may be of a variety of protocols may be implemented. [0015]
  • The processing of packets or cells performed by the packet processor may include the following tasks: packet header analysis (OSI Layer2, Layer3); frame validity—CRC (Cyclic Redundancy Code) check; forwarding decision—look up; header modification/conversion; segmentation and reassembly; data conversion (e.g., encryption); statistics gathering; and so on. In fact, as bandwidth requirements go up, and the demand for wire speed packet processing exists, packet processors have to be optimized to solve packet processing specific tasks. Proposed solutions for packet processing that exist today range from hard wired ASICs (Application Specific Integrated Circuits) (typically inflexible) to programmable packet processors (more flexible). [0016]
  • In the last few years, there has been a need for programmable packet processors for communication systems. The major advantages of programmable solutions can include: flexible adjustment to rapidly changing communication standards; implementation of increasingly complex communications functions difficult to implement in an ASIC; and consideration of differentiation and Time To Market (TTM) as crucial aspects in today's competitive environment. [0017]
  • From the system vendor's vantage, programmable packet processors generally have an advantage over ASIC solutions. A programmable packet processor can be viewed as a platform to be quickly deployed (in consideration of TTM) and then later one can add/modify system functionality by changing/adding code to the packet processor. The trade-off system vendors would have at the very high end solutions (core rate OC [Optical Carrier]-48, OC-192, for example) would be power and performance in programmable packet processors as compared to fixed ASIC solutions. However, several companies have announced programmable solutions for such core rates, indicating that a programmable solution is needed by vendors for such core rate products. [0018]
  • A programmable packet processor (also referred to as a network processor) would preferably provide a solution in the access space where the expected aggregate bandwidth is in the range of OC-3 to OC-12. Of course, the access market requirements are different from the network edge, and the core. At the access points, systems would need to deal with lots of subscribers (ports), low speed links (T1, xDSL [x Digital Subscriber Line]) and with different access methods (ATM, IP, FR, TDM, etc.), whereas the edge and the core of the network generally would use one framing solution (MPLS, IP or ATM). Access systems, in this case, typically would be characterized by: a large number of subscribers (ports, flows), high density; requirements for Inter Working Functions (IWFs), such as voice (TDM) to packets (ATM or IP) (e.g., voice gateways), MAN (Metropolitan Area Network) to WAN (Wide Area Network), Ethernet to ATM or PoS [Packet Over SONET]; data grooming, with asymmetric behavior from a large pipe to many small pipes; and the like. Accordingly, access systems need lots of packet manipulation, especially on media conversions and IWF. Therefore, a programmable (and therefore flexible) packet processor often is a preferred solution. [0019]
  • Such a programmable packet processor could be developed using a standard general purpose microprocessor core. Several processor cores are commercially available, including those that are licensed by Advanced RISC Machines, Ltd., ARC International, MIPS Computer Systems, Inc., and Lexra, Inc. However, the above cores are general purpose cores that would need to be optimized for packet processing. Such optimization typically would include: additional instructions; DMA support; task switch with low overhead; specific bit manipulation instructions; etc. The disadvantages of using such general purpose cores in packet processing applications include: costs incurred from license fees and royalties; limited customization, since a special license is usually required to modify the core; dependency created on the core provider's roadmap and technical support; over-featured cores (FPUs (Floating Point Units), MMUs (Memory Management Units)); etc. [0020]
  • Therefore, there is a need for a highly robust programmable packet processor that can support a variety of high end applications, that is capable of handling a variety of protocols, and that provides desired performance in terms of speed and power. [0021]
  • What is also needed is a high performance communications processor implementing such a programmable packet processor as its core network processor(s), and implementing other useful modules, such as memories, DMAs, and interfaces to outside PHY platforms, so that the high performance communications processor can be beneficially implemented as a SOC solution for a myriad of high end communication applications. [0022]
  • SUMMARY OF THE INVENTION
  • The present invention overcomes the problems noted above, and realizes additional advantages, by providing a number of advantages over prior systems. [0023]
  • The following description is intended to convey a thorough understanding of the inventive aspects by providing a number of specific embodiments and details including, among other things: rings architecture for communications and data handling systems, Enumeration process for automatically configuring the ring topology, automatic routing of messages through bridges, automatic routing of exception messages, extending a ring topology to external devices and providing a flexible and re-configurable system, read return address, write-ahead functionality to promote efficiency, wait-till-reset operation resumption, in-vivo scan through rings topology, staggered clocking arrangement, and stray message detection and eradication. [0024]
  • Other inventive elements conveyed through the embodiments and details discussed below include, among other things: an architectural overview of a flexible packet processor; a programming model for a flexible packet processor; an instruction pipeline for a flexible packet processor; an internal memory to be used with the flexible packet processor; the use of a flexible packet processor as a module on a rings-based architecture; the core of the flexible packet processor and associated compounds (agents and non-agents) on the packet processor. [0025]
  • Additional inventive elements conveyed through the embodiments and details discussed below include, among other things: an architectural overview of a communications processor; a programming model for a communications processor; a data path protocol support model for a communications processor; an exemplary network processor employed as the core packet processor for the communications processor; an exemplary rings-based SOC interconnect fabric architecture employed in the communications processor; a variety of quality of support (QOS) features that are implemented in the communications processor; a series of beneficial applications of the communications processor; the various approaches for the software that can be implemented to power the communications processor; specific exemplary strategies for the software in the high performance communications processor; and a performance estimate for RFC 1483 bridging. [0026]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention can be understood more completely by reading the following Detailed Description of the Invention, in conjunction with the accompanying drawings in which: [0027]
  • FIG. 1 is a block diagram illustrating a typical system on a chip. [0028]
  • FIG. 2 is a schematic diagram illustrating a ring architecture in accordance with at least one embodiment of the present invention. [0029]
  • FIG. 3 is a flow diagram illustrating an exemplary enumeration process in accordance with at least one embodiment of the present invention. [0030]
  • FIGS. 4-8 are schematic diagrams illustrating timing issues in a clocked system in accordance with at least one embodiment of the present invention. [0031]
  • FIG. 9 is a schematic diagram illustrating a mechanism for providing a clock signal in an opposing direction to data flow in a rings network in accordance with at least one embodiment of the present invention. [0032]
  • FIG. 10 is a schematic diagram illustrating a mechanism for providing a clock signal in a same direction as a data flow in a rings network in accordance with at least one embodiment of the present invention. [0033]
  • FIG. 11 is schematic diagram illustrating an exemplary implementation of a timing interface of a rings interface in a rings network in accordance with at least one embodiment of the present invention. [0034]
  • FIG. 12 is a schematic diagram illustrating latency issues in a ring network in accordance with at least one embodiment of the present invention. [0035]
  • FIGS. 13 and 14 are schematic diagrams illustrating exemplary implementations of bridges in ring networks in accordance with at least one embodiment of the present invention. [0036]
  • FIG. 15 is a schematic diagram illustrating an exemplary enumeration process in a ring network having a bridge in accordance with at least one embodiment of the present invention. [0037]
  • FIG. 16 is a schematic diagram illustrating an exemplary priority scheme for messages received simultaneously at a same interface of a bridge in a ring network in accordance with at least one embodiment of the present invention. [0038]
  • FIG. 17 is a schematic diagram illustrating an exemplary implementation of a bridge in accordance with at least one embodiment of the present invention. [0039]
  • FIGS. 18 and 19 are schematic diagrams illustrating an exemplary process for the elimination of stray messages in a ring network in accordance with at least one embodiment of the present invention. [0040]
  • FIGS. 20-22 are schematic diagrams illustrating exemplary ring networks having multiple bridges in accordance with at least one embodiment of the present invention. [0041]
  • FIGS. 23-25 are schematic diagrams illustrating exemplary implementations of a scan interface in a ring network in accordance with at least one embodiment of the present invention. [0042]
  • FIG. 26 is a schematic diagram illustrating exemplary interface signals between two members of a ring network in accordance with at least one embodiment of the present invention. [0043]
  • FIGS. 27 and 28 are schematic diagrams illustrating an exemplary implementation of a ring interface in accordance with at least one embodiment of the present invention. [0044]
  • FIG. 29 is a flow diagram illustrating an exemplary process for determining an intended recipient of a message in a ring network in accordance with at least one embodiment of the present invention. [0045]
  • FIGS. 30-33 are schematic diagrams illustrating exemplary signaling within a ring interface in a ring network in accordance with at least one embodiment of the present invention. [0046]
  • FIG. 34 is a schematic diagram illustrating an exemplary use of bridges in a ring network to minimize latency in accordance with at least one embodiment of the present invention. [0047]
  • FIG. 35 is a schematic diagram illustrating an external ring interface in accordance with at least one embodiment of the present invention. [0048]
  • FIG. 36 is a block diagram illustrating an exemplary system on a chip utilizing a ring architecture in accordance with at least one embodiment of the present invention. [0049]
  • FIG. 37 is a schematic diagram illustrating the exemplary network processor of the system on a chip of FIG. 36 in accordance with at least one embodiment of the present invention. [0050]
  • FIG. 38 is a flow diagram illustrating a low overhead task switch in a network processor in accordance with at least one embodiment of the present invention. [0051]
  • FIG. 39 is a flow diagram illustrating exemplary data paths in a network processor in accordance with at least one embodiment of the present invention. [0052]
  • FIG. 40 is a block diagram illustrating exemplary state resources of a network processor in accordance with at least one embodiment of the present invention. [0053]
  • FIG. 41 is a block diagram illustrating an exemplary implementation of register r1 of a general purpose register of a network processor in accordance with at least one embodiment of the present invention. [0054]
  • FIG. 42 is a block diagram illustrating various registers of a general purpose register of a network processor in accordance with at least one embodiment of the present invention. [0055]
  • FIG. 43 is a block diagram illustrating an exemplary software model for a network processor in accordance with at least one embodiment of the present invention. [0056]
  • FIG. 44 is a flow diagram illustrating an exemplary network processor pipeline in accordance with at least one embodiment of the present invention. [0057]
  • FIG. 45 is a flow diagram illustrating an exemplary network processor pipeline timing in accordance with at least one embodiment of the present invention. [0058]
  • FIG. 46 is a schematic diagram illustrating an exemplary internal memory for implementation in a network processor in accordance with at least one embodiment of the present invention. [0059]
  • FIG. 47 is a schematic diagram of an exemplary network processor in accordance with at least one embodiment of the present invention. [0060]
  • FIG. 48 is a schematic diagram illustrating an exemplary multireader agent in accordance with at least one embodiment of the present invention. [0061]
  • FIG. 49 is a flow diagram illustrating an exemplary data alignment and packing process in accordance with at least one embodiment of the present invention. [0062]
  • FIG. 50 is a flow diagram illustrating a mapping of data from a multireader agent bus to a multireader operation in accordance with at least one embodiment of the present invention. [0063]
  • FIG. 51 is a schematic diagram illustrating an exemplary message sender of a network processor in accordance with at least one embodiment of the present invention. [0064]
  • FIG. 52 is a flow diagram illustrating an exemplary mapping of an agent write command to a message in accordance with at least one embodiment of the present invention. [0065]
  • FIG. 53 is a schematic diagram illustrating an exemplary direct memory access agent module in accordance with at least one embodiment of the present invention. [0066]
  • FIG. 54 is a flow diagram illustrating an exemplary mapping of data on an agent bus to a direct memory access command. [0067]
  • FIG. 55 is a schematic diagram illustrating an exemplary cyclical redundancy code agent in accordance with at least one embodiment of the present invention. [0068]
  • FIG. 56 is a flow diagram illustrating a mapping of data on an agent bus to cyclical redundancy code data in accordance with at least one embodiment of the present invention. [0069]
  • FIG. 57 is a schematic diagram illustrating an exemplary timer agent in accordance with at least one embodiment of the present invention. [0070]
  • FIG. 58 is a flow diagram illustrating a mapping of data on an agent bus to timer data in accordance with at least one embodiment of the present invention. [0071]
  • FIG. 59 is a schematic diagram of an exemplary doorbell agent in accordance with at least one embodiment of the present invention. [0072]
  • FIG. 60 is a flow diagram illustrating an exemplary encoding of task data for use by a doorbell agent in accordance with at least one embodiment of the present invention. [0073]
  • FIG. 61 is a block diagram illustrating an exemplary communications processor implementing a ring architecture in accordance with at least one embodiment of the present invention. [0074]
  • FIG. 62 is a schematic diagram illustrating the exemplary communications processor of FIG. 61 in accordance with at least one embodiment of the present invention. [0075]
  • FIGS. 63-69 are schematic diagrams illustrating various implementations of an external ring interface in a communications processor in accordance with at least one embodiment of the present invention. [0076]
  • FIG. 70 is a block diagram illustrating an exemplary programming module for a communications processor in accordance with at least one embodiment of the present invention. [0077]
  • FIG. 71 is a block diagram illustrating an exemplary data path and protocol path of a communications processor in accordance with at least one embodiment of the present invention. [0078]
  • FIG. 72 is a schematic diagram illustrating an exemplary network processor utilized in a communications processor in accordance with at least one embodiment of the present invention. [0079]
  • FIG. 73 is a flow diagram illustrating an exemplary processing pipeline of a network processor utilized in a communications processor in accordance with at least one embodiment of the present invention. [0080]
  • FIGS. 74 and 75 are flow diagrams illustrating exemplary pacing processes utilized in a communications processor in accordance with at least one embodiment of the present invention. [0081]
• FIGS. 76-80 are schematic diagrams illustrating various exemplary implementations of a communications processor in communications systems in accordance with at least one embodiment of the present invention. [0082]
  • FIG. 81 is a flow diagram illustrating an exemplary flow manager functionality of a communications processor in accordance with at least one embodiment of the present invention. [0083]
  • FIG. 82 is a block diagram illustrating an exemplary data plane development for use in software development for a communications processor in accordance with at least one embodiment of the present invention. [0084]
  • FIG. 83 is a block diagram illustrating an exemplary software development model in accordance with at least one embodiment of the present invention. [0085]
  • FIG. 84 is a block diagram illustrating an exemplary software design approach in accordance with at least one embodiment of the present invention. [0086]
  • FIG. 85 is a block diagram illustrating an exemplary partitioning of software and interfaces in a communications processor in accordance with at least one embodiment of the present invention. [0087]
  • FIG. 86 is a block diagram illustrating an exemplary partitioning of software in a network processor in accordance with at least one embodiment of the present invention. [0088]
  • FIG. 87 is a flow diagram illustrating a typical process for executing program instructions using a known multiple-branch technique. [0089]
  • FIG. 88 is a schematic diagram illustrating an exemplary processing environment in accordance with at least one embodiment of the present invention. [0090]
  • FIG. 89 is a schematic diagram illustrating an exemplary architecture of a processing unit of the processing environment of FIG. 88 in accordance with at least one embodiment of the present invention. [0091]
• FIG. 90 is a flow diagram illustrating an exemplary process for executing program instructions based on the value of an accumulative flag in accordance with at least one embodiment of the present invention. [0092]
  • DETAILED DESCRIPTION OF THE INVENTION
• The following description is intended to convey a thorough understanding of the inventive aspects by providing a number of specific embodiments and details including, among other things: a rings architecture for communications and data handling systems; an enumeration process for automatically configuring the ring topology; automatic routing of messages through bridges; automatic routing of exception messages; extending a ring topology to external devices to provide a flexible and re-configurable system; a read return address; write-ahead functionality to promote efficiency; wait-till-reset operation resumption; in-vivo scan through the rings topology; a staggered clocking arrangement; and stray message detection and eradication. [0093]
  • Other inventive elements conveyed through the embodiments and details discussed below include, among other things: an architectural overview of a flexible packet processor; a programming model for a flexible packet processor; an instruction pipeline for a flexible packet processor; an internal memory to be used with the flexible packet processor; the use of a flexible packet processor as a module on a rings-based architecture; the core of the flexible packet processor and associated compounds (agents and non-agents) on the packet processor. [0094]
• Additional inventive elements conveyed through the embodiments and details discussed below include, among other things: an architectural overview of a communications processor; a programming model for a communications processor; a data path protocol support model for a communications processor; an exemplary network processor employed as the core packet processor for the communications processor; an exemplary rings-based SOC interconnect fabric architecture employed in the communications processor; a variety of quality of service (QOS) features that are implemented in the communications processor; a series of beneficial applications of the communications processor; the various approaches for the software that can be implemented to power the communications processor; specific exemplary strategies for the software in the high performance communications processor; and a performance estimate for RFC 1483 bridging. [0095]
  • It is understood, however, that the invention is not limited to the specific embodiments and details, which are exemplary only. It is further understood that one possessing ordinary skill in the art, in light of known systems and methods, would appreciate the use of the invention for its intended purposes and benefits in any number of alternative embodiments, depending upon specific design and other needs. [0096]
• A number of acronyms are used herein to describe various embodiments of the invention. A table of acronyms and definitions therefor is provided as Table 1 below: [0097]
    TABLE 1
    Acronym Definition
    AAL ATM Adaptation Layer
    ABI Application Binary Interface
    ABR Available Bit Rate
    ADPCM Adaptive Differential Pulse Code Modulation
    ADSL Asymmetric Digital Subscriber Line
    ALU Arithmetic Logic Unit
    API Application Programming Interface
    ARC ARC Cores
    ARM Advanced RISC Machines
    ARP Address Resolution Protocol
    ASIC Application Specific Integrated Circuit
    ATIC ATM Interconnect
    ATM Asynchronous Transfer Mode
    ATMOS ATM Operating System
    BGP Border Gateway Protocol (see FIG. 8)
    B-ISDN Broadband Integrated Services Digital Network
    BLES Broadband Local Exchange Server
    BSC Binary Synchronous Communications protocol (IBM)
    BSP Board Support Package
    BTS Base Transceiver Station
    CAM Content Addressable Memory
    CBR Constant Bit Rate
    CCITT Consultative Committee on International Telegraph and Telephone
    CES Circuit Emulation Services
    CLEC Competitive Local Exchange Carrier
    CMTS Cable Modem Termination System
    CPCS Common Part Convergence Sublayer (ATM)
    CPE Customer Premises Equipment
    CPP Control Protocol Processor
    CPU Central Processor Unit
    CRC Cyclic Redundancy Code
    CR-LDP Constraint-based Routing Label Distribution Protocol
    CS Convergence Sublayer
    CTL Control
    DDR Dual Data Rate
    DLC Digital Loop Carrier
    DMA Direct Memory Access
    DRR Deficit Round Robin
    DS Differentiated Services
    DSL Digital Subscriber Line
    DSLAM Digital Subscriber Line Access Multiplexer
    DSP Digital Signal Processor
    EA Effective Address
    E-IAD Enterprise Integrated Access Device
    ENET Ethernet
    EPB External Peripheral Bus
    EPD Early Packet Discard
    EPROM Erasable Programmable Read Only Memory
    FIFO First-In-First-Out
    FPGA Field Programmable Gate Array
    FPU Floating Point Unit
    FR Frame Relay
    FRF Frame Relay Forum
    FWD Forwarding
    GFR Guaranteed Frame Rate
    GPIO General Purpose Input Output
    HDLC High-level data link control
    HDSL High-bit-rate DSL
    H-MVIP High-density Multi-Vendor Integration Protocol
    HPCP High Performance Communications Processor
    HW Hardware
    IAD Integrated Access Device
    ID Identification
    I/f Interface
    IMA Inverse Multiplexing over ATM
    IP Internet Protocol
    IPoA IP over ATM
    IS Integrated Services
    ISOS Integrated Software on Silicon
    ISP Internet Service Provider
    ITU-T International Telecommunication Union-Telecommunication Standardization Sector
    IWF Inter Working Function
    LAN Local Area Networks
    LD Load
    LP Low Priority
    LPM Longest Prefix Match
    LSR Label Switched Router
    MAC Media Access Control
    MAN Metropolitan Area Network
    MDU Multi Dwelling Unit
    MEGACO Media Gateway Control protocol, ITU-T H.248 (voice protocol)
    MFSU Multi Function Serial Unit
    MGCP Media Gateway Control Protocol, an IETF standard (voice protocol)
    MIB Management Information Base
    MII Media Independent Interface
    MIPS MIPS Computer Systems, Inc.
    MMU Memory Management Unit
    MPLS Multi Protocol Label Switching
    MSC Mobile Switching Center
    MTU Multi Tenant Unit
    MVIP Communication backplane interface
    NI Network Interface
    NP Network Processor
    OAM Operation and Maintenance
    OC Optical Carrier
    OEM Original Equipment Manufacturer
    OS Operating System
    OSE A real-time operating system from OSE Systems
    OSI Open Systems Interconnection
    OSPF Open Shortest Path First
    PBGA Plastic Ball Grid Array
    PBX Private Branch Exchange
    PCM Pulse Code Modulation
    PDU Protocol Data Unit
    PHY Physical layer
    POS Packet Over SONET
    PP Protocol Processor
    PPD Partial Packet Discard
    PPPoA Point to Point Protocol Over ATM
    PSOS Portable Scalable Operating System
    PSTN Public Switched Telephone Network
    QOS Quality of Service
    RAM Random Access Memory
    RED Random Early Detection
    RFC Request for Comment
    RIP Routing Information Protocol
    RISC Reduced Instruction Set Computer
    RMII Reduced MII
    RSVP Resource Reservation Protocol
    RTOS Real-Time Operating System
    RTP Real-time Transport Protocol
    RX Receive
    SAR Segmentation and Reassembly
    SDRAM Synchronous Dynamic RAM
    SDSL Symmetric DSL
    SHDSL Single-Line High-Bit Rate DSL
    SIP SMDS Interface Protocol
    SMII Serial Media Independent Interface
    SMTP Simple Mail Transfer Protocol
    SNMP Simple Network Management Protocol
    SOC System-On-A-Chip
    SP Strict Priority
    SPI Serial Peripheral Interface
    SPR Special Purpose Register
    SRAM Static RAM
    SSI Synchronous Serial Interface
    SSSAR Service Specific SAR
    ST-BUS a TDM protocol
    SW Software
    TCP Transmission Control Protocol
    TDM Time Division Multiplexing
    TM Traffic Management
    TOS Type of Service
    TTM Time-to-Market
    TX Transmit
    UART Universal Asynchronous Receiver-Transmitter
    UBR Unspecified Bit Rate
    UDP User Datagram Protocol
    UPnP Universal Plug and Play
    USB Universal Serial Bus
    VBR Variable Bit Rate
    rt-VBR Real Time VBR
    VC Virtual Circuit
    VCI Virtual Channel Identifier
    VCL Virtual Channel Link
    VoATM Voice over ATM
    VoIP Voice over IP
    VP Virtual Path
    VPI Virtual Path Identifier
    VLSI Very Large Scale Integration
    WAN Wide Area Networks
    WBS Wireless Base Station
    WFQ Weighted Fair Queuing
• One inventive aspect of the present invention is to provide a rings architecture to build a system on a chip (SOC) and allow for ease of configuration, expandability and external interfacing. This rings architecture, in one embodiment, involves: (1) the use of transactions instead of signals; and (2) the use of a single switch fabric to carry the transactions instead of the many connections typically implemented in bus-based systems. A transaction, in at least one embodiment, includes an instruction generated by a certain module for directing, in a structured way, another module to perform some operation. Transactions are mapped onto a single physical connection. A transaction may direct a module to, for example, set a mode flipflop to one, clear register X, or add value Y to counter Z. Transactions also can be used to provide time sequencing. Furthermore, two transactions may be prevented from occurring at the same time, limiting the appearance of simultaneity errors (i.e., bugs). In one embodiment of the present invention, a rings-based system on a chip (SOC) is provided. The rings-based SOC comprises a plurality of ring members on a ring that communicate using point-to-point connectivity, a plurality of ring interfaces for interfacing the ring members with the ring, and a message traversing the ring, wherein the message travels one ring member per clock cycle. In this embodiment, the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member. Furthermore, subsequent ring members can be adapted to supply backpressure signals to prior ring members. [0098]
  • In one embodiment, the message is applicable to the given ring member based on at least one of an identifier identifying the given ring member and an identifier indicating that the message applies to multiple ring members. The identifier identifying the given ring member can comprise an address for the given ring member. Furthermore, the identifier indicating that the message applies to multiple ring members may, in one implementation, comprise message data designating the message as a supervisory message. [0099]
• The message may comprise a type field, an address field, and a data field. The message may also comprise an enumeration message, wherein the enumeration message is processed by the ring members in order to assign the address space consumed by each ring member. Additionally, a subsequent supervisory message can cause the results of the enumeration message to be returned, thereby allowing a central member comprising a CPU to infer the topology of the system. Alternatively, the message can comprise a reset message that is processed by the plurality of ring members in order to reset the system. Conversely, the message may comprise an activate message that is processed by the plurality of ring members in order to activate the system. [0100]
• The message also may include a request from a CPU ring member that causes the other ring members to report out their address information. The message may also comprise a write message that is processed by one of the plurality of ring members to write data thereto, a read message that is processed by one of the plurality of ring members to read data therefrom, and/or a stray message indicator so that the system can identify stray messages. [0101]
• In one embodiment, the ring members of the rings-based SOC comprise a CPU and a plurality of peripherals, wherein the peripherals are adapted to write ahead changes in peripheral status, thereby reducing the quantity of read messages that are issued by the CPU. The ring of the SOC also may include an external ring interface allowing the ring to communicate with modules that are not part of the ring. [0102]
• In one embodiment, the rings-based SOC further comprises a land bridge that allows the message to proceed from one side of the ring to the other side of the ring without traversing some of the intermediate ring members. The logic of the land bridge may be configured based on the results of an enumeration message. [0103]
  • Additionally, the plurality of ring members and plurality of ring interfaces of the rings-based SOC may comprise a first ring with the SOC further comprising a plurality of second ring members and a plurality of second ring interfaces defining a second ring, both the first ring and the second ring implemented as a system on a chip, and wherein the first ring and the second ring are coupled using a sea bridge. In one implementation, the logic of the sea bridge is configured based on the results of an enumeration message. [0104]
• Referring now to FIG. 2, an exemplary ring network 30 is illustrated in accordance with at least one embodiment of the present invention. As illustrated, the exemplary ring network 30 includes two rings 32, 34 connected via a bridge 36, each ring including a plurality of modules 38-48. The modules can include any of a variety of modules implemented in SOCs for processing and/or handling data, such as a DMA, an external interface, a timer, a CPU, an I/O, a peripheral, and the like. In this case, the rings 32, 34 and the bridge 36 represent an implementation of the switch fabric 12 of FIG. 1 in accordance with at least one embodiment of the present invention. To summarize the operation of a ring of the ring network 30, consider the following exemplary operation of ring 32. In this example, messages are passed between modules counter-clockwise. When a module receives a message, the module determines whether it is the intended recipient of the message. If the module is the recipient, the module removes the message from the ring and processes it accordingly. Otherwise, the module passes the message on to the next module (e.g., from module 44 to module 46) during the next clock cycle. If a module has a message to send, the module waits until there is a free slot and passes the message to the module's left-hand neighbor. In this case, each message is one clock long and the messages travel around the ring 32, one hop per clock. [0105]
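• By way of illustration only, the following Python sketch models the per-clock forwarding discipline just described: a member consumes a message addressed to it and otherwise passes the message one hop downstream when a free slot exists. The Message and RingMember names are hypothetical and not part of any embodiment described herein.

    # Minimal sketch of the per-clock forwarding rule described above.
    from dataclasses import dataclass

    @dataclass
    class Message:
        dest: int      # destination member address
        payload: int

    class RingMember:
        def __init__(self, address, downstream=None):
            self.address = address
            self.downstream = downstream  # left-hand neighbor on the ring
            self.slot = None              # the one-message-per-clock ring slot

        def clock(self):
            """One clock tick: consume the message if it is ours,
            otherwise pass it one hop downstream."""
            msg, self.slot = self.slot, None
            if msg is None:
                return
            if msg.dest == self.address:
                self.process(msg)               # remove from ring and handle
            elif self.downstream.slot is None:  # free slot, no backpressure
                self.downstream.slot = msg      # one hop per clock
            else:
                self.slot = msg                 # hold until downstream frees up

        def process(self, msg):
            print(f"member {self.address} consumed payload {msg.payload}")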
  • Members of the Ring [0106]
  • Anchor—the host interface. Through this interface, the host resets, configures and controls the setup functions of the ring. The Anchor also can be adapted to determine if it is the primary Anchor. [0107]
• Bridge (e.g., bridge 36)—a combination of two devices: an upstream link and a downstream link. During the setup stage, the bridge flips the network ID and acts as an Anchor for the upstream ring. The host, after the learning stage, programs the bridge as to what switching to perform. The bridge snoops on the ring and, if a hit is detected, consumes the message and carries it to the other side. If the message is not a hit, it is sent downstream as usual. The bridge typically has two address/mask registers per link direction. [0108]
  • Module—a collective name for components of a ring, such as a CPU, a bridge, a TDM interface, a Utopia interface, an xDSL PHY, a timer, a UART, a FCC, a MCC, a scratch RAM, a CRC calculator, and the like. [0109]
  • External Ring (ExtRing)—used to connect several chips to create a larger topology. An external ring is particularly useful in prototyping future peripherals by FPGA-extending existing ring-based silicon. [0110]
• Packet Processor (also referred to herein as Vobla)—a network-optimized CPU for managing communication logical links. The packet processor, in at least one embodiment, is used to control and terminate streams that are beyond the internal functionality of the device. The network side is handled through the rings; the other side includes, for example, an external RAM interface. [0111]
• The rings architecture has many advantages over traditional bus designs and is an effective way to connect many different modules, whether on the same chip or on several chips. Instead of using signals and busses, communication between modules (data and commands) is mapped onto transactions, which in turn are transmitted over the ring infrastructure. The ring topology allows predictable delays and easy scalability. Each ring member adds a delay of, for example, one clock. The ring clock frequency can be made as fast as needed because of the geographical proximity of its members. Rings can be further connected through bridges, such as bridge 36. These bridges are similar to network switching devices in the sense that they are programmed to direct selected portions of the traffic to the other side (e.g., from ring 32 to ring 34). Inside one exemplary embodiment chip, the members of the ring are connected to one another using a standard [e.g., 8-bit type/20-bit address/32-64-bit data] connection. When going outside the chip, a smaller/slower interface may be defined. [0112]
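• As a sketch only, the standard connection described above can be viewed as a fixed field layout. The following Python helpers pack and unpack a transaction using the widths from the text (8-bit type, 20-bit address, up to 64-bit data); the function names and bit positions are illustrative assumptions, not a definitive wire format.

    # Illustrative packing of a transaction onto the standard connection
    # (8-bit type / 20-bit address / up to 64-bit data). Field placement
    # is an assumption made for this sketch.
    def pack_message(msg_type: int, address: int, data: int) -> int:
        assert 0 <= msg_type < (1 << 8)
        assert 0 <= address < (1 << 20)
        assert 0 <= data < (1 << 64)
        return (msg_type << 84) | (address << 64) | data

    def unpack_message(word: int):
        data = word & ((1 << 64) - 1)
        address = (word >> 64) & ((1 << 20) - 1)
        msg_type = (word >> 84) & 0xFF
        return msg_type, address, data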
• In the broadest sense, the ring carries two kinds of messages: setup/config messages and work read and write messages. The setup messages can be used to learn the network topology, assign addresses and program the members (i.e., the elements of a ring). Setup messages are initiated by a host through a special anchor member. Regular members, in one embodiment, reply to setup messages by providing the host their functionality ID, ring ID and their starting address. The host software can infer from that data the exact topology of the network and the functionality of its members. Work messages, in one embodiment, are initiated by members based on their programming and functionality. On each clock, a ring member examines its in-port. If the in-port has a valid message, the member determines whether the message is addressed to the member. If so, the member removes the message from the ring and processes the message accordingly. If not (i.e., the message is intended for another member), on the next clock the member transmits it downstream on the out-port when the out-port becomes available. [0113]
  • The following are examples of message types that may be used: [0114]
• Idle—the connection is idle, i.e., no message;
• Reset—reset and propagate to reset the entire network; [0115]
  • Enumerate—propagate and obey the Enumeration algorithm (described below); [0116]
  • WhoAmI request—started by the anchor member and flooded unchanged throughout the ring network; [0117]
  • WhoAmI response—each member responds to a WhoAmI request by sending this message—the data field contains values of self-address and several other significant bits that enable the Anchor to learn the topology of the network; [0118]
• Activate—includes the address of a specific ring member. When this message hits the member, a subset of the data bits is written into the RIF (ring interface) unit control register—the first bit is the activate bit (hence the name). After reset this bit is inactive, which prevents any work activity of the peripheral from taking place. Setting this bit to one enables normal work. Other bits include: scan_mode_enable, stop_clock, in_vivo_scan_test, ring_loopback_enable, (soft reset), as well as other user-defined bits (discussed below). These bits may be reset to zero; [0119]
  • Work write—sent during normal operation. These messages activate various peripherals, fifos (first-in-first-out), write into memory, etc.; [0120]
• Work read—work read messages are used to read from fifos, move blocks of SRAM (static RAM) data and communicate with DMAs, to name a few examples. [0121]
• Exception—started by regular ring members to propagate to the anchor (the assigned member that initiates the Enumeration process) and/or a PP (packet processor) to signify some condition needing attention; [0122]
• Freeze—propagated quickly through the network to disable all activity on the rings. Typically used for debug purposes where a fast freeze of the current state is needed. [0123]
  • Message Type Encoding [0124]
  • Table 2 sets forth a listing of message types with a proposed encoding structure and description of the encoding. [0125]
    TABLE 2
    message type          encoding  description
    idle                  00000XXX
    supervisor requests   1111nnnn
                          11111000  0xF8 Enumerate
                          11111001  0xF9 WhoAmI request
                          11111010  0xFA Activate
                          11111011
                          11111100  0xFC Freeze
                          11111101
                          11111110
    supervisor responses  11110nnn
                          11110000  0xF0 WhoAmI response
                          11110001  0xF1 error
    work_read             01SWMLFI  0x40 base value.
                                    S = enable snoop for the response of this message;
                                    W = width of the data message (64/32) for return;
                                    M = TBD;
                                    L = enable address modification to indicate last data of frame;
                                    F = enable address modification to indicate first data of frame;
                                    I = increment destination.
    work_write            10SMLZZZ  0x80 base value.
                                    S = snoop this message;
                                    M = TBD;
                                    L = last data transfer in the message;
                                    ZZZ = number of valid bytes in the message
                                    (ZZZ = 000 means 8 valid bytes).
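• For concreteness, the Table 2 encodings can be exercised in a few lines of Python. This is a sketch only; the constants mirror the table, while the helper names are illustrative.

    # Message type constants from Table 2.
    IDLE            = 0x00  # 00000XXX
    ENUMERATE       = 0xF8  # 11111000
    WHOAMI_REQUEST  = 0xF9  # 11111001
    ACTIVATE        = 0xFA  # 11111010
    FREEZE          = 0xFC  # 11111100
    WHOAMI_RESPONSE = 0xF0  # 11110000
    ERROR           = 0xF1  # 11110001

    def work_write_type(snoop=False, last=False, valid_bytes=8):
        """Build a work_write type byte: 10SMLZZZ
        (ZZZ = 000 means 8 valid bytes)."""
        zzz = 0 if valid_bytes == 8 else valid_bytes
        return 0x80 | (int(snoop) << 5) | (int(last) << 3) | zzz

    def is_supervisor(msg_type):
        """Supervisor requests (1111nnnn) and responses (11110nnn)
        both fall in the 0xF0-0xFF code space."""
        return (msg_type & 0xF0) == 0xF0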
  • Ring Member Enumeration [0126]
• While it is possible to pre-assign a hard addressing scheme for the members of a ring network, in at least one embodiment the modules assign address space for themselves. As the modules are members of at least one ring, each module can take a block of address space and tell the next module its starting address (a process herein referred to as Enumeration). In many systems, this assignment gives the same results each time, so it may not be necessary to actually reprogram the modules, but it reduces the need to change hardware registers every time the ring configuration is changed. This self-addressing also serves as a self-test. In a rings-based integrated circuit, such as a SOC communications processor, each peripheral appears to a CPU as a starting address. Each offset from this starting address is assigned to a different command for the peripheral. Note that assigning different peripherals to different CPUs can simply be a matter of programming a location in RAM. Accordingly, several CPUs can be put on an IC without worrying about arbitration. [0127]
• As discussed above, each member of the ring network has a predefined address space. In one embodiment, this is limited to some power of 2. For example, if a UART (Universal Asynchronous Receiver/Transmitter—used for serial communications and having a transmitter and a receiver) needs 5 registers, it allocates 8 addresses for itself. It also should first align its starting address to a border of 8. [0128]
• The Enumeration process starts with the Anchor member, which sends on its out-port an Enum message to begin the enumeration of ring members. As each member receives the Enum message, the member takes the address field and increments it to fit its own alignment. This becomes the zero-offset address. Then the address is incremented to the next available block of the same alignment, and this last address is sent downstream. Referring to FIG. 3, an exemplary enumeration process is illustrated in accordance with at least one embodiment of the present invention. In this example, assume that DMA 52 needs 16 addresses, UART 54 needs 4 addresses, and timer 56 needs 256 addresses. Further assume that the DMA 52 receives an Enum message having an address value=8. Accordingly, in this example, the DMA 52 would align itself to some power of two (16, in this example) and then claim the next 16 addresses (i.e., addresses 16-31). As a result, the next available address is address 32. Therefore, the DMA 52 would change the address value of the Enum message to address=32 and provide this value to the UART 54. Since address=32 is already aligned with a power of two, the UART 54, in this example, claims addresses 32-35 and assigns address=36 to the next available address of the Enum message. This Enum message is then provided to the timer 56. Since the timer 56 requires 256 addresses, the timer 56 aligns its starting address with a power of two greater than the next available address (e.g., 256) and claims the next 256 addresses. The next available address value of the Enum message is then changed to address=512 and provided to the next member of the ring. [0129]
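• The alignment arithmetic of the FIG. 3 example can be reproduced with a short Python sketch. The claim_block helper is hypothetical; only the address values come from the example above.

    def claim_block(next_free: int, size: int):
        """Align the starting address up to the block size (a power of
        two), claim the block, and return (start, next_free_after)."""
        aligned = (next_free + size - 1) & ~(size - 1)
        return aligned, aligned + size

    next_free = 8
    for name, size in (("DMA", 16), ("UART", 4), ("timer", 256)):
        start, next_free = claim_block(next_free, size)
        print(f"{name}: addresses {start}-{start + size - 1}, "
              f"next free {next_free}")
    # DMA: addresses 16-31, next free 32
    # UART: addresses 32-35, next free 36
    # timer: addresses 256-511, next free 512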
• This same enumeration process is repeated for each member of the ring network except bridges, which are discussed in more detail below. In this case, bridges first allocate their own space and then send the in-port Enum message to the other side of the bridge. Furthermore, the bridge, in one embodiment, is adapted to flip the ring ID bit of the data. Accordingly, when the Enum message is returned to the bridge on the other side, the bridge passes it back on this side. As a first approximation, bridges can program the routing themselves. If there are no loops, each bridge may need a maximum of two ranges to look at. It is expected that no loops exist for the Enumeration protocol, so eventually the Enum message will get back to the Anchor, which signifies the end of the Enum process. [0130]
• In accordance with one embodiment of the present invention, a communication system using a ring network architecture is provided. The system comprises a plurality of ring members connected in point-to-point fashion along the ring network and transaction-based connectivity for communicating a message among the ring members, wherein the message is a configuration message that causes ring members to assign address space in the ring network. In one embodiment, the configuration message is processed by each ring member to cause that ring member to assign address space for that ring member, and wherein the configuration message is then passed to the next ring member. [0131]
  • In one embodiment, the configuration message includes an address that defines a starting address. The configuration message, in one implementation, is originated by an anchor member, which may include a CPU. In this case, each member processing the configuration message can revise the starting address before passing the configuration message to the next ring member. Furthermore, each member processing the configuration message can assign the address space of the member using the starting address and address space sufficient for that member. [0132]
  • In one embodiment, a CPU on the ring network of the system recognizes other ring members using starting addresses assigned to those ring members based on the configuration message. In this case, offsets to the starting addresses of the ring members may be used for different commands for the ring members. [0133]
  • Furthermore, in one embodiment, the ring network includes a bridge. In this case, the configuration message is processed by the bridge by assigning address space for the bridge and then passing the configuration message to the other side of the bridge. The configuration message can be processed by the bridge so that a subsequent message is routed according to whether an address associated with the subsequent message corresponds to one side of the bridge or the other side of the bridge. The subsequent message is passed across the bridge when the address is associated with the one side of the bridge, and wherein the subsequent message is passed through the bridge when the address is associated with the other side of the bridge. Additionally, the bridge, upon receiving a configuration message from one side of the ring network, responds by recording a first address included in the configuration message, passing the configuration message to the ring members on the other side of the ring network, and recording a second address included in the configuration message when the configuration message arrives from the other side of the ring network. In one embodiment, the first address corresponds to a near side of the bridge and the second address corresponds to a far side of the bridge. [0134]
• In one embodiment, the system further comprises a second configuration message which causes ring members to respond with descriptive data, wherein the descriptive data can include address space data for the ring members. Using this descriptive data, a CPU member on the ring network can be adapted to infer the topology of the ring network. [0135]
  • In accordance with yet another embodiment of the present invention, a method of assigning address space in a ring network architecture system including a plurality of ring members is provided. The method comprises issuing a configuration message, processing the configuration message at each ring member to assign address space for that ring member in the ring network, modifying the configuration message based on the assigned address space, and passing the configuration message to the next ring member. The configuration message is assigned by an anchor on the ring network, wherein the anchor can include a CPU member. [0136]
  • In one embodiment, the configuration message includes a starting address and the address space is assigned based on the starting address and the address needs of that ring member. In this case, the method step of modifying comprises modifying the starting address before the step of passing. [0137]
  • Furthermore, in one embodiment, the plurality of ring members includes a bridge, wherein the bridge responds to the configuration message by configuring logic that provides for a subsequent message to be passed across or by the bridge depending on an address associated with the subsequent message. The ring network can be adapted to process a first category of message and a second category of message, and wherein the bridge logic is operative only for the second category. In one implementation, the first category is a supervisory message and the second category is a work message. [0138]
  • Activation Register [0139]
• The activation register, in one embodiment, is part of every ring interface (RIF). It is sent as the reply to a Who_Am_I message. It concatenates several key parameters of each ring member and can be used by the Anchor to learn the topology of the network. It can include the following fields: user_controls; module ID; user_ID; soft_reset; invivo; scan_mode; stop_clock; activated; and the like. Module ID is a hardwired unique ID for each kind of member on the network. Ring ID is, for example, one bit used to identify where bridges are inserted; each time the Enumerate message crosses a bridge, this bit is flipped. The active bit is set/reset by Activate (or activate-all) message types to allow normal operation of the modules. While this bit is reset, the module should not operate. [0140]
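• The following Python sketch shows one possible packing of the activation register fields listed above. The bit positions are assumptions made for illustration (the text fixes only that the activate bit is the first bit), as are the field widths in the WhoAmI payload helper.

    # Assumed bit positions; only bit 0 (activate) is fixed by the text.
    ACTIVATE      = 1 << 0  # must be set before the member may do work
    SOFT_RESET    = 1 << 1
    SCAN_MODE     = 1 << 2
    IN_VIVO_SCAN  = 1 << 3
    STOP_CLOCK    = 1 << 4
    RING_LOOPBACK = 1 << 5

    def whoami_payload(module_id: int, ring_id: int, controls: int) -> int:
        """Concatenate key member parameters for a WhoAmI response
        (field widths are illustrative)."""
        return (module_id << 16) | ((ring_id & 1) << 8) | (controls & 0xFF)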
  • Stages in the Operation of a Rings Network [0141]
  • Hardware connectivity—This is when the actual hardware is connected and the topology of the Rings is built. Several rings-compliant chips can be interconnected through the external ring interface. The unused interfaces can be shorted out. [0142]
  • Reset—the first message the Anchor typically propagates is a Reset message. It is flooded without clocking. The Host should wait sufficient time for the reset message to flood the whole network. [0143]
  • Wake-Up—after power-up all modules sitting on Rings typically are in reset mode. All modules have all config bits reset. [0144]
• Enumeration—the Host tells the Anchor to spread the Enumerate message, starting with some address (usually zero). Each ring member receives the Enum message, computes its own address space needs and transmits downstream the next available address. The bridges first add their own space on the first ring, then transmit the message to the next ring. When the other side of the bridge consumes its own message, the closer side continues with the Enum message on the first ring. [0145]
• Flood the WhoAmI request—the Host instructs the Anchor to flood the rings with the WhoAmI request message. All modules simply transmit it downstream, except bridges, which follow the Enumeration algorithm. Each ring member first sends its response and a clock later tries to relay the request message. This is so the request message will hit the Anchor only after all responses have arrived; the Anchor can determine the end of the WhoAmI sequence by using this fact. [0146]
• WhoAmI response—Each module, after getting the WhoAmI request, sends the contents of its Activation register as part of the WhoAmI response message. The Anchor should present all these messages to the host. It typically is the host's responsibility to infer the network topology from this data. [0147]
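• Host software might group WhoAmI responses into rings along the lines of the following sketch. The response tuple layout (self_address, module_id, ring_id) is an assumption; the grouping relies on the fact, noted above, that the ring ID bit flips at each bridge.

    def infer_topology(responses):
        """Responses arrive in ring order; consecutive runs of the same
        ring ID bit belong to the same ring."""
        rings, current, last_ring_id = [], None, None
        for self_address, module_id, ring_id in responses:
            if ring_id != last_ring_id:
                current = []
                rings.append(current)
                last_ring_id = ring_id
            current.append((self_address, module_id))
        return rings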
• ProgramWr—After learning the network topology via Who_Am_I response messages, the host can start configuring the members. Since it knows each member's starting address, the host can send requests to write to any register. The last stage is to activate the network by writing the active bit, for example, bit 1 in the zero-offset register. If during later stages the Host needs to get the value of any register, it can do so by issuing a ProgramRd request and waiting for the ProgramRd response. Bridges are a special case for ProgramWr: bridges need to be programmed first, before trying to pass data across them. [0148]
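• A host-side programming sequence consistent with the above might look like the following sketch, where send_work_write is a hypothetical transport hook and each peripheral register is addressed as an offset from the member's enumerated starting address.

    def program_member(send_work_write, base_address: int, registers: dict):
        """Write each (offset, value) pair, then activate the member by
        setting the activate bit in its zero-offset register."""
        for offset, value in registers.items():
            send_work_write(base_address + offset, value)
        send_work_write(base_address, 0x1)  # activate last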
  • Activation—After programming stage, the SOC is ready to perform processing and data handling tasks. To start all modules and enable them to work, the Activate message is flooded throughout the ring network. [0149]
• Mode to kill stray messages—It is foreseeable that, because of a bug in design or programming, a message could be sent that is not addressed to any member of the ring. Either its address is above the highest assigned address or it is addressed to empty space between consecutive members. If the address of the stray message is above the high limit, it can be routed to the Anchor and consumed or discarded by the Anchor. However, if the stray address points to empty space, the message could circle the ring forever. A process used to prevent this endless loop follows: messages can have an additional bit running along with them. If a bridge is passing a message through (not across), it can set this bit on the message. If a message arrives at a bridge with this bit set, the bridge discards it. Care should be taken to ensure that only one bridge per ring (in case there are several) is operating in this mode. In rings where no bridge exists, the Anchor can perform this action. Freshly generated messages will have this bit zero. Also, every time a message crosses a bridge (from one ring to another), this bit is cleared. If a message circles the ring for a second time, the designated bridge will discard it. [0150]
• For each ring, only one bridge should execute the above discard process; otherwise, legitimate messages could be discarded. The solution to this problem is as follows: during the Enumeration process, the bridge initializes its sides as a close side and a distant side. The close side is the side from which the Enum message appears; the distant side is the other side. In this case, the distant side can be selected to perform the monitoring of stray messages. On the primary ring (where the Anchor is located), the job of killing stray messages is done by the Anchor. [0151]
  • Rings Topology Issues [0152]
• Clock alignment across a SOC often is a critical feature. Failing it will result in races, which are crippling or at least inefficient. While other undesirable clocking artifacts sometimes can be eliminated by lowering the frequency, cooling the chip, exposing it to light, etc., races typically are much more difficult to resolve. As FIG. 4 illustrates, if the delay between clk1 and clk2 is greater than the delay from the output of the first flip flop 60 to the input of the second flip flop 62, a race is likely, meaning that the second flip flop 62 could sample the data output from the first flip flop 60 a whole clock period early. [0153]
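• The race condition just described reduces to a simple inequality, sketched below with illustrative numbers: a race is possible whenever the skew between clk1 and clk2 exceeds the minimum data-path delay between the two flip flops.

    def race_possible(clk_skew_ns: float, min_data_delay_ns: float) -> bool:
        # Flip flop 62 can sample a whole period early when the clock
        # skew exceeds the shortest flop-to-flop data delay.
        return clk_skew_ns > min_data_delay_ns

    print(race_possible(clk_skew_ns=1.2, min_data_delay_ns=0.8))  # True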
• In a rings-based SOC in accordance with at least one embodiment, there typically is no need to align the clocks precisely across the whole chip. Clock alignment is needed only within singular chunks of logic, herein referred to as compounds. Most of the compounds are small, such as peripherals. Others are of a medium size, such as DMAs. Some are considerably large, such as a packet processor. For larger compounds, some kind of clock alignment generally is mandatory. But the overall clocking problem can be divided into smaller, more easily solved problems. To illustrate, in at least one embodiment, signals going between any two modules are tightly controlled, because they are known in advance and there are only so many of them (for example, three signal groups: clock, data and backpressure). Furthermore, because of the topology, a solution in one section typically implies a solution for the whole system. Of particular importance are the direction along the ring each of the three groups takes, how the clock tree runs, and what special rules/checks/solutions are to be defined and enforced. [0154]
• FIG. 5 illustrates a possible solution to the race problem. In this example, the clock signal path 64, running in the same direction as the data path 66, is separated into a number of similar compounds (e.g., compounds 70, 72). By controlling the logic 74, 76 on each flip flop leaving a compound, it can be ensured that the delay between flip flops is at least long enough to prevent a race condition. This also can be verified after layout. [0155]
• Although the solution illustrated in FIG. 5 may be implemented, in at least one embodiment the clock signal is propagated in the opposite direction of the data, as illustrated with reference to FIG. 6. By providing the clock signal 78 in the opposite direction of the data signal 80, the potential for a race between compounds 70, 72 is significantly reduced or eliminated. [0156]
• In at least one embodiment, there is at least one signal that goes against the usual flow of data (signal 80), this signal being the OK signal 82, which is utilized to enable backpressure, as illustrated with reference to FIG. 7. The OK signal 82 generally needs special treatment because its sampling clock lags behind the sourcing clock (signal 78). However, this can be solved by ensuring that the return path is longer than the clock delay. Alternatively, as illustrated with reference to FIG. 8, a latch 86 may be implemented to ensure that data provided to flipflop 62 changes only after the rising edge of the clock 78 (clkb). [0157]
• FIG. 9 illustrates a complication resulting from the propagation of the clock 90 in a direction opposing the propagation of data in a ring network having a bridge 94. As illustrated, data_a leaving the bridge 94 goes to member 96 and should be sampled by the rising edge of clkb. However, clkb lags considerably behind clka of the bridge 94. As demonstrated by the waveforms 98, a race is imminent. However, by adding latches to the data lines, the race can be eliminated or substantially reduced. Likewise, latches should be used on the OK signal to prevent a race. It will be appreciated that the latches' utility may be limited if the delay between, for example, clka and clkb is greater than about 75% of the cycle time, since substantial timing uncertainty may be introduced. FIG. 10 illustrates a complication resulting from the propagation of the clock 90 in the same direction as the propagation of data 102 in a ring network having a bridge 94. As illustrated, data_b leaves member 96 to be sampled by the bridge 94 using clk_a. As opposed to the situation referenced in FIG. 9, clka now lags considerably behind clkb. However, this may be advantageous if the lag is considerably smaller than the clock cycle, since the data can be delayed beyond the danger zone of clock delay. Likewise, the OK signal is covered and the last leg of data is covered. In this case, the only signal that typically must be considered is the OK signal from the bridge 94 to member 96, and a latch can be used at member 96 to prevent a race in the OK signal. [0158]
• It is often desirable to minimize lag between members of a ring, thereby increasing the number of members supported by a single ring as well as minimizing the timing constraints to be considered. However, if one or more members are packet processors or other modules having considerable processing tasks, the clock entering such a module often is delayed considerably when the clock is regenerated to drive the large compound. In this case, the same principles apply and the problem may be solved using latches, as illustrated with reference to FIG. 11, which shows a data signal and clock signal propagating in the same direction. Here, the local_clock 110 lags behind the ring_interface clock 112 of the module 114 (e.g., a packet processor). For outgoing data, this typically is not a problem since the data changes later than the ring interface flip-flop clock. However, for data entering the module 114 from a previous member, a race is a possibility. The same situation may occur in the event that the clock signal 112 and the data signal 116 propagate in opposite directions. [0159]
• In accordance with one embodiment of the present invention, a rings-based system is provided. The system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity and a message traversing the ring from member to member, where the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, and where the system further comprises a system clock signal for controlling timing on the ring network, wherein the system clock signal is aligned between groups of ring members instead of among all of the ring members. In one embodiment, the system clock signal runs in the same direction as the message, while in another embodiment, the system clock signal runs in the opposing direction to the message. The alignment can be implemented to substantially remove skew among the clock signals. Furthermore, the alignment can prevent a flip-flop at a ring member from sampling data a clock cycle too early. [0160]
• The system clock signal alignment preferably is performed among adjacent ring members, wherein the alignment for a ring member can be performed with respect to that ring member's upstream and downstream ring members. The alignment can be performed by inserting logic at the ring members that ensures that the delay between adjacent clock signals does not exceed the delay between the adjacent members. Similarly, the alignment can be performed using latches that are clocked by clock signals at individual members. [0161]
  • In one embodiment, the rings-based system further comprises a backpressure signal that runs in the opposing direction to the message, wherein the alignment is performed by inserting logic at the ring members to ensure that the return path for the backpressure signal exceeds the clock delay between adjacent members. [0162]
  • Bridges [0163]
• As discussed previously, the ring topology in accordance with the present invention arranges modules in a logical ring. All data and control is transmitted over this ring infrastructure sequentially around the ring. However, as illustrated by FIG. 12, considerable ring latency may be introduced. To illustrate, if module 116 sends a message to module 118, there is little latency. However, if member 120 is to pass data to member 122, the data must pass through four modules (i.e., four clock cycles), resulting in considerably more latency. Another problem is peak latency. To illustrate, suppose that member 116 transmits mainly to member 122 and member 118 transmits data mainly to member 120. In this case, the communication between members 118 and 120 suffers degradation due to the traffic from member 116 to member 122. [0164]
• In at least one embodiment, a bridge may be used to minimize the latency between members of a ring. As illustrated in FIG. 13, a bridge 130 may be used to connect two rings 132, 134. This bridge is analogous to a sea bridge since it connects two rings together just as a sea bridge connects two islands. The sea bridge, in one embodiment, determines which messages to cross over between rings and which messages to keep on the current ring. So, referring to the above latency problems, the sea bridge may be utilized to minimize peak latency issues. To illustrate, if member 134 communicates mainly with member 136, communications between member 138 and member 140 are not affected. [0165]
• Intraring latency resulting from a relatively large number of ring members between the transmitting member and the intended recipient member may be reduced by a land bridge, as illustrated with reference to FIG. 14. The land bridge 146 is utilized within a ring 148 to minimize the number of hops for data/clock signals. To illustrate, without the land bridge 146, data from member 150 to member 152 would have to go through 5 members. However, the land bridge 146 reduces the number of members in the data path between member 150 and member 152 to 3 members (with two of the members being the bridge's two interfaces 154, 156). [0166]
• The bridge, either a land bridge or a sea bridge, is adapted to analyze a message received at one of its interfaces and to pass the message through to its other interface or pass it on to the next member, depending on the intended recipient of the message. For example, when member 150 sends a message to member 158, the land bridge 146 receives the message at bridge interface 154 and determines that the shortest path is to pass the message from the bridge interface 154 directly to the member 158. However, when member 150 sends a message to member 160, the land bridge 146 receives the message at bridge interface 154 and determines that the shortest path is to pass the message through the bridge to the bridge interface 156 and then from bridge interface 156 to the member 160. It is not necessary for a bridge to be aware of the topology of the ring when deciding the more optimal path for a message; using the enumeration process, the bridge can obtain the information used to make this decision. Referring now to FIG. 15, an exemplary routing process by the bridge 146 is illustrated in accordance with one embodiment of the present invention. For enumeration purposes, the land bridge 146 appears as two ring members (interface 154 being one member and interface 156 being the second). The member/interface of the bridge having the lower address (address=3 in this case) becomes the near end, and the member/interface of the bridge having the higher address (address=6 in this case) is marked as the far end. A message arriving at the near end (from the direction of the member 150) is passed on if the destination address of the message is greater than 3 and less than 6. Otherwise, the message is passed through the bridge 146 to the far end (interface 156). On the far end, a message arriving at the interface 156 from the direction of member 152 will be passed through to the near end (interface 154) if its destination address is less than 6 but greater than 3. Otherwise, the message is passed on to member 160. In at least one embodiment, the address values by which a bridge 146 determines the routing of a message are determined during the enumeration process described herein (a sketch of this routing rule follows below). FIG. 16 illustrates a situation whereby two messages are received at an interface 154 of a bridge 146 at the same time. As illustrated, msg1 and msg2 are received at the same interface 154 at the same time. In one embodiment, messages transferred between interfaces of the bridge 146 are given priority, whereas in other embodiments, messages received at the bridge interface from members of the ring are given priority. Referring to FIG. 17, an exemplary implementation of a bridge 170 is illustrated. In this example, the bridge 170 includes control logic 172 adapted to control the upstream and downstream muxes 174-180 to pass incoming messages along one of several paths: through a fifo (fifos 182-188) between the downstream input and the upstream output, from the upstream input to the upstream output, from the downstream input to the downstream output, or from the upstream input to the downstream output. [0167]
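• The FIG. 15 routing rule referenced above can be sketched as follows, using the example's near-end and far-end addresses (3 and 6); the function names are illustrative.

    NEAR_ADDR, FAR_ADDR = 3, 6  # assigned during enumeration

    def route_at_near_end(dest: int) -> str:
        """Message arriving at the near end (from member 150)."""
        if NEAR_ADDR < dest < FAR_ADDR:
            return "pass on around the ring"  # stays on the bridged span
        return "pass through the bridge to the far end"

    def route_at_far_end(dest: int) -> str:
        """Message arriving at the far end (from member 152)."""
        if NEAR_ADDR < dest < FAR_ADDR:
            return "pass through the bridge to the near end"
        return "pass on to the next member"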
  • In accordance with one embodiment of the present invention, a rings-based system on a chip is provided. This system comprises a plurality of ring members on a ring that communicate using point-to-point connectivity, a message traversing the ring from member to member, the system being adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, and wherein at least one of the ring members comprises a bridge. [0168]
  • In one embodiment, the bridge of the rings-based system is adapted to allow messages to travel from one side to another side of the bridge without passing through intermediate ring members. In this case, the bridge can be configured so that the message arriving at the bridge is routed according to whether an address associated with the message corresponds to one side of the bridge or the other side of the bridge. [0169]
  • Likewise, the message, in one embodiment, is passed across the bridge when the address is associated with the one side of the bridge, and wherein the message is passed through the bridge when the address is associated with the other side of the bridge. Accordingly, the bridge can include logic with a range of addresses, such that the message is routed to one side of the bridge or the other side of the bridge depending on whether the address is within the range. The logic may be established based on a configuration message that causes the ring members to assign their address spaces, and the configuration message may include an enumeration message. [0170]
  • In one embodiment, the plurality of ring members of the rings-based system are a first plurality of ring members comprising a first ring network and the system further comprises a second plurality of ring members comprising a second ring network, wherein the bridge comprises a bridge between the two ring networks. The bridge can be adapted to determine which messages to pass to the second ring network and which messages to keep on the first ring network. In this case, the bridge may be configured so that the message arriving at the bridge is routed according to whether an address associated with the message corresponds to one side of the bridge or the other side of the bridge. The bridge can include logic with a range of addresses, such that the message is routed to the first ring network or the second ring network depending on whether the address is within the range. This logic can be established based on a configuration message that causes the ring members to assign their address spaces. The configuration message, in this instance, may include an enumeration message. Furthermore, the message can be passed across the bridge when the address is associated with the first ring network, and wherein the message is passed through the bridge when the address is associated with the second ring network. [0171]
• In another embodiment, the bridge is adapted to process a first category of message and a second category of message. The first category of message can include a supervisory message and the second category of message can include a work message. The bridge then can be adapted to make a routing determination based on the second category of message. In this case, the bridge can be adapted to identify the category of a message by examining a message type included in the message. [0172]
  • Stray Messages [0173]
• A stray message is a message addressed to an unused address of a ring network. The enumeration process typically leaves gaps of unused address space between active modules when the modules align themselves to starting addresses that are, for example, a power of two. A stray message usually is the result of a software bug. Unchecked, stray messages may slowly choke the ring network, while such messages are difficult to detect and/or debug. However, not every member of the ring is required to know about, much less have the capability to detect or remove, stray messages. In one embodiment, this responsibility falls to the Anchor and/or bridges. [0174]
• Referring now to FIGS. 18 and 19, a process for removing stray messages is illustrated in accordance with at least one embodiment of the present invention. In the illustrated embodiment, one bit of a message is used as a marker to determine if a message is a stray. The bit normally is set to zero, but when a message passes through an Anchor 192 or bridge 194, the bit is set to one. If the message arrives at the Anchor 192 or bridge 194 again, the Anchor/bridge notes the set bit and discards the stray message, thereby removing the stray from the ring. [0175]
• However, it will be appreciated that since a bridge has two ring interfaces, one of the interfaces must be selected to filter stray messages, particularly in land bridges. To illustrate, if member 196 sends a message to address=5 (an unassigned address), the land bridge 198 will receive the message at the far end 200 (address=11) and forward the message back to the near end 202 of the bridge 198 (address=3), where the process will be repeated unless the stray message is removed. Accordingly, in one embodiment, the far end 200 of the bridge 198 (i.e., the interface of the bridge furthest from the anchor) is selected to filter for stray messages. The stray message marker bit of messages received at the near end 202 remains unchanged, while the stray message marker bit is set at the far end 200 of the bridge. [0176]
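• The far-end filtering rule can be summarized in the following sketch, where a message is modeled as a hypothetical dict carrying a stray_bit marker.

    def far_end_pass_through(msg):
        """Far end passing a message along the same ring (through,
        not across): discard on the second lap, mark on the first."""
        if msg["stray_bit"]:
            return None          # already lapped the ring once: discard
        msg["stray_bit"] = 1     # mark and forward
        return msg

    def cross_bridge(msg):
        """Crossing to the other ring clears the marker."""
        msg["stray_bit"] = 0
        return msg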
• FIGS. 20, 21, and 22 illustrate exemplary ring networks having more than one bridge per ring. To illustrate, FIG. 20 includes a ring having two parallel bridges 208, 210, FIG. 21 has a ring 212 with bridges 214, 216 that cross, and FIG. 22 includes a ring network having both a land bridge 222 and a sea bridge 224. Other bridge combinations may be utilized in accordance with the present invention. [0177]
  • Debugging and Testing on the Rings [0178]
  • Due to the topology of the ring network, there is an opportunity to use the rings infrastructure to assist scan and debug. The rings can be used for scan chain access to individual ring members, and a special in-vivo scan mode (discussed below) may also be employed. Referring to FIGS. 23 and 24, the insertion of a scan capability is illustrated. A scan may be enabled by introducing a new scan_insert member 230, which is not a regular member. The scan_insert member 230 can be adapted such that it does not introduce a one-clock delay. For ring signals it is a mux 232 between regular ring data and scan input signals. During test modes this mux 232 inserts scan input signals instead of regular ring data. During normal operation, this mux 232 connects the ring infrastructure as usual. In scan mode, the ring is effectively cut off. Insert-scan signals come directly from input pads 234, 236 on the chip. The pins that tap the results drive the output pads. The insert-scan signals form three major groups: message type, message address and message data. [0179]
  • Before the actual scan can commence, the ring should be programmed to scan mode. This can be done by forcing a sequence of supervisor messages onto the ring. This sequence first resets the ring, then Enumerates it. The last stage activates one specific member for scan. After the scan mode is programmed into the member, the actual scan can be done. The scan mux signal is part of the ring. It is programmed via, for example, the external pad to create the shift-in sequence. Then for one clock it is negated; during this cycle the scan capture occurs. Then the scan mux is asserted again and clocking advances the scan-out data. The scan-out data is tapped off the wires entering the scan_insert module. Referring to FIG. 25, exemplary signals 240-250 used as scan chains are illustrated. During scan, several message data signals are used as scan chains. The number of data lines depends on how many parallel scan chains are necessary. [0180]
  • In-Vivo Scan [0181]
  • A typical silicon debug scenario is as follows: a chip is run for one billion clocks and a bug is discovered. The test is rerun for half the clocks and then stopped. By examining all flip-flop values at the stopped state, the source of the problem or error is hopefully determined. In such a scenario, in-vivo scan may be utilized. For an in-vivo scan, the chip is started as usual. The software is run for the specified number of clocks (note: optionally, a special counter may be used to freeze the rings). The ring modules are then deactivated by, for example, a message from a certain module. One specified ring module is re-activated in in-vivo scan mode. This mode causes the module to run a shift-out of all its flip-flops. The module's ring interface is responsible for managing the scan-out. It counts blocks of, for example, 32 scan-out bits, packages them in one message and ships the message to the Anchor. A processor or other module then needs to retrieve these messages out of the Anchor and pass them to debug software. The message type typically is the Program Read Response message, which is designed to get to the Anchor. The address is the module's self-address. The data of this message is, for example, 32 bits of scan-out data. Each activation of this mode causes a certain number of such messages to be generated. If the module has more flip-flops than the total bit count of the messages, the designated module can do this activation again and again. To facilitate a fast freeze of the members' state, a special supervisor message (Freeze message) is defined to run quickly around the rings and freeze the state of each module. [0182]
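  • As a rough illustration of the scan-out packaging described above, the following C sketch builds Program Read Response messages from 32-bit scan-out blocks. The record layout, the type encoding, and the helper name are assumptions for illustration only:

    #include <stdint.h>

    /* Hypothetical message record; field names are illustrative. */
    typedef struct {
        uint8_t  type;   /* Program Read Response, assumed encoding */
        uint32_t addr;   /* sender's self-address                   */
        uint32_t data;   /* 32 bits of scan-out data                */
    } scan_msg_t;

    #define TYPE_PROGRAM_READ_RESPONSE 0x12u  /* assumed value */

    /* Package 'nblocks' 32-bit blocks of scan-out data into messages
     * bound for the Anchor. 'chain' stands in for the contents shifted
     * out of the module's flip-flop chain. */
    unsigned package_scan_out(uint32_t self_addr, const uint32_t *chain,
                              unsigned nblocks, scan_msg_t *out)
    {
        for (unsigned i = 0; i < nblocks; i++) {
            out[i].type = TYPE_PROGRAM_READ_RESPONSE;
            out[i].addr = self_addr;    /* module's self-address     */
            out[i].data = chain[i];     /* one 32-bit scan-out block */
        }
        return nblocks;                 /* messages shipped to Anchor */
    }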
  • In accordance with one embodiment of the present invention, a rings-based system on a chip is provided. The rings-based system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, where the system is adapted so that, during normal operation, upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, and wherein the system is further adapted for a scan testing mode in which one of the ring members is enabled for a scan output and the other ring members deactivated. The deactivated members can be adapted to pass messages without consuming the messages. [0183]
  • The scan output can be packaged into one or more messages that are transmitted by the one ring member. The one or more messages may be transmitted to a processor, wherein the processor can include a ring member operating as a supervisor that consumes supervisory response messages. In this case, the processor can be adapted to make the data from the one or more messages available to debugging software. Additionally, in one embodiment, a second of the ring members of the rings-based system comprises a processor that issues at least one message that operates to deactivate the other ring members and to enable the one ring member for the scan output. [0184]
  • In one embodiment, the operation of the system in the scan testing mode causes the one ring member to shift out flip-flops associated with the one ring member into one or more messages sent on the ring. The scan testing mode can be initiated by resetting the ring network and enabling the one member for the scan mode, where initiation of the scan testing mode may include enumerating the ring network. In one embodiment, the scan testing mode allows a user of the system to debug the system without adding additional hardware. [0185]
  • Furthermore, in one embodiment, the plurality of ring members are coupled to the ring network using a plurality of ring interfaces having registers, wherein the registers preferably include bits that can be set to deactivate the ring member associated with that ring interface. The registers also may include bits that can be set to enable the ring member associated with that ring interface for the scan output. [0186]
  • In accordance with another embodiment of the present invention, a method of scanning in a ring network having a plurality of ring members is provided. The method comprises observing a defect or anomaly during normal operation of the ring network, issuing at least one message that causes one ring member to enter a scan output mode and other ring members to be deactivated, resuming operation of the ring network, and outputting scan data from the one ring member onto the ring network as messages. The method, in one embodiment, further comprises causing a different ring member to enter the scan output mode in order to isolate the defect or anomaly. The at least one message can comprise at least one supervisory message that configures bits in ring interfaces associated with the ring members. Additionally, in one embodiment, the step of observing takes place at a point in time during the normal operation, and the step of resuming is carried out just prior to the point in time. [0187]
  • During the scan output mode, in one embodiment, the one ring member packages its scan output as messages to be transmitted to a processor ring member. In this case, the processor ring member can be adapted to make the scan output available to debugging software. [0188]
  • Basic Ring Interface (RIF) Overview [0189]
  • This section covers three issues: the basic ring timing and backpressure protocol; the ring interface unit block diagram; and, using that diagram, the interface to the user module connected to the ring. Regular ring members need not be aware of the ring intricacies; the basic ring interface is intended to hide most of the timings and protocols. FIGS. 26, 27 and 28 illustrate an exemplary implementation of ring signaling between modules of a ring network. As discussed previously, in one embodiment, the OK signal 266 (back pressure) flows in the reverse direction to inform member 268 that on the next rising clock 272 it may force a new message on the type/addr/data lines 274-278. The OK signal 266 is generated by the receiving member 270. By default, in one embodiment, the OK signal 266 is active, and the only time it goes down is when the message type is non-idle and there is no room in the correct fifo of member 270. The correct fifo is either the fifo 280 for through traffic in member 270 or the fifo for messages addressed to member 270. Thus the OK signal 266 is generated from signals coming from member 268 to member 270 and is sent roundtrip back during the same clock. [0190]
  • The generation of the OK signal 266 can be done from flip-flops resident in member 270 and the type lines of the message coming from member 268. For example, if the fifo 280 is full, the OK signal 266 is negated, even if the next OK down the ring is active and is freeing an entry in the fifo 280. The same basic OK protocol is used four times in each RIF (ring interface) unit (FIG. 27), and is valid for all four exemplary RIF interfaces. [0191]
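  • The OK generation just described can be sketched as a small decision function. This is a behavioral C model under assumed names (the fifo flags and the ours/through input are hypothetical stand-ins for the hardware decode), not RTL:

    #include <stdbool.h>

    /* Registered FIFO status inside the receiving member; in hardware
     * these come from flip-flops, which is why OK can be conservative
     * (negated even while the next member downstream is freeing an
     * entry). */
    typedef struct {
        bool through_fifo_full;   /* fifo 280 for through traffic      */
        bool input_fifo_full;     /* fifo for messages addressed to us */
    } rif_fifo_state_t;

    #define TYPE_IDLE 0u

    /* Combinational OK (backpressure) generation: the type/address
     * decode picks the pertinent fifo, then its status is reported
     * back to the upstream member within the same clock. */
    bool generate_ok(const rif_fifo_state_t *s, unsigned msg_type,
                     bool msg_is_ours)
    {
        if (msg_type == TYPE_IDLE)
            return true;                     /* idle slots never stall */
        return msg_is_ours ? !s->input_fifo_full
                           : !s->through_fifo_full;
    }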
  • In accordance with one embodiment of the present invention, a rings-based system on a chip is provided. The rings-based system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, where the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, and the system is further adapted so that downstream adjacent ring members provide a signal to their upstream adjacent ring members that indicates whether a slot is available for the upstream ring member to pass the message to the downstream ring member on a given clock cycle. The receipt of the signal indicating that a slot is not available, in one embodiment, causes the upstream ring member not to pass the message on that clock cycle. In one embodiment, each ring member provides the signal to the immediately prior ring member each clock cycle. [0192]
  • In one embodiment, each ring member couples to the ring network by a ring interface, where the signals regarding slot availability are passed between adjacent ring interfaces. In this case, the ring interface can include an input FIFO and a through FIFO. The signal can be generated by the downstream ring member and passed to an immediately upstream ring member holding the message, where the signal is generated according to the FIFO for the downstream ring member that pertains to the message. In this case, the downstream ring member can be adapted to determine that the input FIFO pertains to the message if the message is to be consumed by the downstream ring member and that the through FIFO pertains to the message if the message is not to be consumed by the downstream ring member. The determination can be made by the downstream ring member examining information descriptive of the message before the message in its entirety is sent from the upstream ring member to the downstream ring member, where the information preferably comprises data from a type field and an address field for the message. The signal can indicate that a slot is available when the input FIFO pertains to the message and the input FIFO can accept a message and/or when the through FIFO pertains to the message and the through FIFO can accept a message. [0193]
  • In one embodiment, the signal generated by the downstream adjacent ring members is a backpressure signal that is generated based on data sent from the upstream ring member to the downstream ring member and then back to the upstream ring member in a round trip fashion during a single clock cycle. Furthermore, in one embodiment, each ring member has a ring interface, wherein each ring interface has four interfaces using or providing the signal, which comprises a backpressure signal. [0194]
  • In accordance with another embodiment of the present invention, a method of controlling the transmission of messages on a ring network comprising a plurality of ring members is provided. The method comprises providing a message at a first upstream ring member that is available for output to a second adjacent downstream ring member, receiving a signal at the upstream ring member from the downstream ring member that indicates whether a slot is available for outputting the message on a clock cycle, and outputting the message from the upstream ring member to the downstream ring member if a slot is available and holding the message if a slot is not available. [0195]
  • In one embodiment, the signal is generated based on the content of the message. In this case, the signal can be generated based on whether the message will be consumed by the downstream ring member or pass through to a further downstream ring member. The content of the message preferably includes at least a portion of the message type and/or at least a portion of the message address. [0196]
  • Furthermore, in one embodiment, the downstream ring member is coupled to an input FIFO and a through FIFO, wherein the downstream ring member determines which FIFO pertains to the message. The downstream ring member also can determine whether the pertinent FIFO is capable of accepting the message. [0197]
  • The Imessage path carries the messages intended for this member. Each message bus on the diagram above is actually a collection of three fields: type/8, addr/20, data/64. This is true for three out of the four interfaces. For the Imessage path, the type can in most cases be reduced to work/program and read/write. Also, several other bits of type might be needed, like last and size. For the address field, only the low order bits are needed. The address bits needed are the bits that cover the internal module address space. The data field might be reduced in some cases to 32 bits or even less, for example for an 8-bit UART. The Imessage fifo may be a very reduced version of the other fifos. [0198]
  • The Omessage fifo 282 transmits messages originating locally to the outside ring. It has to support the full fields, because many kinds of messages can be produced. As can be seen from FIG. 28, the OK signal logic 284 originates in the sending member 268. It starts with creating the message type and address. The type and address fields travel to member 270, where, using these two fields, a decision is made as to whether the message is a through message or whether it ends at and is consumed by member 270. In each case, the status of the corresponding fifo is transmitted back as the OK signal. The next rising clock samples this OK to mux either the previous message, a new one, or idle. As presented, all four interfaces of the RIF have similar turnarounds with their OK signals. [0199]
  • Routing of Incoming Messages [0200]
  • Referring now to FIG. 29, an exemplary process for routing of incoming messages is illustrated in accordance with at least one embodiment of the present invention. As illustrated, incoming messages to a module are examined first to determine if the message is a supervisor or work/program message. Using the address field 290, the intended address of the message can be determined. Since, in one embodiment, the address of the module is aligned to a power of two, an address mask 292 (referred to as the split mask) may be used to compare only a subset of the bits of the address. The lower part 294 of the address is passed into the module as an internal address. The subset of bits is compared against a self-address register 296 containing the addresses associated with the module (obtained during the enumeration process). If the subset matches the self-address register 296, the module can consider the message to be addressed to the module. The ours/through indication is used to create the correct DOK (down OK) signal. The above discussion ignores the supervisor messages. Some supervisor messages make different use of the address field, when they apply to all members (e.g., Enumerate). Some of the supervisor messages are responses from members; these messages carry the address of the sender. [0201]
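  • The split-mask compare can be modeled directly. The following C sketch is a hypothetical helper, not taken from the patent; the names mirror the registers described in this section:

    #include <stdbool.h>
    #include <stdint.h>

    #define RING_ADDR_MASK 0xFFFFFu   /* 20-bit ring address space */

    /* Self-address compare using the split mask. A member claiming
     * 2^address_space addresses compares only the upper address bits
     * against its self-address register; the low bits enter the module
     * as the internal address. */
    bool message_is_ours(uint32_t self_address, unsigned address_space,
                         uint32_t msg_addr, uint32_t *internal_addr)
    {
        uint32_t low_mask   = (1u << address_space) - 1u;
        uint32_t split_mask = ~low_mask & RING_ADDR_MASK;

        if ((msg_addr & split_mask) != (self_address & split_mask))
            return false;                      /* a through message   */

        *internal_addr = msg_addr & low_mask;  /* lower part enters   */
        return true;                           /* ours: consume it    */
    }

  • For a member with ADDRESS_SPACE set to 8 (a 256-byte internal map), the split mask selects the 12 most significant bits of the 20-bit address, consistent with the ring_control parameter discussion later in this section.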
  • Referring now to FIGS. 30-33, exemplary implementations of the RIF 300 are illustrated in greater detail. [0202]
  • The main RIF registers include: [0203]
  • self_address_valid bit flipflop: indication that Enumeration was run and address assigned; [0204]
  • self_address: value of self address. This register typically is 20 bits although fewer bits may be used, as the lower part of this register typically is zero; [0205]
  • idnumber: a constant parameter used to identify the associated member; [0206]
  • ADDRESS_SPACE: this is the number of bits used by internal address space. It is used to calculate the address space claimed by the ring member. [0207]
  • activated bit: this bit is reset at hardware reset and modified further by activate messages. If this bit is active, the ring interface is in work mode and will process work messages. If this bit is inactive, the ring member should wait for programming or activation;
  • scan_enabled bit in activation register: turns the module into scan mode. Reset by hardware reset, further modifiable by activation messages. [0208]
  • in_vivo scan and related: scan out of all registers during an interruption of normal work. This is done on a per-module basis. [0209]
  • RIF Signal Descriptions [0210]
  • By convention, the term input refers to a signal entering a ring interface and output refers to a signal driven by the ring interface. [0211]
  • The pins to a subsequent ring member/from a previous ring member include: [0212]
  • rif_d_type[7:0]: input, message type [0213]
  • rif_d_addr[19:0]: input, message address [0214]
  • rif_d_data[63:0]: input, message data [0215]
  • rif_d_ok: output, backpressure, goes back to previous member [0216]
  • rif_d_clock: input, clock in signal [0217]
  • rif_d_scan: scan mode enable (the actual muxing signal, not test mode) [0218]
  • rif_d_reset: input, h/w reset [0219]
  • rif_d_passed_me: input, indicates that the message already passed through a bridge or Anchor [0220]
  • Pins for messages entering the ring member include: [0221]
  • rif_i_write: output, this message is a valid write and can come from a program write or a work write. The RIF module modifies the options bits (see below) in the case of a program write. [0222]
  • rif_i_read: output, this message is a valid read. [0223]
  • rif_i_options[5:0]: output, the rest of the type bits in the message. These bits are relevant to more sophisticated members, snooping on last and such. Simple members do not have to use them. Option bits have one of two possible interpretations, one for read and one for write. For write: snoop, last and size. For read: enable snoop, width of the response (64 bit or 32 bit, for example), enable last address modification (end of frame indication), enable first address modification (start of frame) and increment destination. Discussed above with reference to message type encoding. [0224]
  • rif_i_addr[15:0]: output, relevant part of address [0225]
  • rif_i_datal[31:0]: output, relevant part of data low [0226]
  • rif_i_datah[31:0]: output, relevant part of data high [0227]
  • rif_i_ok: input, tells the RIF that message is accepted by member. On the next clock, a new message may be sent. [0228]
  • Control pins entering the RIF include: [0229]
  • rif_activated: output, reflects the activated bit in the activation register; if not enabled, this bit prevents work messages from entering/exiting the member. Also, peripherals should not start transmit/receive operations while this bit is disabled. [0230]
  • rif_reset: output, either hard reset or soft reset; [0231]
  • rif_scan_mode: output, reflects the scan bit in the activation register; if enabled, this member is under scan test; [0232]
  • rif_scan: output, scan muxing signal; if enabled, the scan operation is in its shift phase; if disabled while in scan mode, means capture; [0233]
  • rif_self_address[19:0]: output, self address; [0234]
  • rif_clock: clock for local flipflops; [0235]
  • rif_user_id[1:0]: input, user defined modifier of module ID; [0236]
  • rif_user_control[3:0]: bits from activation register for user definition and use; [0237]
  • Pins for messages going to the next member of the ring include: [0238]
  • rif_u_type[7:0]: output; [0239]
  • rif_u_addr[19:0]: output; [0240]
  • rif_u_datal[31:0]: output, data low; [0241]
  • rif_u_datah[31:0]: output, data high; [0242]
  • rif_u_ok: input, back pressure from next member; [0243]
  • rif_u_clock: output, clock out signal; [0244]
  • rif_u_scan: output, scan mode enable (the actual muxing signal, not test mode);
  • rif_u_reset: output, hardware reset; [0245]
  • rif_u_passed_me: output, indicates that the message already passed through a bridge or Anchor; [0246]
  • Pins for messages exiting the member include:
  • rif_o_type[7:0]: input, message type bits (type[7:3] != 0) act as valid indication; [0247]
  • rif_o_addr[19:0]: input, message address; [0248]
  • rif_o_datal[31:0]: input, message data low half; [0249]
  • rif_o_datah[31:0]: input, message data high half; [0250]
  • rif_o_replace: input, request to replace the relevant part of datal with self address bits; [0251]
  • rif_o_ok: output, tells the member that message is accepted by RIF; [0252]
  • Anchor RIF Interface [0253]
  • The Anchor RIF interface, in one embodiment, is a variation on the RIF interface used by regular ring members. It has one more state variable: active/passive Anchor. If the Enumerate message comes through the dmessage inputs, then an Anchor declares itself passive. If the Enumeration message comes from the omessage input, then the Anchor declares itself an active Anchor. An active Anchor consumes all supervisor messages, whereas in regular RIFs, supervisor messages are ignored by passing them all to the imessage output. For work messages there is another difference: Anchors have a self-address space like any other ring member, and work messages addressed to the Anchor address space are consumed. Anchors also participate in stray message kills (as discussed above). If a message is addressed above (or below) the Enumerated address space, it will be caught and discarded by the Anchor. [0254]
  • Bridge RIF [0255]
  • A primary function of the Bridge is to direct traffic between rings. During Enumeration, the Bridge learns all it has to know about the topology. The signal interfaces of a bridge are identical to two sets of the regular RIF. The only exception is the clock, which has a tree topology. Other tag-along signals, like scan, take the longest (crossover) route. From a hardware point of view, the bridge can be viewed as two RIFs connected back to back. However, the bridge provides additional functionality. For one, the bridge records the first input to receive the Enumeration message. The end lucky enough to get hit first by Enumeration is labeled near, because it is closer to the Anchor. The other end is labeled far. Also, the incoming Enumeration address is recorded as the low range. The Enumeration message is then sent on to the far side. When it returns on the far side dmessage input, the address is recorded again as the high address. At this point the bridge is ready to work. [0256]
  • During normal operation, Supervisor request messages, in one embodiment, are crossed to the other side. Supervisor response messages are moved to the near umessage output. Program write messages and Program read requests are treated as work messages. Program read responses are moved to the near umessage output. Work messages are routed based on the low/high bounds: if the message address is between the low/high bounds, the message is moved to the far umessage output; otherwise the near side gets it. The far side also participates in detecting and removing stray messages. [0257]
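  • The bridge routing rules above can be condensed into a decision table. The following C sketch is a simplified behavioral model under hypothetical names; it ignores which physical interface the message arrived on and the stray-message filtering at the far side:

    #include <stdint.h>

    /* Hypothetical message categories; the real distinction is encoded
     * in the type field of the message. */
    typedef enum { SUPERVISOR_REQUEST, SUPERVISOR_RESPONSE,
                   PROGRAM_READ_RESPONSE, WORK_OR_PROGRAM } msg_category_t;

    typedef enum { TO_FAR_UMESSAGE, TO_NEAR_UMESSAGE } bridge_route_t;

    /* Low/high bounds recorded while the Enumeration message traversed
     * the bridge, as described above. */
    typedef struct {
        uint32_t low;    /* recorded when Enumeration first arrived   */
        uint32_t high;   /* recorded when it returned on the far side */
    } bridge_bounds_t;

    /* Routing decision for a message at a bridge, per the stated rules. */
    bridge_route_t bridge_route(const bridge_bounds_t *b,
                                msg_category_t cat, uint32_t addr)
    {
        switch (cat) {
        case SUPERVISOR_REQUEST:    return TO_FAR_UMESSAGE;  /* crossed   */
        case SUPERVISOR_RESPONSE:   return TO_NEAR_UMESSAGE; /* to Anchor */
        case PROGRAM_READ_RESPONSE: return TO_NEAR_UMESSAGE; /* to Anchor */
        case WORK_OR_PROGRAM:
        default:
            /* Work messages route by address bounds. */
            return (addr >= b->low && addr < b->high)
                 ? TO_FAR_UMESSAGE : TO_NEAR_UMESSAGE;
        }
    }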
  • In one embodiment, messages appear to the member module through the rif_i_* signals. [0258]
  • These signals include: [0259]
  • rif_i_write: changes just after the rising edge of the clock; if active, a valid write message has arrived. Valid means correct type and context; the user does not have to worry about decoding message types and such; [0260]
  • rif_i_read: changes in the same way; means a valid read message has arrived; [0261]
  • rif_i_options[5:0]: bits extracted from the type part of the message. For read they mean snoop, width, last, first and increment; for write they mean last, snoop and size bits; [0262]
  • rif_i_ok: the member generates a positive acknowledge to the ring interface. This signal should be valid (or negated) shortly after rif_i_read or rif_i_write becomes valid. If OK is negated during this cycle, the same message data will be driven on the next cycle. Members should make every effort to keep this signal active as much as possible; [0263]
  • rif_i_addr[19:0], rif_i_datal[31:0] and rif_i_datah[31:0] [0264]
  • General controls entering a RIF include: [0265]
  • rif_clock: clock; [0266]
  • rif_reset: reset; [0267]
  • rif_activated: the member has received the ok to operate. This signal is useful for Rx peripherals, so that they do not start bothering anyone without activation; [0268]
  • rif_self_address[19:0]: self address on the ring; [0269]
  • Constant controls exiting a member and entering ring_control include: [0270]
  • module_id[7:0]: these bits can be used by members to tell the system something specific about themselves. For example, Ethernet MACs can use one of these signals to tell the world whether they are 10 or 100 Mbit connected; [0271]
  • rif_o_type[7:0]: the type of the outgoing message; [0272]
  • rif_o_addr and rif_o_datal/datah: the rest of the message bits; [0273]
  • rif_o_ok: if in the current cycle this signal is inactive (low), do not change the message on the next positive edge. [0274]
  • Ring_control parameters include: [0275]
  • ring_interface_unit (also called ring_control) has two parameters, which should be set at Verilog instantiation time. ADDRESS_SPACE: this number signifies the number of internal address lines that should enter the member. For example, if a member has an internal memory map of 256 bytes, it needs 8 address lines to address this space, so its ADDRESS_SPACE should be set to 8. It also means that the 12 most significant bits of the message address are used to recognize a message to this member. MODULE_ID: each hardware ring member gets, for example, 8 bits for a unique ID. This ID is common to all instances of the same hardware; for example, all Ethernet MACs have the same ID. To distinguish between different MACs, the self_address and user_id bits can be used. The Module ID can be examined by the Anchor using Who_Am_I messages. The Module ID typically is part of the response by any module. [0276]
  • Reset on the Ring [0277]
  • Each ring-based SOC typically has only one Anchor. The hardware reset starts at this Anchor. The Anchor has a hw_reset input pin. From this pin, reset is sent in two directions. One direction is down the ring. The other direction is to the module that hosts the Anchor, for example, a packet processor. The reset propagates through the ring in the logical ring order. It is the same path all supervisor messages take, although the reset is a signal rather than a message. However, it is unconditionally flip-flopped at each ring member. It is also possible to force a soft reset on ring members using Activate messages. [0278]
  • In accordance with one embodiment of the present invention, a rings-based system is provided. The rings-based system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, wherein the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, and where the message causes a reset, such as a soft reset, of the given ring member if the message is applicable to that ring member. The message preferably includes address information corresponding to the given ring member. The message can include an activate message that includes at least one bit for causing a reset. [0279]
  • The message, in one embodiment, causes a reset by writing at least one bit from the message into a ring interface for the given member. In this case, the ring interface can include a bit that is reset by the message, where the bit preferably includes an activated bit or a reset bit. The ring interface can be adapted to provide an output to the given ring member for causing the reset, wherein the output preferably includes a control pin coupled to the given ring member. [0280]
  • In accordance with another embodiment of the present invention, a rings-based system is provided. The rings-based system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, wherein the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member; and wherein the system further comprises a reset control signal that causes multiple members of the ring network to be reset (such as a hard reset). [0281]
  • The reset control signal can include a hardware signal that is sent independent of the message. Furthermore, the reset control signal can be sent on a different line from the message. The reset control signal can be adapted to cause all ring members except for the member from which the reset signal originates to be reset. The reset control signal, in one embodiment, causes a reset by causing the reset of bits in ring interfaces corresponding to the multiple members. In this case, the ring interfaces can provide an output to their corresponding ring members to cause the resets, where the outputs can include control pins coupled to the corresponding ring members. [0282]
  • In accordance with an additional embodiment of the present invention, a rings-based system is provided. The rings-based system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, the system being adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member, wherein the system includes a message that can cause a reset of the given ring member if the message is applicable to that ring member, and wherein the system further includes a reset control signal that causes multiple members of the ring network to be reset. The message that can cause a reset can cause a soft reset of the given ring member, wherein the reset control signal causes hard resets of the multiple members. [0283]
  • Message Types and Formats [0284]
  • Messages come in roughly four categories: [0285]
  • Supervisor requests—include reset, Enumerate, Who_Am_I requests, activate, freeze. These messages are generated by Anchor and are flooded through the network. [0286]
  • Supervisor response—include Exception, WhoAmI_response. These supervisor messages are generated by regular members and float to the Anchor for its attention. [0287]
  • Programming—includes program write and program read messages. [0288]
  • Work—includes work_read and work_write. [0289]
  • The Enumerate message: The Enumerate (or Enum) message is initiated by the active Anchor. In each ring system there is only one active Anchor. An Anchor decides it is active if it is told to start the Enumeration through its omessage inputs. The message can include a header field, a data field, a next available address field, a ring ID, and the like. The ring ID bit is flipped every time the message crosses a bridge. It is recorded in the activate register in every ring interface. This bit can later be used by software to determine the exact ring topology. [0290]
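  • Although the patent does not give the exact algorithm, the address bookkeeping implied by Enumeration (power-of-two alignment of self-addresses, with the alignment gaps that stray messages can later hit) can be sketched as follows; all names are hypothetical:

    #include <stdint.h>

    /* Hypothetical view of the Enumerate message's next-available-address
     * field: each member aligns the value up to its own power-of-two
     * boundary, claims that block, and relays the incremented value. */
    typedef struct {
        uint32_t next_available;   /* next free ring address */
    } enum_msg_t;

    /* Returns the self-address this member latches; updates the message
     * before it is relayed to the next member. */
    uint32_t enumerate_member(enum_msg_t *msg, unsigned address_space)
    {
        uint32_t size = 1u << address_space;            /* 2^ADDRESS_SPACE */
        uint32_t self = (msg->next_available + size - 1u) & ~(size - 1u);
        msg->next_available = self + size;              /* claim our block */
        return self;                                    /* -> self_address */
    }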
  • Who_am_I message: To learn the topology, the Anchor starts a WhoAmI_request message. Each member that receives this message first responds to it, then relays the request message. This order assures that the Anchor will see the request message only after all responses; thus it can determine that the WhoAmI process has ended. In the request message, the field typically used is the type field. In the response, the address part of the message is the module's Self_Address and the data field holds info about the module. [0291]
  • Activate message: The Activate message is issued through the Anchor. It carries the address of a specific member and a few bits in the data field used to write the activation register. The bits in the activation register control the state and behavior of the members. [0292]
  • Freeze message: The freeze message unclogs rings and deactivates all members. [0293]
  • Tools for Module and Ring Network Builders [0294]
  • Write Ahead Mode—Read operations in a rings-based architecture typically are much more time consuming than write operations. Status registers usually are inspected by CPUs before sending or receiving data, and it generally is desirable to get status fast; the delay of a two-way trip from CPU to peripheral and back often is unacceptable. Accordingly, in another inventive aspect of at least one embodiment of the present invention, the peripheral, every time its status changes, sends the status ahead to one or more pre-arranged locations in a CPU's RAM or other device. The extension of this idea is to change every critical read to a send-ahead write. In essence, every time an important parameter changes in some peripheral, its value is written to an agreed memory location in the asker's space. For example, suppose the CPU needs to know how many free entries there are in a Utopia fifo. Instead of a read operation initiated by the CPU, the fifo, each time this number significantly changes, will write it to some agreed location of the CPU's RAM. The CPU now only needs to read its local memory. [0295]
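  • A minimal sketch of the write-ahead decision, assuming a peripheral that tracks the last value it reported and a configured significance threshold (both names hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical write-ahead bookkeeping kept by a peripheral; the
     * agreed CPU RAM address and the threshold would be configured in
     * advance. */
    typedef struct {
        uint32_t agreed_cpu_addr;  /* pre-arranged location in CPU RAM  */
        uint32_t last_reported;    /* last value written ahead          */
        uint32_t threshold;        /* minimum change worth a message    */
    } write_ahead_t;

    /* Decide whether a new free-entry count should be written ahead. */
    bool write_ahead_due(write_ahead_t *wa, uint32_t free_entries)
    {
        uint32_t delta = free_entries > wa->last_reported
                       ? free_entries - wa->last_reported
                       : wa->last_reported - free_entries;
        if (delta < wa->threshold)
            return false;            /* change too small to bother CPU */
        wa->last_reported = free_entries;
        return true;                 /* emit work write to agreed addr */
    }

  • The caller would emit a work write to the agreed CPU RAM location whenever the function returns true, so that the CPU's subsequent status checks become reads of its own local memory.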
  • To implement the above write ahead modality, a rings-based system on a chip is provided in accordance with one embodiment of the present invention. The rings-based system comprises a plurality of ring members on a ring that communicate using point-to-point connectivity, a message traversing the ring from member to member, where the system is adapted so that upon the message arriving at a given ring member the message is processed by that member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member. The system also is adapted to process both read messages and write messages. The plurality of ring members includes a CPU and at least one peripheral that exchanges data with the CPU, wherein the peripheral includes at least one status memory that stores data describing the status of the peripheral, and where the system is configured to write ahead status changes that are accessible by the CPU. [0296]
  • The system also can be adapted to perform write ahead status changes that would otherwise be initiated by the CPU as read operations. Likewise, the write ahead operations can be programmed to occur based on read operations that would otherwise be initiated by the CPU on a regular basis. The system can be adapted to write ahead status changes to a RAM on the CPU or a RAM that is accessible by the CPU. The CPU can comprise a control protocol processor in a communications chip or network processor in a communications chip. The status memory may comprise at least one status register. [0297]
  • In at least one embodiment, the write ahead operations are performed for some peripheral status changes but not other peripheral status changes. Additionally, the write ahead operation is performed or not performed depending on the nature of the status change. Alternatively, the write ahead operation is performed or not performed based on the magnitude or the quantity of the status change. [0298]
  • In accordance with another embodiment of the present invention, a write-ahead method in a rings based communication system, such as a communications processor or a network processor, is provided. The method comprises identifying at least one module in a ring network that includes status registers that store status information of regular interest to a processor in the ring network, identifying which status information can be transmitted to the processor as a write ahead operation initiated by the at least one module instead of a read operation initiated by the processor, and programming the at least one module to transmit the identified status information as a write ahead operation. In one embodiment, the step of programming causes the average number of read operations initiated by the processor to decrease. [0299]
  • In one embodiment, the identification comprises identifying which status changes are of critical importance or of regular interest to the processor. Alternatively, the identification can include identifying what magnitude or level of status change will cause the write ahead operation. [0300]
  • Land Bridges—Most members on a ring typically communicate in an asymmetric way. For example, EnetRx (Ethernet receiving) traffic is mostly from a peripheral to a packet processor. For EnetTx (Ethernet transmitting) it is the other way around. A pair of members is asymmetric if one is mainly the sender and the other is mainly the receiver in their relationship. In this case it makes sense to put the sender upstream from the receiver. But some pairs are almost symmetric; a packet processor paired with a DMA is such an example. As such, no matter how they are placed on a ring, one direction is bound to suffer. In this case, one or more land bridges generally will provide the solution. [0301]
  • As discussed previously with reference to FIG. 14, a single land bridge can be added to minimize latency between two members of a ring. As illustrated in FIG. 34, two or more bridges 332, 334 may be added to a ring 336 to further minimize the number of modules between any two ring members. Although each bridge 332, 334 adds two interfaces (members) to the ring network, this generally will not affect the latency significantly, since a message is unlikely to travel the entire perimeter of the ring network due to the bridges. [0302]
  • Implementation of an External Ring Interface [0303]
  • Referring now to FIG. 35, an exemplary external ring interface 340 is illustrated in accordance with one embodiment of the present invention. Ring connections between two members can include more than 100 signals; each message can include, for example, at least 104 signals. Therefore, it may be unreasonable to add this number of pins (twice) to implement the external ring interface. As such, it may be preferable to implement a dual-purpose peripheral interface 340, such as Utopia. The normal mode of operation for a Utopia interface is sending/receiving ATM cells. In a similar manner, two ring networks, such as two network processors, can be connected with Utopia interfaces back to back. In this mode, instead of cells, the Utopia pins convey messages. This slows the specific ring speed, but not the chip speed: if the Utopia interface is behind a bridge, only messages to the other side are slowed down, not the internal messages. Using the Utopia infrastructure in this way also enables connection of an external FPGA 344 (Field-Programmable Gate Array) as a new peripheral. [0304]
  • The following is a non-exhaustive list of some of the identified advantages associated with the rings topology of the present invention: high speed circuit design—all connections are point-to-point unidirectional connections; scalability—once the address routing is resolved, the actual topology can be changed relatively easily; the switch fabric is transparent to software—only delays are affected by the topology; typically easier to implement than a crossbar or switch design; debug and test visibility—each member can be examined and operated alone; the possibility of late processing load balancing—different peripherals can be assigned to different CPUs; and the possibility of no need for precise across-the-chip clock alignment—the clock can be adapted to run along with messages. [0305]
  • Although any of a variety of CPUs may be implemented as a module of the ring network topology described herein, ring networks are particularly well-suited for packet processors, various embodiments of which are described in detail below. The packet processor of the present invention may on occasion be referred to herein as the Vobla, the network processor, and similar variations. According to one embodiment, the network processor of the present invention may be implemented as part of a communications processor having multiple modules that are interconnected using the rings architecture described above. The modules in such an arrangement for a communications processor may include the network processor of the present invention (for data plane processing of packets), a control packet processor (for control plane processing as a flow manager), various peripheral modules, and so forth. [0306]
  • In accordance with one embodiment of the present invention, a rings-based system is provided. The rings-based system comprises a plurality of ring members on a ring network that communicate using point-to-point connectivity, a message traversing the ring from member to member, where the system is adapted so that upon the message arriving at a given ring member the message is processed by that ring member if the message is applicable to that ring member, and if the message is not applicable to that ring member, the message is passed on to the next ring member; and the system further comprising means for providing an external ring interface that enables communication with at least one external peripheral device. The means can comprise a field programmable gate array and/or a memory port ring member on the ring network. The at least one external peripheral device can include one or more of a DSP, encryption engine, external bus, external memory, a second ring network, and the like. [0307]
  • In one embodiment, the means is adapted to perform handshaking between the protocols of the ring network and the at least one external peripheral device, wherein the handshaking preferably includes converting message data from the ring network into transaction data. The means also can be adapted to allow the ring network to write out messages to the at least one external peripheral and the at least one external peripheral to generate transactions converted into messages for the ring network. [0308]
  • The means, in one embodiment, operates as a shared memory between the ring network and the at least one external peripheral. In this case, the means may include a memory that operates as a RAM for messages received from the ring network and as a FIFO for transactions received from the at least one external peripheral device. The means also may include a memory, wherein the ring network can write data to an address in the memory to cause an interrupt in the at least one external peripheral device. [0309]
  • In one embodiment, the ring network is a first ring network on a first chip, where the rings-based system further comprises a second ring network on a second chip, and wherein the first ring network and the second ring network interface through the means to the at least one external peripheral device. [0310]
  • Alternatively, the ring network can include a first communications processor including a first protocol processor and a first network processor, and the system can further comprise a second communications processor including a second protocol processor and a second network processor, wherein the first communications processor and the second communications processor interface through the means to the at least one external peripheral device. [0311]
  • In accordance with yet another embodiment of the present invention, a network processor implemented on a chip is provided. The network processor comprises means for processing a plurality of protocols including ATM, frame relay, Ethernet, and IP, said means being programmable using a set of library commands to process additional protocols, and wherein said means comprises an arithmetic logic unit (ALU), a load/store unit (LSU), a preload/bump unit (PBU), a register file unit (RFU), an agent interface, and an internal memory. The network processor, in one embodiment, further comprises a fetch unit and a program sequencer. [0312]
  • The ALU can be adapted to perform arithmetic and logic operations on data operands. The LSU can be adapted to perform address calculations in order to address data operands in the internal memory. The LSU calculates an effective address according to one of five available options, including: (1) the effective address is the content of a register from the RFU; (2) the effective address is the sum of the content of a first register from the RFU and the content of a second register from the RFU; (3) the effective address is the sum of the content of a first register from the RFU and the content of a second register from the RFU after the second register is shifted by a specified number of bits; (4) the effective address is the sum of the content of a register from the RFU and a displacement that occupies a specified number of bits in an instruction word; and (5) the effective address is an absolute address included in the instruction word. The PSU (program sequencer unit), in one embodiment, performs decoding of instructions received from the internal memory. The fetch unit can be adapted to control what instructions are fetched from memory for decoding by the PSU. The internal memory can be adapted for storing program information and data. [0313]
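  • The five options can be written out as a small behavioral model. This C sketch is illustrative; the register names and the mode encoding are assumptions, not the processor's actual instruction format:

    #include <stdint.h>

    /* r1 and r2 stand for RFU register contents; shift, disp and
     * abs_addr come from the instruction word. */
    typedef enum { EA_REG, EA_REG_PLUS_REG, EA_REG_PLUS_SHIFTED_REG,
                   EA_REG_PLUS_DISP, EA_ABSOLUTE } ea_mode_t;

    uint32_t effective_address(ea_mode_t mode, uint32_t r1, uint32_t r2,
                               unsigned shift, uint32_t disp,
                               uint32_t abs_addr)
    {
        switch (mode) {
        case EA_REG:                  return r1;                 /* (1) */
        case EA_REG_PLUS_REG:         return r1 + r2;            /* (2) */
        case EA_REG_PLUS_SHIFTED_REG: return r1 + (r2 << shift); /* (3) */
        case EA_REG_PLUS_DISP:        return r1 + disp;          /* (4) */
        case EA_ABSOLUTE:             return abs_addr;           /* (5) */
        }
        return 0; /* unreachable for valid modes */
    }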
  • The RFU, in one embodiment, comprises a first register file for a current task and a second register file for preloading register values for a next task. In this case, data may be read from or written to the first register file based on a comparison between a current task ID and a task ID associated with the first register file. The RFU also can comprise a third register file for storing register values for the current task that are not stored in the first register file. In this case, data may be read from or written to the third register file when the current task ID and the task ID associated with the first register file are not the same. In one embodiment, a task switch is performed by the network processor by making the next task the current task and preloading a further next task. The performance of a task switch can include treating the second register file as the third register file after the task switch. [0314]
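  • The double-buffered register files and the zero-overhead switch can be modeled as a pointer swap. The following C sketch is a simplification under assumed names; the real RFU also consults the third (overflow) register file on a task-ID mismatch, which is omitted here:

    #include <stdint.h>

    #define NREGS 16   /* illustrative register-file depth */

    /* One register file serves the current task while a second is
     * preloaded for the next. */
    typedef struct {
        uint32_t regs[NREGS];
        unsigned task_id;
    } regfile_t;

    typedef struct {
        regfile_t *current;    /* task_id matches the running task */
        regfile_t *preload;    /* being filled for the next task   */
        unsigned   current_task;
    } rfu_t;

    /* Register access checks the task ID, as the text describes; a
     * mismatch would fall through to the third register file. */
    uint32_t rfu_read(const rfu_t *rfu, unsigned reg)
    {
        /* assumes rfu->current->task_id == rfu->current_task */
        return rfu->current->regs[reg];
    }

    /* Zero-overhead switch: the preloaded file becomes current, and the
     * old current file becomes the preload target for the task after
     * next. */
    void task_switch(rfu_t *rfu, unsigned next_task)
    {
        regfile_t *old = rfu->current;
        rfu->current = rfu->preload;   /* registers already loaded */
        rfu->preload = old;            /* will be bumped/preloaded */
        rfu->current_task = next_task;
    }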
  • The agent interface, in one embodiment, allows the network processor to interface to external modules for executing instructions, where the external modules can include one or more of a CRC module, encryption module, hashing module, and table lookup module. [0315]
  • In yet another embodiment of the present invention, a communications processor implemented on a chip is provided. The communications processor comprises a network processor including means for processing a plurality of protocols including ATM, frame relay, Ethernet, and IP, said means being programmable using a set of library commands to process additional protocols, wherein said means comprises an arithmetic logic unit (ALU), a load/store unit (LSU), a preload/bump unit (PBU), a register file unit (RFU), an agent interface, and an internal memory. The communications processor further comprises a protocol processor for controlling the network processor, wherein the protocol processor performs control plane processing and the network processor performs data plane processing. The network processor can be adapted to process instructions by performing a fetch, decode, address, execute, and write. [0316]
  • In one embodiment, the network processor and the protocol processor are ring members on a ring network, and the system further comprises a plurality of other ring members on the ring network. In this case, the network processor includes a plurality of compounds that share a single ring interface to the ring network, wherein the compounds can include, for example, a doorbell agent for controlling the execution sequence of tasks for the network processor. The compounds also may include a multireader agent for servicing requests to read data from the internal memory, a message sender agent for sending messages onto the ring network, a DMA agent for sending messages to initiate a DMA controller on the ring network, a CRC agent for performing CRC calculations, and/or a debug module. Generally, a packet processor includes the following capabilities that are typically not found in general purpose microprocessors: [0317]
  • Zero overhead task switching—Usually, each interface (I/F) port requires at least two tasks (RX [receive], TX [transmit]) to handle the datapath processing. A system that includes several ports would require two or more active tasks for each port. As such, the packet processor should be able to switch tasks with minimum overhead. The packet processor may allocate shadow memory (4-8 tasks) to store registers and task status. The priority scheme to choose the next_task_to_run is hardware (HW) based and is not performed by software (SW) as in a RISC (Reduced Instruction Set Computer) model. [0318]
  • Parallel engines—Processing of packets can use parallel machines to accelerate performance. Examples of this capability include DMA, CRC, a Lookup engine, and a Peripheral Transfer Machine. A well-built packet processor has the mechanisms in place to issue transactions to, and receive them from, parallel machines synchronously without stalling the packet processor. [0319]
  • Data movements—Packet processing requires data movements from First-In-First-Out (FIFO) memory to internal memory, and from internal memory to external memory and vice versa. This is performed using parallel Direct Memory Access (DMA) machines. Data transfers should be optimized and deterministic within boundaries. Hence the right mechanisms have to be in place between the DMAs and the packet processor to allow the transactions between the engines and to ensure deterministic behavior. [0320]
  • Scalability—One way to scale the throughput of a packet processor is by instantiating several engines. Hence, it is desirable that the programming model and the system architecture be flexible enough to accommodate scalability. [0321]
  • Special instructions—Packet processing uses special operations that are not common for a general purpose processor. Instructions like compare immediate under mask (to match specific bits; see the sketch after this list), activation of parallel engines using instructions like CRC, DMA, HASH, and LIST SEARCH, and mechanisms such as sticky bits for compare and jump, are derived from the needs of packet processing. [0322]
  • Inter-task communication—Inter-task communication is supported by the architecture. Traditional RISC machines generally use SW for this communication. [0323]
  • Efficient link list operation—Data structures like link lists, queues and buffers are common in communication systems. A flexible packet processor should be able to manage a large number of different queue types in an efficient and quick way. [0324]
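  • As an example of such a special instruction, the following C sketch gives one plausible semantics for the compare-immediate-under-mask operation mentioned in the list above; the function name and operand layout are illustrative, not the processor's actual encoding:

    #include <stdbool.h>
    #include <stdint.h>

    /* Only the bits selected by the mask participate in the comparison,
     * letting one instruction match specific header bits of a packet. */
    bool cmp_imm_under_mask(uint32_t value, uint32_t imm, uint32_t mask)
    {
        return (value & mask) == (imm & mask);
    }

    /* Example: test an IPv4 version nibble in the top bits of a word:
     * cmp_imm_under_mask(first_word, 0x40000000u, 0xF0000000u)        */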
  • Exemplary Processing Requirements [0325]
  • According to one aspect of the invention, the flexible packet processor should support processing of the following: ATM, Frame Relay (FR), IP/Ethernet, IWF (TDM to Packets), AAL2 for wireless base stations, IP, and MPLS. [0326]
  • ATM is by far the largest access method in the access space. A packet processor in this space should be able to terminate ATM virtual circuits (VCs) at Customer Premises Equipment (CPE) and should be able to switch ATM. ATM is of particular interest because a vast majority of the DSL approaches use ATM as the carrier technology. Frame Relay is of interest because it is commonly used in corporate access (e.g., using T1s or NxT1). [0327]
  • After dominating the LAN space, Ethernet is becoming a cost effective technology for the Metropolitan Area Network (MAN). This obviates the need for a costly router (no ATM) at the corporate edge. This is a new approach that ISPs (CLECs [Competitive Local Exchange Carriers]) use as a way to replace the old Telco access (leased lines). However, Ethernet access does not solve the issue of how to deal with corporate voice. Typical requirements for IP/Ethernet would be IP routing and Ethernet bridging at 100 Mbps and approaching 1G-Enet. [0328]
  • Packet processing for inter-working functions (IWF) (e.g., TDM to packets) is typically found in Voice Gateways (VG) and in Wireless Base Stations (WBS). The VG interfaces the POTS (plain old telephone system) network on one side and the packet network on the other side. Voice calls are modified (compressed and packetized, or uncompressed and circuitized) between the networks. Hence typical processing requirements at the VG include: termination of AAL2 streams; support for CES (Circuit Emulation Services) (AAL1) to emulate T1 services; termination of RTP (Real Time Protocol) (VoIP) packets; and the like. AAL2 processing may also find useful application in Wireless Base Stations. New generation WBSs use ATM as their backbone network. To optimize bandwidth, AAL2 may be chosen to carry both voice and data. In that case, the following processing requirements result: AAL2 termination at the BTS (Base Transceiver Station); AAL2 switching at the BTS and at the MSC (Mobile Switching Center)/BSC (Base Station Controller); AAL2 termination at the MSC/BSC (OC-3, with IP routed to the ISP); and IMA (Inverse Multiplexing over ATM) used as the connection between BTSs and the MSC, both for redundancy and for cost. [0329]
  • The flexible packet processor should handle IP because IP processing can be found in various applications in the access space, such as the following: an ISP aggregation router; a DSLAM for handling frames; a cable modem head end; and a wireless base station. MPLS (Multiprotocol Label Switching) is a newcomer to the access space. It is being used for traffic management and for Quality of Service (QoS) control. It is desirable that access equipment support LSR (Label Switched Router) edge-device functionality for MPLS. [0330]
  • As demonstrated above, the access market requires different access methods. The access market has a need for IWF between these different methods, which, in turn, drives the requirement for unique processing capabilities. Also, the different market segments have many similarities regarding their processing requirements. Thus, a flexible packet processor according to the invention can form the basis of an access platform that is capable of addressing multiple applications in this space. [0331]
  • Architectural Overview of a Flexible Packet Processor [0332]
  • The flexible packet processor in accordance with various embodiments of the present invention is a general-purpose network processor core, allowing it to support many system-on-chip (SOC) configurations. A library of modules containing memories, peripherals, accelerators, and other processor cores makes it possible for a variety of highly integrated and cost-effective SOC communication devices to be built around the packet processor. FIG. 36 shows a block diagram of an exemplary SOC chip 350 made up of the network processor core 354 and associated SOC components (described below) according to an embodiment of the invention. Although not indicated in this configuration, a typical SOC can contain more than one network processor core 354. [0333]
  • Internal Memory Expansion Area (Internal Memory 352)—On-chip memories operating at full core frequency are connected to the network processor core 354 through this component. The internal memory is unified and can be used for both program and data storage. Different technologies such as SRAM or ROM can be used to implement the internal memory. [0334]
  • Network Processor Core 354—The network processor core is the processor in which the network data path application code is executed, and which may include: a program sequencer unit (PSU); a load store unit (LSU); a fetch unit (FTU); a data arithmetic logic unit (DALU); a register file (RFU) including support of fast task switching; a preload and bump unit (PBU) for efficient task switching and context save and restore; and the like. These components are discussed below in greater detail. [0335]
• A companion module (sometimes called a compound) that is tightly coupled to the network processor core is the doorbell scoreboard module (doorbell) shown in FIG. 36. The doorbell receives requests for service from peripherals, accelerators and DMAs, and then determines the next task ID once a task switch occurs in the network processor. [0336]
• Peripheral Expansion Area 356, Accelerators 358 and System Expansion Area 360—These components shown in FIG. 36 include the functional units that interface between the network processor core and the application, including the functions that send and receive data from external input/output sources. In addition, these components include accelerators 358 that execute portions of the application in order to boost performance and decrease power consumption. These components are application-specific and may or may not include various functional units such as: a host interface; an external memory interface (e.g., an SDRAM controller); a serial interface (USB, UART, SSI [Synchronous Serial Interface], timers); a communications interface (Utopia, MII); a CRC accelerator; a table look up coprocessor; a smart FIFO; a data pump; a direct memory access (DMA) controller; as well as other CPU cores, such as packet processors (PPs). [0337]
• To provide the data exchange between the core and the other on-chip blocks or modules, the following ports may be implemented: data memory ports (address, data read and data write) used for data transfers between the core and memory; a program memory port (address and data read) for fetching code from the memory to the core; an agent port to support tightly-coupled external user-definable functional units such as peripherals, accelerators, DMAs, smart FIFOs, and so forth; and a context memory port (address, data read and data write) used for the preload and bump of registers for fast task switching. [0338]
• Referring now to FIG. 37, the network processor core 354 is illustrated in greater detail in accordance with at least one embodiment of the present invention. As discussed above, the network processor core, in one embodiment, includes the following: [0339]
• Data Arithmetic Logic Unit (DALU or ALU) 370—The DALU 370 (also referred to as the ALU below) performs the arithmetic and logical operations on data operands in the network processor core. The data registers can be read from or written to memory over, for example, a 32-bit wide data bus as 8-bit, 16-bit, or 32-bit operands. The source operands for the ALU 370 are 32 bits wide and originate either from data registers or from immediate data (Imm). The results of ALU operations are stored in the data registers. [0340]
• According to one aspect of the invention, ALU operations are performed in one clock cycle. The destination of each arithmetic operation can be used as a source operand for the operation immediately following the arithmetic operation without any time penalty. In one embodiment, the components of the ALU 370 are as follows: an integer arithmetic unit for 32-bit non-saturated three-operand arithmetic operations; a logic unit for 32-bit logic operations; a bit field unit (BFU) for multi-bit shift, rotate, swap and bit-field insert and extract operations; and a condition code generation unit. [0341]
• The ALU 370 may read two operands from the register file via the dual source bus (src1 and src2 in FIG. 37), or one operand from a register via the source bus and a second immediate operand via the immediate bus (the Imm input to the DALU in FIG. 37). The ALU 370 generates a result into a destination register via the destination bus (dest in FIG. 37). [0342]
• The condition codes are optionally generated in the condition code register (part of the R1 register, discussed further below) depending on the instruction type. [0343]
• The ALU 370 may support both signed and unsigned arithmetic. Most of the unsigned arithmetic instructions are performed the same as the signed instructions. However, some operations may require special hardware and may be implemented as separate instructions. When performing an unsigned comparison, for example, the condition code computation is different from signed comparisons. The most significant bit of the unsigned operand has a positive weight, while in signed representation it has a negative weight. Special condition codes and instructions may be implemented to support both signed and unsigned comparisons. [0344]
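• For illustration only (this example is ours, not the patent's), the short C program below shows why a separate unsigned comparison is needed: the same 32-bit pattern orders differently under the signed and unsigned interpretations, so the condition codes must be computed differently.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t a = 0xFFFFFFFFu;   /* -1 when reinterpreted as signed */
        uint32_t b = 1u;

        int unsigned_lt = (a < b);                    /* 0: 4294967295 > 1 */
        int signed_lt = ((int32_t)a < (int32_t)b);    /* 1: -1 < 1 */

        printf("unsigned: a < b = %d\n", unsigned_lt);
        printf("signed:   a < b = %d\n", signed_lt);
        return 0;
    }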
• The Load Store Unit (LSU) 372 [0345]
• The LSU 372 performs address calculations using integer arithmetic needed to address data operands in memory. In addition, the LSU 372 generates change-of-flow program addresses. The LSU 372 operates in parallel with other network processor core resources to minimize address generation overhead. [0346]
• The effective address (EA) used to point to a memory location for a load or a store is calculated according to one of the following options. According to one embodiment, only the 16 least significant bits (LSBs) of the calculation result are considered. The options for calculating the EA include the following (a sketch in C follows the list): [0347]
  • Register indirect, No update (Rn): The EA is the content of a register Rn from the register file. [0348]
  • Indexed by register Ri (Rn+Ri): The EA is the sum of the contents of the register Rn and the contents of the register Ri. [0349]
  • Indexed by a shifted register Ri (Rn+(Ri<<m)): The EA is the sum of the contents of the register Rn and the contents of the register Ri after Ri is pre-shifted to the left by m bits. [0350]
  • Indexed by displacement (Rn+xx): The EA is the sum of the contents of the register Rn and a displacement xx that occupies m bits in the instruction word. The displacement is sign-extended and added to Rn to obtain the operand address. [0351]
  • Absolute address: The EA is the absolute address expressed in the instruction. [0352]
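• The C sketch below restates the five EA options; it is an editorial illustration under the assumptions stated in its comments (the mode names and function signature are ours), not a definitive model of the LSU.

    #include <stdint.h>

    /* Hypothetical encoding of the five addressing modes listed above. */
    typedef enum { EA_REG, EA_INDEXED, EA_INDEXED_SHIFTED, EA_DISP, EA_ABS } ea_mode_t;

    uint16_t effective_address(ea_mode_t mode, uint32_t rn, uint32_t ri,
                               unsigned m, int32_t disp, uint32_t abs_addr) {
        uint32_t ea = 0;
        switch (mode) {
        case EA_REG:             ea = rn;                  break; /* (Rn)         */
        case EA_INDEXED:         ea = rn + ri;             break; /* (Rn+Ri)      */
        case EA_INDEXED_SHIFTED: ea = rn + (ri << m);      break; /* (Rn+(Ri<<m)) */
        case EA_DISP:            ea = rn + (uint32_t)disp; break; /* sign-extended displacement */
        case EA_ABS:             ea = abs_addr;            break; /* absolute address */
        }
        return (uint16_t)ea;  /* only the 16 LSBs of the result are considered */
    }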
  • The Network Processor Registers [0353]
  • The network processor registers are classified into three types: General Purpose Registers (GPR); Special Purpose Registers (SPR); and Hidden registers (HR). The general purpose registers may be used by the programmer to load data from memory, execute arithmetic or logic operations, and store the data back into memory. The special purpose registers are registers that have an associated functionality, such as a task SPR, and so forth. Generally, SPRs may not be loaded or stored directly from/to memory. According to one approach, a dedicated move instruction can move data between general purpose registers and special purpose registers. Hidden registers are registers which are not exposed to the programmer, but reside in the hardware as part of the machine control (e.g., a current PC [Program Counter] register). [0354]
• The General Purpose Register File 374 [0355]
• The network processor of the present invention includes a special register file architecture and a memory block that are capable of managing a large number of tasks (threads) with substantially no cycle penalty. The memory block has the capacity to store the register context of the tasks. The register file architecture performs a reduced number of context save and restore operations and provides each active task with its own context registers. [0356]
  • The benefits of this approach, discussed in detail below, include at least some of the following: support of nearly unlimited tasks; no cycle overhead for context save and restore operations upon task switches; transparency to the programmer; and cost-effectiveness and low circuit overhead. [0357]
• One conventional approach to the multi-task switching issue provides that every task switch is accompanied by a context save and restore cycle, usually performed by software. This approach takes extra cycles. Another conventional approach uses special circuitry that allows access to the memory using wide busses, thus enabling multiple registers to be saved or restored at a time. This approach reduces the number of cycles, but complicates the interface to the memory (the Tricore CPU from Siemens uses this approach). Another approach uses multiple register files, one for each task. This approach has the disadvantage of limiting the number of tasks to the number of register files, and it is also a costly solution. The large number of register files can also impact the frequency of operation due to fan-out limitations. (Products using this approach include, for example, the Intel IXP1200 and the Lexra NetVortex LX8000 Network Processor.)
• According to one approach taken by the instant invention, the programming model of the network processor core has 32 general purpose registers. These registers can be read from or written to over the memory data buses (e.g., referring to FIG. 37, the src1, src2, and dest buses). Source operands for ALU instructions originate from these registers. According to one beneficial aspect of the invention, the destination of an ALU instruction is a register, and such a destination can also be used as a source operand for the ALU instruction immediately following, without any time penalty. [0358]
• At the heart of the network processor core 354 is a set of three register files and dedicated hardware that implements a mechanism for automatically saving and restoring the registers such that a task switch is accomplished with minimal overhead on the main flow. Upon entering a task, both the current and next task identification (task ID) are sampled. These three register files are as follows: the active register file—used to run the current task; the Shadow1 register file—contains the valid register values of the current task that do not exist in the active register file; and the Shadow2 register file—used to preload register values of the next task concurrent with the current task run. The active register file has 32 general purpose registers. These registers are part of the programming model and are exposed to the programmer. According to one approach, each register of the active register file has a 32-bit data field and a 6-bit tag field. The tag field holds the task ID, which identifies the task for which the data register value is valid. [0359]
• The network processor core 354 includes a boundary register which specifies for each of the registers whether it is considered a global register or a general register. The global registers may store global values that can be shared among multiple tasks, or they may store temporary values that are not preserved when the task yields and resumes processing. [0360]
• The Shadow register files (Shadow1 and Shadow2) are not part of the programming model, i.e., they are not exposed to the programmer. Each of the Shadow1 and Shadow2 register files includes, for example, 32 registers of 32 bits. [0361]
  • According to one approach, task switches do not require an explicit save/restore of the general registers. Saves and restores of the general registers are done implicitly by hardware according to the following mechanisms. In case of a write to a general register, the task ID associated with the register of the active register file is first compared to the current task ID. If the result is equality, this means that the register is maintained by the current task, and, therefore, the register is overwritten with the new value and the current task ID is marked in its tag field. A non-equal result means that the register contains valid data for a different task. In this case, the old register content is first sent to a write queue buffer to be saved in memory in a task ID context table, and then the new value is overwritten to the register and the current task ID is marked in its tag field. [0362]
• In case of a read from a general register, the task ID associated with the register is first compared to the current task ID. An equal result means that the register contains valid data for the current running task, and thus the data is read directly from the register. A non-equal result means that the register contains valid data for a different task. However, the valid data for the current task for that register resides in the Shadow1 register file, having been preloaded (into what was then Shadow2) concurrent with the execution of the previous task. As a result, the register value is read from the Shadow1 register file, and the register of the active register file remains unchanged. [0363]
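• A compact C model of the two tag-compare rules above may help; all names here are invented for illustration, and the bump path is reduced to a stub.

    #include <stdint.h>

    #define NREGS 32

    typedef struct { uint32_t data; uint8_t task_id; } areg_t;

    static areg_t   active[NREGS];    /* active register file: data + task ID tag */
    static uint32_t shadow1[NREGS];   /* current task's values not in the active file */

    static void bump_to_context_table(int r, uint32_t old, uint8_t owner) {
        (void)r; (void)old; (void)owner;  /* save to the owner's context table (stubbed) */
    }

    void reg_write(int r, uint32_t value, uint8_t cur_task) {
        if (active[r].task_id != cur_task)  /* stale: first save the old owner's value */
            bump_to_context_table(r, active[r].data, active[r].task_id);
        active[r].data = value;             /* then overwrite and re-tag */
        active[r].task_id = cur_task;
    }

    uint32_t reg_read(int r, uint8_t cur_task) {
        if (active[r].task_id == cur_task)  /* tag match: read the active file */
            return active[r].data;
        return shadow1[r];                  /* mismatch: read the preloaded shadow copy */
    }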
• A read or write access to a global register accesses the active register file directly without changing the register's tag. Concurrent with the execution flow of the current task, a special machine (the PBU 376 of FIG. 37) preloads the register values of the next task ID into the Shadow2 register file. [0364]
• Upon a task switch request, the following actions should take place: the preload of the register values of the next task should be completed; the Bump buffer is emptied—all data which was sent to the bump unit is saved in the context table; the next task becomes the current active task; the Shadow2 register file becomes the shadow for the current task (Shadow1); and a new next task is sampled and a new preload procedure is initiated onto Shadow2. Special care should be taken (and special logic may be implemented) to prevent hazard cases. For example, a mismatch in the register value occurs if a register in the active register file is tagged for a task ID which is identical to the next task ID, and that register is accessed as a destination in the current task. In this case the register value should be first saved in memory in its context location and then overwritten with the new value of the current task. However, since the previous task is identical to the next task, it could be that the register value is already preloaded into the next task shadow register file (Shadow2). In this case, the preloaded value in Shadow2 is no longer valid. [0365]
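• Condensed into C-like steps (the names are hypothetical stand-ins, and the hazard logic noted above is omitted), the switch sequence reads:

    #include <stdint.h>

    static uint32_t *shadow1, *shadow2;          /* shadow register files */
    static uint8_t current_task, next_task;
    static void wait_preload_complete(void) {}   /* stubs for the hardware actions */
    static void drain_bump_buffer(void) {}
    static uint8_t sample_next_id(void) { return 0; }
    static void start_preload(uint8_t t) { (void)t; }

    void task_switch(void) {
        wait_preload_complete();       /* preload of the next task's registers is done */
        drain_bump_buffer();           /* pending saves reach the context table */
        current_task = next_task;      /* the next task becomes the running task */
        uint32_t *tmp = shadow1;       /* Shadow2 becomes the current task's shadow */
        shadow1 = shadow2;
        shadow2 = tmp;
        next_task = sample_next_id();  /* sample a new next task... */
        start_preload(next_task);      /* ...and begin its preload into Shadow2 */
    }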
• FIG. 38 illustrates the register file structure and a mechanism for low overhead task switching according to an embodiment of the invention in accordance with the discussion above. In the top half 390 of FIG. 38, the current task ID is Task_X and the next task ID is Task_Y. In the bottom half 392 of FIG. 38, after a task switch the current task ID becomes Task_Y and the next task ID becomes Task_Z. [0366]
  • In accordance with one embodiment of the present invention, a method for efficient processing of tasks in a communications system is provided. The method comprises sampling a current task identifier and a next task identifier, providing a first register file for storing values for a current task, and providing a second register file for storing values for the current task that are not in the first register file. The method further comprises providing a third register file for preloading values for the next task, and performing a task switch by making the next task identifier the current task identifier and sampling a further next task identifier. The method can further comprise the step of completing the preload of the register values for the next task identifier which after the task switch is the current task identifier. In this case, the method may also comprise using the third register file as the second register file after the task switch. [0367]
• The first register file, in one embodiment, comprises registers with a data field and a task identifier field. In this case, the first register file has 32 registers, each register having a 32-bit data field and a 6-bit task identifier field. The first register file may be exposed to a programmer of the communications processor, and the second register file and the third register file are hidden from the programmer. In one embodiment, task switches are performed without an explicit save/restore of the register files. [0368]
  • The method can further comprise performing a write during execution of the current task by: comparing the current task identifier to a task identifier in the first register file; writing a value to the first register file when the current task identifier is the same as the task identifier in the first register file; and writing a value to the first register file when the current task identifier is not the same as the task identifier in the first register file after the content in the first register file is saved to a memory. The content in the first register file can be saved to a task identifier context table. [0369]
• The method may also comprise performing a read during execution of the current task by: comparing the current task identifier to a task identifier in the first register file; reading a value from the first register file when the current task identifier is the same as the task identifier in the first register file; and reading a value from the second register file when the current task identifier is not the same as the task identifier in the first register file. In this case, the content of the first register file may not be changed as a result of the read. [0370]
  • In an additional embodiment of the present invention, a system for efficient processing of tasks in a communications system is provided. The system comprises means for sampling a current task identifier and a next task identifier, a first register file for storing values for a current task, a second register file for storing values for the current task that are not in the first register file, a third register file for preloading values for the next task, and means for performing a task switch by making the next task identifier the current task identifier and sampling a further next task identifier. [0371]
  • In one embodiment, the means for performing a task switch completes the preload of the register values for the next task identifier which after the task switch is the current task identifier. Similarly, the means for performing a task switch uses the third register file as the second register file after the task switch. [0372]
• The first register file comprises registers with a data field and a task identifier field, wherein the first register file can have 32 registers, each register having a 32-bit data field and a 6-bit task identifier field, and further wherein the second register file and the third register file each have 32 registers. [0373]
  • The system may further comprise a processor which performs a write during execution of the current task by: comparing the current task identifier to a task identifier in the first register file; writing a value to the first register file when the current task identifier is the same as the task identifier in the first register file; and writing a value to the first register file when the current task identifier is not the same as the task identifier in the first register file after the content in the first register file is saved to a memory. The content in the first register file can be saved to a task identifier context table. The processor may comprise an ALU. [0374]
  • The system may also comprise a processor which performs a read during execution of the current task by: comparing the current task identifier to a task identifier in the first register file; reading a value from the first register file when the current task identifier is the same as the task identifier in the first register file; and reading a value from the second register file when the current task identifier is not the same as the task identifier in the first register file. In this case, the content of the first register file is not changed as a result of the read. In one embodiment, the means for performing a task switch comprises a preload and bump unit. The processor may comprise an ALU. [0375]
• The Preload and Bump Unit (PBU) 376 [0376]
• Referring back to FIG. 37, the PBU 376 controls the access of data memory for the automatic save and restore of registers in their context table in memory. A save of a register's content in its location in the context table is performed whenever the register in the active register file is addressed as a destination and the register contains valid data for a task different from the current running task. Generally, only one request for a save can be captured in the PBU 376 for a single instruction because only one destination can appear in an instruction. [0377]
• The PBU 376 includes a write queue with a number of entries in order to minimize the interference with the main program flow, thus optimizing the total execution time. Whenever a register addressed as a source does not contain valid data for the current running task, the data is read from the Shadow1 register file where it was previously preloaded. [0378]
• The PBU 376 is also responsible for controlling the preload of the next task registers into the Shadow2 register file. The PBU 376 generates the data memory accesses for save (write) and preload (read) using the context address and data busses. According to one embodiment of the invention, the load store cycles of the active flow have the highest priority, followed by the preload cycles, and, at the lowest priority, the save cycles from the write buffer. [0379]
• The Program Sequencer Unit (PSU) 378 [0380]
• The PSU 378 performs the instruction decoding and generates the controls for the other core units. The PSU 378 controls the program flow, including all scenarios involving a change of flow. [0381]
• Fetch Unit (FTU) 380 [0382]
• The FTU 380 is responsible for controlling the program counter (PC) for instruction fetch operations. According to one embodiment of the invention, the PC may be derived from one of the following sources: sequential increment; jump to an absolute address; jump to an address specified by a register; task switch to a next task entry point; relative change of flow; exception control (e.g., reset, breakpoint, patch, etc.); and return from trap. [0383]
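• As a sketch only (the enumerators and selector function are ours), the PC sources listed above can be modeled as:

    #include <stdint.h>

    typedef enum {
        PC_SEQ,         /* sequential increment               */
        PC_JMP_ABS,     /* jump to an absolute address        */
        PC_JMP_REG,     /* jump to an address in a register   */
        PC_TASK_ENTRY,  /* task switch: next task entry point */
        PC_RELATIVE,    /* relative change of flow            */
        PC_EXCEPTION,   /* reset, breakpoint, patch, etc.     */
        PC_RFT          /* return from trap                   */
    } pc_source_t;

    uint16_t next_pc(pc_source_t src, uint16_t pc, uint16_t target,
                     uint16_t entry_point, uint16_t vector, uint16_t refetch) {
        switch (src) {
        case PC_JMP_ABS:
        case PC_JMP_REG:
        case PC_RELATIVE:   return target;       /* caller supplies the resolved target */
        case PC_TASK_ENTRY: return entry_point;  /* the next task's entry point (r30)   */
        case PC_EXCEPTION:  return vector;
        case PC_RFT:        return refetch;      /* refetch SPR bits 15:0               */
        case PC_SEQ:
        default:            return (uint16_t)(pc + 1);
        }
    }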
• Messaging Interface (Agent Interface) 382 [0384]
  • A few instructions are executed in an external module (e.g., DMA, accelerators, etc.) connected to the network processor core. A messaging bus (Agent Interface or AGI) from the core to the external module enables the definition and support of such an extension of the instruction set. [0385]
• Memory Interface 384 [0386]
  • According to one aspect of the invention, the network processor core uses a unified memory space wherein each address can contain either program information or data. This memory space is typically based on on-chip RAM and ROM. The memory module should have separate ports for program, data and context accesses. Also, this memory module may have additional ports for accesses from the external world, such as the ring interface. [0387]
  • A Programming Model for a Flexible Packet Processor [0388]
• The programming model describes the rules for writing network processor programs. After a brief introduction that explains in general terms the organization of the network processor code and the flow of data through the system, the programming model (e.g., state resources, interfaces and instruction groups) is outlined in high level terms. Then, the execution flow and performance issues are discussed. Finally, the programming model is detailed. [0389]
  • Organization of the Network Processor Code [0390]
• According to one embodiment of the invention, the network processor comprises a 32-bit single-issue RISC processor tailored for real-time communication processing goals. According to an embodiment, the network processor has 32 general purpose registers, built-in support for multi-tasking, communication peripherals, on-chip SRAM, a DMA interface to external SDRAM, and a built-in interface to an on-chip control processor (referred to as the host processor or the Packet processor [PP] or the Control Packet processor [CPP]). [0391]
  • It is desirable that the network processor have hardware support for up to 62 tasks. The hardware support includes generation of task activation triggers, automatic task scheduling, save and restore of registers to and from the shadow register area in internal SRAM, special instructions for yielding the CPU, and support for passing messages between tasks. [0392]
  • Each network processor task has a dedicated register set. The task registers are preserved across the periods in which the task is not running. A network processor task can access internal memory with load and store instructions, and can copy data from internal to external memory and vice-versa using special DMA instructions. [0393]
  • The data which a task operates upon can be classified into the following categories (with reference to FIG. 39): [0394]
• Data from the communication peripherals (arrow 402): [0395]
• This data is copied, using a special instruction, from the peripheral's FIFO into internal memory (arrow 406). On the transmit side, this data is copied, using a special instruction, from internal memory into the peripheral FIFO. This type of data, which is in transit through the device, can be referred to as stream data. [0396]
• Stream data exchanged with the host processor (arrow 408): This data is passed by a network processor task, usually in external memory, to the host processor for further processing. On the transmit side, the host processor passes this data to a network processor task for transmit-related tasks (such as encapsulation, shaping, scheduling, and so forth) and for transmission through a peripheral. Stream data is also handed over between network processor tasks. There are cases when the stream data is not touched by the host processor.
  • Configuration data: This data resides in internal memory and is set at initialization time by the host processor or by initialization procedures on the network processor (e.g., buffer size). Configuration data is consumed, but not produced, by the task. [0397]
  • Flow state data: This data is kept in internal or external memory, and describes, for example, the state of each ATM connection or the state of the current Ethernet frame. Part of this data is used and updated by the task (e.g., the cell count for a connection). [0398]
  • Task state data: This data is kept in internal memory (or registers), and is used by the task to keep information in case the task does not complete the work intended to be accomplished during a single period of possession of the CPU. [0399]
  • A High Level View of the Programming Model [0400]
• According to an embodiment of the invention, the programming model for the flexible packet processor includes the following elements: state resources—the hardware memory entities which hold the state of the program; interfaces—the ways in which the program should behave to interact with hardware resources which are external to the processor; and instruction set—the description of the basic tools with which the program performs its operations. [0401]
  • State Resources [0402]
• FIG. 40 provides an overview 420 of the state resources for the network processor according to an embodiment of the invention. [0403]
  • Interfaces [0404]
  • DMA interface. The DMA interface controls the DMA machines, which copy data from the NP SRAM to external DRAM and vice versa. The DMA interface is set up by the PP at initialization time, and accepts action commands from the NP via special instructions. The DMA interface connects to the doorbells and the task scheduling mechanism. [0405]
  • Peripheral FIFO interface. The peripheral FIFOs are set up by the PP at initialization time, and are instructed by special NP instructions to copy a data unit to internal memory (from internal memory in the case of a TX). The peripheral FIFOs are connected to the doorbells and the task scheduling mechanism. [0406]
• Accelerators/Coprocessors interface. In general, there may be two kinds of accelerators/coprocessors: (1) accelerators/coprocessors that are tightly connected to the network processor core and that are accessed via a special agent instruction (e.g., CRC, multireader, message sender, etc.); these reside within the network processor compound entity; and (2) accelerators/coprocessors that are ring members and can be accessed by any other ring member interposed on the ring (via messages over the ring). [0407]
  • Host (PP) processor interface. In general, the PP will be able to initialize NP configuration registers, to share data with the NP in internal and external memories, to request services from an NP task, and to receive interrupts and messages from the NP. [0408]
  • Instruction set. Instructions perform the various types of actions, such as the following: arithmetic, logic, register manipulation—modify data in registers; load/store—move data between SRAM and registers; flow control—changes in the program counter; task management—control of inter-task changes in the program counter; agent interface instructions—DMA (move data between the SRAM and the SDRAM), access to serial ports (move data between the SRAM and communication peripherals), and accelerators (specialized communication processing functions such as a CRC calculation on a block of data); special purpose register moves (and activation of coprocessors)—move data between GPRs and SPRs. [0409]
  • Execution Flow and Performance Considerations [0410]
• Generally, the CPU executes instructions sequentially until it encounters an instruction which changes the program flow. For example, this instruction can be a conditional or unconditional branch or jump within the task, which checks a condition bit in one of the general purpose condition registers, or an instruction which terminates the current task and starts execution of another task. Instructions which cause a non-incremental change to the program counter take more than one cycle and are optionally followed by a one-instruction delay slot. Other instructions which influence the program flow are: arithmetic and compare instructions which modify the condition code bits, and instructions which modify the task entry point (the address from which the task will resume execution in its next execution round). [0411]
  • Types and states of tasks. Tasks can be in one of three states: running, pending and dormant. At any given time there is one running task executing on the CPU. When something requests the service of a task, the task becomes pending. Each time the running task voluntarily yields the CPU, the highest priority task is selected from the pending tasks. Tasks for which nothing has requested their service are dormant, and they will not be enabled for execution and will not run. According to one embodiment of the invention, the number of tasks is determined at initialization time and there is no dynamic creation/elimination of tasks. [0412]
  • Tasks can be classified by the reason (trigger) that causes a task to become enabled for execution. In other words, tasks can be classified by the entity which they serve: [0413]
• Peripheral: a task which serves a communication peripheral. Each time the RX peripheral receives a unit of data (e.g., 64 bytes of an Ethernet frame) in its FIFO, or when a TX peripheral has space for a unit of data available in its FIFO, that peripheral sends a service request to its servant task. [0414]
• Timer: A timer can be preprogrammed with a period cycle count. Each time it expires, the timer sends a service request to its servant task. [0415]
  • Inter-task messages: Data (usually stream data) can be exchanged or handed over between tasks. One approach for this is to send a message (e.g., containing the data pointer) to the other task, accompanied by a service request. Usually, a task serves only one master (the master being the source of service requests). This means that peripherals, timers and inter-task messages can all request service in the same manner. [0416]
  • There are two more sources which can cause a task to become pending: [0417]
  • DMA: A task is permitted to yield the CPU during a DMA request (in this way the DMA will work in parallel with the CPU, and the CPU will not be stalled). The task usually wants to resume execution when the DMA action is completed. Upon completion, the DMA will send a service request to the originating task. [0418]
• Self-request: There is a limit to the execution period (the time between two sequential task switch events) of a task. The execution of the current task generally cannot be preempted by an external event, so it is the programmer's responsibility to provide for yielding the CPU before reaching the time limit per task. When a task yields the CPU (e.g., to allow another task to execute) before it has completed the intended work, the task can issue the self-request service request before yielding in order to schedule itself for future execution. [0419]
  • Task Triggers and Task Doorbell Bits [0420]
• Task doorbell bits are the place where the service requests are registered. A network processor task can be enabled for execution by several request sources: Ordinary priority request from a serial module (e.g., a data fragment is ready in the receive FIFO and was copied to a predefined SRAM location, or the transmit FIFO finished the transmission of the previous data fragment). [0421]
• High priority request from a serial module (e.g., the RX FIFO is over a threshold or the TX FIFO is under a threshold). [0422]
  • Completion of DMA requests. [0423]
  • Self-request (produced by the software). [0424]
  • Message from another task (produced by the software and using the same doorbell bit as an ordinary priority request from a serial module). [0425]
  • Message queue above threshold (produced by the software and using the same doorbell bit as the high priority request from a serial module). [0426]
  • Timer (uses the same doorbell bit as the ordinary priority request from a serial module). [0427]
• According to one aspect, for each doorbell bit there is a mask bit. The exceptions are the first two doorbell bits, which have a common mask bit, and the self-request bit, which cannot be masked. If the mask bit is set, the task will be enabled for execution by the matching request; otherwise, the request is blocked. [0428]
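• The gating rule can be sketched in C as follows; the bit positions are assumptions for illustration (the text fixes only the relationships: the first two bits share a mask, and self-request is unmaskable).

    #include <stdint.h>

    #define DB_ORDINARY (1u << 0)   /* ordinary serial request (assumed position) */
    #define DB_URGENT   (1u << 1)   /* urgent serial request (assumed position)   */
    #define DB_SELF     (1u << 4)   /* self-request (assumed position)            */

    static uint32_t effective_mask(uint32_t mask) {
        uint32_t m = mask;
        if (m & (DB_ORDINARY | DB_URGENT))    /* the two serial bits share one mask */
            m |= DB_ORDINARY | DB_URGENT;
        return m | DB_SELF;                   /* self-request cannot be masked */
    }

    int task_enabled(uint32_t request, uint32_t mask) {
        return (request & effective_mask(mask)) != 0;   /* masked requests are blocked */
    }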
• According to one approach, about twelve tasks are expected to serve serial channels (e.g., 6 for receive and 6 for transmit). These tasks will usually be activated by requests from serial channels. The rest of the tasks are expected to be activated by timers, messages from other tasks, or the host (e.g., doorbell bits 1 and 2). [0429]
• A task which has more work to do than the maximum allowable latency permits should yield and use the self-request (doorbell bit 5) to be scheduled again (e.g., a timer handler task). Any task can be activated by the completion of a DMA request that the task originated. [0430]
  • When a task is scheduled for execution, the request and mask bits of the service request that activated the task are cleared. In the case where there are regular and urgent bits, both are cleared. [0431]
  • Mask Bits and DMA [0432]
• Mask bits can be set by software, and, in some cases, they are set automatically by hardware. A mask bit, together with the associated request bit, is cleared by hardware when the request is served by the task (the task becomes running). Mask bits can be set with special instructions and can optionally be specified in DMA and YIELD instructions. When a task issues a DMA request and this DMA is not the last action in the task, the programmer should set the DMA doorbell mask bit and clear all other mask bits (the task should not return to execution because of any other request, for example a serial request). When the task returns to execution after completion of the DMA, all mask bits will be clear. [0433]
  • According to one approach, there is a default state of the mask bits for all tasks, with the first bit set and all the others cleared. Another option, the auto set in DMA and YIELD instructions, instructs the hardware upon DMA completion to set the mask bits to the default state. When a task issues its last DMA request, it sets the auto set indication. The last YIELD instruction of a task should also set the mask bits to the default state. [0434]
• According to one approach, the network processor DMA is able to serve two external busses (it can be a single DMA machine in some implementations). An immediate DMA ID field is specified in DMA instructions. Its value is an index into a translation table (the table may be programmed by the CPU or by writing to special purpose registers on the network processor). The translation result contains information such as big/little endian mode, and so forth. When all the DMAs initiated by a task (DMAs for which acknowledgement was requested) are complete, the DMA doorbell request bit is set. [0435]
• Using a count field in one of the special purpose registers, it is possible to yield while not all DMAs of the task have been completed. Also, when a DMA instruction is executed and there is no space in the pending DMA transactions queue, it is possible that the network processor may be stalled. [0436]
  • Task Priority and Scheduling [0437]
• Each time the current task suspends its execution, the hardware scheduler selects from the pending tasks the one with the highest priority, and starts execution of that task. Various approaches could be taken to task scheduling. According to one approach, the algorithm for selecting the next task for execution is as follows. The tasks which participate in the selection of the next task for execution are the tasks for which their corresponding mask bit in the Task Global Mask Register (TGMR) is cleared. Tasks which participate in the selection of the next task and have unmasked requests are divided into four groups and served in the following order (a selection sketch in C follows below): [0438]
• 1. Highest priority group: includes urgent requests of task numbers 0-31. [0439]
• 2. Second priority group: includes regular requests of task numbers 0-31. [0440]
• 3. Third priority group: includes urgent requests of task numbers 32-63. [0441]
• 4. Lowest priority group: includes regular requests of task numbers 32-63. [0442]
  • Within each group, the requests are serviced according to the task number. Lower task number requests are served before higher task number requests. The task resides in the higher priority class, starting from the time the urgent doorbell bit was set, until the time its doorbell mask is set to default by an option of the yield instruction, or until its doorbell mask is explicitly cleared by an instruction. According to one approach, the tasks are in an urgent state as long as the handling of all pending urgent events is not completed (including when the task yields while doing a DMA during such a period). [0443]
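• A minimal C model of the four-group selection, assuming per-task urgent/regular pending bits that have already been filtered by the TGMR and doorbell masks (the function and array names are ours):

    #define NTASKS 64

    /* Returns the winning task number, or -1 if nothing is pending. */
    int select_next_task(const unsigned char urgent[NTASKS],
                         const unsigned char regular[NTASKS]) {
        int t;
        for (t = 0;  t < 32;     t++) if (urgent[t])  return t;  /* 1: urgent 0-31   */
        for (t = 0;  t < 32;     t++) if (regular[t]) return t;  /* 2: regular 0-31  */
        for (t = 32; t < NTASKS; t++) if (urgent[t])  return t;  /* 3: urgent 32-63  */
        for (t = 32; t < NTASKS; t++) if (regular[t]) return t;  /* 4: regular 32-63 */
        return -1;
    }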
  • When a task starts execution, the doorbell request bit which caused it to run and the matching mask bit are cleared. The other request bits are not modified. The regular and the urgent request bits are considered to be two levels of the same request and have a common mask bit. They are both cleared when the request is serviced. A task can explicitly raise its priority to urgent, and return its priority to natural (normal priority, unless there is an urgent request pending) by using an agent instruction that writes to the doorbell register. This can be used to increase task priority for the period spent in a critical section or in an urgent code fragment. [0444]
  • Task Switching Performance [0445]
  • According to one aspect, instructions that yield the CPU take 2 cycles (they have a delay slot). The other performance issue is the time it takes to restore the registers of the new task. Usually the registers of the next task are pre-loaded during the execution of the current task. [0446]
  • Inter-task Communication [0447]
• Global registers. A global register is a general purpose register that is shared between all network processor tasks, and which can be safely used and modified by each task. (A task has to make sure that it completes the whole sequence needed for the action performed, including the shared register use/update, before yielding the CPU.) [0448]
  • Inter-task messages. Sending messages between tasks is done using queues. Additional information is provided in the discussion regarding data structures. [0449]
• Common program. More than one task can execute the same object code, for example, two tasks that service the reception of two identical serial channels. Also, all tasks can share code in functions. [0450]
• Internal and external memory. Sharing information in memory is a matter of convention between the tasks. For complex atomic modifications, it is possible either to have a server task with an exclusive right to access the structure or to use semaphores as described further below. (Complex atomic means that the modification requires a series of external memory accesses, between which the data structure is in an inconsistent, i.e., erroneous, state.) An example of a need for such a modification would be the update of a linked list queue whose descriptor is in external memory. Generally, it is recommended to avoid using such structures when possible. [0451]
  • Host-Network Processor Communication [0452]
  • Network Processor task to host messages and interrupts. Described in connection with the discussion on data structures. [0453]
  • Host to Network Processor task messages. The host is able to post a message to the input message queue of any task. The host also sets the doorbell bit of the target task. The host should not post messages to an input message queue to which a network processor task posts messages. [0454]
• According to one approach, the network processor, either with a hardware mechanism or a software task, should notify the host when the host message queue changes its position relative to a close-to-full threshold. Using such a threshold permits a less time-constrained handling of messages on the network processor side and eliminates the need for a "check if not full" inquiry on the host side. [0455]
  • Host to Network Processor commands. There is a command register that is written to so that the host can control network processor execution. For example, such commands may include a reset, an activate task N, a deactivate task N (without aborting its current execution), and a start execution of task N (i.e., give task N a request without aborting the currently executing task). [0456]
• Host-network processor parameters. According to one approach, for each task an area is allocated at compilation time to hold the parameters that are initialized by the host and used by the task. The addresses of these areas are maintained together with the frame pointers and the entry points, and are loaded by the boot initialization routine (into R6, discussed further below) of each task. These parameters are also read by the host, and are used in the initialization drivers. [0457]
  • State Resources [0458]
  • General Purpose Registers [0459]
• According to one approach, there are 32 general purpose 32-bit registers to be used by the tasks. Some of the registers, r0-rN, do not preserve their values across task switching; they are common to all tasks. These are referred to as common registers. The other registers, rN+1-r31, are preserved across task switching. These registers are referred to as private registers. According to one embodiment of the invention, these private registers are saved and restored from their shadow location by the hardware, transparently to the programmer. N is a global value, preferably programmed at initialization time. According to one approach, N (which should be odd) is 15, although other values of N may be used depending on design considerations. The programmer should allocate the correct shadow area for the registers, which should be the number of tasks multiplied by the number of private registers. The programmer should use registers contiguously, starting from r31 downwards. [0460]
  • According to one aspect of the invention, some of the registers have special hardware support, as follows: [0461]
• r0 is interpreted as constant 0; writes are ignored. [0462]
• FIG. 41 illustrates register r1 (430) in greater detail in accordance with at least one embodiment of the present invention. [0463]
• r1 condition codes: sticky condition (1 bit); arithmetic conditions (equal/zero [1 bit], less than/negative [1 bit], greater than/positive [1 bit], carry [1 bit], overflow [1 bit]); doorbell bits (6 bits); and user-defined condition bits (16 bits). A bit-layout sketch follows this list. [0464]
• r31: user-defined condition codes (32 bits). [0465]
• r30: entry point address of the task. [0466]
• r28: link address 1 (function return address). [0467]
• r29: link address 2. [0468]
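• The exact bit positions within r1 are not given here, so the C macros below assume a packing (low bits first, in the order listed) purely to make the field widths concrete:

    /* Assumed layout: 1 sticky + 5 arithmetic condition bits, then 6 doorbell
     * bits, then 16 user-defined bits; positions are illustrative only, and
     * bits 15:12 are left unspecified here. */
    #define R1_STICKY        (1u << 0)
    #define R1_EQ            (1u << 1)   /* equal / zero            */
    #define R1_LT            (1u << 2)   /* less than / negative    */
    #define R1_GT            (1u << 3)   /* greater than / positive */
    #define R1_CARRY         (1u << 4)
    #define R1_OVERFLOW      (1u << 5)
    #define R1_DOORBELL(r1)  (((r1) >> 6)  & 0x3Fu)    /* 6 doorbell bits      */
    #define R1_USER(r1)      (((r1) >> 16) & 0xFFFFu)  /* 16 user-defined bits */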
  • According to one approach, the convention for register allocation is similar to the approach taken for application binary interfaces, or ABI. ABI is a standard that allows object code interoperability of functions compiled by different compilers or written in different languages. Register allocation according to this approach is as follows: [0469]
• r27 and other r2x registers (26>2x>20) are allocated to a fixed meaning. Registers which are allocated to some meaning by convention are expected to maintain the meaning over function calls. They can be modified within functions, but only according to their meaning. Each task might have different registers allocated to fixed meanings. [0470]
• r27: parameter area pointer and stack pointer of the task. The compiler or the programmer statically allocates up to three stack frames per task. The compiler computes the area used by level0 code (first frame), the maximum area needed for automatic variables of level1 functions of the task (second frame), and that needed for level2 functions of the task (third frame). There is a global limit on the memory size of local function variables (enforced by the compiler). Whenever there is an indirect function call, the maximal stack frame will be allocated. All accesses to local variables will be translated by the compiler to offsets on r27, and there is no need for a stack pointer register for dynamically allocating frames on the stack and for modifying the stack pointer during function calls and returns. [0471]
• According to one approach, the compiler limits the function call depth to two. The compiler may also identify those functions which do not yield and do not call other functions, allocate their frame in an area common to all tasks, and use absolute addresses to access local variables (this may save per-task memory). Other registers can also be allocated by convention to: a data unit address in internal memory, a data unit pointer in external memory, a connection table base address, and so forth. As noted above, registers which are allocated to some meaning by convention are expected to maintain that meaning over function calls and can be modified within functions only according to their meaning. [0472]
• r16, r17: These registers do not preserve their value over any function call. They can be used without saving in level2 functions, and in level1 functions which do not expect the value to be preserved over a level2 function call. The r16 and r17 registers are used to pass parameters to, and get results from, level1 and level2 functions. Even in the case when there are no parameters passed, these registers do not preserve their value over any function call. Preferably, the compiler forbids functions of more than two parameters. [0473]
• The compiler and the assembly programmer may use the r16, r17 order for level1 functions and the r17, r16 order for level2 functions. This may eliminate saving and restoring of r16 when both level1 and level2 functions have a single parameter. Also, r16 and r17 are the only private registers which can be modified in level2 functions. [0474]
• r18, r19: These registers should not be modified within level2 functions. They can be used without saving in level1 functions, and they do not preserve their value over level1 function calls. [0475]
• r20-r26: These registers should not be modified within level1 and level2 functions. These registers can be used without saving in level0 code. Some of these registers can be assigned to a fixed meaning, in which case they can be modified within functions according to their fixed meaning. [0476]
• r0-r15 are scratch or global registers that are common to all the tasks, and which are not changed by the hardware task switching. [0477]
• r2-r5 hold information that is frequently used and shared between tasks, such as the buffer array base address (r2) and the free buffer pool address (current) (r3). These registers can hold popular (often used) constants, such as a table base address or an arithmetic constant. [0478]
• r8-r15 are used to hold information which does not need to be preserved across yields, such as intermediate results of an arithmetic computation. [0479]
• r6-r11 do not preserve their value over function calls. [0480]
• r12, r13: These registers preserve their values over calls to level2 functions which do not yield. [0481]
• r14, r15: These registers preserve their values across calls to level1 and level2 functions which do not yield. [0482]
  • Table 3 summarizes the register conventions discussed above. [0483]
TABLE 3

Register   Private or  Special HW handling     Fixed    Modified by           Used as
           common                              meaning  functions             parameter
r0         Common      constant 0              NA       Yes                   No
r1         Common      conditions              No       Yes                   No
r2-r5      Common      No                      Part     within fixed meaning  No
r6-r11     Common      No                      No       level 1 & 2 & yield   No
r12, r13   Common      No                      No       level 1 & yield       No
r14, r15   Common      No                      No       No                    No
r16, r17   Private     No                      No       level 1 & 2           Yes
r18, r19   Private     No                      No       level 1               No
r20-r27    Private     No                      Part     No                    No
r28        Private     level 1 return address  No       No                    No
r29        Private     level 2 return address  No       level 2               No
r30        Private     entry point             NA       Yes (TBD)             No
r31        Private     conditions              No       Yes (TBD)             No
  • By way of summary, registers can be safely used in the following cases: [0484]
• r8-r9: level2 function code which does not contain a yield; level1 function code which does not contain a yield or a call to a level2 function; and level0 code which does not contain a yield or a function call. [0485]
• r10-r11: level1 function code which does not contain a yield or a call to a level2 function which yields. [0486]
• r12-r15: level0 code which does not contain a yield or a call to a function which yields. [0487]
• r16, r17: any level2 function code; level0/1 function code which does not contain a function call. [0488]
• r18, r19: any level1 function code; level0 code which does not contain a function call. [0489]
• r20-r2X: any level0 code. [0490]
  • Indication Registers [0491]
• According to one approach, registers r1 and r31 contain indications which can be used in branch conditional instructions. They can be explicitly updated by any instruction, but some of the bits in r1 are implicitly updated by compare instructions and by arithmetic/load instructions. The carry bit is also implicitly updated by some arithmetic instructions. [0492]
• R1 is a global register; its value is not preserved after task switching. R31 has a copy per task. [0493]
• The doorbell and mask fields in r1. The doorbell sub-field contains a copy of the doorbell bits of the current task. The mask bits are a copy of the task's mask bits. Writes to these fields are ignored. [0494]
• Compare instructions, the sticky bit options. Compare instructions modify the three condition code bits, LT, EQ, and GT. Optionally, the compare instructions can also update the sticky bit. These instructions specify a condition, such as one of NONE, LT (less than), LE (less than or equal to), EQ (equal to), NE (not equal), GT (greater than), or GE (greater than or equal to). If the condition is satisfied by the compare, the sticky bit is set; otherwise, the sticky bit is not altered. This feature is useful to efficiently implement several tests of error cases as well as other AND/OR conditions. Compare instructions also have an option to overwrite the sticky bit. FIGS. 87-90 (discussed below) illustrate various mechanisms for using the accumulative condition flag, i.e., the sticky bit, to execute branch instructions in processing systems, such as a network processor or communications processor. [0495]
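• A C rendering of the sticky-bit rule (ours, for illustration; the condition names follow the list above):

    #include <stdint.h>

    typedef enum { CC_NONE, CC_LT, CC_LE, CC_EQ, CC_NE, CC_GT, CC_GE } cc_t;

    /* Sets LT/EQ/GT on every compare; sets (never clears) the sticky bit when
     * the named condition holds, so several tests can accumulate into one flag. */
    void compare(int32_t a, int32_t b, cc_t cond,
                 int *lt, int *eq, int *gt, int *sticky) {
        int hit = 0;
        *lt = (a < b); *eq = (a == b); *gt = (a > b);
        switch (cond) {
        case CC_LT: hit = *lt;         break;
        case CC_LE: hit = *lt || *eq;  break;
        case CC_EQ: hit = *eq;         break;
        case CC_NE: hit = !*eq;        break;
        case CC_GT: hit = *gt;         break;
        case CC_GE: hit = *gt || *eq;  break;
        case CC_NONE: default:         break;
        }
        if (hit) *sticky = 1;   /* otherwise the sticky bit is left unaltered */
    }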
• Serial status. The serial status indications (e.g., error, over-run/under-run, and last), optionally together with the data fragment size, should be loaded by the programmer from a fixed memory location into r1 or r31. [0496]
• User defined indications. The user can keep state information in the user-defined part of r1 or r31. It may be desirable for an indication to be created once and used several times. The user can also load to r1 or r31 a part of an array of indications. [0497]
• Arithmetic instructions modify the condition codes. Arithmetic instructions can modify the zero, negative, and positive condition code bits. The following arithmetic instructions modify the carry condition code bit: ADD, SUB, ADD1, SUB1, SRR, SLR, SL1, SR1, and CLB. [0498]
• Branch, jump and yield conditional. Conditional branch/jump and yield instructions test a single condition bit, which can be any bit in r1 or r31, and compare that bit to either 0 or 1. Conditional branch/jump instructions take three cycles when taken and 1-2 cycles when not taken, while unconditional branch/jump instructions take two cycles; in both cases they have an optional delay slot. [0499]
• Conditional instructions. In most of the instructions, the 3-bit conditional execution field is used to specify whether the instruction is unconditional or conditional upon the sticky condition bit being true or false. One of the three bits is reserved for future use.
  • Link Registers [0500]
• Branch/jump instructions can be used to call subroutines. They have an opcode bit which specifies whether the return address is to be saved, and another opcode bit which specifies whether the return address should be saved in r28 or r29. The return address is either PC+1 or PC+2, depending on whether the delayed branch option is used. The function call depth is limited to two, and the depth of each call/return is specified in the instruction. Functions which do not call other functions should be defined and called as depth 2. [0501]
  • The Task's Entry Point Register [0502]
• R30 contains the address at which the task will resume execution after a yield. It is modified by any instruction which modifies r30 and is optionally modified by the YIELD instruction. It can optionally be modified by DMA instructions which yield. [0503]
  • Hidden Registers [0504]
  • Program counter—according to one approach, there is a single program counter in the system (not per-task) and it is not directly accessible by the software in any manner. [0505]
  • Special Purpose Registers [0506]
• Special Purpose Registers (SPRs) are network processor core registers that are not defined as one of the General Purpose Registers (GPRs). Special instructions (SPRL and SPRS) are defined to enable the movement of data between SPRs and GPRs. Special Purpose Registers in the network processor include the Refetch SPR 440, the Task SPR 442, the Trap SPR 444, and the Mindex SPR 446, as shown in FIG. 42. [0507]
• Refetch SPR 440. The refetch SPR is a 32-bit register that holds the first and second program memory addresses of the instructions to be refetched when getting out of a trap. Bits 15:0 hold the first instruction address (called refetch) and bits 31:16 hold the second instruction address (called next_refetch). When the network processor receives a break request and is not already in the trap mode, it continues instruction execution from the program location pointed to by the break vector, and the trap mode bit is set (in the trap SPR). The address of the instruction that would have been executed but for the occurrence of the breakpoint is saved in bits 15:0 of the refetch SPR. The following instruction that would have been executed but for the occurrence of the breakpoint is saved in bits 31:16 of the refetch SPR. [0508]
• Leaving the trap mode is performed by executing the RFT instruction. This instruction causes a program jump to the program location specified by the refetch SPR bits 15:0, followed by the program location specified by the refetch SPR bits 31:16. This also clears the trap mode bit. [0509]
  • The refetch SPR is a read/write register that can be accessed through the SPRL and SPRS instructions. [0510]
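• Since the field boundaries are stated explicitly (bits 15:0 and 31:16), the refetch SPR can be packed and unpacked as below; the helper names are ours.

    #include <stdint.h>

    uint16_t refetch_addr(uint32_t spr)      { return (uint16_t)(spr & 0xFFFFu); } /* bits 15:0  */
    uint16_t next_refetch_addr(uint32_t spr) { return (uint16_t)(spr >> 16); }     /* bits 31:16 */

    uint32_t pack_refetch(uint16_t first, uint16_t second) {
        return ((uint32_t)second << 16) | first;
    }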
• Task SPR 442. The task SPR is a 32-bit read-only register. The task SPR contains information on the current executing task and on the next task to be executed: [0511]
  • DOORBELL REQ reflects the doorbell request bits of the current task. [0512]
  • CTID reflects the Current Task ID. [0513]
  • NTID reflects the Next Task ID. [0514]
  • NTV reflects Next Task Valid bit. [0515]
  • MASK reflects the doorbell mask bits of the current task. [0516]
  • UR reflects the urgency level of the task (1=urgent). [0517]
  • COUNT reflects the doorbell counter value of the current task. [0518]
  • When there is a yield, the bump buffer is empty, and the context of the next task is already pre-loaded, the network processor switches to the next task. At this point the NTID is loaded into the CTID, and the next task ID together with the next task valid bit from the doorbell are sampled into the NTID and the NTV, respectively. [0519]
  • If the NTV bit is set, then the NTID is locked and there will not be further sampling. If the NTV bit is cleared, then the doorbell next task ID will continue to be sampled on each cycle until the valid bit is set. [0520]
  • The new valid next task ID is used by the pre-load logic to pre-load the next task's context. The task SPR can be read by using the SPRL instruction. All other bits of the task SPR are reserved and will be read as zero. The CTID, NTID and NTV bits are cleared by reset. The default state (and the reset state) of the mask of each task is 0b100. [0521]
  • [0522] Trap SPR 444. The trap SPR is a 32-bit register. The trap SPR includes the trap mode bit, the illegal instruction status bit, and the breakpoint status bits:
  • [0523] Bit 0—Illegal Instruction (IL): When there is an illegal instruction, the IL bit is set. The IL bit can be cleared only by reset.
  • [0524] Bit 1—Trap Mode (TRAP): When TRAP bit is set, the network processor is in the trap mode. A breakpoint event causes the program flow to jump to a program location (pointed to by a given vector) and to enter the trap mode of execution by setting the trap mode bit. When in trap mode, no breakpoint and/or patch events will be accepted. The trap mode bit will be cleared by a RFT (Return From Trap) instruction or by writing zero to the trap mode bit. When the trap bit is cleared, further breakpoints and/or patches will be accepted.
  • [0525] Bit 2—Program Address Break (PAB): This is a breakpoint status bit, which when set, indicates that a program address breakpoint occurred. This bit is cleared by an RFT instruction or by writing zero to it.
  • [0526] Bit 3—Data Address Break (DAB): This is a breakpoint status bit, which when set, indicates that a data address breakpoint occurred. This bit is cleared by an RFT instruction or by writing zero to it.
  • [0527] Bit 4—Task Break (TB): This is a breakpoint status bit, which when set, indicates that a task ID breakpoint occurred. This bit is cleared by an RFT instruction or by writing zero to it.
  • [0528] Bit 5—Yield Break (YB): This is a breakpoint status bit, which when set, indicates that a yield breakpoint occurred. This bit is cleared by an RFT instruction or by writing zero to it.
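  • The bit assignments listed above map directly to mask constants. A minimal C sketch (the constant names are illustrative, not taken from the patent):
     /* Trap SPR bit masks, following the bit list above. */
     #define TRAP_IL   (1u << 0)  /* illegal instruction; cleared only by reset */
     #define TRAP_MODE (1u << 1)  /* trap mode; cleared by RFT or writing zero  */
     #define TRAP_PAB  (1u << 2)  /* program address breakpoint occurred        */
     #define TRAP_DAB  (1u << 3)  /* data address breakpoint occurred           */
     #define TRAP_TB   (1u << 4)  /* task ID breakpoint occurred                */
     #define TRAP_YB   (1u << 5)  /* yield breakpoint occurred                  */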
  • Semaphores [0529]
  • Semaphores are commonly used when a section of code that contains yields should not be executed by more than one task at a time. This happens when the code is handling some data structure resource that is shared between tasks. Current examples which might entail the use of semaphores are: adding to and removing from a linked list queue whose descriptor is in external memory; releasing a multicast buffer (update of the reference count); emulation of a task's message queue in external memory; and a task that tries to put an inter-task message into a full message queue can use the hardware mechanism to wait until the queue is not full. [0530]
  • The alternative solution of not yielding while in the critical section is not efficient. The alternative solution of having a dedicated task responsible for the resource, and thus serializing the actions performed on the resource, is in some cases complicated to implement and is in some cases inefficient. [0531]
  • Network processor software semaphores in accordance with the present invention are implemented over a hardware mechanism which makes it possible to prevent the scheduling of tasks specified in a bitmap (the TGMR register). [0532]
  • The number of semaphores is limited only by the size of the memory space allocated for semaphore support. Every semaphore requires a one-byte indication of free/busy state plus a 64-bit mask of tasks registered for the particular semaphore. While performing the critical section protected by a semaphore, the task's priority should be raised and all issued DMAs should be treated as urgent in order to minimize the semaphore holding time. [0533]
  • There cannot be too many semaphores in the system (e.g., in order to comply with the goal of keeping the internal memory requirement reasonable), yet there are many shared external memory resources (data queues, contexts, lookup tables, etc.) that may require semaphore protection. According to one approach, the semaphore ID (number) is therefore chosen based on a simple arithmetic operation (e.g., a MOD of significant bits) on the resource address, as sketched below. [0534]
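  • As an illustration of that approach, a minimal C sketch of deriving a semaphore ID from a resource address; the semaphore count and the 8-byte granularity assumed here are hypothetical:
     #include <stdint.h>

     #define NUM_SEMAPHORES 32   /* hypothetical size of the semaphore pool */

     /* Map a resource address to a semaphore ID: drop low-order bits that
        are identical within one resource, then take the remainder. */
     static inline unsigned sem_id(uint32_t resource_addr)
     {
         return (resource_addr >> 3) % NUM_SEMAPHORES;
     }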
  • The network processor scheduler hardware includes a bitmap in an SPR register (SPR bitmap). Each bit in the bitmap, when set, prevents the scheduling of the task whose ID corresponds to the bit index. The network processor software can add or remove a list of tasks, specified in a software bitmap, to or from the above list. The software registers in the SPR bitmap those tasks which are prevented from execution because they are waiting for one of the currently occupied semaphores (see bad_list below). [0535]
  • The software holds an indication in internal memory for each semaphore that indicates whether that semaphore is currently in use/occupied (see semX_indic below). The software also holds for each semaphore a 64-bit bitmap corresponding to the tasks that are currently awaiting access to the semaphore (see semX_mask below). For each task awaiting the semaphore, the bit corresponding to that task's ID is set. [0536]
  • According to one embodiment (not reflected in the table below), the software also holds the task ID of each task in the form of a 64 bit mask (where only the bit corresponding to the task ID is set in this mask). [0537]
  • The following pseudocode in Table 4 illustrates the use of a semaphore: [0538]
    TABLE 4
    Pseudocode Illustrating the Use of a Semaphore
      bad_list - hardware 64-bit mask indicating which tasks cannot be run.
      semX_indic - software indication per semaphore (X) that indicates whether it is occupied.
      semX_mask - software 64-bit mask per semaphore (X) comprising the registration of the waiting tasks.
     produce X (semId) from the resource address
     checkX:                             ; This is the frequently used code fragment - efficiency is vital.
      ld.b r2,semX_indic                 ; Load the "semaphore is busy" indication - a byte or a bit.
      bc.neq sem_occupied                ; and test it.
                                         ; Do the critical section code and release the semaphore.
      sti 0xff,semX_indic                ; If it was not occupied, grab it and do the critical section.
      seturg on
      CRITICAL SECTION X
      seturg off
      sti 0,semX_indic                   ; Release the semaphore.
      clear semX_mask bits in bad_list   ; agentw. Let all in; the highest priority task will be selected.
      . . .                              ; Rest of the task code and yield.
     sem_occupied:                       ; Register myself on the semaphore, and prevent myself from running.
      ld.d r2,r3,semX_mask               ; Get the 64-bit mask of tasks waiting for this semaphore.
      set bit of current task in r2,r3   ; "Optimization": the current task_id is prepared in a doubleword mask in the init routine.
      st.d r2,r3,semX_mask               ; Save the mask for common use.
      set semX_mask bits in bad_list     ; agentw. Prevent everyone (including myself) who is waiting on semX from being scheduled in.
      set my task's doorbell bit         ; Re-activate my request.
      yield.ep sem_released              ; Go to sleep until it is my turn to use the semaphore.
     sem_released:                       ; The semaphore was held by someone, but now it might be free.
      ld.d r2,r3,semX_mask
      clear bit of current task in r2,r3
      st.d r2,r3,semX_mask
      set semX_mask bits in bad_list     ; agentw. Prevent everyone else who is waiting on semX from being scheduled in.
      b checkX                           ; Re-check the lock - avoids nasty bugs.
  • The general operation of the use of semaphores is as follows. Whenever a task seeks to enter critical section number X, the task checks the internal memory indication of semaphore X to determine if there is currently any other task in the critical section. [0539]
  • If the semaphore indication is clear, the task sets the indication and enters the critical section. After completion of the critical section (e.g., which contains external memory accesses and task switches), the task clears the semaphore indication. It is possible that while the task was in the critical section other tasks may have registered themselves as awaiting access to the semaphore and prevented themselves from being scheduled in by the hardware scheduler. So the current task will enable these other tasks, which are registered as awaiting scheduling for the semaphore, by removing their list from the hardware bitmap. [0540]
  • If the semaphore is set, the task branches to sem_occupied, registers itself in the list of tasks awaiting the semaphore, and disables those tasks by adding the list to the hardware bitmap. Task switching is then initiated after setting the resume point to the sem_released label. When the task resumes execution, the task deregisters itself from the list of tasks that are awaiting the semaphore, and prevents the other tasks on the list from being scheduled by adding them to the hardware bitmap. The task then executes the code which checks the semaphore indication. [0541]
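  • A high-level C model of this acquire/release protocol may clarify the flow. Here bad_list stands in for the hardware bitmap (the TGMR register), sem_indic and sem_mask for the software state held in internal memory, and task_yield for the task switch; all names are illustrative, and the sketch omits the doorbell re-activation and urgency handling shown in Table 4:
     #include <stdint.h>

     extern volatile uint64_t bad_list;  /* hardware bitmap of blocked tasks      */
     extern uint8_t  sem_indic[];        /* per-semaphore busy indication         */
     extern uint64_t sem_mask[];         /* per-semaphore bitmap of waiting tasks */
     extern void task_yield(void);       /* models yield.ep / the task switch     */

     void enter_critical(unsigned x, uint64_t my_task_bit)
     {
         while (sem_indic[x]) {              /* semaphore occupied                */
             sem_mask[x] |= my_task_bit;     /* register myself as a waiter       */
             bad_list    |= sem_mask[x];     /* block all waiters, incl. myself   */
             task_yield();                   /* sleep until scheduled again       */
             sem_mask[x] &= ~my_task_bit;    /* deregister myself                 */
             bad_list    |= sem_mask[x];     /* keep the other waiters blocked    */
             /* loop back and re-check the indication (checkX) - avoids races     */
         }
         sem_indic[x] = 0xff;                /* free: grab the semaphore          */
     }

     void leave_critical(unsigned x)
     {
         sem_indic[x] = 0;                   /* release the semaphore             */
         bad_list &= ~sem_mask[x];           /* let all waiters be scheduled      */
     }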
  • In accordance with one embodiment of the present invention, a method of employing semaphores to limit access to a shared resource used by a multi-tasking processor is provided. The method comprises the steps of providing a first bitmap in a register that prevents specified tasks from running because the specified tasks are awaiting access to an occupied semaphore, storing an indication in memory that indicates whether the semaphore is occupied, storing a second bitmap in memory that identifies tasks that are awaiting access to the semaphore, and attempting to access the semaphore based on checking the indication in memory. When a task checking the indication in memory determines that the semaphore is available, the method can further comprise the steps of setting the indication to indicate that the semaphore is occupied and performing the processing for the task, wherein performing the processing for the task includes critical section execution. The critical section can include at least one of external memory accesses and task switches. [0542]
  • The method can further comprise the step of resetting the indication to indicate that the semaphore is available after the step of performing the processing for the task. Furthermore, the method additionally can comprise the step of removing from the first bitmap those tasks now included in the second bitmap in memory that identifies tasks that are awaiting access to the semaphore, thereby allowing those tasks to be scheduled for access to the semaphore. [0543]
  • In one embodiment, when a task checking the indication in memory determines that the semaphore is occupied, the method can further comprise the steps of including the task in the second bitmap and revising the first bitmap to reflect the tasks from the list in the second bitmap. The method further can include the steps of removing the task from the second bitmap when the indication reflects that the semaphore is available and revising the first bitmap to reflect the tasks from the list in the second bitmap, thereby allowing the task to access the semaphore and perform the task processing. [0544]
  • In accordance with another embodiment of the present invention, a system employing semaphores to limit access to a shared resource used by a multi-tasking processor is provided. The system comprises a first bitmap in a register that prevents specified tasks from running because the specified tasks are awaiting access to an occupied semaphore, an indication in memory that indicates whether the semaphore is occupied, a second bitmap in memory that identifies tasks that are awaiting access to the semaphore, and means for attempting to access the semaphore based on checking the indication in memory. The means for attempting can be a processor executing a task, wherein the task can be enabled to access the semaphore when the indication reflects that the semaphore is available. Also, the task can be enabled to register itself with the second bitmap and update the first bitmap when the indication reflects that the semaphore is occupied. The task execution can include processing a critical section including at least one of external memory accesses and task switching, wherein the indication in memory is reset to indicate that the semaphore is available after processing the critical section. [0545]
  • The Software Data Model [0546]
  • Referring now to FIG. 43, an exemplary [0547] software data model 450 is illustrated in accordance with at least one embodiment of the present invention. There are two major types of data allocated in internal memory: global data and task/function data.
  • Global data. [0548]
  • .adata start [0549]
  • global data definitions, examples: [0550]
  • .long generic_taskmessage_q[8]; [0551]
  • .struct_structure_name instance_name; [0552]
  • .adata end [0553]
  • Global data has a global name scope and can be symbolically referenced from anywhere in the code. References are translated to absolute addressing. [0554]
  • Task/function data: [0555]
  • .task [common] task_type_name [0556]
  • task data definitions and task code. [0557]
  • .task end [task_type_name] [0558]
  • .func level1/2 function_name [0559]
  • function data definitions and function code. [0560]
  • .func end [function_name] [0561]
  • Local data definitions have a local name scope (detailed below) and references are translated by the assembler to r27 + immediate offset. Functions can be defined either within a task definition or outside of any task definition. Function names which are defined outside of any task definition have global name scope and can be called from any place in the code. They can access their local data and the global data. Function names which are defined within a task definition have a scope of the task definition. They can be called only by level0 code of that task type. They can access the common data of the task (detailed below). [0562]
  • There is hardware support for keeping return addresses for two levels of nesting of function calls. A static stack frame, made of three parts, will be maintained for each task instance. This should solve the problem of allocating the correct size of dynamic stacks. It will also make function calls more efficient by eliminating handling of the stack pointer and of the return address. This means that at definition time the level (1 or 2) of each function is specified. Functions which do not call other functions will be defined as level2 functions. [0563]
  • For each task type, the assembler creates two data sections, level0 data and level1 data. Their sizes will be used by the PP software to allocate memory for the static frame of each task instance of this task type, and to initialize r27 of the task instance. A task definition can appear several times for the same task type. Such a definition shall be referred to as a task fragment. The data definitions in each of the fragments are in union with the data definitions in each of the other fragments (they overlap, occupying the same memory locations). [0564]
  • During a task fragment definition, an optional common keyword can be used, in which case the data definitions will overlap with any other data definitions, and the scope of the data names will be all the fragments of the same task type. [0565]
  • The non-common fragments of a task can be used to implement the different functions (referred to as handlers) which the generic task performs. The pointer to the handler is passed in the inter-task message. All the handlers will return to a label in the common part of the task. The common part of the task will only handle the input message queue and dispatch to the handlers. [0566]
  • The size of the level0 frame for a task type is the size of the data definitions in the common part plus the maximum of the sizes of the data definitions in non-common fragments of the task type. [0567]
  • Level1 functions can be called only explicitly (i.e., they cannot be called using a pointer). The assembler will find all the calls to level1 functions and will compute the level1 frame size for this task type as the maximum of the sizes of the data definitions of level1 functions called by this task type. [0568]
  • Level2 functions can be called via a pointer. The assembler will check that the data allocated in each level2 function is no more than a system-level constant (80 bytes) and will add this constant to the offsets of data definitions of level1 functions. [0569]
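  • As a worked illustration of these frame-size rules, the arithmetic the assembler would perform for one hypothetical task type (all sizes below are invented for the example, and the frame layout is one possible reading of the rules above):
     /* Hypothetical data sizes, in bytes, for one task type. */
     #define COMMON_DATA    24  /* data defined in the common fragment          */
     #define MAX_FRAGMENT   40  /* largest data among the non-common fragments  */
     #define MAX_LEVEL1_FN  32  /* largest data among level1 functions called   */
     #define LEVEL2_CONST   80  /* fixed system-level allowance for level2 data */

     /* level0 frame = common data + the largest non-common fragment.           */
     #define LEVEL0_FRAME (COMMON_DATA + MAX_FRAGMENT)                /* = 64   */

     /* Total static frame the PP software allocates per task instance
        (one possible reading: level0 + level1 + the level2 constant). */
     #define TASK_FRAME (LEVEL0_FRAME + MAX_LEVEL1_FN + LEVEL2_CONST) /* = 176  */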
  • Scope of labels: local in functions and task fragments. Global to all fragments of that task type when in the common task fragment. Labels in task fragments and level2 function names can be passed to the PP software (flow manager) in the object file using the directive: .export label_name. [0570]
  • According to one approach, the assembler will produce a single code section, which will contain the code of all the tasks and functions. Other function types might be considered, such as ones which do not have local data in memory or which receive as a parameter a pointer to a scratchpad area for their use. Also to be considered is code which is not associated with tasks and functions. (All the labels in this code will have global scope. It might be used for additional types of functions.) In cases when the caller's frame is no longer needed (an error condition, for example), it might call a function of the same level, which will use the caller's frame. [0571]
  • The Instruction Set [0572]
  • Addressing modes: [0573]
  • Instruction addressing. All instruction addresses are word addresses; they are shifted left 2 bits to generate the memory address. [0574]
  • Absolute: Jump to the absolute address specified in the 16-bit immediate instruction field. [0575]
  • PC relative: Branch to an offset from the current program counter specified in the 12-bit immediate signed instruction field. [0576]
  • Register: Jump to the address contained in the register specified in the instruction. [0577]
  • Implicit task entry point: During a task switch, jump to the entry point of the next enabled task (in r30 of that task). [0578]
  • Data addressing: Data addresses are byte addresses that are taken as is, regardless of the access size. [0579]
  • Register with offset: The address is the sum of the value contained in the register, with the sign extended 8-bit immediate instruction field. [0580]
  • Register with index register: The address is the sum of the value contained in the register, with the value contained in the index register. [0581]
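  • A minimal C model of these address calculations, assuming 32-bit registers (the function names are illustrative):
     #include <stdint.h>

     /* Instruction addresses are word addresses, shifted left 2 bits. */
     static inline uint32_t instr_mem_addr(uint32_t word_addr)
     {
         return word_addr << 2;
     }

     /* Data addressing: register with sign-extended 8-bit immediate offset. */
     static inline uint32_t ea_reg_offset(uint32_t reg, int8_t imm8)
     {
         return reg + (uint32_t)(int32_t)imm8;
     }

     /* Data addressing: register with index register. */
     static inline uint32_t ea_reg_index(uint32_t reg, uint32_t index_reg)
     {
         return reg + index_reg;
     }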
  • Instruction Groups [0582]
  • According to one embodiment of the packet processor of the present invention, the following instruction groups are supported: arithmetic and logic operations; register data manipulation; load/store (to internal memory); program flow; task yielding; and agent instructions (DMA, communication peripherals, CRC, CAM, etc.). [0583]
  • Instruction Pipeline for a Flexible Packet Processor [0584]
  • Referring now to FIG. 44, an exemplary [0585] network processor pipeline 460 is illustrated. According to one embodiment of the invention, the network processor pipeline 460 consists of five stages: fetch, decode, address, execute and write. The network processor pipeline 460 enables a standard design flow and standard memories. The network processor can perform an instruction together with a data load or store from/to a unified internal memory in each cycle. The network processor pipeline 460 enables an arithmetic instruction to use as its source operands data that was loaded by the previous instruction without any bubble. Conditional jump and branch instructions have no penalty when the condition is not taken while a penalty of 2 cycles occurs if the condition is taken and there is a change of flow. To reduce this penalty, delayed jump and branch instructions are provided. In addition to the data ALU there is an address ALU to enable efficient pointer calculation on data access. The network processor general purpose registers (r0-r31) are updated during the write stage without distinction as to whether they are updated from a load operation or from a data ALU operation.
  • Pipeline Stages [0586]
  • There are five pipeline stages: Fetch; Decode; Address; Execute; and Write. [0587]
  • The Fetch Stage [0588]
  • During the fetch stage, the network processor core places the next instruction fetch address. This next fetch address can originate from the Program Counter (PC) in the normal sequential flow or can come from the address ALU when there is a jump or branch instruction. A newly fetched 32-bit instruction is assumed to be ready during the next clock cycle after a specific access time from the specific internal memory. Since the network processor internal SRAM is unified for both data and programs, and since it should support 64-bit access for data, the network processor initiates a fetch of 2 instructions (64 bits). The Fetch Unit (FTU) contains a fetch buffer to hold fetched instructions that have not yet been processed. [0589]
  • The Decode Stage [0590]
  • At the decode stage, the new instruction fetch is complete and the decoding of the new instruction is performed. The decode logic determines the type of the incoming instruction and the operations that should be performed at each pipeline stage for the execution of the instruction. [0591]
  • The Address Stage [0592]
  • During the address stage, the data address for a load from memory or for a store to memory is calculated by the address ALU. The address ALU gets its source operands, which can originate from one or two of the GPR registers, an immediate address offset, or an absolute address. In jump or branch instructions, the destination address is also calculated by the address ALU. One of the address ALU inputs is the PC itself, for branch address calculation. After the address calculation is performed, the core places the new data address on the Data Address Bus (DAB) or the new program address (for change of flow) on the Fetch Address Bus (FAB). If the instruction is a store, the data to be stored into memory is placed on the Store Data Bus (SDB) during this stage. [0593]
  • The Execute Stage [0594]
  • The data ALU execution is done at the execute stage. Source operands are read from the register file to the Data ALU, and the data arithmetic is performed. For example, if the instruction is an ADD of r1 with r2, then r1 and r2 are mux-ed into the data ALU and an arithmetic addition is performed during the execute stage. Condition Codes (CC) are also calculated at this stage. By the end of the execute stage, the data arithmetic execution result together with the CC are ready. [0595]
  • The Write Stage [0596]
  • At the write cycle, the register file is updated. The update can come from various sources: a destination of an arithmetic result, loaded data from memory, a move from a Special Purpose Register (SPR), or a move of an immediate value into the register file. In case of a jump or branch to a subroutine, the PC is also latched into one of the two LINK registers inside of the register file. The CC register is also updated at this stage. [0597]
  • Restricted Sequences [0598]
  • The network processor pipeline is designed to enable a standard design flow with standard memory interfaces. It is a five stage pipeline which is optimized for sequences that are frequently used and sequences that have a large effect on performance. Optimizing some sequences can make other sequences problematic; these cases are handled by imposing software restrictions. Table 5 below lists some of the sequence restrictions according to one embodiment of the invention. [0599]
    TABLE 5
     No. Sequence Restriction           Description
     1   Register update followed by    Any instruction which updates an r register (for example: move
         a store                        instructions, ALU instructions, load instructions, etc.) may not be
                                        followed immediately by a store instruction of that same r register.
                                        This includes instructions that update CC flags in r1 followed by a
                                        store of r1.
     2   Register update followed by    Any instruction which updates an r register (for example: move
         a use of this register as a    instructions, ALU instructions, load instructions, etc.) may not be
         memory pointer                 followed immediately by an instruction which uses that same r
                                        register as a memory pointer or as a source for a memory pointer
                                        calculation. Instructions that might use an r register as a pointer
                                        include: load, store, jump, branch, yield, and case. This includes
                                        instructions that update CC flags in r1 followed by an instruction
                                        that uses r1 as a memory pointer.
     3   Register update followed by    Any instruction which updates an r register (for example: move
         a use of this register by      instructions, ALU instructions, load instructions, etc.) may not be
         AGENT WRITE instructions       followed immediately by AGENT WRITE instructions or DMA
         or by DMA instructions         instructions which use that same r register.
     4   Instructions inside a delay    Change of flow instructions are not allowed in any kind of a delay
         slot                           slot. Change of flow instructions include:
                                         Jump or Branch instructions
                                         Yield instructions
                                         Case instruction
                                         RFT instruction
                                         DMA instructions with the yield option set
     5   Instruction inside the delay   The only instructions that are allowed in a delay slot of a yield
         slot of a "yield"              instruction are:
                                         Store instructions
                                         Agent Write instructions
                                         DMA instructions (only when the yield option is not set)
     6   Change of the sticky bit       Any instruction which updates the conditional sticky bit may not be
         before a conditional store,    followed immediately by a:
         conditional agent write, or     conditional store instruction.
         conditional agent read          conditional agent write instruction.
         instruction                     conditional agent read instruction.
     7   SPRS to nrefetch SPR           An SPRS instruction with the nrefetch SPR as its destination may not
         followed by an RFT             be followed immediately by an RFT instruction.
         instruction
     8   r31 register update followed   Any instruction which updates the r31 register may not be followed
         by a conditional change of     immediately by a conditional change of flow instruction which uses
         flow with one of r31 bits as   one of the r31 bits as a condition.
         a condition
  • Pipeline Timing Diagram [0600]
  • The pipeline timing and stages 480 are illustrated with reference to FIG. 45. This diagram 480, together with the pipeline block diagram 460 from FIG. 44, illustrates the basic flow through the pipeline stages inside the network processor core. FIG. 45 starts with the update of the Program Counter (PC) with the address of the next instruction. The Fetch Address Bus (FAB) gets its content from the PC and starts a memory fetch access. A new instruction is available on the Fetch Data Bus (FDB) during the decode cycle and is passed directly to the decode logic. The address ALU operates during the address stage and sends a new data address to the data memory. If the operation is a load, the loaded data is available on the Load Data Bus (LDB) during the execute stage. If the operation is a store, the stored data is placed on the Store Data Bus (SDB) during the address stage. The Data ALU gets its source operands and executes the data arithmetic at the execute stage. By the end of the execute stage, the data arithmetic result and the Condition Codes (CC) are ready to be latched into the destination register on the next clock edge of the write cycle. If it is a load instruction, the loaded data is also latched into the destination register on the positive clock edge of the write cycle. All register update operations go through the rf_in_mux, and the actual update is on the write cycle. An update to any one of the Special Purpose Registers (SPRs) is also done at the write stage. [0601]
  • An Internal Memory to be Used with the Flexible Packet Processor [0602]
  • Referring now to FIG. 46, an exemplary internal memory 500 for implementation in the network processor (NP) is illustrated. According to one aspect of the invention, the Vobla (network processor [NP]) Memory (VMEM) 500 is a small and fast memory located near the network processor NP core. The VMEM 500 serves the NP with three separate ports and the rest of the system with two ports. The main features of the VMEM according to one embodiment of the invention include: operates with the NP clock; supports multiple ports (e.g., five ports); maximum bandwidth of, for example, about 8 Gbytes/second (5 accesses×200 MHz×8 bytes); and 64 Kbytes of SRAM—a first area between 0 and 48 KB and a second area between 64 and 80 KB. [0603]
  • SRAM Mapping and Priority [0604]
  • The SRAM, in one embodiment, is divided into three sub-areas: 0 to 8K—data and task contexts; 8 to 48K—data and program; and 64 to 80K—program. The above 64 KB memory space can be accessed by the ring for writes and by the multireader for reads. According to one embodiment, the priority in each one of the memory areas is according to the following rule: (1) ring interface—highest priority; (2) program; (3) data (load/store); (4) context; and (5) multireader—lowest priority. A sketch of this arbitration follows. [0605]
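  • A minimal C sketch of this fixed-priority selection among pending requesters of one memory area (the enum and function names are illustrative):
     /* Ports in priority order: lower value = higher priority. */
     enum vmem_port { RING_IF, PROGRAM, DATA, CONTEXT, MULTIREADER, NUM_PORTS };

     /* pending[p] is non-zero if port p has a request for this memory area.
        Returns the granted port, or -1 when no request is pending. */
     int vmem_grant(const int pending[NUM_PORTS])
     {
         for (int p = RING_IF; p < NUM_PORTS; p++)
             if (pending[p])
                 return p;   /* ring interface wins; multireader is served last */
         return -1;
     }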
  • Interfaces of the VMEM [0606]
  • The VMEM supports the NP by three ports: data (load/store), program, and context. The VMEM supports the ring interface and the NP compound by two ports: multireader and ring writer. [0607]
  • Network Processor Program Bus (v_program) [0608]
  • This is a read port from the NP. Each access of this bus is for aligned double words (64 bits): 15 bits for the Address bus, A(17:3), which allows access to 32K double words or 256 Kbytes; A(2:0) are don't-care bits in this case; and a 64-bit data out bus. [0609]
  • Network Processor Data Port (v_data) [0610]
  • This is a read and write port from the NP. The data size can be a byte (8 bits), half-word (16 bits), word (32 bits), or double word (64 bits). The access has to be aligned to the data size (half word on the boundary of a half word, etc.). All the accesses are right aligned: byte in bits 0 to 7, half-word in bits 0 to 15, and word in bits 0 to 31. A special data aligner for this port will arrange the incoming and outgoing data according to the address and size of the transaction. The interface will generate the byte enable signals to the VMEM according to address bits A(2:0) and the size of the transaction, where: 16 bits Address bus—A(15:0)—allows access to the first 64 Kbytes of the VMEM address space; A(2:0) and data size control the enable signals; 48 Kbytes of SRAM in the current implementation; 64 bits data out bus for read access; and 64 bits data in bus for write access. [0611]
  • Network Processor Context Port (v_context) [0612]
  • This is a read and write port from the NP. The data size is a word (32 bits) for write access and a double word (64 bits) for read access. The interface will generate the byte enable signals to the VMEM according to address bit A(2). No data aligner is needed for this interface, where: 11 bits Address bus—A(12:2)—allows access to the first 2K words (8 Kbytes) of the memory space—A(1:0) and A(15:13) are don't-care bits in this case; 64 bits data out bus for read access; and 32 bits data in bus for write access. [0613]
  • Multireader Port (v_mrd) [0614]
  • This is a read port from the multireader. The data size is a double word (64 bits). [0615]
  • 15 bits Address bus—A(17:3). Allows access to all the VMEM address space. [0616]
  • A(2:0)—don't care. [0617]
  • 64 bits data out bus for read access. [0618]
  • Ring Interface Write Port (rif_i) [0619]
  • This is a write port from the ring interface. The data size can be from 1 to 8 bytes, and the data should be within one aligned double word so that only one access to the memory is needed. The data is left aligned (big endian), and a special data aligner for this port will arrange the incoming data according to the VMEM address. The interface will generate the byte enable signals to the VMEM according to address bits A(2:0) and the size of the transaction, where: 18 bits Address bus—A(17:0)—allows access to all the VMEM address space; and 64 bits data in bus for write access. [0620]
  • VMEM Micro Architecture [0621]
  • Basic SRAM Module [0622]
  • According to one approach, the VMEM uses two kinds of SRAM modules: a single port SRAM organized as 512 words of 64 bits (4 KB) and a single port SRAM organized as 2048 words of 64 bits (16 KB). Each SRAM gets 8 Byte Enable (BE) control signals. [0623]
  • SRAM Memory Array [0624]
  • The SRAM array is divided into 13 SRAM modules and the overall size is 64 Kbytes. The first group is between 0 and 48K bytes. In terms of address space, each pair of SRAMs occupies 8 Kbytes. The odd SRAM contains the first, third, etc., 8 bytes (0-7, 16-23, etc.), while the even SRAM contains the second, fourth, etc., 8 bytes (8-15, 24-31, etc.). The second group is between 64 and 80K bytes. This group includes a single 16K byte SRAM. [0625]
  • VMEM Control [0626]
  • The control is responsible for supporting the SRAM macros with addresses and data, and for routing the data from the SRAMs to the right bus. A contention occurs when there are two or more accesses to the same SRAM macro. In that case, a priority mechanism is needed to avoid starvation. The VMEM sends a stall signal, and the delayed transaction is kept by the VMEM until it receives service. The write access from the ring interface port has the highest priority. [0627]
  • Restrictions. Any access to an unimplemented memory will respond with garbage information without a special notification to the system. Any access that crosses the eight byte boundary of an SRAM macro (i.e., a transaction whose address and size together cross that boundary) is invalid; the result is unpredictable and without an error notification. [0628]
  • Data In Path [0629]
  • Data In aligners. There are two data aligners in the Data In Path. Data aligner for the NP Data bus: the input to this data aligner is aligned to the right with a size of 1, 2, 4, or 8 bytes. [0630]
  • Data aligner for the Ring write bus: the input to this data aligner is aligned to the left (big endian) with a length of 1 to 8 bytes, which is part of one double word (64-bit) entry in the SRAM. [0631]
  • Data In buffers. There are two 64-bit data buffers for storing the incoming data from the NP data bus and NP context bus in case of a contention in the VMEM. Since the ring write bus has the highest priority, it does not need a buffer. [0632]
  • Address In path [0633]
  • Address In buffers. There are four 16-bit address buffers for storing the incoming address from the NP data address bus, NP context address bus, NP program address bus, and the multireader address bus in case of contention in the VMEM. Since the ring interface has the highest priority it does not need a buffer. [0634]
  • Address In Muxes. There is a 4 to 1 mux (multiplexer) for each of the SRAM macros. The first two ports of all muxes are connected to the ring write address and multireader address ports. [0635]
  • There are two options for the third port: the NP Context address port connects to the two muxes that support the two SRAM macros occupying addresses 0 to 8K bytes; and the NP Program address bus connects to the ten muxes that support the ten SRAM macros in addresses 8K to 48K bytes. The NP data address bus is connected to the 12 address in muxes (the last SRAM is not connected to the data bus). [0636]
  • Data Out path [0637]
  • Data Out Muxes. There are four data out muxes of 64 bits. A 13 to 1 mux for the multireader data out bus; this mux is connected to the 13 SRAM macros that reside in addresses 0 to 48K bytes and 64 to 80K bytes. A 12 to 1 mux for the NP data out bus; this mux is connected to the 12 SRAM macros that reside in addresses 0 to 48K bytes. An 11 to 1 mux for the NP program data out bus; this mux is connected to the 10 SRAM macros that reside in addresses 8K to 48K bytes and to the one SRAM macro that resides in addresses 64K to 80K bytes. A 2 to 1 mux for the NP context data out bus; this mux is connected to the 2 SRAM macros that reside in addresses 0 to 8K bytes. [0638]
  • Data Out aligner. There is a data aligner for the NP data out bus. The output of this aligner is right aligned according to the access size (1, 2, 4, or 8 bytes) and the access address. [0639]
  • The Core of the Flexible Packet Processor and Associated Compounds (Agents and Non-agents) [0640]
  • A block diagram of the network processor core according to one embodiment of the invention was provided in FIG. 37. The network processor compounds are those modules of the ring network implemented by the network processor that are tightly connected to the network processor core. Network processor compounds share a single ring interface and address space with the network processor core. In other words, according to one embodiment of the invention incorporating the network processor into a SOC using rings-type architecture, the network processor core and the network processor compounds are all elements of a single ring member. [0641]
  • Network processor compounds include agents and non-agents. Agents are programmed by network processor commands through the network processor agent interface, discussed below. Non-agents are programmed by internal agents or through the ring interface by external members. [0642]
  • FIG. 47 is a schematic diagram of the network processor 500 according to an embodiment of the invention. FIG. 47 illustrates the ring interface 512 (dotted box at the bottom) and the network processor, which includes the network processor core 514 and the various compounds. The compounds include agents such as the doorbell agent 516, CRC/snoop agent 520, multireader agent 524, timer agent 526, message_sender agent 528, and DMA agent 530. [0643]
  • [0644] Multireader Agent 524
  • The multireader module is an engine that serves requests to read portions of data from the network processor memory and sends the received data back to the destination. In one embodiment of the network processor, the destination is most likely to be located external to the network processor compound (the only internal modules that might use this data are the CRC snooper or the memory in a mode when portions of the memory are copied from one location to another location). The multireader is connected to the ring write interface, and to the agent interface, from which it could get requests to read data from the memory. [0645]
  • Operation [0646]
  • The multireader agent and the network processor memory share the same address space. Hence the multireader responds only to messages of work read type. The memory will respond only to messages of work write type. [0647]
  • According to one aspect of the invention, the multireader can get requests for data from the following modules: 1) local network processor (via the agent interface); 2) the three DMA controllers; 3) remote (external to the compound) network processor; and 4) the host (PP). [0648]
  • All the external requests for memory reads are stored in a request FIFO. The local network processor requests are stored in a special request entry. There are two reasons why two different queues are used for the requests. The first reason is to have the ability to stall the local network processor if it asks for a new multiread request before the previous one was served. The second reason is to have the ability to know when the local network processor multiread has finished. These features cannot be implemented in hardware for the other request sources, since those requests are generated by members connected to the ring. [0649]
  • The network processor request entry is written from the agent interface and the request FIFO is written from the ring. All the requests are stored in the request entry or FIFO until they are serviced. [0650]
  • The order of serving the multiread requests is as follows: if the network processor entry has a valid multiread request, it will be served before any other request in the request FIFO. If the network processor request entry is empty, other requests will be served on a first-in-first-out basis. [0651]
  • The multireader, in one embodiment, has the ability to stall data sent to the ring. A stall of data delivery could occur if the output FIFO of the ring is full, or if there is a higher priority message that should be sent to the ring (for example, DMA or message sender messages). [0652]
  • The multireader request FIFO preferably is 8 entries deep (which should be sufficient to avoid the overrun case). FIG. 48 is a schematic diagram of the multireader agent 524 according to an embodiment of the invention. [0653]
  • Data Packing and Alignment [0654]
  • The network processor memory, in one embodiment, uses a 64-bit data port. The multireader takes advantage of this fact, so every memory read is eight bytes. In this system there is a need to allow byte-size data transfers over the ring from any memory location to any destination address. [0655]
  • The data that is read from the memory and sent on the ring in a ring message is aligned to the left (MSB [most significant bit] of the message) because big endian byte orientation is used. Because of these requirements there is a need to add an aligner in the multireader. [0656]
  • Another goal is to minimize data transfers over the ring and enable straightforward writing to FIFOs. This goal is satisfied using data packing logic, which means that all the transferred messages except the last one will contain 8 valid bytes. The last message might contain less than 8 bytes, in which case the message type will indicate how many valid bytes there are. [0657]
  • The alignment and packing is done in the following manner. FIG. 49 describes the data alignment 550 in case the last message contains 8, 7, ..., 1 valid bytes, when reading from an aligned address. It should be noted that when data is written to memory, the opposite alignment should be performed. For example, consider the following scenario: reading 10 bytes starting at address=5. The multireader will send the data in the messages as sketched below (X in the data part of the message means that the byte is don't care). [0658]
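  • A minimal C sketch of this read/pack/align path (vmem and send_message are illustrative stand-ins, not the patent's interfaces): reads are issued as aligned 8-byte fetches, bytes are packed left-aligned, and every message except possibly the last carries 8 valid bytes.
     #include <stdint.h>

     extern uint8_t vmem[];                              /* network processor memory */
     extern void send_message(const uint8_t *d, int n);  /* n = valid bytes, packed
                                                            left-aligned            */

     void multiread(uint32_t addr, uint32_t count)
     {
         uint8_t msg[8];
         int fill = 0;                        /* bytes packed into current message */

         while (count > 0) {
             uint32_t base = addr & ~7u;      /* aligned 8-byte memory fetch       */
             uint32_t off  = addr - base;
             uint32_t take = 8 - off;         /* usable bytes from this fetch      */
             if (take > count) take = count;

             for (uint32_t i = 0; i < take; i++) {
                 msg[fill++] = vmem[base + off + i];  /* pack left-aligned         */
                 if (fill == 8) {             /* full message: 8 valid bytes       */
                     send_message(msg, 8);
                     fill = 0;
                 }
             }
             addr  += take;
             count -= take;
         }
         if (fill > 0)
             send_message(msg, fill);  /* last message; type encodes valid count   */
     }
  • Tracing the scenario above through this sketch: reading 10 bytes starting at address 5 issues aligned fetches at addresses 0 and 8 and produces two messages - the first carrying bytes 5-12 (8 valid bytes) and the last carrying bytes 13-14 (2 valid bytes, the remaining 6 being don't care).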
  • Multireader—Memory Interface [0659]
  • The multireader starts to issue memory read cycles if there is at least one multiread request pending in the multireader request FIFO or request entry. Every read cycle that the multireader issues to the memory is an 8-byte request (in order to reduce the number of requests). The memory read cycle starts when the multireader generates the address and read strobe for the memory. The memory detects this request and, if not busy with other requests, it drives the data to the multireader on the following cycle. If the memory is busy and cannot drive the data to the multireader, it stalls the multireader. The multireader waits for the data from the memory as long as the stall signal is asserted. [0660]
  • It is desirable that the originator of the multiread request have the ability to know that the multiread operation is complete. If the originator of the multiread request is the local network processor, it will have this ability. The multireader sends the network processor a signal indicating that it has not yet finished the local network processor's multiread transfer. This multireader busy indication is asserted when the multiread request is registered in the network processor entry and negated after the last message containing data of this request is sent to the ring. [0661]
  • For other originators of multiread requests (like the remote network processor or PP), the indication of multiread transfer end is controlled by software. The software control is achieved by preparing a special data word at the end of the transferred block. The destination of the multiread operation snoops this data. When this data is detected the multiread operation is finished. Note that only one transfer can be active during the time of the snoop (otherwise it will not be possible to detect which operation is finished). [0662]
  • Sending a message with first/last data in frame indication. [0663]
  • The multireader looks in the type field of the incoming message (multiread request) or in the options bits of the network processor multiread request, and, if the bit F is set, the first message in the multiread process will be sent with a destination address which indicates the first byte in the frame. [0664]
  • The multireader also looks in the type field of the incoming message or in the options bits of the network processor multiread request, and, if the bit L is set, the last message in the multiread process will be sent with a destination address which indicates the last byte in the frame. (Every FIFO in the system should have three addresses which, when written to, indicate first or last data in the frame.) The multireader will modify bits 2,3 of the destination address according to the F, L bits. [0665]
  • Calculating CRC of Message Data
  • In case there is a need to calculate the CRC of the message data, the multiread request must set the S option bit. This bit will cause the multireader to send all the messages with a type in which the S (snoop) bit is set. The CRC machine will snoop those messages and calculate the data CRC. Since the CRC machine is a 32-bit machine and the message data is 64 bits wide, the CRC machine should have the ability to stall the multireader from sending data to the ring until the CRC calculation on the data has been completed. [0666]
  • Multireader Input and Output Message Formats [0667]
  • A general multireader message will have the following format, as set forth in Table 6, for multireader input and output message format. [0668]
    TABLE 6
     Field          Description
     type[7:0]      The type field describes the incoming message type. The following
                    types are valid:
                    type[7:0] = 00000XXX: idle
                    type[7:0] = 010XXLFI: work read.
     address[23:0]  This field describes the starting address for reading data from the
                    Vobla memory.
     data[31:0]     This field contains information required for generating the output
                    message and the operation of the multireader.
                    data[23:0]  = Destination address of the data.
                    data[31:24] = The number of bytes to read from the Vobla memory.
                    (If data[31:24] is zero, the multireader reads 256 bytes.)
  • Table 7 illustrates the multireader output message format for the multireader sending data to the ring. It should be noted that the multireader input message type is always a read type, and the output message is always a work_write type. [0669]
    TABLE 7
     Field          Description
     type[7:0]      The type field describes the outgoing message type. The following
                    types are valid:
                    type[7:0] = 00000XXX: idle
                    type[7:0] = 100FLZZZ: work write.
     address[23:0]  The address of the destination. This information is based on what
                    was extracted from the input message data field, and the option
                    bits of the message type (L/F/I).
     data[63:0]     This field contains the data that was read by the multireader.
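  • A struct-level C model of these two message formats may make the field usage concrete; the packing below is illustrative and not the wire format:
     #include <stdint.h>

     /* Multiread request (work read), following Table 6. */
     typedef struct {
         uint8_t  type;     /* 010XXLFI: work read                            */
         uint32_t address;  /* 24-bit starting address in the Vobla memory    */
         uint32_t data;     /* [23:0] destination address, [31:24] byte count */
     } mr_request_t;

     /* Multiread reply (work write), following Table 7. */
     typedef struct {
         uint8_t  type;     /* 100FLZZZ: work write                           */
         uint32_t address;  /* 24-bit destination; F/L fold into bits 2,3     */
         uint64_t data;     /* up to 8 bytes of data read from the memory     */
     } mr_reply_t;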
  • Network Processor Multiread Request Format [0670]
  • When the network processor initiates a multiread request, it has to write to the network processor entry in the multireader. FIG. 50 describes how the multireader maps the data on the agent bus 556 to the multireader operation 558. The options are: [0671]
  • L—indication of last multireader request in frame (L=1: last). [0672]
  • F—indication of first multireader request in frame (F=1: first). [0673]
  • S—snoop indication for the CRC snooper (S=1: snoop this message). [0674]
  • I—increment destination address, after every multiread transfer. [0675]
  • If the network processor sends new multiread requests while the multireader is busy serving previous requests, those requests will stall the network processor. (Note: if the count value is zero, the multireader reads 256 bytes from the memory.) [0676]
  • Requests Serving Priority [0677]
  • According to one approach, if more than one multiread request is pending, the priority of serving them will be: (1) serving local network processor requests if there are pending requests; and (2) serving all other requests on a FIFO basis. [0678]
  • Multireader Operation Scenarios—Examples. [0679]
  • Example A—Sending data to serial transmit FIFO: (1) The serial sends a request to fill its transmit FIFO. [0680]
  • (2) The request is registered in the doorbell logic. When this request is serviced, the network processor sends an agent write command to the multireader asking for data transfer. [0681]
  • (3) The multireader decodes the message (or the agent command) and initializes its operation. [0682]
  • (4) The multireader initiates memory read cycles and data from the memory is sent to the multireader. [0683]
  • (5) The multireader packs the data, generates the output message, and sends it to the ring if the ring is vacant. The destination is the transmit FIFO in the peripheral. [0684]
  • (6) The process of reading data and sending it to the destination repeats itself until all the data transfer is complete. [0685]
  • Example B—Sending data to DMA write (transmit) buffer: (1) The DMA controller issues a multireader message. This message asks for a data transfer from the memory to the DMA controller write buffer. (The message will contain the destination address, the number of bytes required, and the starting location in the network processor memory.) [0686]
  • (2) The multireader decodes the message and initializes its operation. [0687]
  • (3) The multireader initiates memory read cycles and data from the memory is sent to the multireader. [0688]
  • (4) The multireader packs the data, generates the output message, and sends it to the ring if the ring is vacant. The destination is the write buffer in the DMA controller. [0689]
  • (5) The process repeats itself until the data transfer is complete. [0690]
  • Software/Hardware Restrictions [0691]
  • According to one embodiment of the invention, the following restrictions may apply: do not activate more than one multireader at a time from each source (except the DMA, which can send two) in order not to cause overflow in the FIFO; and if the destination of the multiread request is one of the NP memories, only aligned transactions are supported, because the memory does not support overflow of a memory entry during a write (splitting one write command into two). [0692]
  • [0693] Message Sender Agent 528
  • The message sender agent 528 is a module which translates a network processor AGENT command to a message to be sent to a destination on the ring. The message sender is connected to the network processor agent interface. The message sender is a powerful module since it can generate messages of all the different message types that are available in the system. This means that the network processor can send messages to all the modules that are connected to the ring, and can even replace the host in sending supervisor messages. This feature can be very beneficial while debugging the system. The block diagram of the message sender 528 is shown as FIG. 51. [0694]
  • There are three instructions dedicated for agent commands: AGENTW, AGENTWI, and AGENTR. The message sender ignores the AGENTR command. The AGENTW/I commands drive the value of three registers, or two registers and an immediate value, on the agent bus. Those registers are marked RA, RAP, and RB (or imm8). The message sender will interpret the content of those registers in the following way (shown in FIG. 52): [0695]
  • Mapping for the AGENTW command is as follows: [0696]
  • RAP[23:0]—The destination address or the 32 LS (least significant) bits of the data. This is a 24-bit address of a module (destination) that is connected to the ring, or the 4 LS bytes of the data that is sent to the ring when using the 64-bit data mode. [0697]
  • RA[31:0]—The data that will be sent to the destination (typically in work read messages it will include the return address for the data that was read from the module and the number of bytes to read). [0698]
  • RB[7:0]—The message type that will be sent to the destination (only the LSB of RB will be used). In a 64-bit data message, RB is the address of the message destination. [0699]
  • The AGENTWI command drives the value of two registers and an eight-bit immediate value (imm8) on the agent bus. The registers are marked RA and RAP. The message sender will use the content of those registers in the following way: [0700]
  • RAP[23:0], RA[31:0]—same as the AGENTW command. [0701]
  • imm8—the message type that will be sent to the destination. [0702]
  • Note: If the AGENTWI command is used, there is no possibility to send a 64-bit data message. Both commands also drive option bits, which are part of the AGENT opcode. Each module uses those bits in a different way. The message sender will use 7 option bits. FIG. 52 illustrates the mapping of an agent write command 560 to a message 562. [0703]
  • If the network processor sends new requests for message sending while the message sender is busy serving previous requests, those requests will stall the network processor. The message sender will have an internal queue of 2 entries, so it can store 2 requests for sending messages before stalling the network processor. [0704]
  • Message Sender Output Message Types [0705]
  • Table 8 illustrates the message sender output message format according to an embodiment of the invention. [0706]
    TABLE 8
     Field              Description
     type[7:0]          The type field describes the outgoing message type. The
                        following types are valid (see the message type table for
                        more details):
                        type[7:0] = 00000XXX: idle
                        type[7:0] = 11111NNN: supervisor.
                        type[7:0] = 010XXLFI: work read.
                        type[7:0] = 100FLZZZ: work write.
     address[23:0]      The address of the destination. This is the content of RAP
                        or RB according to the mode used (option bit 6). If
                        option[6] is one, the address is taken from RB.
     data[63:0]/[31:0]  The message data. The content of RA, or RA and RAP,
                        according to the mode used (option bit 6). If option[6] is
                        one, RA and RAP are used.
  • Data alignment. The alignment of the message data is determined according to the message size and type. The following Table 9 describes message data alignment. [0707]
    TABLE 9
     output        Operation mode  data size
     message type  (64/32)         (in bytes)  output message format
     work write    64              8           {RA[31:0], RAP[31:0]}
     work write    32              1,2,3,4     {RA[31:0], 32'b0}
     work write    32              8           {32'b0, RA[31:0]}
     work read     don't care      don't care  {32'b0, RA[31:0]}
     supervisor    don't care      don't care  {32'b0, RA[31:0]}
  • Sending a 64-bit Data Message [0708]
  • The message sender can send a 64-bit data message. Sending a 64-bit message is done by setting option bit [6] of the AGENTW command to one (this option is not available for the AGENTWI command). If this option is used, the message sender uses the content of RA and RAP as the source for the raw data, and RB as the source for the raw address. In this mode the message type is always work write, with 8 valid data bytes. There is no provision for sending fewer than 8 bytes. [0709]
  • Handling Data and Address Options [0710]
  • The message sender uses six option bits that are driven by the network processor in order to modify the value of the raw_data and raw_address. This feature is useful when the values in the registers are used as constants and must be modified as required. For example, when writing to a FIFO the content of RAP will be the FIFO address, and when the system seeks to write to the first-in-frame or last-in-frame locations the address will be modified using the option bits. Data modification is useful when sending a doorbell request. The data for the doorbell request is only 3 bits, so the raw data can be modified to generate data for the doorbell request. The address and data modification may be performed as follows: (1) the content of RAP[4:2] or RB (in 64-bit data mode) is OR'd with the option[2:0] bits to generate the message destination address; and (2) if the value of the options bits [5:3] is not zero, the content of RA[2:0] or RAP (in 64-bit data mode) is replaced with the options[5:3] bits to generate the message data. Address and data modification are active regardless of the message sender operation mode. [0711]
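  • A minimal C sketch of this two-step modification (the function and type names are illustrative, and the 32-bit mode is assumed):
     #include <stdint.h>

     typedef struct {
         uint32_t addr;   /* message destination address (24 bits used) */
         uint32_t data;   /* LS word of the message data                */
     } msg_t;

     msg_t modify(uint32_t raw_addr, uint32_t raw_data, uint8_t options)
     {
         msg_t m;

         /* (1) options[2:0] are OR'd into address bits [4:2]. */
         m.addr = raw_addr | ((uint32_t)(options & 0x7u) << 2);

         /* (2) If options[5:3] is non-zero, it replaces data bits [2:0]. */
         uint32_t opt53 = (options >> 3) & 0x7u;
         m.data = opt53 ? ((raw_data & ~0x7u) | opt53) : raw_data;

         return m;
     }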
  • Software/Hardware Restrictions [0712]
• Software/hardware restrictions include the following in one embodiment of the invention: (1) the 64-bit data mode is available only when using the AGENTW command; and (2) in 64-bit mode the message type is always work write. [0713]
  • DMA Agent [0714]
• In a system with multiple processors (e.g., a system on a chip with multiple network processors) that can send DMA transfer requests to one of multiple DMA controllers in the system, one challenge is knowing whether a DMA request can be serviced prior to issuing the request to a particular DMA controller. Otherwise, a DMA controller can be overloaded with DMA requests that it cannot service. [0715]
  • According to one beneficial aspect of the present invention, this challenge is met by providing a DMA agent module as a peripheral to each processor in the system. For the network processor (Vobla) described herein, for example, such a DMA agent may be implemented as one of the tightly linked compounds on the overall network processor. In other words, the DMA agent is a compound that shares the same ring interface as the overall network processor existing as a ring member. [0716]
  • According to this approach, the DMA agent operates to control the DMA transfer requests that are sent by the processor as follows: [0717]
  • (1) Each DMA controller has a dynamic pool of tokens that the DMA controllers allocate for use by the DMA agents linked to the various processors. In other words, each DMA controller has a pool of tokens that the DMA controller can distribute among the various DMA agents. [0718]
  • (2) Each valid token allows a DMA agent to send one DMA request to the DMA controller that owns the token. If there are no valid tokens, no DMA requests can be issued by the DMA agent and the processor will stall. [0719]
  • (3) The DMA agent periodically queries the DMA controllers for tokens whenever the number of valid tokens in the DMA agent's pool is less than a number prespecified by software. The maximum number set by software can change. [0720]
• In sum, this approach avoids the scenario of the DMA agent issuing requests that cannot be serviced, because the maximum number of requests that can be sent never exceeds the number of tokens held by the DMA agent. [0721]
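• As a minimal sketch of this token-admission rule (in C, with illustrative names; the actual mechanism is hardware): a request may be issued only while a valid token is held, and the agent asks for more tokens whenever the valid count drops below the software-set ceiling.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical pool state for one DMA channel. */
    typedef struct {
        uint16_t valid_tokens; /* bit i set => token i is valid      */
        unsigned max_tokens;   /* ceiling prespecified by software   */
    } token_pool_t;

    /* A DMA request may be issued only while a valid token is held. */
    static bool try_issue_request(token_pool_t *p)
    {
        if (p->valid_tokens == 0)
            return false;                       /* queue, possibly stall  */
        p->valid_tokens &= p->valid_tokens - 1; /* consume lowest set bit */
        return true;
    }

    /* Query the controller whenever the valid count drops below the max. */
    static bool should_request_tokens(const token_pool_t *p)
    {
        unsigned n = 0;
        for (uint16_t t = p->valid_tokens; t; t &= t - 1)
            n++;                                /* population count       */
        return n < p->max_tokens;
    }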
• The DMA agent module 530 (illustrated in FIG. 53) translates network processor DMA commands to ring messages used to initialize the DMA controller. [0722]
  • According to one embodiment of the invention, each network processor has one DMA agent. Each DMA agent has the ability to control each and every one of the DMA controllers that are available in the system, using the context table (e.g., in the implementation there are 3 DMA controllers, and each DMA agent can control up to 4 DMA controllers). According to one approach, the fourth DMA controller is provided for future system expansion. [0723]
  • The DMA agent is connected to the network processor agent interface and to the ring write interface. The DMA agent registers can be written by the host only via the write bus using ring messages. The context table is initialized by the PP once, and it is not changed during regular work. The token registers should be written only by the DMA controllers. [0724]
  • The Sources for Requests [0725]
• The DMA agent can receive requests to initialize a DMA channel only via the agent interface, using special network processor DMA commands. The DMA agent has a small request queue of two entries in order to minimize the need to stall the network processor if a DMA request cannot be serviced (e.g., if there are no available tokens, or if the DMA agent is unable to send the messages to the DMA controller because the ring is busy). [0726]
• Requests Priority. There are two priority levels for DMA requests in the DMA controller. The lower priority level is regular and the higher priority level is urgent. By default all DMA requests are regular; a DMA request becomes urgent if the processor defines it as urgent. Requests that have urgent priority have the urg bit in the message set, and will get a higher priority in the DMA controller queue. The DMA agent ignores the urg bit (it simply forwards it to the DMA controller) and serves the requests in the order they arrive. [0727]
  • DMA Agent Context Table [0728]
• The DMA agent context table maps a network processor DMA command to the actual request that will be sent to the selected DMA controller. The actual request defines the parameters for the current DMA transfer. The context table has four entries. The table entry to be used is determined by a two-bit pointer encoded out of the 4 MSB (most significant bits) of the DRAM address in the DMA command. (Four bits are used because the DRAM address space is divided into 16 parts, of which only 4 can be accessed by the DMA.) The entry allocation is hard coded. The context table can be written using write messages and read using read messages, and it should be initialized before starting any DMA access. [0729]
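• The 4-MSB-to-2-bit encoding might be sketched in C as follows. The specific mapping is an assumption consistent with the four legal MSB values (0x0, 0x2, 0x4, 0xf) given later in the address-error discussion; the actual hard-coded allocation may differ.

    #include <stdint.h>

    /* Select a context-table entry from the 4 MSB of the DRAM address. */
    static int context_entry(uint32_t dram_address)
    {
        switch (dram_address >> 28) {   /* 4 MSB of the DRAM address */
        case 0x0: return 0;
        case 0x2: return 1;
        case 0x4: return 2;
        case 0xf: return 3;
        default:  return -1;            /* illegal area: address error */
        }
    }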
• ADDR=DMA_AGENT_BASE to DMA_AGENT_BASE+$F. Note: The maximum number of tokens which could be allocated for one channel is 15. Table 10 provides a description of the DMA context table. [0730]
    TABLE 10
    field description
    address[13:0] The physical base address of the DMA controller
    to be used.
    visitor[2:0] The number of the request and mask bits to set for
    the current DMA transfer. This field is common
    to all the contexts.
    max_tokens[3:0] This field describes the maximum number of
    tokens that could be used by this DMA channel.
• DMA agent token control. In order to manage DMA transfers from different sources with different contexts, a free token transfer based approach is used. According to this approach, the DMA agent has a pool of tokens. The service of a DMA request can start only if there are valid tokens allocated for this DMA channel in the DMA agent. If there are valid tokens, the processing of the DMA request can start as previously described. If there are no available tokens to execute the DMA request, it will be registered in the DMA agent queue and will wait for execution until the DMA agent gets a token from the DMA controller (note that if the DMA agent queue is full the request will stall the network processor). [0731]
  • Token distribution is performed using messages. The DMA agent issues a request for a token to the DMA controller each time the number of valid tokens is less than the maximum allowed tokens (which is specified in the context table). The DMA controller sends the token back to the agent and marks this token as used in its token list. The DMA controller will free the token again when the DMA transfer is finished (i.e., before sending the message to the doorbell). If the DMA controller has no free tokens then it sends the DMA agent an invalid token (i.e., all the bits in the token response are zero). [0732]
• The DMA controller sends the DMA agent a valid token to the address of the token that was used (the DMA agent sends this address in the token request message). According to one embodiment of the invention, each DMA controller has a pool of a maximum of 16 tokens for each DMA channel. Of course, the number of tokens that is available for each DMA controller is flexible and could change according to system needs. The DMA agent token registers contain the token numbers that the DMA controllers allocated for use (the valid tokens are marked by setting the appropriate bit to one). The token registers can be written only by the DMA controllers. There are four token registers in the DMA agent. Table 11 illustrates the DMA agent channel[i] token register. [0733]
    TABLE 11
    ADDR=DMA_AGENT_BASE+$10 to DMA_AGENT_BASE+$1F
    20 19 18 17   16    15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
    novt[3:0]     req   token[15:0]
    reset = 0
  • Table 12 provides a description of the DMA agent channel[i] token register. [0734]
    TABLE 12
    field description
    token[15:0] This field describes which tokens are valid and
    can be used for DMA transfers:
    token[i] = 0 token not valid.
    token[i] = 1 token is valid.
    req This field indicates that the DMA agent had
    issued a token replacement request but did not get
    a response:
    req = 0 no token request is pending.
    req = 1 token request is pending.
    novt[3:0] This field describes the number of valid tokens
    that are used by the DMA agent for this DMA
    channel.
• When a DMA request is registered with the DMA agent, the DMA agent searches the appropriate token register to see if there are valid tokens. If there are valid tokens, the DMA agent uses one of them (e.g., the first one it finds) and marks that token as invalid. Then, the DMA agent starts the data transfer for channel initialization. The DMA agent also sends the DMA controller a message to replace the used token with a new one (this will be a work read type message). The indication that the DMA agent issued a token replacement request is made by setting the req bit of the relevant token register. If the DMA controller has a free token available it will send it to the DMA agent, and the agent will replace the used token with the new one (i.e., the request bit is cleared). If the DMA controller does not have a free token available, it will send the DMA agent an invalid token (i.e., all the token bits are cleared and the req bit is cleared). The DMA agent issues a new token replacement request after a maximum of 4 cycles. [0735]
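• A software sketch of this take/replace sequence, under the field layout of Tables 11 and 12, might look as follows. Names and types are illustrative; the hardware performs the equivalent operations.

    #include <stdbool.h>
    #include <stdint.h>

    /* Token register image per DMA channel (fields per Tables 11 and 12). */
    typedef struct {
        uint16_t token; /* token[i]=1 => token i is valid     */
        bool     req;   /* token replacement request pending  */
        uint8_t  novt;  /* number of valid tokens             */
    } token_reg_t;

    /* Use the first valid token found, mark it invalid, and note that a
       replacement (work read) request has been issued.                  */
    static int take_token(token_reg_t *r)
    {
        for (int i = 0; i < 16; i++) {
            if (r->token & (1u << i)) {
                r->token &= ~(1u << i);
                r->novt--;
                r->req = true;
                return i;
            }
        }
        return -1;  /* no valid token: the request waits in the queue */
    }

    /* A reply of all-zero token bits is an invalid token; either way the
       pending-request bit is cleared.                                   */
    static void token_reply(token_reg_t *r, uint16_t reply)
    {
        r->req = false;
        if (reply != 0) {
            r->token |= reply;
            r->novt++;
        }
    }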
  • Address Error Control [0736]
  • The DMA agent has the ability to recognize if the DMA transfer is made to an illegal external address for each of the external DMA channels. When the DMA agent identifies such an access, it sends a special error message to the PP, informing the PP of the illegal access parameters. [0737]
• Address error calculation is performed on the SDRAM address written by the network processor using the DMA command. The SDRAM address is split into two parts. The first part is bits [31:28] of the address and the second part is bits [27:20] of the address. The address error logic compares the first part of the SDRAM address to each one of the values (0x0, 0x2, 0x4, 0xf), which correspond to the 4 MS bits of the SDRAM areas. If a match is not found, an address error occurs and a special error message is generated by the DMA agent. If there is a match, the bits of the second part are compared, according to a programmed mask, to zero. If the result is not equal to zero an address error is generated, and an error message is sent. [0738]
• Address error mask register. Four (one for each external channel) 8-bit registers are used to store the mask values for address error computation. The mask value is used to mask the comparison of some of the bits in the second part of the SDRAM address (bits 27-20). If a bit in the mask register is set, the corresponding SDRAM address bit will not be compared in the address error calculation. The reset value of the register is zero so as to enable the comparison of all 8 bits. Table 13 illustrates the DMA address error mask register[i]. [0739]
    TABLE 13
    ADDR=DMA_AGENT_BASE+$30 to DMA_AGENT_BASE+$3F
    7 6 5 4 3 2 1 0
    mask[7:0]
  • Table 14 provides a description of DMA address error mask register. [0740]
    TABLE 14
    field description
    mask[7:0] This field describes which bits of the SDRAM
    address are masked during the process of address
    error calculation.
    mask[i] = 0 the corresponding SDRAM address bit
    is not masked.
    mask[i] = 1 the corresponding SDRAM address bit
    is masked.
• (Note: There could be cases in which the DMA controller accesses an invalid external address that the address error logic does not detect. For example, this could happen if the base address of the transfer is in the real or normal range, but the address generated by the DMA during the transfer overflows this range.) (Note: If the network processor issues a DMA request to a channel that was not initialized [i.e., the corresponding context table entry was not initialized], an address error will occur.) [0741]
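• The two-stage check described above might be modeled in C as follows; mask bits set mean "do not compare" per Table 14, and the function name is illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* Returns true if the SDRAM address fails the address-error check. */
    static bool address_error(uint32_t sdram_addr, uint8_t mask)
    {
        uint8_t msb4 = sdram_addr >> 28;          /* first part: bits 31:28 */
        if (msb4 != 0x0 && msb4 != 0x2 && msb4 != 0x4 && msb4 != 0xf)
            return true;                          /* no area match: error   */

        uint8_t mid = (sdram_addr >> 20) & 0xff;  /* second part: bits 27:20 */
        return (mid & (uint8_t)~mask) != 0;       /* non-zero after masking  */
    }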
  • DMA Agent Input and Output Message Formats [0742]
  • The DMA agent input and output message format is now described. A general DMA agent message will have the format as shown in Table 15. [0743]
    TABLE 15
    field description
    type[7:0] The type field describes the incoming message
    type. The following types are valid. If the last bits
    are X they are ignored:
    type[7:0] = 00000XXX: idle.
    type[7:0] = 010WXLFI: work read.
    type[7:0] = 100FLZZZ: work write.
    address[23:0] The field describes the starting address space of
    the DMA agent. The DMA agent register address
    is from DMA_AGENT_BASE_ADD to
    DMA_AGENT_BASE_ADD+$1F.
    data[31:0] The data to be written to the registers.
  • The DMA agent output message format encoding is shown in Table 16 below. [0744]
    TABLE 16
    field description
    type[7:0] The type field describes the outgoing message
    types. If the last bits are X they are ignored:
    type[7:0] = 00000XXX: idle.
    type[7:0] = 11111101: error.
    type[7:0] = 010WXLFI: work read.
    type[7:0] = 100FLZZZ: work write.
    address[23:0] The address of the destination. This address is a
    function of the base address written in the context
    table and the token number (see FIG. 33 for
    more details).
    data[63:0] This field contains the data for the
    DMA controller.
• DMA Controller Message Data
• According to one approach, the DMA agent will send the DMA controller two messages for each DMA transfer that was initiated by the network processor. The following tables describe the data part of each message. Table 17 illustrates DMA controller message number 1. [0745]
    TABLE 17
    31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3  2  1  0
    address[23:0]=destination_addr1 type=10000000
    rsrvd doorbell_address[23:0]
    rsrvd sram_address[23:0]
• The first message that will be sent from the DMA agent to the DMA controller contains the return address for the DMA request doorbell and the internal SRAM address. The doorbell and the SRAM addresses are 24 bits wide: [0746]
• doorbell_address[23:0]—the 24 bits of the doorbell register to which the DMA controller should send the acknowledgement at the end of the transfer. The 6 LSB bits of this address are the task ID number at the time the DMA command was initiated. [0747]
• sram_address[23:0]—a 24-bit address inside the internal SRAM (this is a full ring address). [0748]
• Table 18 illustrates DMA controller message number 2. [0749]
    TABLE 18
    31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3  2  1  0
    address[23:0]=destination_addr1 type=10000000
    dram_address[31:0]
    rsrvd count[7:0] rsrvd end vst [2:0] ack dir urg
  • The second message contains the external DRAM address and control information for the DMA transfer. The control information includes: [0750]
  • urg—1 bit of urgent DMA request. [0751]
• dir—1 bit of the transfer direction (SRAM to DRAM or DRAM to SRAM). This information is found in the DMA command (dir=0 SRAM to DRAM; dir=1 DRAM to SRAM). [0752]
• ack—1 bit of doorbell acknowledgement enable. This bit tells the DMA controller whether it should send a doorbell at the end of the transfer. This information is found in the DMA command. [0753]
• count[7:0]—8 bits of the transfer size. This information comes from the DMA command. [0754]
• vst[2:0]—3 bits of visitor code. These bits indicate which request bit the DMA controller should set in the doorbell request register. [0755]
• end—endian mode bit. The endian bit is the LSB bit of the DMA agent ID (end=0 big endian mode). [0756]
  • Token request and token reply messages. Tables 19 and 20 illustrate a token request and token reply message, respectively. The data part of the token request contains the address in the token register that should be written with a new token. [0757]
    TABLE 19
    31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3  2  1  0
    address[23:0]=destination_addr3 type=01000000
    rsrvd token_register_address[23:0]
    rsrvd
  • [0758]
    TABLE 20
    31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3  2  1  0
    token_register_address[23:0] type=10000000
    rsrvd token[15:0]
    rsrvd
• DMA agent calculation of the message destination address. According to one approach, the messages that the DMA agent sends to the DMA controller are sent to three different destinations. The first two of these message destinations are: [0759]
• DESTINATION_ADDRESS1={DMA_BASE_ADDRESS[13:0],0,1,token_number[3:0],0,0,0,0} [0760]
• DESTINATION_ADDRESS2={DMA_BASE_ADDRESS[13:0],0,1,token_number[3:0],1,0,0,0}. [0761]
  • The destination address of the token request is: [0762]
• DESTINATION_ADDRESS3={DMA_BASE_ADDRESS[13:0],10'b0}. [0763]
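• Reading the concatenations literally, the 24-bit destinations can be composed in C as below; the helper names are illustrative, not part of the disclosure.

    #include <stdint.h>

    /* {base[13:0], 0, 1, token[3:0], 0000b} => bits 23:10 base, bit 8 set,
       bits 7:4 token number, low nibble zero.                             */
    static uint32_t dest_addr1(uint32_t dma_base14, uint32_t token4)
    {
        return (dma_base14 << 10) | (1u << 8) | ((token4 & 0xfu) << 4);
    }

    /* Same layout with trailing 1000b (bit 3 set). */
    static uint32_t dest_addr2(uint32_t dma_base14, uint32_t token4)
    {
        return dest_addr1(dma_base14, token4) | 0x8u;
    }

    /* Token request destination: {base[13:0], 10'b0}. */
    static uint32_t dest_addr3(uint32_t dma_base14)
    {
        return dma_base14 << 10;
    }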
  • Error Message Format [0764]
  • Table 21 illustrates the error message format. [0765]
    TABLE 21
    31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3  2  1  0
    type=11111101
    sdram_address[31:0]
    rsrvd doorbell_address[23:0]
• (Note: The doorbell address is the address to which a doorbell would have been sent at the end of the DMA transfer had an address error not occurred. This address contains the task ID information in the six LSB bits, and the base address of the network processor from which the error message was sent in bits 23-6.) [0766]
  • Network Processor DMA Request Format [0767]
• When the network processor initiates a DMA request, FIG. 54 describes how the DMA agent maps the data on the agent bus 576 to the DMA request 578. [0768]
  • The options as shown in FIG. 54 are as follows: [0769]
  • D—direction of data transfer (D=0 SRAM to DRAM; D=1 DRAM to SRAM). [0770]
• NA—no acknowledgement at the end of the DMA transfer (NA=0 send acknowledgement; NA=1 do not send acknowledgement). Setting this bit also prevents the DMA mask bit in the doorbell agent from being set when the DMA agent sends the messages to the DMA controller. [0771]
  • A—set auto set bit in the doorbell mask register. [0772]
  • U—urgent DMA request. [0773]
  • M—Modify address. Setting this bit enables the modification of the SRAM address and the DRAM address. [0774]
  • L—long address mode. Use 24 bits of RA as the SRAM internal address (in the regular mode [L=0] only 16 bits are used and the 8 MSB of the ring base address are appended to the 16 bits of RA to form the internal SRAM address). [0775]
  • The DMA agent will have two request entries for storing network processor DMA requests. If both entries are full and the network processor issues a new request, the network processor will be stalled until one of the requests is served. [0776]
  • Address Modification [0777]
  • One common operation in control code writing (such as for controlling the operation of the network processor of the instant invention) is the calculation of the destination address for read/write operations (such as read/write commands for the Vobla network processor). Destination addresses can be calculated, for example, according to several modes: [0778]
  • (1) Immediate addressing—the destination address is included in the command and no calculations are required. [0779]
  • (2) Register A+Register B—the destination address is the sum of the values of Register A and Register B. [0780]
  • (3) Register+Offset—the destination address is the sum of the value of Register A and an immediate offset value. [0781]
  • Often one of the arguments of an address calculation is used to point to the base address of a data structure and the other argument is used to point to an offset within the data structure. One difficulty is that if the same data structure is to be accessed multiple times with different offsets, or if different data structures are to be accessed using the same offset, the address calculation must be performed repeatedly (in the first case, computing a new offset each time; in the second case, computing a new base address each time). These redundant address calculations impose cycle costs and decrease overall efficiency. [0782]
  • Accordingly, one beneficial aspect of the present invention provides for adding a special address computation mode to the network processor data structure access commands. When activated, this special mode causes the destination address to be automatically computed using a base address, offset, and an address modifier. [0783]
  • According to one implementation, the destination address in this special mode is computed as:[0784]
  • DEST_ADDRESS=BASE_ADDRESS+OFFSET+MODIFIER
• Accordingly, according to one embodiment of this approach, if agent option bit 9 (in one of the DMA commands) is set, the DMA agent will modify the value of the SRAM address and the DRAM address (that were written by the network processor) before sending the control message to the DMA controller. Address modification is accomplished in the following fashion: DRAM address bits 1, 2, 3 are OR'd with count bits 2, 3, 4 (respectively), and SRAM address bits 1, 2, 3 are OR'd with count bits 5, 6, 7 (respectively). When address modification is used, the DMA transfer size is limited to one of the four options listed in Table 22 below (a software sketch of this modification follows the table). [0785]
    TABLE 22
    count[1:0] transfer size
    00  2 bytes
    01  4 bytes
    10  8 bytes
    11 16 bytes
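• The sketch below restates the bit-level modification and the Table 22 size encoding in C. It is illustrative only; the function names are hypothetical.

    #include <stdint.h>

    /* count[4:2] is OR'd into DRAM address bits [3:1]; count[7:5] is
       OR'd into SRAM address bits [3:1].                              */
    static void modify_dma_addresses(uint32_t *dram_addr, uint32_t *sram_addr,
                                     uint8_t count)
    {
        *dram_addr |= (uint32_t)((count >> 2) & 0x7u) << 1;
        *sram_addr |= (uint32_t)((count >> 5) & 0x7u) << 1;
    }

    /* Table 22: count[1:0] = 00,01,10,11 => 2, 4, 8, 16 bytes. */
    static unsigned transfer_size_bytes(uint8_t count)
    {
        return 2u << (count & 0x3u);
    }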
• An example of the special mode of addressing is instructive. Assume that a data structure located inside internal memory for a communication processor including the Vobla starts at address X. The size of the structure is SIZE bytes. Further assume that we want to copy a part of this structure, starting at address X+OFFSET1, to an external data structure which starts at address Y, beginning at address Y+OFFSET2. The X and Y base addresses are stored in registers. According to the conventional approach, address computation is as follows: [0786]
  • ADD1=X+OFFSET1
  • ADD2=Y+OFFSET2
  • DMA ADD1, ADD2, SIZE
  • This conventional approach takes at least 3 cycles to execute and consumes 3 program memory locations. Using the special mode according to the invention, the code using address modification will be only this line:[0787]
• DMA ADD1, ADD2, SIZE, OFFSET1, OFFSET2
• This code takes 1 cycle to execute and consumes 1 program memory location, thereby saving program space and increasing performance. [0788]
  • In accordance with one embodiment of the present invention, a method for performing address computation for a data structure address command in a communications processor is provided. The method comprises providing a library of read commands and write commands for a network processor in a rings based architecture, including an option bit in the read commands and write commands for an address calculation modification mode, providing an agent module for forwarding read requests and write requests to a DMA controller in response to requests including an address issued by the network processor, and modifying the value of the address when the option bit is set before forwarding the read requests and write requests to the DMA controller. The method, in one embodiment, permits repeated accesses to an external data structure without recomputing the destination address in its entirety each time. [0789]
  • Modifying the value of an address, in one embodiment, comprises automatically computing a destination address using a base address, an offset, and an address modifier. [0790]
  • Further, modifying the value of an address, in one embodiment, allows computation of the destination address using a single read command or write command. [0791]
  • Doorbell Set Mask [0792]
• The DMA agent is responsible for setting the DMA mask bit in the doorbell agent each time a DMA command is issued. The DMA mask bit will be set only if the NA bit is cleared (if acknowledgement is not needed for the DMA transfer there is no need to set the mask). If the auto set option bit is set and the NA bit is cleared, then two mask bits will be set at the same time in the doorbell. The index of the bit that should be set is determined according to the visitor bits in the context table (the auto set code is fixed). [0793]
• DMA Agent Operation Scenario Examples
  • Example A—The network processor asks for write DMA access: [0794]
  • (1) The Host has to initialize the DMA context table with all of the channel configurations. This should be done once for all possible configurations. [0795]
  • (2) The network processor issues a DMA command on the agent bus. [0796]
  • (3) The DMA agent registers the request in the request queue and extracts parameters. [0797]
  • (4) The DMA agent checks whether there is an available token from the DMA controller to start processing the request. If there is no token available the request waits in the queue for execution until there is an available token. If the request queue is also full, the network processor will be stalled. [0798]
  • (5) Assuming there is an available token, the processing of the request begins. The DMA agent sends the DMA controller two messages containing all the parameters of the transfer. [0799]
  • (6) Since this is a write request, the DMA controller issues a multireader message. The multireader message requests a data transfer from the network processor memory to the DMA write buffer. [0800]
  • (7) When the DMA transfer is finished, the DMA controller sends a message to the doorbell. [0801]
  • Example B—The network processor asks for read DMA access: [0802]
• (1) The host has to initialize the DMA context table with all the channel configurations. This should be done once for all the possible configurations. [0803]
  • (2) The network processor issues a DMA command on the agent bus. [0804]
  • (3) The DMA agent registers the request in the request entries and extracts parameters. [0805]
• (4) The DMA agent checks whether there is an available token from the DMA controller to start processing the request. If there is no token available, the processing is stalled until a token becomes available. [0806]
  • (5) Assuming there is an available token, the processing of the request begins. The DMA agent sends the DMA controller two messages which contain all the parameters of the transfer. [0807]
  • (6) When the transfer is finished, the DMA controller sends a message to the doorbell. The DMA controller can now send a new token to the DMA agent. [0808]
  • Software/Hardware restrictions. According to one embodiment of the invention, only the DMA controller can write to the token register. [0809]
• In accordance with one embodiment of the present invention, a communications processor implemented on at least one ring network is provided. The communications processor comprises a plurality of processors comprising ring members on the at least one ring network and a plurality of DMA controllers on the at least one ring network, the DMA controllers controlling servicing of DMA requests by the plurality of processors. The communications processor further comprises a plurality of DMA agents coupled to the plurality of processors, each DMA agent being part of a ring member including a processor, wherein each DMA agent is adapted to service processor DMA requests by determining whether a valid token exists from a pool of tokens reflecting available DMA controllers. [0810]
  • The tokens may be DMA controller specific tokens issued by the DMA controllers to the DMA agents to indicate when specific DMA controller access is available. Each time a processor issues a DMA request, in one embodiment, the associated DMA agent determines whether a valid token exists and, if a valid token exists, services that DMA request using the DMA controller associated with that token. The token can be marked as used or invalid when the token is used to service a DMA request. If no valid token exists the DMA agent queues the DMA request until a valid token exists. The associated DMA agent can be adapted to automatically request a new valid token after an existing valid token is used to service the DMA request. Each DMA agent, in one embodiment, is adapted to request additional valid tokens when the number of valid tokens in the pool falls below a maximum number. The processors comprise, in one embodiment, a plurality of network processors and the at least one ring network comprises a plurality of ring networks. [0811]
  • In one embodiment, the pool of tokens is stored in a register written to by the DMA controllers. [0812]
  • The DMA agents can be adapted to service processor DMA requests by converting them to messages transmitted onto the at least one ring network. Likewise, the DMA controllers can distribute valid tokens by transmitting messages on the ring network that are received by specific DMA agents. Each DMA controller further may be adapted to maintain a list of tokens including those tokens that have been distributed as valid tokens. [0813]
  • The DMA controllers can be adapted to respond to requests from the DMA agents for additional tokens with an invalid token when no valid tokens are available. Each DMA controller can have a pool of up to, for example, 16 tokens for each DMA channel. The DMA controllers, in one embodiment, are capable of reading registers having the pools of tokens for the DMA agents by issuing read messages traveling on the at least one ring network. [0814]
• CRC Agent (Snoop) 520 [0815]
• FIG. 55 is a schematic diagram of the CRC agent 520 according to one embodiment of the present invention. The Cyclic Redundancy Check (CRC) agent is a network processor compound module which implements logic to perform CRC calculations. The CRC agent supports different types of CRC calculations, such as CRC32, CRC16, CRC10, and so forth, for different data sizes (1 to 8 bytes). According to one approach, the CRC agent works in two major operational modes: a snoop mode and an on-demand mode. In the snoop mode the CRC agent snoops for messages in which the S bit is set; the CRC agent detects those messages and calculates the selected CRC on the message data. In the on-demand mode the network processor writes data to the CRC agent, and the CRC agent uses this data for its calculations. [0816]
  • The network processor can write the CRC registers via the agent bus using AGENTW/I commands. The network processor can read the CRC residue via the agent bus using an AGENTR command. The CRC agent can stall the network processor if the network processor reads the CRC results and the results are not yet ready. The CRC module may also be able to generate a 32 bit random number. [0817]
  • Features of the CRC Agent [0818]
• Performs CRC calculations of: CRC32 for ATM cell processing AAL5; and CRC10 for OAM ATM cells. This requires the support of: calculating the CRC10 on 22-bit data of the last transmit word; merging the 10-bit CRC into the 22-bit data to generate the last 32-bit word to be transmitted by the multireader; BIP16 for ATM performance monitoring—this process is done in parallel with the CRC calculation; CRC5 for ATM cell processing AAL2 (on-demand mode only); calculating CRC5 on 19-bit data for CRC generation (transmit)—(unless CRC5 is initialized to 0); calculating CRC5 on 24-bit data for the CRC check (receive); and checksum for IP streams, done on 32-bit (or 64-bit) data, with the convergence to 16-bit data performed by software. [0819]
  • The CRC Agent has two modes of operation: [0820]
• On-demand mode, performed for any data transferred (e.g., CRC5, hashing function); and snoop mode, performed for a continuous data sequence transferred from/to the serial interfaces. [0821]
• The CRC agent can be adapted to calculate CRC for 8, 16, 24 or 32 bits of data in a single cycle. If CRC is enabled for snooping, a network processor agent read instruction from a CRC residue register stalls until the last indication arrives with the last data word. Special control enables the CRC residue to be calculated on partial data (e.g., 22 bits in CRC10, or 0 bits in CRC32); the CRC residue is then combined with the partial data to form the 32-bit last word of the frame, and this is exposed to the multireader block for transmission. In CRC5, the CRC module is capable of calculating the 5-bit CRC out of 19-bit data for transmit, or out of 24-bit data for the CRC check in receive (on-demand mode). [0822]
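• For orientation, the following C routine is a generic bit-serial model of the residue update that the agent's hardware computes combinatorially in one cycle. The polynomial constants shown (0x233 for CRC10, 0x04C11DB7 for CRC32) are the standard ATM/IEEE values and are an assumption; the patent does not list them.

    #include <stdint.h>

    /* MSB-first CRC residue update over nbits of data. poly omits the
       top x^crc_width term, per the usual convention.                  */
    static uint32_t crc_update(uint32_t residue, uint32_t data, int nbits,
                               uint32_t poly, int crc_width)
    {
        uint32_t top  = 1u << (crc_width - 1);
        uint32_t mask = (top << 1) - 1u;   /* wraps to 0xFFFFFFFF at w=32 */
        for (int i = nbits - 1; i >= 0; i--) {
            uint32_t in = (data >> i) & 1u;
            uint32_t fb = ((residue >> (crc_width - 1)) ^ in) & 1u;
            residue = (residue << 1) & mask;
            if (fb)
                residue ^= poly;
        }
        return residue;
    }

    /* e.g., CRC10 over the 22 data bits of the last transmit word:
       res = crc_update(res, data22, 22, 0x233, 10);                    */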
  • CRC Agent in one embodiment is adapted to interface to: transmit bus—for snooping TX data and calculating CRC; and agent bus—for configuration, on-demand activation and read/write residue. [0823]
  • Network Processor Writing to the CRC. [0824]
• The network processor 514 can write to the CRC agent 520 using AGENTW commands. The mapping of the AGENT command 590 to CRC data 592 is described in FIG. 56. [0825]
  • The options include: [0826]
• TYPE[2:0]—3-bit CRC type. The types are: 000—CRC32; 001—CRC10; 010—CRC5; 011—checksum; 100—CRC16; 111—BIP16 (only for writing the BIP16 residue register). [0827]
• The BIP16 machine works in parallel to all of those machines. [0828]
• SIZE[2:0]—The number of valid bytes in the data (1 to 8) starting at the LSB of RA (size=0 means 8 valid bytes in the message). [0829]
  • G—This bit indicates if the CRC agent works in the generate CRC or the check CRC mode. [0830]
  • S—The operation mode of CRC module. If S=1 the CRC works in the snoop mode. If S=0 the CRC works in the on-demand mode. When working in on-demand mode, the data for the CRC calculation and the residue are written by the network processor. Since the data in the memory is stored in big endian format, and the data in the network processor register file is stored in little endian format, the CRC module may perform some manipulation of the message data before the CRC calculation (especially if the data size is not 32 or 64 bit). [0831]
• O—overwrite residue. If O=1 the new residue from RB/imm8 is used for the CRC calculation. If O=0 then the current value of the residue register is used. [0832]
  • CRC Residue Registers. [0833]
• The CRC module contains two residue registers. The first residue register is a 64-bit register containing the residue for the CRC and checksum calculations. The second residue register is a 32-bit register containing the residue for the BIP16 calculation. [0834]
  • Reading CRC Registers by the Network Processor [0835]
  • The network processor can read the results of the CRC calculations using the AGENTR command. The result of the CRC machine that will be read is determined according to the operational mode that was selected. [0836]
• The BIP16 machine calculation result is held in a different register that can be read by the network processor (i.e., the two residue registers have two different addresses). If the network processor reads one of the CRC registers and the result is not ready, the network processor will be stalled. [0837]
• In snoop mode, the CRC calculation is considered to be complete after all the data has arrived (last indication in the message). In on-demand mode the result of the CRC calculation will be available for reading one cycle after it was written if the data size is smaller than four bytes, and two cycles after it was written for larger data sizes. [0838]
  • CRC Agent Operation Scenarios, Examples [0839]
  • Example A—calculating CRC in on-demand mode: [0840]
• (1) The network processor writes to the CRC agent using the AGENTW command. The data that is written to the CRC agent contains: the CRC type; the data on which the CRC is to be calculated; the size of the data (number of valid bytes); and a new residue if the current residue is to be overwritten. The operational mode is set to work in the on-demand mode, and in the CRC5 mode the G bit should also be written. [0841]
  • (2) One or two cycles after the data was written to the CRC (depending on the number of valid bytes in the data, the CRC machine can calculate CRC on 32 bits in one cycle), the network processor can read the CRC result. [0842]
  • Example B—calculating CRC on transmit data (multireader data out): [0843]
  • The CRC machine can calculate the CRC of the transmit data by snooping the S and L bits of the multireader output messages. The network processor initializes the CRC agent in the following manner: [0844]
  • (1) CRC type. [0845]
  • (2) A new residue if the current residue is to be overwritten. The data and the data size of the residue will be taken from the message data and type parts, respectively (the data part of the agent bus is ignored in the snoop mode). [0846]
  • (3) The operational mode must be set to work in the snoop mode, selecting the transmit data bus as a source for the data. [0847]
  • (4) One or two cycles after the last data has arrived at the CRC (depending on the number of valid bytes in the data, the CRC machine can calculate the CRC on 32 bits in one cycle) the network processor can read the CRC result. [0848]
  • Example C—calculating CRC of receive data: The CRC machine can calculate the CRC of the receive data by snooping the S and L bits of the agent write bus messages. The network processor initializes the CRC agent as follows: (1) CRC type. [0849]
  • (2) A new residue if the current residue is to be overwritten. The data and the data size will be taken from the message data and type parts, respectively (the data of the agent bus is ignored in the snoop mode). [0850]
  • (3) The operational mode must be set to work in the snoop mode. [0851]
  • (4) One or two cycles after the last data has arrived at the CRC (depending on the number of valid bytes in the data, the CRC machine can calculate CRC on 32 bits in one cycle) the network processor can read the CRC result. [0852]
• Timer Agent 526 [0853]
• Referring now to FIG. 57, an exemplary embodiment of the timer agent 526 is illustrated in accordance with one embodiment of the present invention. The timer module is designed to allow the assignment of time stamps to various events within network processor tasks. According to one approach, the timer contains a 32-bit count-up free running counter. The counter counts at a frequency which can be calculated using the following formula: [0854]
  • F(counter)=[F(clock)]/[2*(prescale value+1)]
  • Usually the counter frequency will be set to 1 MHz (which corresponds to a 1 microsecond period). The prescale counter is a 10 bit down-counter, which divides its input clock frequency by the prescale value. If the prescale value is equal to zero the prescaler will be bypassed. [0855]
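• Solving the formula above for the prescale value gives a one-line helper; the 200 MHz input clock in the comment is a hypothetical figure chosen only to show the arithmetic.

    #include <stdint.h>

    /* prescale = F(clock) / (2 * F(counter)) - 1.
       E.g., 200 MHz clock, 1 MHz counter: 200e6/(2*1e6) - 1 = 99,
       which fits in the 10-bit tps[9:0] field.                     */
    static uint32_t prescale_for(uint32_t f_clock_hz, uint32_t f_counter_hz)
    {
        return f_clock_hz / (2u * f_counter_hz) - 1u;
    }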
  • The time stamp value could be read by the network processor from the time stamp register using the agent interface. [0856]
  • Network Processor Writes to the Timer [0857]
• The network processor can write to the timer using the AGENTW/AGENTWI commands. In order to enable timer operation only two values are required. The first value is the control information, which resides in register RB or the imm8 value (according to one approach, only one bit is used). The second value is the prescale value, which determines the counting frequency of the timer. The prescale value is taken from the 10 LSB of RA; the value of RAP is ignored. FIG. 58 illustrates the mapping of the AGENTW command 602 to the timer data 604. [0858]
  • Timer Control Register [0859]
  • The timer control register is used to store the prescale value and to enable/disable the timer count operation. The timer control register is written using AGENTW/I commands and read using the AGENTR command. Tables 23 and 24 show the timer control register and a description of the timer control register, respectively. [0860]
    TABLE 23
    16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
    ten RSRVD tps[9:0]
    reset = 0
  • [0861]
    TABLE 24
    field description
    ten Timer enable bit. This bit enables the timer
    operation.
    tps[9:0] This field describes the division factor of the
    clock after it was divided by 2.
  • Time Stamp Register [0862]
  • The timestamp register contains the value of the timer counter at the time of an agent read operation. The register is read by the network processor using the AGENTR command. Table 25 illustrates the time stamp register, and Table 26 provides the time stamp register description. [0863]
    TABLE 25
    31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3  2  1  0
    tsv[31:0]
    reset = 0
  • [0864]
    TABLE 26
    field description
    tsv[31:0] Time stamp value. The value of the timer
    counter at the time of the read operation.
• Doorbell Agent 516 [0865]
• FIG. 59 is a schematic diagram of the doorbell agent 516 according to one embodiment of the invention. The doorbell agent is the scheduler module which handles the execution sequence of the tasks. The doorbell is connected to the network processor agent interface and to the ring write interface. The doorbell registers can be accessed by the network processor using one of the special AGENT commands, or via the write bus using ring messages (e.g., by the serials and the host). All the possible service requests from the different sources go into the doorbell agent via the write bus. When the doorbell detects a request message it registers the request in the doorbell logic. [0866]
  • According to one embodiment of the invention, the doorbell agent can handle requests of up to 64 different tasks. The doorbell chooses the highest priority pending request (out of all the un-masked tasks), and sends its task ID to the network processor as the next task ID. The network processor sends back to the doorbell the current task ID that it is executing. The network processor uses the task ID information to perform the prefetch, bump and task switching, as previously described. [0867]
  • The Sources for Requests [0868]
• The sources for doorbell requests include: Regular serial, timer, or software request (e.g., a message from another task): this request indicates that a data fragment has been received in the RX FIFO, that there is room to write more data into the TX FIFO for transmission, or that a timer finished its count. [0869]
  • DMA request: The DMA had finished its data transfer. [0870]
• Self-request: When a task yields itself (i.e., when the task execution time exceeds the maximum allowed execution time), the software can resume its execution by setting the self-request bit. The starting point of the task will depend on what is written in the EP (entry point) register. The EP register can be updated by hardware or by software. [0871]
• According to one approach, every request bit has its own mask bit (except the self-request). When the mask bit is cleared the request is ignored and the task cannot trigger task switching; the self-request is the only request bit that cannot be masked. When a task enters execution, its corresponding request bit and all the mask bits are automatically cleared (except the auto set [aset] and the urgent status [urg] bits). This is done to avoid serving the same request more than once. [0872]
  • Selecting Next Task for Execution [0873]
  • According to one approach, the algorithm for selecting the next task for execution is as follows. The tasks which participate in the selection of the next task for execution are the tasks for which their corresponding mask bit in the Task Global Mask Register (TGMR) is cleared. Tasks which participate in the selection of the next task and have unmasked requests are divided into four groups and served in the following order: [0874]
• (1) The highest priority group includes urgent requests of task numbers 0-31. [0875]
• (2) The second priority group includes regular requests of task numbers 0-31. [0876]
• (3) The third priority group includes urgent requests of task numbers 32-63. [0877]
• (4) The lowest priority group includes regular requests of task numbers 32-63. [0878]
  • Within each group the requests are served according to the task number. Lower task number requests are served before higher task number requests. [0879]
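• A direct software model of this four-group scan might read as follows; the bitmask representation and function name are illustrative assumptions.

    #include <stdint.h>

    /* pending_* bit i = task i has an unmasked request of that kind;
       tgmr bit i = task i is excluded from scheduling.
       Returns the selected task number, or -1 if none is runnable.   */
    static int next_task(uint64_t pending_urgent, uint64_t pending_regular,
                         uint64_t tgmr)
    {
        const uint64_t lo = 0xffffffffull;    /* tasks 0-31 */
        uint64_t urgent  = pending_urgent  & ~tgmr;
        uint64_t regular = pending_regular & ~tgmr;
        uint64_t group[4] = {
            urgent  &  lo,   /* (1) urgent, tasks 0-31   */
            regular &  lo,   /* (2) regular, tasks 0-31  */
            urgent  & ~lo,   /* (3) urgent, tasks 32-63  */
            regular & ~lo,   /* (4) regular, tasks 32-63 */
        };
        for (int g = 0; g < 4; g++)
            for (int t = 0; t < 64; t++)      /* lowest task number first */
                if (group[g] & (1ull << t))
                    return t;
        return -1;
    }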
  • Accessing the Doorbell Registers from the Network Processor [0880]
  • The network processor can access the doorbell registers via the agent interface using one of special AGENT commands. [0881]
  • The network processor can directly modify only the register bits of the current task (the request, mask, counter bits value), or the global mask register (TGMR). Modifying other task register bits can be done via the ring write bus by sending a message from the message sender agent to the doorbell. [0882]
• The data 612 for modifying the mask, request and counter bits 614 of the current task is encoded in the RB/imm8 part of the agent command, as illustrated in FIG. 60. The doorbell logic decodes the 8 LSB of RB/imm8 and sets the appropriate bits in the current task register, counter, urgent bit or TGMR. [0883]
• Setting a request or mask bit is performed by writing a 5-bit command index in the RB/imm8 part of the AGENT command, followed by 3 bits selecting the request or mask bit that is to be set. Note: Only one mask bit at a time can be set by the network processor using a single agent command (if other mask bits were set they will be cleared by the agent write command, except for the autoset bit; writing the auto set bit will not clear other mask bits). Writing to the request bits will not clear other request bits if they were already set. If the index value is zero the write to that part of the register is ignored. [0884]
• Table 27 describes the decoding of the RB/imm8 part of the message and the operations that take place (a software sketch of this decoding follows the table). [0885]
    TABLE 27
    operation                RB/imm8 value                        index  mask                        request
    Write task register      (0,0,0,0,0,mask_bit_index[2:0])      000    don't change mask bits      don't change request bits
    mask and request bits    (0,0,0,0,1,request_bit_index[2:0])   001    set the aset bit; don't     set the preq bit (self
                                                                         change other mask bits      request); other request
                                                                                                     bits are not changed
                                                                  011    set the mdma bit; clear     decrement the DMA
                                                                         all other mask bits         request counter by 1
                                                                  100    set the mpreq bit; clear    set the preq bit; other
                                                                         all other mask bits         request bits are not
                                                                                                     changed
    write request counter    (1,0,0,0,0,counter_value[2:0])
    write TGMR               (0,1,0,0,0,0,0,0)
    write urgent             (1,1,0,0,0,0,0,urgent_value)
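• A minimal C decode of the Table 27 encodings, assuming the top five bits select the operation and the low three bits carry the index or value; the enum and function names are hypothetical.

    #include <stdint.h>

    typedef enum {
        DB_WRITE_MASK,     /* 00000,idx : set mask bit idx          */
        DB_WRITE_REQUEST,  /* 00001,idx : set request bit idx       */
        DB_WRITE_COUNTER,  /* 10000,val : load DMA request counter  */
        DB_WRITE_TGMR,     /* 01000000  : write TGMR                */
        DB_WRITE_URGENT,   /* 1100000,u : write urgent bit          */
        DB_IGNORE
    } db_op_t;

    static db_op_t decode_rb_imm8(uint8_t v, uint8_t *arg)
    {
        *arg = v & 0x7u;
        switch (v >> 3) {
        case 0x00: return (*arg == 0) ? DB_IGNORE : DB_WRITE_MASK;
        case 0x01: return (*arg == 0) ? DB_IGNORE : DB_WRITE_REQUEST;
        case 0x10: return DB_WRITE_COUNTER;              /* (1,0,0,0,0,value) */
        case 0x08: return DB_WRITE_TGMR;                 /* (0,1,0,0,0,0,0,0) */
        case 0x18: *arg = v & 1u; return DB_WRITE_URGENT;/* (1,1,0,...,u)     */
        default:   return DB_IGNORE;
        }
    }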
  • The options in the agent command that are used by the doorbell are: [0886]
  • CM—Clear mask. Setting this bit will clear all the bits of the current task mask bits (including the auto set bit). [0887]
  • CR—Clear request. Setting this bit will clear all the bits of the current task request bits. [0888]
• SG—Set global. This bit determines whether the task global mask register (TGMR) bits will be set or cleared according to the data in the RA/RAP part of the agent command. If the SG bit is set then the TGMR bits will be set at the locations corresponding to the set bits in the RA, RAP data. [0889]
  • Clearing the mask registers bits is accomplished by writing 1 to the clear mask (CM) bit in the command. If the CM option is used at the same time another mask bit is written, the set operation overwrites the CM operation. [0890]
  • Clearing the requests bits is done by writing 1 to the clear requests (CR) bit in the command. If clearing the requests happens at the same time the request is set from the ring then the request will be set (set will overwrite the reset). [0891]
  • The peripheral request mask bit (mpreq) could also be set by the network processor when there is a YIELD command and the set default mask option is used. Other request bits will be cleared. [0892]
• The network processor can initialize the DMA requests counter of the current task by setting the RB/imm8 part of the agent bus to {1,0,0,0,0,count_value[2:0]}. [0893]
• Writing the TGMR is done by setting the RB/imm8 part of the agent bus to {0,1,0,0,0,0,0,0} (see also the discussion of the Task Global Mask Register (TGMR) below). [0894]
• Writing the current task priority bit is done by setting the RB/imm8 part of the agent bus to {1,1,0,0,0,0,0,urgent_value} (see also the discussion of Task Priority Control below). [0895]
  • Reading Doorbell Registers from the Network Processor. [0896]
  • The doorbell bits of the current task (i.e., the request bits, mask bits and the counter value) are reflected in the task SPR register of the network processor. The TGMR could be read using the agent read command (AGENTR). [0897]
  • Setting the Doorbell Mask Bits from the DMA Agent [0898]
  • Another option for setting the DMA mask (mdma) bit and the auto set (aset) bit is by using the network processor DMA commands. The DMA commands have an option to set the DMA mask bit and the auto set bit. [0899]
  • When the DMA agent detects a DMA command, it can set the appropriate mask bit in the doorbell using the DMA context table (the context table stores the information as to which bit to set). The mask setting will be done if the NA bit in the DMA command is cleared. The auto set bit will be set if the A option bit in the DMA command is set. [0900]
  • Setting the Doorbell Requests Bits from the Ring [0901]
  • The doorbell registers could be accessed by the peripherals, the network processor and the host using ring messages. Every time a peripheral wants to set a request bit, the peripheral sends a write message with a destination address of the doorbell entry it wants to set. The doorbell will set the appropriate request bit in the doorbell registers according to the content in the data field of the message. [0902]
  • If a request bit and the corresponding mask bit are set, a valid request is sent to the doorbell priority logic. The mask and auto set bits can not be modified from the ring write bus. Table 28 shows the encoding for the input message format. The doorbell responds to messages from types mentioned in Table 28. [0903]
    TABLE 28
    field description
    type[7:0] The type field describes the outgoing message
    types. (If the last bits are X they are ignored).
    type[7:0] = 00000XXX: idle
    type[7:0] = 100FLZZZ: work write
    address[23:0] The address of the doorbell register. The doorbell
    register space ranges from
    DOORBELL_BASE_ADD to
    DOORBELL_BASE_ADD + $3F.
    data[2:0] The value of the doorbell bit that should be set
    data[2:0] = 000 do not change any request bit.
    data[2:0] = 001 set self request (sreq) bit.
    data[2:0] = 011 decrement request counter by 1.
    data[2:0] = 100 set peripheral request (preq) bit.
    P Doorbell request priority status. This bit reflects
    the current status of the doorbell request.
    P = 0 Current request status is normal.
    P = 1 Current request status is urgent.
    O Overwrite task current priority status with
    doorbell request status.
    O = 0 current priority status is not overwritten.
    O = 1 current priority status is overwritten.
  • Doorbell Register File Format [0904]
  • According to one embodiment, the doorbell register file contains 64 registers. Thus, each possible task has its own doorbell register. The doorbell registers have the format set forth in Table 29. [0905]
    TABLE 29
    31-21   20    19     18-16        15-12   11     10    9     8-3     2      1     0
    rsrvd   urg   rsrvd  count[2:0]   rsrvd   preq   dma   sreq  rsrvd   mpreq  mdma  aset
    reset = 0 for all fields except mpreq (reset = 1)
  • ADDR=DOORBELL_BASE to DOORBELL_BASE+ $3F (Note: Current task register bits are reflected in the network processor status register.) (Note: All of the request and mask bits [not including the auto set bit] are automatically cleared when the task enters execution.) Table 30 provides a description of the doorbell register according to an embodiment of the invention. [0906]
    TABLE 30
    field description
    urg The urg (urgent) bit is used to allow the software
    to control the priority level of a task (as opposed
    to the urgent request status which is being
    generated automatically and could not be
    controlled by software). If the bit is set the task
    has high priority. This bit is written only by the
    Vobla.
    count[2:0] These bits represent the number of DMA requests
    that should be acknowledged. Every DMA
    activation that requires acknowledgement at the
    end of the DMA transfer will cause the DMA
    agent to increment the counter value by 1. Every
    acknowledgement that is written to the dma bit in
    the doorbell register decrements the counter value
    by 1. If the counter value is equal to zero and the
    current task was yielded, the dma bit will be set
    (only if the counter was incremented at least once
    during the current task). If the dma mask (mdma)
    bit is set then a task switch will be triggered.
    Those bits can be written by the Vobla using the
    AGENT command.
    preq Regular peripheral request.
    preq=0 no regular peripheral request is pending.
    preq=1 regular peripheral request is pending.
    This bit can be set from the write bus or by the
    Vobla, and can be cleared by Vobla. In case the
    bit is set and cleared at the same time, the set will
    overwrite the reset.
    dma This bit indicates that the request counter had
    decremented to zero after a valid Vobla yield.
    dma=0 the request counter did not decrement to
    zero.
    dma=1 the request counter had decremented to
    zero.
    This bit can be set by the doorbell logic. Writing
    to this bit from the write bus will decrement the
    request counter value by 1. This bit can be cleared
    by the Vobla. In case the bit is set and cleared at
    the same time, the set will overwrite the reset.
    sreq Self-request bit. This request is non-maskable.
    sreq=0 self-request is not pending.
    sreq=1 self-request is pending.
    This bit can be set from the write bus or the
    Vobla, and can be cleared by the Vobla. In case
    the bit is set and cleared at the same time, the set
    will overwrite the reset.
    mpreq Peripheral request mask bit.
    mpreq=0 peripheral request is masked and can
    not trigger task switch.
    mpreq=1 peripheral request is not masked, and
    will trigger task switch when it is the highest
    priority pending request.
    This bit can be set by the Vobla and the DMA
    agent and can be cleared by the Vobla. In case
    the bit is set and cleared at the same time, the set
    will overwrite the reset.
    mdma DMA request mask bit.
    mdma=0 DMA request bit is masked and can not
    trigger task switch.
    mdma=1 DMA request bit is not masked and will
    trigger task switch when it is the highest priority
    pending request.
    This bit can be set by the Vobla and DMA agent,
    and can be cleared by the Vobla. In case the bit is
    set and cleared at the same time, the set will
    overwrite the reset.
    aset Automatically sets the mask bits to their default
    value after serving the current request.
    aset=0 do not set the mask bits to their default
    after serving the current request.
    aset=1 set the mask bits to their default after
    serving the current request
    This bit can be set by the Vobla and DMA agent
    and can be cleared by the Vobla. In case the bit is
    set and cleared at the same time, the set will
    overwrite the reset.
    rsrvd Reserved bits are read as zero and can not be
    written.
  • Task Global Mask Register (TGMR). The task global mask register (TGMR) is a 64 bit register (one bit per each task), which could be accessed by the network processor using the AGENT commands. The TGMR is used to determine which tasks are taken into consideration when calculating the next task for execution. Every set bit will prevent the corresponding task from being selected as the next task for execution, even if that task has valid requests to serve (at least one corresponding mask and request bits are set). [0907]
• Writing the TGMR is done in the following way according to one embodiment. The AGENT write command must contain the value 01000000 in the LSB of RB or the imm8 field. Based on the value of the SG option bit and the value of RA, RAP, the TGMR bits are set or cleared. Only bits which have the corresponding RA, RAP bits set are affected. [0908]
• The TGMR can be read using AGENTR commands. The 32 LSB of the TGMR are located at address 0 of the doorbell, and the 32 MSB are located at address 1. The user can read all 64 bits using the read double option of the AGENTR command. If only 32 bits are read, the other part of the data will be zeroed. [0909]
  • Handling DMA Requests [0910]
  • In a system with multiple processors capable of running multiple tasks that can issue DMA requests to the multiple DMA controllers, one challenge is knowing at certain points in time whether all of the DMA requests issued by a specific task running on a processor are finished. The challenge can be significant because DMA requests may be issued by different tasks running on a processor to different DMA controllers. Also, the DMA requests may finish out of the order in which they were issued. [0911]
  • According to one approach, the invention provides that a DMA agent (previously discussed) be associated with each of the processors in the system. The role, in this instance, of the DMA agent is to control the DMA transfer requests made by the associated processor. For each DMA request it issues, the DMA agent sends an indication to a book-keeping unit. In one embodiment, the book-keeping unit is a request counter in the doorbell task register for each processor. The book-keeping unit receives this indication and increments the request counter. Because the preferred system performs multi-tasking, the request counter may include a separate entry (or separate request counter) for each task performed by the processor. [0912]
  • When the target DMA controller completes the DMA transfer, the DMA controller issues a decrement counter message to the book-keeping unit. The relevant entry (or relevant request counter) is then decremented by one. When the relevant entry (or relevant request counter) reaches zero, the system knows that all DMA transfers for that task have been completed. [0913]
  • Therefore, according to one embodiment of the invention, during normal task execution, there is a possibility that more than one DMA transfer is initiated. Each one of them could finish its data transfer at any given time, perhaps not in the order in which they were initiated. Typically it is preferable to trigger a valid request only after all DMA transfers from all the different DMA channels within a task have finished. In order to implement this requirement each doorbell task register has its own request counter. [0914]
  • The request counter is incremented every time it gets an increment counter indication. The increment counter indication is an option in the network processor DMA commands (this is the NA bit). Every time a DMA command is issued and the NA bit is cleared, the counter is incremented by 1. [0915]
  • When the DMA controller or peripheral sends its acknowledgement back to the doorbell by writing to the DMA bit in the request register, the counter is decremented by 1. When the counter reaches zero and a valid YIELD was executed by the network processor, the DMA bit in the doorbell register will be set. If the mdma bit is also set, a task switch request will be issued. [0916]
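  • A minimal C sketch of the book-keeping just described, assuming one small counter per task (the names are illustrative, not taken from this disclosure):

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_TASKS 64u   /* the doorbell supports up to 64 tasks */

    static uint8_t dma_req_count[NUM_TASKS];  /* per-task request counters */

    /* A DMA command issued with the NA bit cleared increments the counter. */
    void on_dma_issue(unsigned task_id)
    {
        dma_req_count[task_id]++;  /* at most 8 pending per task (see below) */
    }

    /* The DMA controller's acknowledgement decrements the counter; a return
       value of true means all transfers for this task have finished, so the
       dma request bit can be set after a valid yield (a task switch request
       follows only if mdma is also set). */
    bool on_dma_ack(unsigned task_id)
    {
        return --dma_req_count[task_id] == 0;
    }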
  • In accordance with one embodiment of the present invention, a communications processor implemented on at least one ring network is provided. The communications processor comprises a plurality of processors comprising ring members on the at least one ring network, a plurality of DMA controllers on the at least one ring network, the DMA controllers controlling servicing of DMA requests by the plurality of processors, and a plurality of DMA agents coupled to the plurality of processors. Furthermore, each DMA agent is part of a ring member including a processor, wherein each DMA agent is adapted to issue an indicator to a request counter coupled to the DMA agent for each DMA request issued by the DMA agent to a DMA controller, thereby allowing each DMA agent to maintain a count of the outstanding DMA requests that have been issued on behalf of the processor associated with the DMA agent. In one embodiment, the request counter maintains a separate count for each task being executed by the processor, wherein the request counter is contained in a doorbell register supporting up to 64 tasks. [0917]
  • Upon satisfaction of the DMA request by a target DMA controller, the target DMA controller can be adapted to issue a response that causes the request counter to decrement the count by one. In this case, the DMA requests issued by the DMA agent to the DMA controller and the response issued by the target DMA controller can be transmitted as messages on the at least one ring network. Also, upon the counter returning to zero the processor can be enabled to switch to other tasks because all DMA requests for a given task have been satisfied. In this case a new DMA request for a different task can be deferred until the counter has returned to zero for the given task. [0918]
  • In accordance with another embodiment of the present invention, a method of controlling access to DMA controllers in a multi-tasking communications processor implemented on at least one ring network is provided. The method comprises issuing DMA requests to a target DMA controller, maintaining a count of DMA requests on a per-task basis, and issuing an acknowledgement that a DMA request has been satisfied by the target DMA controller. The method further comprises reducing the count based on the acknowledgement and enabling a processor responsible for issuing the DMA requests to perform new activity when the count has returned to zero. In one embodiment, the DMA requests are issued as messages on the at least one ring network. Similarly, the acknowledgement can be issued as a message on the at least one ring network. [0919]
  • Auto Set [0920]
  • In order to increase performance (e.g., to eliminate the need to set the default mask at the end of every task), the auto set functionality is defined. When the aset (auto set) bit is set, the mask bits will be set to their default value after the desired request has occurred without triggering a request to the network processor and a task switch. The auto set bit can be written by the network processor using the agent interface, or by using the DMA command (this is one of the options of the DMA command). [0921]
  • The default mask is: the peripheral request mask bit (mpreq) is set and all the other mask bits are cleared (see Table 28). [0922]
  • Task Priority Control [0923]
  • It is desirable to have the ability to control task priority level in order to influence task scheduling. The doorbell module supports this requirement in two ways. The first way is software control using the urg bit in the doorbell task register (not the task SPR). Each doorbell task has an urgent priority bit in its task register (urg). When this bit is set the task becomes urgent and all of its requests are considered as urgent requests. The urgent bit remains set as long as it is not cleared by the network processor. [0924]
  • A second way to control the request priority level is by sending messages to the doorbell with the urgent status indicating the request priority level. If the overwrite current status is also set then the request priority status bit in the doorbell is also updated. If the task urgent status bit is set the task requests are also considered urgent. This bit is mainly controlled by hardware. [0925]
  • It should also be noted that the task priority is reflected in the network processor status register. [0926]
  • Doorbell Operational Scenarios [0927]
  • Example A—regular serial request: [0928]
  • (1) A serial peripheral sends a message with the destination address of its task request register in the doorbell register file. The data part of the message specifies which bit to set. [0929]
  • (2) If the corresponding mask bit for this task is set (this is the default mask), then a valid request is sent to the doorbell priority logic. [0930]
  • (3) When this request becomes the highest priority pending request, it can trigger the network processor task switch. [0931]
  • (4) The doorbell samples the task number of the highest priority pending request every time a yield is executed. If there are no pending tasks, the doorbell waits until the first time there is a pending task (except if the next task is the current task, in which case the network processor waits until the yield indication, because there will be no task switch), and then samples the next task ID. [0932]
  • (5) After the next task ID is sampled by the network processor, the network processor performs the prefetch of the next task registers. [0933]
  • (6) The next task ID becomes current task ID. [0934]
  • (7) The doorbell logic clears the request bit and the mask register of the task which caused the task switch. [0935]
  • (8) The doorbell calculates a new next task ID. [0936]
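  • The selection in steps (2)-(4) of Example A can be illustrated with the following C sketch. It assumes, for illustration only, a simple lowest-task-number-first priority and ignores the urgent-priority and self-request refinements described elsewhere in this document:

    #include <stdint.h>

    /* req[t] and mask[t] model the request and mask bits of task t's
       doorbell register; tgmr is the 64-bit task global mask. A task is
       eligible when some request bit with its mask bit set is pending and
       the task's TGMR bit is clear. Returns -1 if no task is pending. */
    int next_task(const uint32_t req[64], const uint32_t mask[64],
                  uint64_t tgmr)
    {
        for (int t = 0; t < 64; t++) {
            if ((tgmr >> t) & 1u)
                continue;               /* globally masked out */
            if (req[t] & mask[t])
                return t;               /* valid pending request */
        }
        return -1;
    }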
  • Example B—DMA request: [0937]
  • The handling of a DMA request is very similar to the handling of a serial request. The only difference is the process of setting the DMA request and the mask bits. At the time the DMA command is issued there is no information as to which request mask bit should be set. The doorbell logic will get this information from the DMA agent. This will be done using the DMA context table and a special option in the network processor DMA command (the NA bit in the DMA command). When the DMA request is registered with the DMA agent, the DMA agent will set the DMA mask bit in the doorbell register. The DMA agent will also tell the DMA controller to which request bit it should send the acknowledgement when the DMA transfer is finished, in order to decrement the request counter. When the counter reaches zero and if the appropriate mask bit is set, a valid task switch request will be issued to the doorbell logic. [0938]
  • Example C—DMA request with auto set: [0939]
  • When the auto set bit is set, the doorbell logic will set the mask to the default mask value after the current task is finished without asserting a request for task switching. [0940]
  • Software/Hardware Restrictions [0941]
  • According to one embodiment of the invention, the following restriction is imposed: only eight pending DMA requests (DMA requests that were issued by the DMA agent for which an acknowledgement has not yet reached the doorbell) per task are handled by the doorbell. [0942]
  • Network Processor Debug Module [0943]
  • According to one embodiment of the invention, the network processor compound includes a debug module. The debug module supports various breakpoints and enables program code patching. The debug module can be programmed through the ring interface. The debug module contains two breakpoint channels and eight patch channels. Each one of the patch channels can be configured to be used as a patch channel or as an additional program address breakpoint channel. A single step program trace is supported. [0944]
  • A Breakpoint Event and a Patch Event [0945]
  • The network processor core supports two kinds of program breaks: a breakpoint and a patch. A breakpoint event causes the program flow to jump to a program location pointed to by a given vector and to enter the trap mode of execution by setting the trap mode bit located in the network processor task SPR. When in trap mode, no further breakpoint will be accepted. The trap mode bit will be cleared by executing an RFT (Return From Trap) instruction or by writing a zero to the trap mode bit. When the trap bit is cleared, the network processor returns to the normal execution mode where further breakpoints are accepted. A patch event causes the program flow to jump to a program location pointed to by a given vector. In a patch event the trap mode bit will not be set, thus remaining in the normal execution mode. A patch event is useful for program patching of code written in ROM. [0946]
  • Patch Channels [0947]
  • According to one embodiment, there are eight patch channels. Each of the patch channels can be configured to operate as a patch channel or as an additional program address breakpoint channel. If a patch channel is enabled and is configured as a patch, a patch event will occur whenever there is a fetch from a program location equal to the catch address (discussed below). If a patch channel is enabled and is configured as a break, a breakpoint event will occur whenever there is a fetch from a program location equal to the catch address. Each one of the patch channels will cause the network processor program to jump to a different vector location according to a vector table (see the discussion on the vector table and Table 37 below). [0948]
  • Each of the patch channels includes a patch register as shown in Table 31. [0949]
    TABLE 31
    register   bits 31:18   bit 17   bit 16   bits 15:0
    patch 0    reserved     en       b/p      catch address
    patch 1    reserved     en       b/p      catch address
    patch 2    reserved     en       b/p      catch address
    patch 3    reserved     en       b/p      catch address
    patch 4    reserved     en       b/p      catch address
    patch 5    reserved     en       b/p      catch address
    patch 6    reserved     en       b/p      catch address
    patch 7    reserved     en       b/p      catch address
  • Patch Register. This is a 32 bit read/write register (through the ring). This register is cleared by a hardware reset. [0950]
  • Bits 15:0—Catch Address: This is the 16 bit program address which causes a patch event or a breakpoint event. [0951]
  • Bit 16—Break or Patch (B/P): When the B/P bit is cleared, the patch channel operates as a patch channel. When the B/P is set, the patch channel operates as an additional program address breakpoint channel. [0952]
  • Bit 17—EN: This is the channel enable bit. When EN is set, the channel is enabled. When EN is cleared, the channel is disabled. [0953]
  • Bits 31:18—Reserved: These bits are reserved. Reserved bits are read as zero. [0954]
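  • The field layout above can be captured with a few C macros; this is an illustrative encoding helper, not code from the disclosure:

    #include <stdint.h>

    #define PATCH_CATCH_ADDR(a) ((uint32_t)(a) & 0xFFFFu) /* bits 15:0   */
    #define PATCH_BP            (1u << 16)  /* 0 = patch, 1 = breakpoint */
    #define PATCH_EN            (1u << 17)  /* channel enable            */

    /* Enable a channel as a patch (B/P = 0) on the given program address. */
    uint32_t make_patch_register(uint16_t catch_addr)
    {
        return PATCH_EN | PATCH_CATCH_ADDR(catch_addr);
    }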
  • Address Breakpoint Channels [0955]
  • According to one approach, the debug unit includes two address breakpoint channels. Address breakpoint channels can be configured to cause a breakpoint when there is a program or data memory access to specific locations. Each of the address breakpoint channels is configured by its address register and by the address breakpoint control register. [0956]
  • Address Registers. Each of the two address breakpoint channels includes an Address Register. See Table 32 and Table 33, which show the channel 0 address register and the channel 1 address register, respectively. These are 32 bit read/write registers which are cleared by a hardware reset. Bits 15:0 hold the break address and bits 31:16 hold the break mask. The break address is the memory address at which to cause a breakpoint event. A breakpoint event occurs only if the address breakpoint is enabled and there is a match between the memory address accessed and the break address. The break mask is used to specify which address bits to compare. For example, if all the mask bits are set then the address comparison will be done on all address bits. If, for example, mask bit 0 is cleared and all the rest are set then the comparison will not include bit 0 of the address. This way, an address breakpoint can be generated not only on a specific address but also on a window range of addresses. Table 34 shows the address breakpoint control register. [0957]
    TABLE 32
    31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
    break mask[15:0] break address[15:0]
  • [0958]
    TABLE 33
    31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
    break mask[15:0] break address[15:0]
  • [0959]
    TABLE 34
    bits 31:6   bit 5   bits 4:3   bit 2   bits 1:0
    reserved    en1     mode1      en0     mode0
  • Address Breakpoint Control Register. The address breakpoint control register is a 32 bit read/write register. This register is used to configure the operation of each one of the address breakpoint channels. [0960]
  • Bits 1:0—MODE0: These two bits specify for channel 0 on which event to cause an address breakpoint, as specified in Table 35. Table 35 illustrates the Address Mode (AMODE) corresponding to bits 1:0. [0961]
    TABLE 35
    Mode Breakpoint On
    00 Program Fetch
    01 Data Read
    10 Data Write
    11 Data Read or Write
  • Bit 2—Enable 0 (EN0): When EN0 is set, address breakpoint channel 0 is enabled and can cause a breakpoint event. When this bit is cleared, address breakpoint channel 0 is disabled. [0962]
  • Bits 4:3—MODE1: These two bits specify for channel 1 on which event to cause an address breakpoint as specified in Table 35. [0963]
  • Bit 5—Enable 1 (EN1): When EN1 is set, address breakpoint channel 1 is enabled and can cause a breakpoint event. When this bit is cleared, address breakpoint channel 1 is disabled.
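  • The match rule described above (the break mask selects which address bits are compared, MODE selects the access type) reduces to a short predicate; the following C sketch is illustrative only:

    #include <stdint.h>
    #include <stdbool.h>

    enum access { FETCH, DATA_READ, DATA_WRITE };  /* cf. Table 35 */

    bool addr_break_hit(bool en, unsigned mode, enum access acc,
                        uint16_t addr, uint16_t brk_addr, uint16_t brk_mask)
    {
        bool type_ok = (mode == 0u && acc == FETCH)
                    || (mode == 1u && acc == DATA_READ)
                    || (mode == 2u && acc == DATA_WRITE)
                    || (mode == 3u && acc != FETCH);   /* read or write */
        /* A cleared mask bit excludes that address bit from the compare. */
        return en && type_ok && (((addr ^ brk_addr) & brk_mask) == 0u);
    }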
  • Debug Control Register [0964]
  • The Debug Control Register is a 32 bit read/write register. This register is cleared by a hardware reset. Table 36 illustrates the debug control register according to one embodiment of the invention. [0965]
    TABLE 36
    bit 31   bit 22   bit 21   bit 20   bit 19   bits 16:11   bits 10:0
    trace    enyb     entb     tnot     tand     tid          vba
  • Bits 10:0—Vector Base Address (VBA): This is the Vector Base Address. The VBA points to the starting location in memory of the vector table. The vector table is a 32 word table explained further below. [0966]
  • Bits 16:11—Task ID (TID): The TID is the task ID on which to cause or not to cause a breakpoint. It is used by the task breakpoint and can be used by the address breakpoints as explained by the following control bits. [0967]
  • Bit 19—TAND: When TAND is set, then an address breakpoint will occur only if there is both an address match and the current task ID is equal to the TID. Note: When a patch channel is configured to operate as a program address breakpoint channel, it has the same rules as the dedicated address channels and the TAND is treated the same. [0968]
  • Bit 20—TNOT: When TNOT is set, then an address breakpoint will occur only if there is an address match and the current task ID is different from the TID. Note: When a patch channel is configured to operate as a program address breakpoint channel, it has the same rules as the dedicated address channels and the TNOT is treated the same. [0969]
  • Bit 21—Enable Task Breakpoint (ENTB): This bit enables the task ID breakpoint. When ENTB is set, a task switch to a task ID which is equal to TID will cause a breakpoint event. When this bit is cleared, the task ID breakpoint is disabled. When setting the ENTB bit, the current task ID is compared to the TID and, if equal, there will be a breakpoint. Further task ID breakpoints will occur only upon switching to a new task which is equal to the TID. [0970]
  • Bit 22—Enable Yield Breakpoint (ENYB): This bit enables the yield breakpoint. When ENYB is set, any yield (task switch) will cause a breakpoint event. When this bit is cleared, the yield breakpoint is disabled. [0971]
  • Bit 31—TRACE: When the TRACE bit is set, a breakpoint will occur on every new instruction execution, thus allowing a single step instruction trace. When the TRACE bit is cleared, trace is disabled. [0972]
  • The Vector Table [0973]
  • In case of a breakpoint event or a patch event, the debug module supplies the network processor core with a vector for where to jump. The vector table is illustrated in Table 37. Each event has a different vector that is calculated by taking the 11 bit VBA and concatenating to it a 5 bit offset (see the sketch following Table 37). For example, assume that the 11 bit VBA is all zeros. In this case, the breakpoint vector will point to program address $2, patch 0 will point to $4, and so on. The increments are of 2 instruction spaces for each of the events. [0974]
    TABLE 37
    Address For
    VBA + $0 Reserved for reset
    VBA + $2 Breakpoint
    VBA + $4 Patch 0
    VBA + $6 Patch 1
    VBA + $8 Patch 2
    VBA + $A Patch 3
    VBA + $C Patch 4
    VBA + $E Patch 5
    VBA + $10 Patch 6
    VBA + $12 Patch 7
    VBA + $14 through VBA + $1F Reserved
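  • The vector calculation described above amounts to concatenating the 11-bit VBA with a 5-bit offset; a minimal C sketch (illustrative only):

    #include <stdint.h>

    /* vector = {VBA[10:0], offset[4:0]}; offsets advance by 2 instruction
       spaces per event, per Table 37. */
    uint16_t breakpoint_vector(uint16_t vba11)
    {
        return (uint16_t)((vba11 << 5) | 0x02u);            /* VBA + $2  */
    }

    uint16_t patch_vector(uint16_t vba11, unsigned n)       /* n = 0..7  */
    {
        return (uint16_t)((vba11 << 5) | (0x04u + 2u * n)); /* VBA + $4.. */
    }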
  • Breakpoint Status Bits [0975]
  • According to one aspect of the invention, special status bits located in the processor Task SPR reflect the cause of the breakpoint event. These bits are the PAB, DAB, TB and YB bits. The PAB bit is for a program address breakpoint. The DAB bit is for a data address breakpoint. The TB bit is for a task breakpoint. The YB bit is for a yield breakpoint. These bits are set whenever the relevant breakpoint occurs. These bits are cleared by the RFT instruction. [0976]
  • Agent Interface [0977]
  • According to one aspect of the invention, the agent interface connects the processor to all of the agents in the compound. This interface is used by the network processor to read and write data. [0978]
  • Signal Description [0979]
  • Table 38 provides the agent interface signal list. [0980]
    TABLE 38
    signal name (direction relative to Vobla): description; remarks
    V_AGENT_RA[31:0] (Output): The content of register RA from the AGENT opcode.
    V_AGENT_RAP[31:0] (Output): The content of register RAP. RAP is the RA+1 register.
    V_AGENT_RB[31:0] (Output): The content of register RB from the AGENT opcode, or an 8 bit immediate value. Remark: may be reduced to 16 bits.
    V_AGENT_ID[4:0] (Output): Agent ID. The ID of the selected agent.
    V_AGENT_OPTIONS[9:0] (Output): Various options used by the agents.
    [module_prefix]_read_DATA[63:0] (Input): Data from the agents.
    V_AGENT_WR (Output): Write to agent indication.
    V_AGENT_RD (Output): Read from agent indication.
    V_AGENT_DOUBLE (Output): Load double from agent.
  • Agent ID Allocation [0981]
  • Table 39 below provides the agent ID allocation. [0982]
    TABLE 39
    Agent Name ID Number
    DMA agent 00000-00011
    CRC 00100
    Multireader 01000
    Doorbell 01001
    Timer 01010
    message sender 01011
  • Agent Register Mapping [0983]
  • Table 40 illustrates agent register mapping. [0984]
    TABLE 40
    Agent Name   Register Name   Register Address
    CRC          crc_residue     0
    CRC          bip16_residue   1
    Doorbell     TGMR_L          0
    Doorbell     TGMR_H          1
    Timer        timer_control   0
    Timer        time_stamp      1
  • Network Processor Compound Memory Map [0985]
  • Table 41 illustrates the network processor compound memory map according to an embodiment of the invention. [0986]
    TABLE 41
    Name                                   Address
    dma_agent_context_table0               Vobla_compound_register_base + $0
    dma_agent_context_table1               Vobla_compound_register_base + $2
    dma_agent_context_table2               Vobla_compound_register_base + $4
    dma_agent_context_table3               Vobla_compound_register_base + $6
    dma_token_register0                    Vobla_compound_register_base + $10
    dma_token_register1                    Vobla_compound_register_base + $12
    dma_token_register2                    Vobla_compound_register_base + $14
    dma_token_register3                    Vobla_compound_register_base + $16
    dma_address_error_mask_register0       Vobla_compound_register_base + $20
    dma_address_error_mask_register1       Vobla_compound_register_base + $22
    dma_address_error_mask_register2       Vobla_compound_register_base + $24
    dma_address_error_mask_register3       Vobla_compound_register_base + $26
    channel0_address_register              Vobla_compound_register_base + $30
    channel1_address_register              Vobla_compound_register_base + $31
    address_breakpoint_control_register    Vobla_compound_register_base + $38
    debug_control_register                 Vobla_compound_register_base + $39
    debug_patch_register0                  Vobla_compound_register_base + $40
    debug_patch_register1                  Vobla_compound_register_base + $41
    debug_patch_register2                  Vobla_compound_register_base + $42
    debug_patch_register3                  Vobla_compound_register_base + $43
    debug_patch_register4                  Vobla_compound_register_base + $44
    debug_patch_register5                  Vobla_compound_register_base + $45
    debug_patch_register6                  Vobla_compound_register_base + $46
    debug_patch_register7                  Vobla_compound_register_base + $47
    doorbell_request_register[63:0]        Vobla_compound_register_base + $80 through Vobla_compound_register_base + $BF
  • Communications Processor Implementing a Ring Network [0987]
  • The inventive aspects of the ring network and/or the network processor, as described above, find particular benefit when implemented in combination in a high-performance communications processor in accordance with the present invention. The high performance communications processor (HPCP) of the invention may on occasion be referred to as the Trajan. As will be evident from the following written description, the HPCP may be implemented in various fashions without departing from the true spirit and scope of the invention. Just by way of example, the number of DMA modules, the characteristics of the control processor, the number of ATM interfaces supported, and the number of flexible packet processors may vary. Generally, the flexible packet processor of the present invention may on occasion be referred to herein as the Vobla. [0988]
  • Generally, the HPCP should be capable of supporting a variety of applications in a range of markets. For example, the HPCP may be used for Customer Premises Equipment (CPE) applications, such as for Digital Subscriber Line (DSL) services. DSL, sometimes generically referred to as xDSL, refers to the family of digital lines that carriers may provide, such as ADSL, HDSL, SDSL, and so forth. These technologies are all well understood in the art. DSL CPE applications for the HPCP may include bridges for Ethernet and USB; DSL-Ethernet routers; DSL-home wireless routers; Voice Integrated Access Devices (IADs); and service gateways. The HPCP may also be used for consumer networking equipment, such as home routers (Ethernet and/or wireless) and networked appliances (e.g., Universal Plug 'n Play [UPnP] devices). The HPCP may also be used for access network equipment applications, line card applications, and voice processing applications (e.g., voice gateways). Generally, the HPCP will find beneficial application in any voice or communication processing application. [0989]
  • In sum, the goal of the HPCP is to provide a PHY-neutral communications processor that can be readily integrated with appropriate PHY functionality (e.g., ADSL PHY, SHDSL PHY, xDSL PHY, etc.) to support a myriad of applications on a variety of network platforms based on a single system on a chip (SOC) building block. [0990]
  • According to just one embodiment, the HPCP (e.g., the so-called Trajan I) would have the baseline specifications set forth in Table 42 below. Table 42 is offered solely for purposes of example and the invention is in no way limited to this embodiment. In fact, it is anticipated that continuing advances in the processor art will result in continually changing parameters. [0991]
    TABLE 42
    Processors: 2 × NP, 1 × MIPS MMU
    Clock Speed (MHz): 200 (NP), 266 (MIPS)
    Network Interfaces: 2 × Utopia (8/16 bit), 4 × Ethernet MII/RMII (10/100), 256 time slots TDM I/f
    Expansion Interfaces: External Peripheral Bus (EPB)
    Hardware Accelerators: Cell/Packet lookup; 3 × DMA IO
    Router/Bridge Throughput, ATM-Eth (kpps): 400
    Shaped/Unshaped ATM throughput: 2 × OC-3
  • The communications architecture employed by the HPCP could be a conventional bus-based architecture, a switch fabric type architecture, star-based architecture, or other architecture known in the art. Preferably, the HPCP employs the rings architecture and message based protocol of the present invention, discussed above, whereby each module of the HPCP occupies a position on a ring, as discussed below. [0992]
  • In accordance with one embodiment of the present invention, a communications processing system utilizing a ring network architecture is provided. The communications processing system comprises a plurality of ring members connected in point-to-point fashion along the ring network, a transaction based connectivity for communicating at least one message among at least a portion of the ring members, wherein the message includes information indicative of a destination ring member for which the message is intended and the message is passed around the ring network until reaching the destination ring member, and wherein the destination ring member is adapted to receive the message and remove it from the ring network. The communication processing system, in one embodiment, is implemented on a single chip, while in other embodiments the system is implemented on more than one chip. The information indicative of a destination ring member can comprise a ring member identifier and/or an address corresponding to the destination ring member. In one embodiment, the ring network includes a bridge across the ring network that allows messages to travel from one side to another side without passing through intermediate ring members. [0993]
  • The transaction based connectivity of the system may provide for messages to be passed around the ring network according to a clocking scheme. In one implementation, the clocking scheme provides for the messages to travel one ring member per clock cycle. Similarly, the transaction based connectivity can provide for a plurality of messages to travel the ring network, each message traveling one ring member per clock cycle unless a message is consumed at a given ring member. Likewise, the connectivity may provide for messages comprising transactions to travel the ring network, and wherein the messages comprise one or more of a command, an instruction, a type, an address, and data. [0994]
  • In one embodiment, the message arriving at a non-destination ring member will be passed to the next ring member on the ring network. Alternatively, the message arriving at a destination ring member will be consumed by the destination ring member. In this case, the message can be removed from the ring network while being consumed so that a slot on the ring network is made available. The available slot may enable a downstream ring member to insert a message in the slot. [0995]
  • Furthermore, in one embodiment, each ring member receiving a message is adapted to check a destination address portion of the message to determine if the message is intended for that ring member, and if the destination address portion corresponds to that ring member, the ring member takes the message off of the ring network and consumes the message. [0996]
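  • A per-member forwarding step consistent with this behavior can be sketched in C as follows (the structure fields are illustrative; the text only requires that a message carry destination information and some combination of command, instruction, type, address, and data):

    #include <stdint.h>
    #include <stdbool.h>

    struct ring_msg {
        uint8_t  dest;    /* destination ring member identifier */
        uint8_t  type;
        uint32_t addr;
        uint32_t data;
        bool     valid;   /* an invalid message is an empty slot */
    };

    /* One clock at one ring member: consume the message if it is addressed
       to us (freeing the slot for a later insert), else pass it along. */
    struct ring_msg ring_member_step(uint8_t my_id, struct ring_msg in)
    {
        if (in.valid && in.dest == my_id)
            in.valid = false;   /* taken off the ring and consumed */
        return in;              /* forwarded to the next ring member */
    }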
  • In one embodiment, the at least one message comprises a message that causes ring members to assign address space during configuration of the ring network. This message may comprise an enumeration message. The assignment of address space during configuration allows a processing ring member to subsequently infer the configuration of the ring network. [0997]
  • In accordance with another embodiment of the present invention, a communications processing system utilizing a ring network architecture is provided. The communications processing system comprises a plurality of ring members having unique addresses and connected in a point-to-point fashion along the ring network, a transaction based connectivity for communicating at least one message among at least a portion of the ring members, wherein the message includes a destination ring member address for which the message is intended and the message is passed around the ring network until reaching the destination ring member, and wherein the destination ring member is adapted to receive the message and remove it from the ring network. The communication processing system, in one embodiment, is implemented on a single chip, while in other embodiments the system is implemented on more than one chip. In one embodiment, the ring network includes a bridge across the ring network that allows messages to travel from one side to another side without passing through intermediate ring members. [0998]
  • The transaction based connectivity of the system may provide for messages to be passed around the ring network according to a clocking scheme. In one implementation, the clocking scheme provides for the messages to travel one ring member per clock cycle. Similarly, the transaction based connectivity can provide for a plurality of messages to travel the ring network, each message traveling one ring member per clock cycle unless a message is consumed at a given ring member. Likewise, the connectivity may provide for messages comprising transactions to travel the ring network, wherein the messages comprise one or more of a command, an instruction, a type, an address, and data. The destination ring member address can comprise a starting address for the destination ring member and/or an address within the address space assigned for the destination ring member. [0999]
  • In one embodiment, the message arriving at a non-destination ring member will be passed to the next ring member on the ring network or consumed by the destination ring member. In one embodiment, each ring member receiving a message checks the destination ring member address of the message to determine if the message is intended for that ring member, and if the destination ring member address corresponds to that ring member, the ring member takes the message off of the ring network and consumes the message. If consumed, the message can be removed from the ring network while being consumed so that a slot on the ring network is made available. The available slot may enable a downstream ring member to insert a message in the slot. [1000]
  • In one embodiment, the at least one message comprises a message that causes ring members to assign address space during configuration of the ring network. This message may comprise an enumeration message. The assignment of address space during configuration allows a processing ring member to subsequently infer the configuration of the ring network. [1001]
  • In accordance with yet another embodiment of the present invention, a communications processing system utilizing a ring network is provided. The system comprises a plurality of ring members having unique addresses and communicatively connected in a point-to-point fashion along the ring network and a transaction based connectivity for communicating at least one message among at least a portion of the ring members, wherein the message travels from a first ring member to a second ring member based at least in part on an address assigned to the second ring member, the second ring member being the destination ring member for which the message is intended. The message is passed along the ring network from the first ring member to the second ring member by one or more other ring members each having an address intermediate the addresses of the first and second ring members, wherein the message is received and removed from the ring network upon receipt by the second ring member. The message can include information indicative of the address of the second ring member. The communication processing system, in one embodiment, is implemented on a single chip, while in other embodiments the system is implemented on more than one chip. In one embodiment, the ring network includes a bridge across the ring network that allows messages to travel from one side to another side without passing through intermediate ring members. [1002]
  • In one embodiment, the transaction based connectivity provides for messages to be passed around the ring network according to a clocking scheme. The clocking scheme, in one implementation, provides for the messages to travel one ring member per clock cycle. Similarly, the transaction based connectivity can provide for a plurality of messages to travel the ring network, each message traveling one ring member per clock cycle unless a message is consumed at a given ring member. The message arriving at a non-destination ring member can be passed to the next ring member on the ring network or consumed by the destination ring member. In one embodiment, each ring member receiving a message checks a destination address portion of the message to determine if the message is intended for that ring member, and if the destination address portion corresponds to that ring member, the ring member takes the message off of the ring network and consumes the message. If consumed, the message can be removed from the ring network while being consumed so that a slot on the ring network is made available, where the available slot enables a downstream ring member to insert a message in the slot. The connectivity also may provide for messages comprising transactions to travel the ring network, and wherein the messages comprise one or more of a command, an instruction, a type, an address, and data. [1003]
  • In one embodiment, the at least one message comprises a message that causes ring members to assign address space during configuration of the ring network. This message may comprise an enumeration message. The assignment of address space during configuration allows a processing ring member to subsequently infer the configuration of the ring network. [1004]
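  • One plausible reading of the enumeration mechanism is that the message carries a running base address which each member advances as it claims its own window; the following C sketch rests on that assumption and is not a definitive account of the protocol:

    #include <stdint.h>

    struct enum_msg { uint32_t next_base; };   /* travels around the ring  */
    struct member   { uint32_t base, size; };  /* per-member address window */

    void on_enumeration(struct member *m, struct enum_msg *msg,
                        uint32_t my_size)
    {
        m->base = msg->next_base;   /* claim an address window */
        m->size = my_size;
        msg->next_base += my_size;  /* pass the updated base downstream */
    }

  • After the message circles back to its originator, the total claimed space (and hence the configuration of the ring network) can be inferred.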
  • In accordance with an additional embodiment of the present invention, a communications processor implemented on a chip is provided. The communications processor comprises a network processor including means for processing a plurality of protocols including ATM, frame relay, Ethernet, and IP, said means being programmable using a set of library commands to process additional protocols, and a protocol processor for controlling the network processor, wherein the protocol processor performs control plane processing and the network processor performs data plane processing. Further, the network processor and the protocol processor are ring members on at least one ring network, and wherein the communications processor further comprises a plurality of other ring members on the at least one ring network. The network processor, in one embodiment, includes a plurality of compounds that share a single ring interface to the ring network. The communications processor can be PHY neutral. [1005]
  • The at least one ring network, in one embodiment, comprises multiple ring networks including a protocol processor ring network and a network processor ring network, where the network processor ring network can include a first network processor for transmitting packets and a second network processor for receiving packets. [1006]
  • In another embodiment, the network processor includes ultrafast task switching using active registers for current tasks and shadow registers for preloading next tasks. The communications processor may further comprise multiple DMA controllers for access to external memories. [1007]
  • The protocol processor, in one embodiment, is adapted to perform the following: signaling protocols; protocol management; exception handling; and system configuration and control. Similarly, the network processor can be adapted to perform the following: per-packet processing; packet forwarding; packet classification; quality-of-service handling; and packet reformatting. [1008]
  • The control path protocol support can be provided by the protocol processor and the data path protocol support can be provided by the network processor. Furthermore, the network processor can be adapted to perform zero overhead task switching. [1009]
  • In one embodiment, the network processor includes compound modules operating as parallel engines. The communications processor can be implemented to provide an enterprise integrated access device (EIAD), a multi-tenant unit (MTU) or remote terminal unit (RTU), a media gateway, and/or a voice gateway. [1010]
  • Exemplary Architectures of the HPCP [1011]
  • According to one embodiment, the HPCP is implemented using the rings architecture as illustrated in FIG. 61. This rings-type architecture is implemented on a semiconductor (e.g., on a chip) and is unlike token-ring arrangements in networks. According to FIG. 61, the HPCP SOC 620 employs four rings 622-628 that are connected by three inter-ring bridges 630-634. These bridges, also called sea bridges because they interconnect two disparate rings, have logic such that messages will traverse from the near side ring across the bridge if addressed to the far side ring. If messages are addressed to an address contained within the near side ring, the message is forwarded along the ring as in the usual case. [1012]
  • As illustrated, the HPCP 620 generally divides the modules along the rings according to functionality. There is a receiver (Rx) ring 628 for receiving data transmitted from outside the HPCP chip. There is a transmitter (Tx) ring 626 for transmitting data to go outside the HPCP chip. There is a main ring or control ring (PP Ring) 622 which includes the PP (packet processor) 636, which can be considered the host or CPU (anchor) of the HPCP. There is a packet processor ring 624 which includes several packet processors (i.e., the VC0 638 and VC1 640 network processors) and DMAs 642, 644 for packet processing of the various protocols that are handled by the HPCP 620. In order to reduce latency in messaging, the packet processor ring 624 includes several intra-ring bridges 646, 648, also called land bridges because they provide a bridge-type connection within a single ring. [1013]
  • In certain of the figures that follow, the illustration of the HPCP is not graphically depicted as a rings-type arrangement. However, unless stated otherwise, the arrangements correspond to a rings-type arrangement and logical path. [1014]
  • Generally, the improvement in the HPCP over other communications processors can be tied to, individually and in combination, the use of (1) a flexible packet processor with ultrafast task-switching, and (2) the any-to-any mesh internal rings-type communications architecture. This ensures architecture scalability for higher speed ports or higher port density. Additionally, the HPCP provides (3) a design for low system cost. The usage of low cost memories (DDR-SDRAM) and the unique streamlined memory architecture eliminates the need for high speed SRAM or external lookup engines (CAM). The primary beneficiaries of the HPCP are relatively high-end applications for the CPE and access markets. [1015]
  • Preferably, the HPCP supports a rate of about 1.2 Gbps (simplex serial rate) for L2/L3 wire speed IP/ATM/TDM protocol processing. As indicated above, the HPCP platform includes a core flexible packet processor (RISC [Reduced Instruction Set Computer] network processor technology) and an SOC rings-type interconnect technology. This approach provides a high performance programmable networking platform that permits rapid introduction of new features, new standards, and other enhancements. The robustness of the HPCP allows it to be shared among multiple product lines. According to one embodiment, the HPCP is designed as a 0.18 micron, 520 HS-PBGA (Heat Spread Plastic Ball Grid Array) chip. [1016]
  • FIG. 62 is a schematic diagram of an embodiment of the HPCP 620, sometimes referred to herein as the Trajan. According to this embodiment, the HPCP 620 employs a rings-type communication architecture, which is indicated on FIG. 62 as the Fabric on a Chip 670. The packet processor 672 (also referred to as control packet processor, MIPS, CPU, or simply, the host) functions as the control processor for the HPCP 620. The packet processor 672 can be implemented using any suitable processor. Preferably, the packet processor 672 has the following characteristics: a 266 MHz (preferably, MIPS) processor; MIPS-I Instruction Set; 16K I, 16K D cache; supports Write back and Write forward or through; has cache coherency; supports Direct Map; and has an MMU with 64 TLBs (Translation Look-aside Buffers). Other suitable alternatives to a MIPS processor could be employed. The HPCP embodiment of FIG. 62 employs two network processors 674, 676 (Voblas) for packet processing. Preferably, the network processors 674, 676 are designed in accordance with the flexible packet processor discussed elsewhere herein. Each of the network processors 674, 676 preferably communicates with an operatively connected multi-access SRAM, which preferably has 72 Kbytes of memory. [1017]
  • The HPCP embodiment of FIG. 62 employs three DMA modules, DMA 678, DMA 680, and DMA 682. There also are two DDR-SDRAM controllers 684, 686, each of which is capable of interfacing to a DDR-SDRAM 688, 690 running at 133/166/200 MHz. Each controller supports a 32 bit data bus. The controllers 684, 686 support two masters (DMA and PP) and arbitrate between them. An efficient packing algorithm is used to optimize memory transactions. Coherency is preserved between the two masters and READ and WRITE operations. DMA 678 and DMA 680 can master the two memory controllers accordingly. Each can arbitrate for the memory bus and is capable of bursts up to 64 bytes in a transaction. [1018]
  • The EPB (External Peripheral Bus) interface (I/f) 692 is used to interface to a boot EPROM, Security Accelerators and a DSP (collectively figure element 694). The EPB bus runs at 80 MHz with an asynchronous address/data protocol. The EPB 692 also has five (5) dedicated Chip Selects (CS) and a special 32 bit CS bus transaction. [1019]
  • The HPCP of FIG. 62 includes a number of peripheral modules, including TDM 696, 4×Ethernet (MII and RMII) 698, a first ATM Utopia Level 2 700, a second ATM Utopia Level 2 702, a 3×MFSU 704, and an I2C/SPI SW base 706. [1020]
  • The TDM module 696 may be used to support time division multiplexing connectivity, such as for T1/E1. Preferably, the TDM module 696 supports the following: up to 256 time slots; HDLC (high-level data link control); and a transparent mode. The TDM module can also interface to high-speed TDM busses (backplane) such as H-MVIP, SCSA, H110, and ST-BUS. The 4×Ethernet MII/RMII module 698 preferably supports 10/100 Ethernet connectivity. The 3×MFSU module 704 preferably supports high speed (up to 52 Mbps) HDLC or high-speed UART (Universal Asynchronous Receiver-Transmitter). [1021]
  • The HPCP has two ATM interfaces 700, 702 using Utopia Level 2. Each port can be configured for an 8 bit or 16 bit data path. The ATM port can be configured as a master or as a slave. In a master configuration, one port (subscriber port) can master up to 124 PHYs and the second port (uplink or network port) can master up to 15 PHYs. Both ports can support an Extended Utopia Mode where the ATM cell length can be programmably extended from 53 bytes up to 64 bytes. [1022]
  • Ring Interface on an EPB [1023]
  • As discussed previously, in at least one embodiment, the HPCP is implemented using a ring architecture and message protocol as disclosed herein. As illustrated with reference to FIGS. 63-67, an external interface 720 may be implemented along with the EPB 692. An external FPGA 722 that sits on the EPB busses may play the role of external ring keeper; the Anchor can be external to the network processor and on the FPGA. Of course, instead of an FPGA it could be another HPCP. The input's job is to disable EPB operation for the current transaction and enable movement of ring data. This input is driven either by the FPGA or by the second HPCP. The output's job is to tell the second HPCP or the external FPGA, whichever is the ring keeper, that the output data is intended for it. Regular EPB customers (like Flash) will look at the output as an additional enable. One advantage of this arrangement is that a number of pre-existing pins are used for part-time ring transactions. The speed of ring messaging is reasonably high (the same speed as the original EPB). The changes to the existing EPB are minimal. The ring side implementation is an exact copy of a bridge plus a state machine. [1024]
  • In the implementation illustrated in FIG. 63, 32 bits of data in/out are used to carry messages. There is also the potential to use the address bits as well, thus increasing the throughput but complicating the design. Message_sync 724 is a relatively simple block that takes care of turning 60 (or 92) bits of outgoing message into several (2 . . . 3) transactions on the EPB-like interface. It also turns incoming data (from 2 . . . 3 transactions) into messages. On the inside, message_sync interfaces with a regular bridge 726. Since the EPB DMA 728 can potentially sit on a busy ring, message_sync 724 and its bridge 726 can be placed on the less busy ring. [1025]
  • The mux 730 takes data either from the EPB 692 or from message_sync 724, depending on the transaction. Handshake signals basically ask the EPB 692 to give up a cycle; in the other direction, the EPB 692 acknowledges the tristate or mux surrender. Using this fact, the chip selects can be disabled in ring-oriented transactions. [1026]
  • In order to program the message_sync 724 and EPB 692 to enable/disable external ring operation, the hardware can sample a pin during power up reset. First, hardware reset puts message_sync in disabled mode such that during Enumeration, it passes on the Enumeration without attempting to talk to the other chip. The message_sync 724 assigns to itself the space of one address. After the initial Enumeration, the PP enables (or not) the message_sync to work. If message_sync 724 is enabled, a second Enumeration is done. This time message_sync transmits the Enumeration message to the other chip. Then it waits for the message to circle back to it. [1027]
  • The HPCP chip requires interfaces to various devices, which can serve as slaves, masters, or both. Some of these devices are: DSPs 732; encryption engines 734; external buses such as PCI; external memories; and other HPCP chips. Some of these devices may directly connect to the EPB port on the chip. However, in order to use these devices, a complex handshake is often required, which would force the PP to assist in each transfer. In the case where these devices should initiate a data transfer into the HPCP, a special mechanism is required in order to avoid polling on the EPB port. The interface described is designed to allow a more robust and efficient connection of such devices to the chip, and is consistent with the HPCP hardware and software architecture. FIGS. 64-67 describe the interface, starting from a system view and ending with detailed block diagrams of the components. [1028]
  • Operation [1029]
  • The interface described above implements a ring interface to external logic, allowing the HPCP 740 to write out messages, and external devices to generate arbitrary ring messages in the HPCP 740. The FPGA 742, making the interface between the HPCP and the external devices 744, 746, serves as shared memory. This memory can be independently accessed from both sides. In addition, accesses from the HPCP 740 can send messages to the external devices, and accesses from the external devices can generate messages on the HPCP ring. The DPR (dual port RAM) 780 is seen either as random access memory (RAM) or as a FIFO, depending on the access address. Two FIFOs 748, 750 are implemented: one for receiving ring messages from the HPCP and one for sending messages to the HPCP. [1030]
  • Message Generation by the HPCP [1031]
  • When the ring interface recognizes a message to the external interface, a write burst is issued to the memory controller 760. This write has a fixed length of 128 bits. The write is always targeted to the same address, being the write FIFO address in the external device. The external device indicates to the HPCP 740 when data is being read from the FIFO. The HPCP 740 knows in advance the size of the write FIFO, and therefore knows when it is possible to issue more write commands to the memory controller. When it is no longer possible to issue writes, and all write buffers on the way are full, the OK signal to the ring interface is de-asserted. [1032]
  • Message Generation by the External Device [1033]
  • The FIFO mapping is used to queue messages to be read by the HPCP. The FIFO memory is 128 bits wide (not all bits have to be implemented in hardware). Each ring message occupies four 32-bit data entries, to be read by the HPCP. When the message is complete (all 128 bits written) the SYNC output to the HPCP is activated, indicating that a message has been written to the message queue. This allows the HPCP to keep track of the number of messages written, and to read the appropriate number of messages. [1034]
  • The HPCP counts the number of messages entered into the queue in a request counter, and the number of read messages in a service counter. When there are pending messages (the request counter is greater than the service counter) and the appropriate read port is free, the HPCP issues a 128-bit read. The implementation of this read request depends on the port type. Ports that support burst transfers are issued one 128-bit burst read. Ports that support only 32-bit data transfers are issued 4 reads. When the read request is complete, the service counter is incremented, indicating that an external message is served. [1035]
  • The data read from the port is used to generate a message. When all 128 data bits are received in the message sender, a message is sent to the ring interface. [1036]
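  • The counter handshake just described can be modelled in a few lines of C (illustrative only; the names are not taken from this disclosure):

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t request_count;  /* messages written (SYNC pulses)   */
    static uint32_t service_count;  /* messages read back by the HPCP   */

    void on_sync_pulse(void)    { request_count++; }  /* device wrote 128 bits */
    void on_read_complete(void) { service_count++; }  /* HPCP finished a read  */

    /* Messages are pending while the counters differ (request > service). */
    bool messages_pending(void)
    {
        return request_count != service_count;
    }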
  • Interrupts from HPCP to the External Device [1037]
  • The HPCP 740 can write data to special addresses that cause an interrupt to the external device. These addresses can either be mapped for interrupts only or for interrupts and data (in the DPR 780). [1038]
  • General-Purpose Data Transfer [1039]
  • Besides sending messages from the external device to the HPCP, this interface serves as a buffered link for general-purpose data transfer. The DPR can be read and written by both the HPCP and the external device. When the HPCP moves data for processing in the external device, it writes the data to the DPR, and then causes an interrupt in the external device by writing to an interrupt address. [1040]
  • The external device processes the data and returns it to the DPR. It then generates a message to the HPCP, instructing it to read back the data from the DPR. Since the entire DPR can be mapped as a FIFO, the external device can also write the entire data directly to the NP memory in the HPCP, and then notify the NP that the data is complete. [1041]
  • Supporting Multiple External Devices [1042]
  • One interface can support several external devices. Many DPR blocks can be implemented in a single FPGA, letting each of the external devices function independently. The message queue can either be unified into a single FIFO with write arbitration, or can be made of several FIFOs arbitrated during the message reads. In either case, read or write arbitration is performed in the FPGA and is transparent to the HPCP chip. [1043]
  • Traffic Management [1044]
  • In one embodiment, traffic that is already on the ring gets priority. Also, modules may be designed to consume incoming messages without delay, or with well bounded delay. Further, a virtual watchdog timer can be implemented in the PP or one of the network processors. In this case, the watchdog timer periodically sends a message to itself via the ring. If this message has not returned by the time the task is reawakened, the ring is taken to be locked and in need of a reset (see the sketch below). [1045]
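  • A sketch of that watchdog, with hypothetical helpers (ring_send_to_self(), self_msg_arrived(), reset_ring()) that are not part of this disclosure:

    #include <stdbool.h>

    extern void ring_send_to_self(void);  /* mail a message around the ring */
    extern bool self_msg_arrived(void);   /* did the last one come back?    */
    extern void reset_ring(void);

    /* Run each time the watchdog task is reawakened. */
    void watchdog_tick(void)
    {
        if (!self_msg_arrived())
            reset_ring();         /* message never circled: ring is locked */
        else
            ring_send_to_self();  /* arm the next round trip */
    }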
  • Memory Considerations [1046]
  • Network processor RAM can grow up to, for example, 64 KB. The problem, however, is that this RAM uses 16 bits of ring addressing space. So with 20 bits of address there can be approximately 8 network processors in a reasonable system. The maximum theoretical number of network processors is 16. But space may be needed for other modules as well. There is no great penalty to extend the ring address space to more than 20 bits, and this can be done to accommodate design necessities; for this example, assume just 20 bits. [1047]
  • Inside the network processor compound there is more than RAM. There are doorbells, debug, a timer and some more. They all need address space, but much smaller. If they are assigned their own address space, the resulting address space used by the network processor compound will be 128 KB. This is because only 65 KB is actually used, but because the address space is rounded up to the next power of 2, 65 KB becomes 128 KB. [1048]
  • Another aspect of the present invention is to steal a little bit of space from RAM on the rings and assign the low 1 KB of ring address space to all the little modules. For example, the doorbells take 64 entries of address space (32-bit entries). When a write message arrives for (vobla_base_address+32), it is routed to the doorbells and not to RAM. [1049]
  • This effectively protects the lower portion of the RAM from the ring network. The network processor can still load/store and even fetch from there, because load/store does not access the rings (see the sketch below). [1050]
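  • The resulting ring-address decode can be sketched as follows (the window size constant and names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define SMALL_MODULE_WINDOW 0x400u  /* low 1 KB stolen from RAM */

    /* True when a ring write to the compound should be routed to the small
       modules (e.g., the doorbells at vobla_base + 32) rather than to RAM. */
    bool routes_to_module(uint32_t ring_addr, uint32_t vobla_base)
    {
        return (ring_addr - vobla_base) < SMALL_MODULE_WINDOW;
    }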
  • FIGS. 68 and 69 illustrate two typical scenarios for Tx and Rx Ethernet as may be implemented in accordance with the present invention. To summarize the Ethernet compounds: [1051]
  • The [1052] Rx manager 802 is adapted to: send a regular request (doorbell, task ID and viscode); send the header ahead—it knows how many bytes, the status, and where they go; service multi-read requests for moving data to network processor RAM; and know when to switch to an urgent request.
  • The [1053] Tx manager 812 is adapted to: know when to start transmitting and when to retransmit; know when and how to issue a regular request—doorbell, task ID, viscode; perform the free-buffer-count send-ahead; perform an urgent request—last buffer, and not the last in it; resend a doorbell on request if there are free entries in the FIFO—this is used by the task that adds frames to the transmit queue; and keep a RAM status FIFO of finished frames—it sends the Tx completion status word and the place to put it.
  • Rx operation: [1054]
  • (1) Rx Frame starts incoming. [1055]
  • (2) It fills one entry (64 bytes) in fifo. [1056]
  • (3) Header+Status is pushed ahead to network processor RAM. [1057]
  • (4) Ring doorbell. [1058]
  • (5) Network processor switches to service the task. [1059]
  • (6) Network processor examines the header. [1060]
  • (7) Network processor sets up CRC snooper, especially the count. [1061]
  • (8) The network processor sends a multi-read request from the Rx FIFO. This takes 12+4 clocks, so the network processor does not switch out; it just polls the CRC snooper at the end. If, after reading the whole FIFO entry, there are still valid entries, a new doorbell is rung and a new header is sent ahead. [1062]
  • (9) Network processor issues DMA write request and yields out. [1063]
  • (10) DMA agent in network processor builds the messages to DMA based on the DMA opcode, src registers data and DMA context registers. This context has the knowledge of DMA address, token availability, little/big endian, etc. Part of communication with DMA is also a new token request. [1064]
  • (11) When the DMA is done, it sends a doorbell to reawaken the task so it can continue the work. (A schematic sketch of this receive flow is set forth below.) [1065]
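  • The following C skeleton is a schematic rendering of steps (5)-(11); the primitives (wait_doorbell, multi_read, dma_write, task_yield) stand in for the network processor's doorbell, agent and task-switch mechanisms and are assumptions, not the actual instruction set.

    #include <stdint.h>
    #include <stddef.h>

    /* Assumed platform primitives standing in for NP hardware mechanisms. */
    extern void wait_doorbell(void);
    extern void crc_snooper_setup(uint32_t count);
    extern void poll_crc_snooper(void);
    extern void multi_read(int src_fifo, void *dst, size_t bytes);
    extern void dma_write(const void *src, size_t bytes);
    extern void task_yield(void);

    struct frame_header { uint32_t byte_count; uint32_t status; };

    #define RX_FIFO          0
    #define FIFO_ENTRY_BYTES 64u

    extern struct frame_header rx_header; /* header pushed ahead into NP RAM */
    static uint8_t np_ram_buf[2048];

    void rx_task(void)
    {
        for (;;) {
            wait_doorbell();                         /* (4)-(5): task awakened      */
            crc_snooper_setup(rx_header.byte_count); /* (6)-(7): examine the header,
                                                        program the CRC snooper     */
            multi_read(RX_FIFO, np_ram_buf,          /* (8): ~12+4 clocks, so poll  */
                       FIFO_ENTRY_BYTES);            /* rather than switch out      */
            poll_crc_snooper();
            dma_write(np_ram_buf, rx_header.byte_count); /* (9)-(10): DMA agent     */
            task_yield();                            /* (11): the DMA completion
                                                        doorbell reawakens the task */
        }
    }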
  • Tx Operation: [1066]
  • (1) The task that adds frames to the transmit queue adds a frame and also sends a message to the transmitter FIFO; if the transmitter is not doing anything, it rings the doorbell of the transmit task. [1067]
  • (2) The transmit task is awakened by the doorbell. [1068]
  • (3) A DMA read is issued and the network processor switches out. [1069]
  • (4) When the DMA is finished, a multi-read is issued from network processor RAM to the Ethernet Tx. [1070]
  • (5) When a FIFO entry is full, the Tx starts transmitting. [1071]
  • (6) The Tx FIFO updates the number of empty FIFO entries in network processor RAM. [1072]
  • (7) If the task detects empty buffers it can fill, it fills them and retires.
  • (8) When a FIFO entry is empty, the free count is sent ahead and the doorbell is rung. [1073]
  • (9) If the last buffer is half full and it is not the last one, the Ethernet FIFO issues an urgent request.
  • (10) Each time a frame is finished by the Tx, the manager sends a status word to a circular FIFO in RAM. The manager uses a single address plus a 2-3 bit counter to create the address; it also writes the counter value to a fixed location. (A sketch of this addressing is set forth below.) [1074]
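  • A hypothetical sketch of the step (10) addressing, assuming a 3-bit counter (giving an 8-entry circular FIFO) and arbitrary locations in NP RAM:

    #include <stdint.h>

    #define STATUS_FIFO_BASE  0x00001000u /* assumed circular FIFO location in NP RAM */
    #define STATUS_COUNT_ADDR 0x00001020u /* assumed fixed location for the counter   */
    #define COUNTER_BITS      3u          /* a 2-3 bit counter; here, 8 entries       */

    static uint32_t tx_status_counter;

    void tx_post_status(uint32_t status_word, uint8_t *np_ram)
    {
        uint32_t idx  = tx_status_counter & ((1u << COUNTER_BITS) - 1u);
        uint32_t addr = STATUS_FIFO_BASE + idx * 4u;

        *(uint32_t *)(np_ram + addr) = status_word;  /* single base address plus a
                                                        small counter forms the entry */
        tx_status_counter++;
        *(uint32_t *)(np_ram + STATUS_COUNT_ADDR) = tx_status_counter;
                                                     /* counter value kept at a fixed
                                                        location for the reader       */
    }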
  • Programming Model for the HPCP [1075]
  • FIG. 70 illustrates the [1076] programming model 830 that may be employed for the HPCP. According to FIG. 70, the packet processor 832 (PP or control packet processor [CPP]) operates as the controller for the HPCP, performing such control plane functions as signaling protocols, protocol management, handling exceptions (faults), and system control and configuration. The network processors 834, 836 perform data plane functions such as per-packet handling, forwarding decisions, packet classification, quality-of-service (QOS) handling, queuing, scheduling and packet re-formatting.
  • Data Path Protocol Support for the HPCP [1077]
  • FIG. 71 illustrates the data path and control [1078] path protocol support 840 provided according to a preferred embodiment of the HPCP. This data path protocol support is provided by the network processor (e.g., flexible packet processor) engine of the HPCP. Each protocol capability shown in FIG. 71 is labeled according to its position in the Open Systems Interconnection (OSI) layered protocol model. The legend for FIG. 71 is as follows: (1)=Layer 1; (2)=Layer 2; (2*)=Layer 2 inter-working; (2.5*)=Layer 2.5 inter-working; and (3*)=Layer 3 inter-working.
  • The boxes in FIG. 71 labeled as (SM) illustrate the signaling and management provided in order to manage the data path protocol support according to a preferred embodiment of the HPCP. Preferably, the signaling and management operations shown in FIG. 71 correspond to the control plane operations performed by a CPP such as that shown in FIG. 63. [1079]
  • A Packet Processor for the HPCP [1080]
  • A flexible packet processor that could be employed in the HPCP typically includes capabilities, such as zero-overhead switching, not normally present in general purpose processors. Accordingly, the preferred packet processor provides the following characteristics: [1081]
  • nearly zero overhead task switch; [1082]
  • a hardware scheduler (next_task_id) with a strict priority scheme; [1083]
  • support for an unlimited number of threads/tasks (e.g., 32 simultaneous tasks); [1084]
  • allows connection to multiple external memories in parallel; [1085]
  • modular interface to accelerators; [1086]
  • compiler friendly; [1087]
  • tailored instruction set, with about 60 instructions for: ALU (Arithmetic Logic Unit), data manipulation, flow control, load/store, task management (yield), agent (Accelerators), SPR (Special Purpose Register) move, and the like. [1088]
  • FIG. 72 is a block diagram of the [1089] packet processor 636 employed in the HPCP 620 (FIG. 61) according to one embodiment of the invention. The packet processor 636 of FIG. 72 includes a packet processor core 850 (Vobla core), an internal memory 852 for programming and data; and a series of support submodules (compounds) for the packet processor, such as a core debug 854, a doorbell 856, a CRC 858, timers 860, DMA agent 862, and other agents 864. There is also an external interface 866 for interfacing to the fabric. The packet processor core 850 includes a program sequencer 870 that further includes a sequencer 872, a decoder 874 and a task switch block 876. There is also a load/store unit 880, a preload/bump unit 882, a register file unit 884, an arithmetic logic unit 886, and an agent interface module 888. A multiplexer 890 is disposed between the internal memory 852 and the load/store unit 880 and preload/bump unit 882. The packet processor 636 of FIG. 72 includes two source buses and a destination bus in the core, and an agent bus for interfacing with the agents.
  • FIG. 73 illustrates an [1090] exemplary processing pipeline 900 for a packet processor used in the HPCP according to an embodiment of the invention. The pipeline 900 of FIG. 73 shows the steps carried out for the execution of each packet processor instruction. According to FIG. 73, first an instruction is fetched. Then the instruction is decoded. The address for data to be accessed is then calculated. The source registers are read and the instruction is executed. The result is then written into the destination register.
  • Quality of Service Features for the HPCP [1091]
  • The HPCP may incorporate a number of quality of service (QOS) features according to one embodiment of the invention. For example, the HPCP may incorporate one or more of the following QOS operations: output queuing and scheduling; cell/frame pacing; IP classification (behavior aggregator); lookup engines; and congestion management. Preferably, these QOS operations are carried out by the packet processor implemented in the HPCP. The HPCP may provide frame-based output scheduling using an output scheduler. The output scheduler may provide a frame-based service that includes: up to 8 configurable queues 910-924 per virtual/physical transmit queue; up to M ports of Strict Priority (SP) [1092] 930; up to N ports of WFQ (Weighted Fair Queue) 932; and up to L ports of Low Priority (LP) 934.
  • FIG. 74 illustrates the output scheduling for the HPCP according to an embodiment of the invention. [1093]
  • Work-conserving schedulers: the scheduling order is to empty strict-priority ports 1-M first, then empty ports M-N according to the scheduler, and then empty low-priority ports N-L (a sketch of this order follows). The HPCP may provide cell/frame pacing according to an embodiment of the invention. For example, an ATM pacer could employ a calendar wheel algorithm and provide a cell-based service with traffic management for UBR, UBR+, CBR, VBR, and VBRrt. [1094]
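  • A hedged C sketch of that work-conserving order (drain strict-priority ports first, then the WFQ group per its arbiter, then low-priority ports); the port counts and queue/arbiter functions are illustrative assumptions:

    #include <stdbool.h>

    enum { M = 2, N = 5, L = 8 };  /* illustrative SP/WFQ/LP port boundaries */

    extern bool port_has_frame(int port);
    extern void transmit_from(int port);
    extern int  wfq_pick(int first, int last); /* assumed WFQ arbiter; -1 if empty */

    void schedule_one_frame(void)
    {
        for (int p = 0; p < M; p++)            /* empty 1-M: strict priority  */
            if (port_has_frame(p)) { transmit_from(p); return; }

        int w = wfq_pick(M, N);                /* empty M-N per the scheduler */
        if (w >= 0) { transmit_from(w); return; }

        for (int p = N; p < L; p++)            /* empty N-L: low priority     */
            if (port_has_frame(p)) { transmit_from(p); return; }
    }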
  • A frame-based pacer (bandwidth limiter) may provide pacing per port in order to limit the port overall output to a predefined rate (e.g., allow a 100 Mbps uplink to be limited to a 12 Mbps service if required). [1095]
  • Combined QOS scheduling and pacing may be implemented in the HPCP as shown in FIG. 75. According to FIG. 75, the ports are fed to the configurable queues, which are then output as a UBR (unspecified bit rate) [1096] 940, VBR (variable bit rate) 942 or CBR (constant bit rate) 946 data stream to the calendar wheel algorithm 948. The output of the calendar wheel algorithm 948 is fed to the Utopia interface 950.
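  • For illustration, a minimal calendar-wheel sketch of the kind of cell pacing referenced above; the slot count, collision policy and data types are assumptions rather than the HPCP's actual pacer:

    #include <stddef.h>

    #define WHEEL_SLOTS 1024u

    struct cell;  /* opaque ATM cell */
    struct cell_node { struct cell *c; struct cell_node *next; };

    static struct cell_node *wheel[WHEEL_SLOTS]; /* one list per cell-time slot */
    static size_t now;                           /* current slot                */

    /* Schedule a cell 'delta' cell-times ahead (e.g., delta = line rate / PCR). */
    void wheel_insert(struct cell_node *n, size_t delta)
    {
        size_t slot = (now + delta) % WHEEL_SLOTS;
        n->next = wheel[slot];
        wheel[slot] = n;
    }

    /* Advance one cell time; returns the cell due now, or NULL for an idle cell. */
    struct cell *wheel_tick(void)
    {
        now = (now + 1) % WHEEL_SLOTS;
        struct cell_node *n = wheel[now];
        if (!n) return NULL;           /* idle/unassigned cell time */
        wheel[now] = n->next;
        if (wheel[now]) {              /* one cell per cell time: push any
                                          remaining collision to the next slot */
            size_t next = (now + 1) % WHEEL_SLOTS;
            struct cell_node *tail = wheel[now];
            while (tail->next) tail = tail->next;
            tail->next  = wheel[next];
            wheel[next] = wheel[now];
            wheel[now]  = NULL;
        }
        return n->c;
    }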
  • The HPCP may provide IP packet classification according to an embodiment of the invention. Preferably, the HPCP provides IPv4 packet classification. [1097]
  • The HPCP may provide this feature based on up to 512 classification rules that are prioritized by order. The packet classification is based on 5, or on as many as 7, matching fields: IP Source Address; IP Destination Address; Protocol ID; TCP/UDP Source Port Number; TCP/UDP Destination Port Number; Type of Service (TOS) bits; and Physical/Logical I/f Port Number (the last two fields being the optional ones). The matching criteria may be based on an exact match, a prefix match, and/or a range match on each field. Classification rules can be set dynamically by protocols such as MPLS or RSVP, or manually. [1098]
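  • The following sketch illustrates one way such ordered rules with exact/prefix/range matching could be represented; the structures and field ordering are assumptions consistent with the text, not the HPCP's internal rule format:

    #include <stdint.h>
    #include <stdbool.h>

    enum match_kind { MATCH_ANY, MATCH_EXACT, MATCH_PREFIX, MATCH_RANGE };

    struct field_rule {
        enum match_kind kind;
        uint32_t value;      /* exact value, prefix bits, or range low */
        uint32_t mask_or_hi; /* prefix mask, or range high             */
    };

    static bool field_matches(const struct field_rule *r, uint32_t v)
    {
        switch (r->kind) {
        case MATCH_ANY:    return true;
        case MATCH_EXACT:  return v == r->value;
        case MATCH_PREFIX: return (v & r->mask_or_hi) == r->value;
        case MATCH_RANGE:  return v >= r->value && v <= r->mask_or_hi;
        }
        return false;
    }

    #define NUM_FIELDS 7 /* src/dst IP, protocol, src/dst port, TOS, i/f number */

    struct class_rule { struct field_rule f[NUM_FIELDS]; int action; };

    /* Rules are prioritized by order: the first matching rule (of up to 512) wins. */
    int classify(const struct class_rule *rules, int n,
                 const uint32_t key[NUM_FIELDS])
    {
        for (int i = 0; i < n; i++) {
            bool hit = true;
            for (int j = 0; j < NUM_FIELDS && hit; j++)
                hit = field_matches(&rules[i].f[j], key[j]);
            if (hit)
                return rules[i].action;
        }
        return -1; /* no rule matched */
    }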
  • The HPCP may also provide address lookup engines according to an embodiment of the invention. At [1099] Layer 2, the following address lookup capability is provided:
  • Ethernet MAC (Media Access Control) Address Uni-cast/Multicast. [1100]
  • ATM VPI (Virtual Path ID)/VCI (Virtual Connection ID). The algorithmic approach supports single PHY and multi PHY. [1101]
  • MPLS Label Lookup. [1102]
  • At [1103] Layer 3, the following address lookup capability is provided:
  • IPv4 LPM (Longest Prefix Match) lookup. [1104]
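  • As a purely functional illustration of LPM (a lookup engine would use a far more efficient structure; the linear scan below is only for clarity), the longest matching prefix is the matching route with the greatest prefix length:

    #include <stdint.h>

    struct route { uint32_t prefix; uint32_t mask; uint8_t len; int next_hop; };

    int lpm_lookup(const struct route *tbl, int n, uint32_t dst_ip)
    {
        int best = -1, best_len = -1;
        for (int i = 0; i < n; i++)
            if ((dst_ip & tbl[i].mask) == tbl[i].prefix && tbl[i].len > best_len) {
                best     = tbl[i].next_hop; /* longer prefix wins */
                best_len = tbl[i].len;
            }
        return best; /* -1 if no route matches */
    }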
  • The HPCP may also provide congestion management QOS according to an embodiment of the invention. The congestion management QOS includes random early detection (RED) per queue for frame-based transmit queues, and ATM congestion recovery via EPD and PPD (Early Packet Discard and Partial Packet Discard, respectively). [1105]
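  • A textbook-style RED decision of the kind referenced above; the thresholds, maximum drop probability and averaging weight are illustrative assumptions, not HPCP parameters:

    #include <stdbool.h>
    #include <stdlib.h>

    #define MIN_TH 32.0   /* no drops below this average queue length      */
    #define MAX_TH 128.0  /* all packets dropped above this average        */
    #define MAX_P  0.1    /* drop probability as the average nears MAX_TH  */
    #define W_Q    0.002  /* EWMA weight for the average queue length      */

    static double avg_q;

    bool red_should_drop(unsigned instant_qlen)
    {
        avg_q += W_Q * ((double)instant_qlen - avg_q); /* EWMA update */
        if (avg_q < MIN_TH)  return false;
        if (avg_q >= MAX_TH) return true;
        double p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH);
        return ((double)rand() / (double)RAND_MAX) < p; /* probabilistic drop */
    }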
  • Exemplary Embodiments Showing Beneficial Applications for the HPCP [1106]
  • The HPCP (Trajan) is a versatile communications processor that can be used in many application scenarios. The HPCP's frame, cell and circuit processing capabilities make it well-suited for access applications. Set forth below are some exemplary application scenarios where the HPCP can be used as an SBC (Single Board Computer) or in a line card application in a chassis configuration. [1107]
  • FIG. 76 illustrates an exemplary application of the HPCP in order to provide an Enterprise Integrated Access Device (E-IAD) [1108] 960.
  • Enterprise IADs are used at the edge of a corporate network. This class of box or device is usually used at the edge of a corporate remote office. The enterprise IAD manages the traffic from the internal LAN (Local Area Network) to the external WAN (Wide Area Network). The WAN connectivity can be a dedicated leased line (Intranet), connectivity to an ISP (Internet Service Provider), or both. An IAD will typically also handle voice traffic, which may come from a direct connection to a PBX, or be derived voice (over either ATM or IP networks). [1109]
  • The major tasks that an IAD needs to perform include routing, bridging, QoS prioritization (for voice packets), and inter-working functionality ([1110] RFC 1483, T1 emulation using CES or FRF). The various uplinks (WAN access methods) may be ATM, Frame Relay, and Ethernet. The media used by the uplink are typically nxT1 for both ATM and Frame Relay, and fiber for Ethernet and ATM.
  • FIG. 77 illustrates an exemplary application of the HPCP in order to provide a Multi Tenant Unit (MTU)/Remote Terminal Unit (RTU) [1111] 970. An MTU is very similar to the IAD in design. Both applications reside in the customer premises.
  • The MTU device is physically located in a basement of a building, providing distribution of high speed Internet access to a building. Typical applications will distribute xDSL connections to the offices/flats of a building using the existing copper infrastructure. The networking architecture will be stackable boxes using Ethernet or ATM as the backbone network. The MTU will be connected to an external edge router or the router functionality can be integrated into the system. [1112]
  • RTUs have similar functionality to an MTU (e.g., distribution of xDSL connectivity to a remote neighborhood). Unlike an MTU, however, an RTU is physically located outside the premises: it is managed and maintained by the ILEC or CLEC (Competitive Local Exchange Carrier). RTU functionality may be considered that of a DSLAM, meaning the aggregation of subscribers' traffic into a high-speed uplink. In terms of functionality, the RTU may be considered an ATM switch. [1113]
  • The exemplary embodiment of FIG. 77 shows the MTU configuration, where the HPCP can provide up to 62 xDSL subscriber ports and 10/100 Ethernet to the backbone network. In this scenario, the HPCP will perform the IP routing functionality or Ethernet bridging via [1114] RFC 1483.
  • In the RTU case, the HPCP will perform ATM switching functionality, whereby user traffic will be policed according to the subscribers' contracts on the subscriber side, and shaped towards the network side on the aggregate (VP shaping). In this case, there is support for a total of 61 subscribers. In the RTU case, the POTS (Plain Old Telephone System) lines that are terminated at the RTU can either be backhauled on a separate TDM link, in which case there is no processing involving the HPCP, or be packetized over ATM (CES or AAL2 trunking) using one pipe to backhaul both data and voice services. [1115]
  • Other exemplary uses of the HPCP include its application as a media gateway (MG) and voice gateway (VG). Many telecom operators are updating their networks to support packetized voice services. One of the main driving forces is the savings in infrastructure support that result from an operator being able to maintain one network supporting both voice and data services. [1116]
  • A media gateway is a network element that links dissimilar networks, such as TDM to ATM or TDM to IP networks. Conceptually, the media gateway consists of four elements: a TDM I/f, a transcoding engine, a gateway controller, and a packet network interface. On the circuit-switched network side, a line card is used to connect the TDM channels from the PSTN to the gateway. A transcoding engine performs processing to convert between standards. A gateway controller manages the gateway and call routing. Finally, a packet network interface routes calls between the gateway and the packet infrastructure. [1117]
  • FIG. 78 illustrates one exemplary application of the HPCP (Trajan) in a [1118] media gateway application 980. In the proposed scheme, the HPCP will perform the networking protocols—both data path (termination and packetization of AAL2 or RTP) and control using the PP (signaling protocols such as MGCP, V5.2, GR-303). External DSPs will perform the transcoding functions.
  • As shown in FIG. 78, an array of DSPs can be connected to the HPCP EPB (External Peripheral Bus). According to a proposed approach, FPGA mediator logic is used in order to boost the total system performance and to offload the PP processing bottleneck. Since many DSP vendors have a HOST PORT I/f as the mechanism to transfer data into/out of DSP memory, each transfer requires some control transactions (write to host port control register). This operation is costly and requires the involvement of the PP in each transfer. When the number of transactions is high, the PP will become a bottleneck. The solution is to create a protocol between the FPGA and the HPCP that can run in a burst mode and have the FPGA handle and manage the control side. The HPCP provides packet network interfaces both for ATM (Utopia) and for IP (Ethernet or POS). Signaling information for the TDM network can be transferred to the TDM cross connect using the HPCP's TDM ports. [1119]
  • In a trunking gateway application, the HPCP can be connected both to the TDM network and to the packet network and can perform the entire application. [1120]
  • FIG. 79 illustrates another exemplary application of the HPCP for a wireless access network (AN) [1121] 990. Wireless access networks consist of Base Transceiver Stations (BTS in 2G and NODE-B in 3G) and Base Station Controllers (BSC in 2G and RNC in 3G) that aggregate BTSs. The BTS interfaces between the radio network (RN) and the wireline access network. The BSCs manage radio resources and network functions between multiple BTSs and exchange traffic with the media gateway and the packet switching node in the wireline core transport network.
  • Generally, a BTS is connected to the WAN using T1/E1 lines. The transport layer on the WAN is either ATM or IP. For utilization and QoS reasons, when ATM transport is chosen, AAL2 is chosen as the transport layer. In this case, the BTS needs the following functionalities: ATM UNI functionality; wire-speed support for AAL2-Mux (I.366.1, I.366.2); and Inverse Multiplexing for ATM (IMA). [1122]
  • When the transport layer is IP-based, the BTS architecture will require the following functionalities: IP termination point; IP QoS support (IP classification, DiffServ and enhanced queuing/scheduling algorithms); RTP/UDP/IP header compression; and wire-speed support for PPP-Mux and/or ML-PPP. [1123]
  • In both architectures, the HPCP can be used as the central system processor based on its ability to process wire-speed ATM and IP, with 8 T1/E1 interfaces to the WAN and a Utopia or 10/100 interface to the backplane. The HPCP can also be used in the BSC as the aggregation processor. In this case, the processor needs to perform IP routing and ATM switching (AAL2 switching) at OC-3 rates (wire-speed). [1124]
  • FIG. 80 illustrates an exemplary application of the HPCP for a [1125] multi-service access platform 1000. A multi-service access platform combines numerous functions, services, access technologies and protocols in one network element. This flexibly configurable network element simplifies network design, planning, roll-out, and network management. Typical functions include the following:
  • Optical carrier (OC)-3c/12c/48c optical multiplexer [1126]
  • T3/OC3c aggregator [1127]
  • GR303 gateway [1128]
  • ATM switch [1129]
  • IP router [1130]
  • Access technologies include the following: [1131]
  • T1 [1132]
  • T1—inverse multiplexing over ATM (IMA) [1133]
  • T3 [1134]
  • xDSL (ADSL, VDSL) [1135]
  • Single-line high-bit-rate DSL (SHDSL) [1136]
  • Ethernet [1137]
  • Time division multiplexing (TDM), frame relay, ATM, and IP are supported as protocols. The multi-service access platform provides optimized network architecture and transport efficiency from the customer premises into the metropolitan area network (MAN). [1138]
  • The architecture of a multi-service access platform is shelf based, with an ATM and TDM backplane. Numerous subscriber (downlink) line cards connect customer premises equipment (such as IADs, routers, and PABXs) and network elements (such as DSLAMs) to the platform. The uplink connectivity is usually to an SDH/SONET network via an optical link. A special voice gateway subsystem can be added for termination of VoPacket. [1139]
  • The HPCP is positioned to fit in or be compatible with many line cards and trunk cards in a multi-service access platform application. For example, the HPCP can handle up to 8 T1/E1 Frame Relay-to-ATM interworking functions (FRF.5, FRF.8) on a line card; it can perform ATM switching either on a line card or at the trunk card at a 2xOC-3 rate. It can also be used to terminate 4 10/100 Ethernet links and perform RFC 1483 Ethernet bridging, IP routing or frame SAR. Additionally, the HPCP can be used to terminate PPP, PPPoE or PPPoATM traffic on an xDSL line card. [1140]
  • In terms of voice support, the HPCP can be used in the voice gateway subsystem to terminate VoATM or VoIP; it can also be used for trunking application on the trunk card to take the narrowband traffic off the TDM backplane and trunk it (AAL2 trunking or/and CES) towards the ATM network. [1141]
  • A major advantage of using the HPCP in a multi-service access platform application is its versatility in terms of [1142] I/O interfaces and protocol support. A system designer can re-use board design, system knowledge and expertise to leverage the HPCP as a networking platform in the access space.
  • Exemplary Approaches to the Software in the HPCP [1143]
  • The software provided for the HPCP (HPCP software) is preferably fully integrated with the HPCP hardware and architecture, highly optimized, and includes complete applications to support the myriad uses of the HPCP. According to one embodiment, the software developed and sold by, for example, GlobespanVirata, Inc. known as Integrated Software on Silicon (ISOS) (e.g., ISOS version R8.0, etc.) can be run on the HPCP. The ISOS software includes tools and a development environment and is well-suited to the HPCP. The HPCP software includes a complete port to various operating systems, such as VxWorks, Linux, OSE (a real-time kernel from Enea Systems), and ATMOS-2 (ATM Operating System [Virata's proprietary operating system]). The HPCP software may be integrated with other software products, such as for Web management (e.g., the emweb™ [embedded Web server] management product sold by GlobespanVirata, Inc.), UPnP, security and firewall functions. The HPCP software may be integrated with voice processing software (e.g., the vCore™ voice DSP software sold by GlobespanVirata, Inc.) for voice processing solutions. [1144]
  • Preferably, the HPCP software combines the software solutions for both the CPP (MIPS) for the control plane and the packet processor for the data plane. The HPCP software may include basic drivers for ATM AAL0, AAL1, AAL2, AAL5, Ethernet, HDLC, UART, Transparent (PCM), SPI and I2C. [1145]
  • The data applications include support for bridging, such as for spanning tree ([1146] 802.1d), prioritized bridging (802.1p), Ethernet to Ethernet, and Ethernet to AAL5 (via RFC 1483). The data applications may also include support for routing and IP forwarding (such as RIP [Routing Information Protocol], OSPF [Open Shortest Path First] and MPLS), and for frame relay.
  • The HPCP software may include voice applications, such as for VOATM (AAL2 [SS-SAR]). According to one embodiment, the HPCP software is fully integrated with the vCore™ voice DSP software sold by GlobespanVirata, Inc. of Red Bank, N.J. The HPCP voice applications include support for circuit emulation (e.g., CES [Circuit Emulation Services]) and VoIP (e.g., RTP/RTPC in the packet processor and MEGACO, MGCP and SIP [Session Initiation Protocol] in the CPP). [1147]
  • According to one embodiment, the CPP software package includes a flow manager element. The flow manager element creates applications by linking micro-coded building blocks, is OS (operating system) independent, and provides a convenient API (Application Program Interface) for customers not wishing to use all of the other CPP software. [1148]
  • FIG. 81 illustrates the [1149] flow manager functionality 1020 according to an embodiment of the invention. As stated above, the HPCP software may be integrated with voice processing software such as, for example, the vCore voice DSP software sold by GlobespanVirata, Inc. for voice processing.
  • Development of software for the HPCP may be facilitated through the use of certain data plane development tools. For example, a functional network processor (packet processor) simulator may be employed. GlobespanVirata, Inc. markets a packet processor simulator called Vsim™ which may be employed for this purpose. Vsim™ is a high-speed system simulator whose simulation includes the following: the packet processor core Instruction Set (IS); functional behavior for DMAs; internal and external memories; and functional-level peripherals. Vsim™ provides performance analysis and includes traffic generators. Another data plane development tool that may be employed is Vas™, a stand-alone packet processor assembler. Another is V-bug™, an assembler-level debugger. Another is VCC™, a packet processor C compiler. Yet another is V-GDB™, a packet processor C source-level debugger (like V-bug™). Each of these tools can be hosted on a Windows NT™ or Sun™ platform. Each of the aforementioned exemplary development tools is marketed by GlobespanVirata, Inc. FIG. 82 illustrates an exemplary [1150] data plane development environment 1030 that could be employed for software development for the HPCP. The Vobla IS simulator 1032 refers to the packet processor simulator. According to another approach, software development could be undertaken using reference platform hardware instead of the simulated modules.
  • Specific Strategies for the Software in the HPCP [1151]
  • Development of software to power the HPCP processor as described herein is well within the skill of the ordinary artisan. Some of the considerations in designing the HPCP software are now discussed. In developing the HPCP software, there are various tradeoffs to consider in providing a software end-product that provides an acceptable balance between performance, robustness, portability, and other factors. For the balance of the discussion in this section, the HPCP includes the packet processor (PP) (or control packet processor [CPP]) and the flexible packet processor referred to as the Vobla or NP (network processor). [1152]
  • Operating System and Portability [1153]
  • The main goal of HPCP software is to perform functions in cooperation with the HPCP hardware to enable HPCP/Vobla chips to perform as desired in communication systems. Taking into account the vast diversity of different software embedded platforms currently used in the market of communications processors (VxWorks, Linux, Nucleus, OSE, etc.), it seems reasonable to try to offer sufficient flexibility in HPCP software package to address different embedded environments and different customer expectations for value-added software components. [1154]
  • In one manner, the HPCP could be an OEM (Original Equipment Manufacturer) product with a very limited software support package, such as drivers and initialization sequence applications. On the other hand, main embedded software platform providers offer solutions allowing potential customers to choose any preferable platform based on different considerations (e.g., existing code base and experience, performance, value-added components, reference platforms and applications, etc.). [1155]
  • Balancing these considerations, the goal should be to find those points where the HPCP could be more attractive not only as a more powerful communications processor but also as a more flexible and convenient solution in different environments with more value-added components. One more consideration relates to system performance, which may depend on the particular embedded environment. For many popular embedded platforms (VxWorks, OSE, Linux, etc.), the introduced system overhead (usually measured in average system call processing time and interrupt latency) is unacceptable for many applications. This motivates the use of other lightweight dedicated environments (e.g., ATMOS, or one of many home-grown simple monitors). Although the main network processor driving force is moving most or all of the critical data path code to the NP microcode area (including the most popular switching, interworking, bridging, routing and forwarding scenarios), the CPP-termination data path still needs to be efficient. Therefore, OS-dependent overheads must be kept to a minimum. [1156]
  • From the above considerations it is reasonable to formulate the following HPCP SW-to-RTOS (Real-Time Operating System) integration strategy principles: [1157]
  • (1) HPCP software is to be provided in such portable form which enables its easy integration with different existing (and future) embedded platforms. [1158]
  • (2) HPCP software should meet different customer expectations for value-added components. In other words, there should be the possibility to offer different levels of support starting from simple object libraries providing low-level network processor drivers, through source-level packages allowing the generation of different libraries for different customer applications, including glue interfaces for different third-party components and deliveries with more value-added components with different implementations. Exemplary embedded platforms that may be the target for HPCP software integration include: VxWorks, Linux, OSE, CHAOS (a next generation ATMOS), Nucleus, PSOS, and others. [1159]
  • Configuring microcode applications. One of the innovations of the HPCP software is the placing of the critical data path functions in the NP microcode area. In this case, the CPP serves mostly as a control/management plane for those data paths (data flows) created in the NP and acts as the NP flow manager, which represents the look and feel model of the HPCP software. This approach assumes that other software requirements and/or software design decisions should strive to meet the following main goal: NP processing should be as simple and effective as possible, meaning that: [1160]
  • All data structures (tables, flow contexts, etc.) used by the NP (and possibly shared with CPP) should be designed to be the most effective from the NP code perspective. [1161]
  • The NP should blindly perform flow-specific processing by calling different functional blocks—the work of linking (stacking) these NP functional blocks should be done at run-time by the CPP flow manager code when a request for new flow creation comes from the user application and/or the control/management plane in the CPP. Such functional stacking is done by proper linkage of flow contexts in the shared RAM. To implement these points, the NP data structures must be known to the CPP. [1162]
  • FIG. 83 illustrates a HPCP look and [1163] feel model 1040 as described above.
  • NP load configuration. Considering the vast diversity of network applications for the targeted market and also the intention to provide an open communications processor architecture (i.e., the ability to program and add custom implementations to the NP microcode area), it is desirable that the NP software load be configurable at compile-time. Configuration files (for setting compile-time parameters) may be set either manually or, alternatively, via, for example, the System-Builder™ tool available from GlobespanVirata, Inc. Each one of the several NPs within a HPCP device may be loaded with a different microcode image. [1164]
  • Loading microcode. Dynamic NP code reload (i.e., changing the NP code contents during run-time) is not supported. The NP microcode image will be loaded only once, at NP reset time, and will contain all functionality needed by a particular network device. Note that the NP may be reset by the CPP without a complete system reset occurring. This allows the user to change an NP load, after which the NP is soft-reset. [1165]
  • Control versus data plane processing. Much of the design of the HPCP software is aimed at extracting critical-path processing from the CPP and executing it in the NP. Critical path processing, in this context, means processing that is performed on virtually all data packets (or cells) on an interface. It varies from one application to another and covers all layers of processing performed on the packet by the HPCP. Therefore, there is a divergence from a strictly layered architecture, where the NP performs (for example) [1166] layers 2 and 3 and the CPP performs all higher-layer processing, in favor of a model in which the NP will preferably perform all critical path processing, irrespective of the layers involved (layers 2, 3 and, at times, layers 4 and higher). The CPP, then, will perform all non-critical (or control plane) processing—from layer 2 and up.
  • For example, in an OSPF router, the critical path may consist of IP forwarding table lookups, ARP (Address Resolution Protocol) cache table lookups (where successful) and forwarding. Non-critical path functions will include all of the OSPF control plane (learning next-hops, etc.), generating the ARP requests, and handling the ARP responses. [1167]
  • Network Processor software design approach. Network Processor microcode, covering most of the data path processing, is a component implemented from scratch in the HPCP SW project, which makes its performance efficiency an important design goal. Other design goals are flexibility, expandability and architectural openness. [1168]
  • Given the HPCP software look and feel model defined above, the ATIC-like approach could be quite useful for network processor microcode design; it involves the following concepts. [1169]
  • Network Processor objects and contexts. The network processor microcode may be divided into functional blocks, which may be operationally joined (e.g., chained) in various combinations by the application builder in order to create different execution paths. [1170]
  • The concept of an object is introduced to describe a section of code that has a particular state. The object is an instantiation of any entity that executes this code and has its own state information (referred to as its context). The context contains protocol state information, necessary data structures and resources that have been dynamically allocated to the object. For example, an object's context may include a protocol state value, transmit queue of frames, timer information and links to subsequent objects in the execution path. [1171]
  • The context (i.e., associated data structures) belonging to an object is object-dependent and known only to the object itself. Objects have “next object” pointers and “next function” pointers. The “next object” indicates the object that will be activated after the current object has completely handled its current event (similar to the “this” pointer for the next object in C++ terminology). The “next function” pointer is the address of the routine that the next object will execute. [1172]
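  • A hedged C rendering of this linkage (the field names and layout are assumptions; the text does not specify a concrete structure):

    #include <stdint.h>

    struct object_ctx;
    typedef void (*block_fn)(struct object_ctx *self, void *event);

    struct object_ctx {
        uint32_t           proto_state;   /* protocol state value               */
        void              *tx_queue;      /* e.g., a transmit queue of frames   */
        uint32_t           timestamp;     /* timer information                  */
        struct object_ctx *next_object;   /* object activated after this one    */
        block_fn           next_function; /* routine the next object executes   */
        /* ... object-private data follows, known only to the object itself ... */
    };

    /* Once the current object has handled its event, pass control downstream. */
    void pass_downstream(struct object_ctx *ctx, void *event)
    {
        if (ctx->next_object)
            ctx->next_object->next_function(ctx->next_object, event);
    }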
  • Different contexts for the Rx and the Tx parts of a flow (as is done in the Helium™ communications processor sold by GlobespanVirata, Inc.) may be employed because in most cases Rx and Tx processing are independent. This helps minimize the amount of control data needed to be transferred within the system. [1173]
  • Flexible mapping of Network Processor execution threads. In one manner, mapping of functional processing blocks to the network processor's threads (tasks) is performed not based on functional breakdown (i.e., task=protocol entity), but rather based on operational effectiveness. [1174]
  • With this approach, the network processor task is considered as an abstract operational vehicle capable of performing different functional blocks and/or protocol stack layers depending on the type of message in its input queue. [1175]
  • In order to optimize incoming message decoding, every message will contain at a pre-defined place (e.g., the first word) the pointer to the routine that will be called to handle the incoming message. This concept, of course, could be used only for network processor tasks having input queues. So-called HW network processor tasks (i.e., those related to physical port-specific processing) should be hard-coded to some port-specific function. [1176]
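  • A sketch of this dispatch, assuming (as the text suggests) that the first word of each message is the handler address; the message layout and queue primitive are otherwise hypothetical:

    #include <stdint.h>

    typedef void (*msg_handler)(const uint32_t *msg);

    struct np_msg { msg_handler handler; uint32_t payload[7]; };

    /* The task's main loop never decodes message types: it simply calls the
     * routine named in the first word of each incoming message. */
    void task_main_loop(struct np_msg *(*next_msg)(void))
    {
        for (;;) {
            struct np_msg *m = next_msg();   /* yields until a message arrives */
            m->handler((const uint32_t *)m); /* first word selects the routine */
        }
    }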
  • Task boundaries will break the continuous execution of a flow, but these do not necessarily need to coincide with protocol (or layer) boundaries. In general, these breaks in a flow should be avoided unless functionally required since they add overhead. For example, in configurations involving a few different physical ports and/or networking applications, dedicated tasks empty/fill the serial port's FIFOs in order to guarantee low latency, while other tasks run application code which does not have such hard real-time requirements. [1177]
  • Memory allocation/handling approach. It appears that all port-level contexts would be better allocated in internal network processor SRAM (for the sake of effectiveness, and also because their number is limited by the physical chip configuration and so allows static preallocation), while all other data structures (connection-level contexts and lookup tables) are stored in external SDRAM and allocated dynamically. [1178]
  • Certain structures (e.g., lookup tables) may be partially located in internal and external memory spaces or configured to reside in either one or the other. [1179]
  • Memory allocations in both of the network processor's SRAM and the external SDRAM are performed by the CPP. The network processor recognizes SRAM partitioning either via compile-time definitions (initialization is done by CPP, which initializes the memory data structures for the NP's tasks and for the different protocols) or via pointers in a well-known area filled in by the CPP in run-time by its SRAM manager. [1180]
  • Context and lookup data allocated dynamically in external SDRAM are processed by the network processor code after DMA'ing this data (only the needed part of it) to special areas in the network processor's internal SRAM. Buffer area for this data in SRAM is to be reserved in a per-task scratchpad area, which means that for abstract tasks (i.e., tasks that are not oriented to some particular processing), the scratchpad area should be allocated to be big enough to fit the maximum size of the context data being processed. [1181]
  • In one manner, only one copy of any context data should exist in SRAM at any given time. It is assumed that all context data is always copied to a fixed offset in the task's scratchpad and that there is a one-to-one correspondence between any context data field and the network processor task dealing with it (i.e., any data field is to be processed by only one network processor task). [1182]
  • At the same time, there could be considerable flexibility in context data processing with the goal of gaining processing effectiveness. For example, context data may be subdivided into sub-blocks where data in a sub-block is grouped based on a common processing principle: a few fields of the context are grouped together to be DMA'd at one time (in one shot) when all or most of these fields are to be used by a specific functional block. On the other hand, for example, specific statistics counters in the context could be read-modify-written only when the need arises (at the end of a PDU [Packet Data Unit] or upon an error). This allows processing of different context sub-blocks by different network processor tasks. Of course, this approach makes context data design trickier and more difficult. FIG. 84 illustrates the network processor [1183] software design approach 1050 for an AAL5 receiver flow example.
  • Timers in Network Processor. A CPP-based timer service may be employed via the network processor-to-CPP command interface (especially when the needed timers are long and are started/used rarely). Whenever possible, the internal free-running timer may be used for time-stamping of different events (e.g., to recognize reassembly timeouts). In this case, instead of getting a timer expiration event, a delta between the current free-running timer and the previous timestamp is calculated every time (on each timer event) and a timer expiration event is generated where needed locally, without any message passing. [1184]
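  • A small sketch of that timestamp-delta technique (the timer access and tick units are assumptions); unsigned subtraction makes the comparison safe across timer wrap-around:

    #include <stdint.h>
    #include <stdbool.h>

    extern uint32_t free_running_timer(void); /* assumed NP free-running timer */

    /* Instead of arming a timer per flow, compare the current timer against a
     * stored timestamp on each event (e.g., to detect reassembly timeouts). */
    bool timeout_expired(uint32_t last_stamp, uint32_t timeout_ticks)
    {
        return (uint32_t)(free_running_timer() - last_stamp) >= timeout_ticks;
    }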
  • CPP software design approach. The CPP software design goals may include the following: [1185]
  • (1) A simple and convenient API should be designed allowing easy integration of CPP software with both different RTOS platforms and third party products while using thin SW shims. [1186]
  • (2) Maximum possible reuse of existing control/management plane code base should be sought. This may entail the introduction of a new simple SW shim and/or some restructuring of existing SW (i.e., the existing GlobespanVirata ISOS code). [1187]
  • (3) The ISOS-ATIC convergence program and principles are to be considered when decisions about code base choice are made. [1188]
  • The aforedescribed look and feel model of HPCP software having the CPP SW function as the NP flow manager has the following consequences. [1189]
  • CPP control and data API considerations. A control API may be provided for NP flow creation/deletion and the change/query of flow attributes. This API is to be used mostly by user applications, but also (e.g., through a shim) by control/management plane SW (e.g., by a signaling protocol and/or an SNMP [Simple Network Management Protocol] agent). [1190]
  • It may be desirable to provide a generic control API with a minimal and fixed set of control primitives (e.g., similar to the so-called ISOS White interface). According to this approach, a flow of any type (including any future type) may be created/deleted using the same control primitive (e.g., FLOW_CREATE), while the flow type and other attributes are provided as primitive parameters. Flow attribute change/query may be handled via generic primitives (e.g., FLOW_GET/FLOW_SET). [1191]
  • The text string used to pass flow type and attributes as a FLOW_CREATE primitive parameter seems to meet the requirement of API generality, flexibility and expandability. [1192]
  • The FLOW_CREATE primitive can both create the data path protocol layer components and also link them together in different ways. Also, it is desirable to have primitive syntax traceable to protocol specifications which makes its usage easier. It is feasible to start the needed control plane component implicitly while processing FLOW_CREATE primitives when proper parameters are supplied in the parameter string. Another requirement concerns the possibility of access to various layers/components created/linked by the FLOW_CREATE primitive, because the same protocol components could be involved in different flows. [1193]
  • For linkage of previously created termination flows in interworking/bridging/routing applications, a special primitive (FLOW_LINK, FLOW_UNLINK) may be employed. [1194]
  • Implementation of the FLOW_CREATE primitive for a specific data path protocol component (e.g., the CPP driver activated for particular flow type) can also be provided in the CPP data path processing transparently for upper application if the proper network processor microcode block is not yet available. [1195]
  • There may be a data API provided as well for termination data passing to/from the NP. This API may be used both by user applications and the control/management plane SW. Receive termination and transmit confirmation are bound via a standard call-back technique. [1196]
  • The goal that the NP code be simple, small and effective means that the CPP driver software activated via the control API for NP flow creation/deletion/alteration/query must recognize the flow context internal structure (even though this contradicts a strict object-oriented approach). However, this is useful because it allows both effective flow building/removing without NP interaction and also permits easy integration with MIBs (Management Information Bases). One consequence is that versions of the CPP and NP code should match exactly and should be tightly linked to each other. [1197]
  • Linking of NP flows by the CPP assumes that the CPP knows the addresses of the NP functional blocks which are inserted as next-function pointers in contexts. This could be achieved when the CPP load is built using symbol information from the previously built NP load. However, a difficulty arises when multiple NPs (e.g., with different functionalities) are served by the same CPP. Thus, some mapping (flow type to function block address) is needed that is specific for each NP. This could be implemented using a mapping array created in the NP internal SRAM during NP initialization, which is then read by the CPP for retrieving flow-linking information. [1198]
  • The knowledge about the internal flow context structure should still be localized in the particular CPP driver responsible for the specific flow manipulation. Additionally, care should be taken while updating context data shared by the CPP and the NP. The objective is simple: for every field it is desirable to have only one write owner operating without memory locking. If this is not possible, the CPP-to-NP command interface is to be used to pass a write request to the write-access owner of the data. Also, additional means may exist to ensure that both the CPP and the NP code view or recognize a context structure in the same way. This may involve various checks of the compatible software loads used in the CPP and the NP. [1199]
  • The same approach as outlined above is to be adopted for the various look-up tables used/updated by both the CPP and the NP. These tables may be handled by the control/management plane software in the CPP. There may be no need to introduce a special API for table updates in the CPP. Alternatively, there may be some table-specific driver code which knows the particular table structure (chosen to be the most effective from the NP perspective) and which is activated (via a SW shim) from the control/management plane software. Again, care should be taken in implementing table update operations if a table could be changed from both cores, as well as in the case when the table update is a complicated operation requiring a set of changes in different places/entries. [1200]
  • Control and data API proposal. The following exemplary API meets the above design functionality and could be used as a basis for further design decisions: [1201]
  • NewFlowHandle=FLOW_CREATE (ExistingFlowHandle, /type=FLOW_TYPE/param=PARAM); [1202]
  • status=FLOW_DELETE(ExistingFlowHandle); [1203]
  • status=FLOW_TRANSMIT(ExistingFlowHandle, Frame); [1204]
  • status=FLOW_SET(ExistingFlowHandle, attribute_name, attribute_value); [1205]
  • status=FLOW_GET(ExistingFlowHandle, attribute_name, &attribute_value); [1206]
  • status=FLOW_LINK(ExistingUpperFlow, ExistingTerminationFlow); [1207]
  • status=FLOW_UNLINK(ExistingUpperFlow, ExistingTerminationFlow); [1208]
  • Enabling and disabling of flows in the Tx and/or the Rx directions could be implemented through a FLOW_SET primitive with proper attributes (e.g., TxEn, TRUE) which also could be provided in the FLOW_CREATE parameter string. [1209]
  • Starting of the control plane component may be initiated via the same FLOW_CREATE (or FLOW_SET) primitive. For example, creation of an AAL5 termination connection while starting corresponding OAM F5 process could be as follows: [1210]
  • AtmPortHandle1=FLOW_CREATE(VoblaId+PhysicalPortNumber, /Type=UTOPIA/Phy=0/Name=A1) [1211]
  • Aal5Handle=FLOW_CREATE(AtmPortHandle1, /Type=AAL5/TxVci=5/TxVpi=0/Pcr=100/OamF5=Yes)
  • FLOW_SET(Aal5Handle, RxHandler, 0xADDRESS0)
  • FLOW_SET(Aal5Handle, TxConfirmationHandler, 0xADDRESS1)
  • The following example demonstrates the creation of a bridge application over one Ethernet port and two [1212] RFC 1483-encapsulated AAL5 connections created on different network processors. The IP termination flow is multiplexed on one of the AAL5 VCIs, a spanning tree process is started as the control plane of the bridge application, an OAM F5 flow is started for the other AAL5 VCI, and ILMI is initiated for one of the ATM ports.
  • EthernetPortHandle=FLOW_CREATE(Vobla1+PhysicalPort2, /Type=Ethernet/Promisc=Yes) [1213]
  • BridgeHandle=FLOW_CREATE(EthernetPortHandle, /Type=Bridge/Spanning=Yes)
  • AtmPortHandle1=FLOW_CREATE(Vobla1+PhysicalPort1, /Type=UTOPIA/Phy=1)
  • Aal5Handle1=FLOW_CREATE(AtmPortHandle1, /Type=AAL5/TxVci=5/TxVpi=0/Pcr=10000)
  • Rfc1483Handle1=FLOW_CREATE(Aal5Handle1, /Type=Rfc1483)
  • IpHandle1=FLOW_CREATE(Rfc1483Handle1, /Type=Ip/IpAddr=10.0.0.1/Mask=255.0.0.0)
  • FLOW_SET(IpHandle1, IpRxHandler, 0xADDRESS0)
  • LanHandle1=FLOW_CREATE(Rfc1483Handle1, /Type=Ethernet)
  • AtmPortHandle2=FLOW_CREATE(Vobla2+PhysicalPort3, /Type=UTOPIA/Phy=5/Ilmi=Yes)
  • Aal5Handle2=FLOW_CREATE(AtmPortHandle2, /Type=AAL5/TxVci=20/TxVpi=1/Pcr=10000)
  • FLOW_SET(Aal5Handle2, OamF5, Yes)
  • Rfc1483Handle2=FLOW_CREATE(Aal5Handle2, /Type=Rfc1483)
  • LanHandle2=FLOW_CREATE(Rfc1483Handle2, /Type=Ethernet)
  • FLOW_LINK(BridgeHandle, LanHandle1)
  • FLOW_LINK(BridgeHandle, LanHandle2)
  • CPP API thread safety. Both control and data termination APIs in the CPP may be represented as a passive library (possibly provided in binary form as a part of the platform specific BSP) handling primitives from the user/control/management SW. These APIs should be thread safe and also should provide effective separation of control and data primitive flows. This avoids the scenario where processing of a termination data primitive is delayed because of control primitive handling. An ATIC-like vertical thread optimization model can help to solve such problems, and, in this case, API functions could be implemented as wrappers that cause message sending where needed. [1214]
  • CPP system software base. The goal of supporting a vast diversity of different RTOS platforms suggests the use of ATIC system services and the ATIC RTOS porting technique as a system base for CPP software development. [1215]
  • This approach is further desirable because ATIC system services have been chosen as well as a preferable base for the ATIC-to-ISOS convergence strategy. [1216]
  • Due to the high degree of similarity, the ISOS BUN framework could be reused as the CPP API implementing framework, perhaps with a few changes. This conceivably may allow the reuse of existing BUN drivers and of the same legacy peripheral ports for re-implementation on the network processor. [1217]
  • HPCP Software Partitioning [1218]
  • The goal of this section is to characterize the HPCP software partitioning into more or less independent blocks while trying to roughly define: [1219]
  • Functional specification of every block. [1220]
  • Interfaces between blocks and interfaces to outer world (external) software. [1221]
  • Strategy and estimation of possible software reuse and the definition of any needed shims. [1222]
  • The guiding principles used for software partitioning are the design approach defined in the previous discussion and the traditional information hiding approach. [1223]
  • CPP software partitioning. FIG. 85 illustrates suggested partitioning and interfaces. According to an embodiment, the functional blocks and interfaces of [1224] FIG. 85 are provided as follows. A first set may correspond to user or third-party components. This first set may include the following blocks in FIG. 85: user application 1070, socket interface 1072, control plane software 1074, management plane software 1076, file system 1078, and console 1080. A second set may correspond to new components created for the HPCP. This second set may include the following blocks in FIG. 85: BSP 1082, Flow Manager Framework 1084, Functional driver 1086, Lookup table manager 1088, Vobla RAM loader and initializer 1090, Vobla SRAM manager 1098, Vobla queue interface 1092, Shims 1-5, Tracers and diags extension 1094, and Vobla frames/cells 1096. A third set may correspond to existing (e.g., ATIC/ISOS) components. This third set may include the following blocks in FIG. 85: Network interface 1100 (between the Socket interface and the Flow Manager Framework) and System services and OS porting 1102 (above Tracers, diags extension).
  • Software Block Functional Specification. [1225]
  • Flow Manager Framework [1226]
  • This Flow [1227] Manager Framework block 1084 implements the network processor Flow Manager API and provides the framework and services (attribute parsing and registration, data path stacking, etc.) for functional drivers. This component should also deal with API thread safety mechanisms, control and data thread separation, and message sending, wrapping, and queuing, as needed.
  • [1228] Shim 1—Flow Manager-to-Control Plane and Flow Manager-to-Management Plane.
  • The control plane software to be supported may entail the use of a set of shim layers for different control plane implementations. The purpose of [1229] Shim 1 is to provide for translation of connection creation/deletion primitives from the control plane to the network processor flow creation/deletion primitives, and also to connect the control plane to the flow termination data path. The same may be done for different management plane implementations as well. For management plane integration this shim also provides mapping of MIB GET/SET methods to proper FLOW_SET/FLOW_GET calls.
  • Functional Driver Blocks [1230]
  • The number of different supported functional drivers may depend on the number of supported network protocols/applications. A particular driver is responsible for implementation of flow create/delete primitives for flow of a particular type and also for linkage of flows. Termination data path functionality should be provided for all drivers primarily as a general service of the Flow Manager Framework. [1231]
  • The functions of the driver include: [1232]
  • Low level serial port initialization/deinitialization while processing port level flow creation/deletion primitives. [1233]
  • Allocation and initialization/deinitialization of port-level static contexts in internal SRAM (via services of the network processor SRAM manager) and lookup tables in external SDRAM (or internal SRAM when so requested) while processing port-level flow creation/deletion primitives. [1234]
  • Allocation/deallocation of connection-level contexts in external SDRAM, and their initialization/deinitialization, as a result of connection-level flow creation/deletion primitive processing. [1235]
  • Linkage/delinkage of flows by setting next and next_function pointers in the proper contexts and lookup tables as a result of flow create/delete/link/unlink primitive processing, using the flow_type-to-function mapping provided via services of the network processor SRAM manager. [1236]
  • Implementation of driver-specific FLOW_SET/GET primitives and, in particular, creation/starting of control plane protocols when possible and when so requested through attributes of the FLOW_CREATE and FLOW_SET primitives. [1237]
  • Implementation of data flow fragments that are not yet ready; for example, for the AAL2 termination path, the SSSAR (Service Specific Segmentation and Reassembly) sublayer may be implemented by a functional driver in the CPP if a microcode solution does not exist. [1238]
  • [1239] Lookup Table Manager 1088 and Shim 2
  • The [1240] Lookup Table Manager 1088 manages the modification of lookup tables of particular types and, accordingly, it recognizes or knows the internal table structure (optimized for network processor microcode usage). For various control/management plane components, Shim 2 glue layers (which may be specific for each particular implementation) are provided to implement access to the tables. Instead of providing a generic API, every particular control/management plane component may be restructured to be operable with the network processor's lookup tables using a specific Shim 2 layer. When the lookup table is allocated in SRAM, the network processor SRAM Manager 1098 services are used for accessing the lookup table. When the network processor is a table write owner, modification of the table is done by sending command messages through the network processor Queue Interface 1092 (discussed below).
  • Network [1241] Processor Queue Interface 1092
  • The network [1242] processor Queue Interface 1092 is responsible for the CPP-to-network processor interface. This component performs interface polling and/or interrupt processing, as well as the handling of messages going to/from the queues on the interface and the routing of them to the proper recipients.
  • Network Processor SRAM Manager 1098
  • The network processor SRAM Manager 1098 coordinates all SRAM allocations and per-network processor task SRAM partitioning and initialization. This component provides flow_type-to-microcode_function mapping functionality. It also may initialize all needed mapping information for access to different agents on the network processor rings by learning the results of the ring enumeration process (discussed previously).
  • Network Processor RAM Loader and [1243] Initializer 1090 and Shim 3
  • The network processor RAM Loader and [1244] Initializer 1090 is responsible for the process of network processor image loading and handshaking with the network processor starting code. Through different Shim 3 implementations, the network processor RAM Loader and Initializer 1090 interfaces with different file system components to get the network processor image for loading into the proper network processor.
  • System Services and [1245] OS Porting 1102
  • According to one approach, ATIC system services and the OS porting technique are to be used, with network processor-specific frame/cell handling re-implemented. It is desirable to extend the existing ATIC tracing/diags support into a more generic and convenient framework that can be activated at both compile time and run time for tracing events registered by different components in both the network processor and the CPP. For example, based on the suggested design approach for the network processor and the CPP Flow Manager Framework, various tracers/injectors may be dynamically linked inside the data path between any of its flow fragments (e.g., similar to trace/debug BUN drivers). [1246]
  • [1247] Network Interface 1100 and Shim 4
  • The Network Interface (NI) [1248] 1100 connects the termination data path to/from the Flow Manager with the native IP stack. Shim 4 is used for existing NI implementations for primitive translation.
  • [1249] Shim 5
  • [1250] Shim 5 is defined to connect the existing console implementations with the Flow Manager FLOW_GET/SET interface.
  • [1251] BSP 1082
  • According to one approach, it is desirable to reuse an existing [1252] BSP 1082 for a similar chip (i.e., a chip with a MIPS core). This may impose additional requirements on reference board design. In that case, it might be feasible to reuse some of the BSP components (e.g., flash drivers, memory initialization, etc.). At the same time, the main BSP function (i.e., providing basic connectivity, typically UART and Ethernet/IP connections) is to be reimplemented in the network processor. This might entail delivering, as part of the BSP, a simple network processor image containing UART and Ethernet/IP support along with the needed CPP drivers. In this case, the network processor image is part of the CPP load on flash and is loaded to the network processor via the network processor RAM Loader and Initializer during system initialization. If a particular end-user gets the appropriate tools for customized network processor load building, this task should be a part of the BSP building process (UART and Ethernet/IP support should be selected). According to another approach, a BSP with JTAG-based (serial debug port) connectivity to the target could be employed. In this case, the combined CPP plus network processor(s) image can be viewed as the usual application load build.
  • CPP drivers should be integrated (through Flow Manager Framework and the proper shim) with the particular BSP driver framework. [1253]
  • Network Processor Software Partitioning
  • The goal of the network processor software partitioning approach may be to maximize reuse of common code/algorithms while preserving processing efficiency by using inlining and/or macros in the coding practice. FIG. 86 provides one [1254] possible partitioning approach 1200 for the network processor.
  • Performance Estimates for RFC 1483 Bridging
  • By way of example, a performance estimate for RFC 1483 bridging can be computed as shown in Table 43 (a short sketch rechecking the arithmetic follows the table).
    TABLE 43
    RFC 1483 Bridging Performance Estimate
    Receive side (128-byte Ethernet back-to-back frames)
      Receive frame - 120 cycles
      802.1d Ethernet bridging
        Bridge learning process - 50 cycles
        Enet address lookup - 50 cycles
      Optional QoS support
        QoS decision via IP classification - 500 cycles
          (per IP src/dst, src/dst port numbers, and protocol id)
      Forward to transmit object - 10 cycles
    Transmit side operations
      Append 1483 encapsulation header - 10 cycles
      Optional QoS support
        AAL5 queue scheduling - 35 cycles
        RED - 15 cycles
      AAL5 segmentation and transmit - 320 cycles (100+100+120)
    General overhead (inter-task msgs, etc.) - 50 cycles
    Total processing = 619 cycles (@ 200 MHz = 323K pps)
    With QoS support = 1169 cycles (@ 200 MHz = 171K pps)
    Wire speed (full duplex) = 2*100M/(8*128) = 200K pps
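  • As a check on the arithmetic in Table 43, the packet rates follow directly from dividing the clock rate by the per-packet cycle count. A minimal sketch, assuming the 200 MHz clock and 128-byte frames stated in the table:
    #include <stdio.h>
    /* Recompute the throughput figures quoted in Table 43. */
    int main(void) {
        const double clock_hz = 200e6;                 /* 200 MHz network processor clock */
        printf("base: %.0f pps\n", clock_hz / 619.0);  /* ~323K pps without QoS */
        printf("QoS:  %.0f pps\n", clock_hz / 1169.0); /* ~171K pps with QoS */
        /* Wire speed, full-duplex 100 Mbit/s, 128-byte frames:
         * 2 * 100e6 / (8 * 128) ~= 195K pps (rounded to 200K in the table). */
        printf("wire: %.0f pps\n", 2.0 * 100e6 / (8.0 * 128.0));
        return 0;
    }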
  • Executing Branch Instructions Based on an Accumulative Condition Flag [1255]
  • As discussed previously, in at least one embodiment, an accumulative condition flag, i.e., a sticky bit, is used by the HPCP and/or network processor to execute branch instructions. A conventional processing device commonly performs a branching operation by pairing a compare instruction with a branch instruction. More specifically, such a processing device commonly performs the compare operation by subtracting a first specified operand from a second specified operand. As a result of this operation, the processing device sets various condition flags. Such flags provide information regarding the magnitude of the first operand relative to the second operand, as well as other information regarding the operation. The subsequent branch instruction provides a branch in program execution on the basis of the values of the condition flags. The condition flags are typically overwritten based on the next instruction executed by the processing device. Hence, the programmer will typically place the branch instruction directly after the relevant compare instruction. [1256]
  • A typical program may contain a complex series of such pairings of compare and branch instructions. FIG. 87 illustrates the execution of such a [1257] program 1400. In step 1402, the processing device executes a first compare instruction (i.e., the compare1 instruction). As mentioned above, in this step, a first operand is subtracted from a second operand. The processing device also sets condition flags on the basis of the outcome of the comparing operation. Subsequently, in step 1404, the processing device executes a branch instruction on the basis of the values of the condition flags. That is, if the condition flags contain prescribed values, the processing device advances to a specified branch address. In the illustrated case of FIG. 87, the processing device branches to address A if the compare1 instruction satisfies prescribed conditions, as reflected by the values of the condition flags.
  • As shown, the [1258] program 1400 contains multiple additional pairings of compare and branch instructions. For instance, in step 1406, the processing device performs a second comparison operation (i.e., the compare2 instruction). The processing device also resets the condition flags on the basis of the outcome of the second comparing operation. In step 1408, the processing device executes a branch instruction on the basis of the new values of the condition flags. Namely, the processing device branches to address B if the compare2 instruction satisfies prescribed conditions, as reflected by the values of the condition flags.
  • In [1259] step 1410, the processing device performs a third comparison operation (i.e., the compare3 instruction). Again, the processing device also resets the condition flags on the basis of the outcome of the comparing operation. In step 1412, the processing device executes a branch instruction on the basis of the new values of the condition flags. Namely, the processing device branches to address C if the compare3 instruction satisfies prescribed conditions, as reflected by the value of the condition flags.
  • Yet additional pairings of compare and branch instructions may be included (although not illustrated). Following the series of compare and branch instructions, the program may include [1260] additional processing 1414.
  • The known technique shown in FIG. 87 may be applied in numerous applications, such as in performing error check operations. For example, a network processor often performs a series of error checks prior to performing a prescribed main processing task. In the IPv4 packet network protocol, for instance, the network processor checks to determine whether the protocol version of the information being processed is equal to 4. The processing device may also determine whether the header of the information being processed is at least five words. The processing device may also determine whether the total length of the packet of information is not greater than the length specified by the MAC layer. [1261]
  • The processing device may assign a different pair of compare and branch instructions to each of the above requirements, as indicated in Table 44. [1262]
    TABLE 44
    Instruction Index   Action
    1                   compare1
    2                   branch if "not equal" to error1
    3                   compare2
    4                   branch if "less equal" to error2
    5                   compare3
    6                   branch if "greater than" to error3
    7-n                 additional processing
  • The first and second instructions identified correspond to [1263] steps 1402 and 1404 of FIG. 87. The third and fourth instructions correspond to steps 1406 and 1408 of FIG. 87. The fifth and sixth instructions correspond to steps 1410 and 1412 of FIG. 87. The indicated additional processing in instructions 7 et seq. corresponds to step 1414 of FIG. 87.
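  • For illustration, the instruction sequence of Table 44 corresponds to source logic of roughly the following shape, where each test compiles to its own compare-and-branch pair (the IPv4 header field names here are hypothetical):
    /* Conventional form: three compares, three branches (cf. Table 44). */
    if (hdr->version != 4)           goto error1;  /* compare1 + branch if "not equal"    */
    if (hdr->header_words < 5)       goto error2;  /* compare2 + branch if "less equal"   */
    if (hdr->total_length > mac_len) goto error3;  /* compare3 + branch if "greater than" */
    /* ... main processing (instructions 7-n) ... */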
  • The technique described above has shortcomings. Namely, the proliferation of branch instructions in a program reduces the efficiency of the processing device. For instance, each of the branch instructions takes a prescribed amount of time to perform. Thus, a program that includes a multitude of such instructions may suffer from processing delays. Further, a lengthy program comprising several compare and branch instructions also requires sufficient memory capacity to store the program, and therefore detracts from efforts to deploy the processing device in memory-constrained environments. [1264]
  • Further, in the above-noted [1265] IPv4 application, the processing device may encounter the above-described error conditions relatively infrequently. In this sense, these conditions are considered rare. Nevertheless, the processing device must sequence through the above-described six error checking instructions before advancing to the main processing routine (e.g., in step 1414 of FIG. 87). In view of these factors, the use of multiple branching instructions appears to impose an unwarranted bottleneck in the course of normal processing of IPv4 data. For all of the above reasons, the use of branch instructions is considered expensive in a design implementation.
  • The apparatus and method described herein are applicable to any type of processing environment. For example, FIG. 88 provides one such [1266] general processing environment 1500 for purposes of illustration. The environment 1500 includes a processing device 1502, including a central processing unit (CPU) 1504. The processing device 1502 may also include other conventional components coupled to the CPU 1504, such as memory 1508, cache 1506, and communication interface 1510. The CPU 1504 serves as the central engine for executing machine instructions. The memory 1508 (such as a Random Access Memory, or RAM) and cache 1506 serve the conventional role of storing program code and other information for use by the CPU 1504 in performing its ascribed functions. The communication interface 1510 serves the conventional role of interacting with external equipment, such as the network 1516 or some other peripheral device.
  • The [1267] processing device 1502 also includes program functionality 1512 for executing various processing functions. This program functionality 1512 may be implemented as software stored in memory (e.g., memory 1508, or some other memory). As indicated in FIG. 88, the program functionality 1512 may include one or more programs 1514 that are specifically designed to make use of the unique branching technique of the present invention, to be described in greater detail below.
  • The [1268] processing device 1502 may include additional hardware and/or software to serve specific computational roles. For instance, the processing device 1502 may comprise an apparatus having hardware and/or software functionality specifically adapted for communication with a packet network, such as network 1516. The packet network 1516 may comprise any type of local-area or wide-area network for transmitting data in packet format. More specifically, the packet network 1516 preferably comprises some type of network governed by the TCP/IP protocol, such as the Internet or an intranet. The network may include any type of physical link, such as fiber-based links, wireless links, copper-based links, etc.
  • FIG. 89 provides additional details regarding an exemplary architecture of the [1269] processing unit 1504. The processing unit 1504 may include an arithmetic logic unit (ALU) 1602, a control logic module 1604, an input/output (I/O) logic module 1606, and various working registers 1608.
  • The [1270] control logic module 1604 includes logic for decoding and executing machine instructions. To this end, this module 1604 may include conventional features, such as an instruction register for holding an instruction while it is being processed by the processing device 1502, a program counter, etc. The control logic module 1604 may further include one or more storage locations 1630 for storing condition flags. As described above in the Background section, the processing device 1502 modifies the contents of the condition flags when an instruction is performed by the processing device 1502, so as to indicate the outcome of the instruction. Different processing devices designed by different manufacturers employ different sets of processing flags. Known flags include an SF flag, which is equal to the MSB (most significant bit) of the result of an operation, indicating whether the result was negative or non-negative. A ZF flag is set to 1 if the result of an operation is 0. A CF flag is set to 1 if an operation produces a carry. Still other types of flags are known to those skilled in the art.
  • In addition, the solution described herein provides at least one additional condition flag referred to as an [1271] accumulative flag 1632. Unlike the other flags, the accumulative flag 1632 may provide a value that reflects the outcome of more than one instruction. For instance, after a sequence of three compare instructions, the accumulative flag may be set to indicate whether any of these three instructions satisfies pre-established conditions. In other words, the accumulative flag 1632 in this case represents the logical OR of the separate compare instructions. The flag is referred to as accumulative in the sense that its final status reflects the accumulation of separate determinations made in separate compare instructions (or other instructions). It is also appropriate to refer to this flag as a sticky flag. The flag is sticky in the sense that it can remain set across multiple computer instructions (such as multiple compare instructions). That is, unlike the known art, the accumulative (or sticky) flag 1632 need not change after every computer instruction (such as after every compare instruction). Additional details regarding the use of the accumulative flag are presented below.
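  • The accumulative behavior can be modeled in software as an OR-accumulation across successive tests. A minimal sketch, assuming each compare contributes a single boolean outcome:
    /* Software model of the accumulative (sticky) flag: an overwriting
     * compare seeds the flag, and subsequent compares OR their outcome
     * into it rather than replacing it. */
    typedef struct { unsigned sticky : 1; } cc_state;
    static void cmp_overwrite(cc_state *cc, int outcome)  { cc->sticky = (outcome != 0); }
    static void cmp_accumulate(cc_state *cc, int outcome) { cc->sticky |= (outcome != 0); }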
  • The flags stored in [1272] storage 1630 may comprise binary information expressed in one or more bits. The storage 1630 may contain a single accumulative flag, or multiple accumulative flags.
  • The [1273] ALU 1602 performs various logical and arithmetic operations in a conventional manner. The I/O logic 1606 coordinates transfer of information between the processing unit 1504 and other modules in the environment 1500 in a conventional manner. The working registers 1608 retain information for use in the execution of program instructions, and may include various conventional address registers and arithmetic registers.
  • FIG. 90 describes an exemplary method for executing program instructions based on the value of the accumulative flag. It begins in [1274] step 1702, where the processing device executes a first compare instruction (i.e., the compare1 instruction). As mentioned above, in this step, a first operand is subtracted from a second operand. The processing device also sets the value of the accumulative flag to reflect whether the compare1 instruction satisfies a first prescribed condition. In step 1704, the processing device executes a second compare instruction (i.e., the compare2 instruction). The processing device also updates the value of the accumulative flag to reflect whether either the compare1 instruction satisfies the first prescribed condition, or whether the compare2 instruction satisfies a second prescribed condition. In step 1706, the processing device executes a third compare instruction (i.e., the compare3 instruction). The processing device also updates the value of the accumulative flag to reflect whether any of the compare1, compare2, or compare3 instructions satisfy their respective prescribed conditions. Yet additional compare instructions may be included (although not illustrated).
  • After the series of compare instructions, in [1275] step 1708, the processing device executes a branch instruction based on the value of the accumulative flag. At this stage, the accumulative flag reflects whether any one of the first through third compare instructions produced an outcome which satisfies its respective prescribed condition. In this sense, the accumulative flag reflects the logical OR of individual condition flag values produced in preceding comparison steps. This is in marked contrast with the known prior art, where the condition bits strictly reflected the outcome of the single instruction that was last performed.
  • If the accumulative flag is set, then the processing device branches to an indicated address (in this case, address D). If the accumulative flag is not set, then the processing device advances to the remainder of the program, generically represented as [1276] instructions 1710 in FIG. 90.
  • Two examples serve to further clarify the exemplary use of the above-described technique. [1277]
  • 1) Example A: Error Checking [1278]
  • The technique shown in FIG. 90 may be applied in numerous applications, such as in performing error checks. As mentioned above, a network processor often performs a series of error checks prior to performing a prescribed main processing task. In the IPv4 packet network protocol, for instance, the network processor checks to determine whether the protocol version of the information being processed is equal to 4. The processing device may also determine whether the header of the information being processed is at least five words. The processing device may also determine whether the total length of the packet of information is not greater than the length specified by the MAC layer. [1279]
  • In contrast to the approach described in FIG. 87, the technique shown in FIG. 90 performs the above-described three comparison operations, followed by a single branch instruction based on the accumulative flag that reflects the accumulative outcome of the three comparison operations. Table 45 illustrates the series of instructions used to perform the error check using the technique of FIG. 90. [1280]
    TABLE 45
    Instruction Index   Action
    1                   compare1, overwrite accumulative flag with "not equal" condition
    2                   compare2, set accumulative flag if "less equal," and otherwise maintain accumulative flag if set in prior operation
    3                   compare3, set accumulative flag if "greater than," and otherwise maintain accumulative flag if set in prior operations
    4                   branch if accumulative flag is true to error1_or2_or3
    5                   additional processing
  • The first through third instructions correspond to [1281] steps 1702 to 1706, respectively, of FIG. 90. The accumulative outcome of these three compare operations sets the value of the accumulative flag if any of the error conditions reflected in the three comparison operations hold true. The fourth instruction corresponds to step 1708 in FIG. 90. The indicated additional processing in instruction 5 corresponds to step 1710 of FIG. 90.
  • A comparison of the technique shown in FIG. 90 with the technique shown in FIG. 87 illustrates the merits of the present invention with respect to the known art. For instance, the technique shown in FIG. 87 uses six instructions to accomplish the error checking operation. In contrast, the technique shown in FIG. 90 uses only four instructions to accomplish the error checking. [1282]
  • It will be noted that the technique shown in FIG. 90 provides a single branch instruction when any of the error conditions is present, and hence does not provide branching that is specific to individual error conditions. Nevertheless, these error conditions are relatively rare. Thus, it is preferred to streamline the process that checks for these errors by reducing the number of required branching operations. In the relatively rare event that an error condition is encountered, the processing device can then discriminate the exact cause of the failure in a separate routine without presenting a bottleneck to normal error-free processing. [1283]
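  • In C-like terms, the sequence of Table 45 amounts to folding the three tests into one accumulated predicate followed by a single branch. A sketch, again using hypothetical field names:
    /* Accumulated form: three compares, one branch (cf. Table 45). */
    int err = (hdr->version != 4);         /* compare1: overwrite accumulative flag */
    err |= (hdr->header_words < 5);        /* compare2: OR into accumulative flag   */
    err |= (hdr->total_length > mac_len);  /* compare3: OR into accumulative flag   */
    if (err) goto error1_or2_or3;          /* single branch on the accumulated flag */
    /* ... main processing ... */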
  • 2) Example B: Logical Operations (e.g., AND and OR operations) [1284]
  • The technique shown in FIG. 90 also may streamline the execution of various logical operations, such as various operations that involve AND and OR logical operations. Consider, for example, the case where a program requires branching in the event that the following condition (1) is true:[1285]
  • if (a>=7 AND b<8) then goto label D  (1).
  • In the known technique, testing this condition would require the execution of multiple pairs of compare and branch instructions. In the present technique, the operation may be performed using a series of compare operations followed by a single branch instruction. [1286]
  • More specifically, it should first be noted that condition (1) may be rephrased in the negative using OR logic (e.g., the expression c AND d can be expressed as NOT (NOT c OR NOT d)). With this in mind, the condition (1) can be executed by performing the following series of instructions using the accumulative flag: [1287]
  • cmp.o.lt a, 7 [1288]
  • cmp.ge b, 8 [1289]
  • bc.accumulative0 label D. [1290]
  • The first instruction commands the processing device to compare operand "a" with the [1291] value 7, and then set the accumulative flag if operand "a" is less than 7 (and clear it otherwise). The second instruction commands the processing device to compare operand "b" with the value 8, and then to set the accumulative flag if operand "b" is greater than or equal to 8. It will be noted that these tests are the opposites of those in condition (1) because the instructions execute the negative counterpart of that condition. The third instruction commands the processing device to branch to label D if the final value of the accumulative flag is 0.
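  • A minimal C rendering of this three-instruction sequence, which the truth table below verifies, might read:
    /* Condition (1): if (a >= 7 AND b < 8) goto D, via the negated tests. */
    int sticky = (a < 7);      /* cmp.o.lt a, 7 : overwrite flag           */
    sticky |= (b >= 8);        /* cmp.ge   b, 8 : accumulate into flag     */
    if (sticky == 0) goto D;   /* bc.accumulative0 : branch when flag is 0 */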
  • The following Truth Table 46 illustrates different scenarios depending on the input values of operands “a” and “b”. [1292]
    TABLE 46
                              accumulative flag     accumulative flag
    a >= 7   b < 8   result   after first compare   after second compare
    0        0       0        1                     1
    0        1       0        1                     1
    1        0       0        0                     1
    1        1       1        0                     0
  • A similar, but complementary, series of instructions may be used to implement the condition:[1293]
  • if (a>=7 OR b<8) then goto label D  (2).
  • Namely, the instructions for implementing this condition are as follows. [1294]
  • cmp.o.ge a, 7 [1295]
  • cmp.lt b, 8 [1296]
  • bc.accumulative1 label D. [1297]
  • The first instruction commands the processing device to compare operand “a” with the [1298] value 7, and then set the accumulative flag if operand “a” is equal to or greater than 7 (and clear it otherwise). The second instruction commands the processing device to compare operand “b” with the value 8, and then to set the accumulative flag if the operand “b” is less than 8. The third instruction commands the processing device to branch to label D if the final value of the accumulative flag is 1. It will be noted that there is no need to negate the operations described in the above condition, as a logical OR is being performed in this case (rather than an AND operation).
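  • A corresponding C rendering, verified by the truth table below:
    /* Condition (2): if (a >= 7 OR b < 8) goto D; no negation is needed. */
    int sticky = (a >= 7);     /* cmp.o.ge a, 7 : overwrite flag           */
    sticky |= (b < 8);         /* cmp.lt   b, 8 : accumulate into flag     */
    if (sticky == 1) goto D;   /* bc.accumulative1 : branch when flag is 1 */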
  • Finally, the following Truth Table 47 illustrates different scenarios depending on the input values of operands “a” and “b”. [1299]
    TABLE 47
                              accumulative flag     accumulative flag
    a >= 7   b < 8   result   after first compare   after second compare
    0        0       0        0                     0
    0        1       1        0                     1
    1        0       1        1                     1
    1        1       1        1                     1
  • In typical processors, many instructions can be predicated (made conditional) on any condition code. In the ARM processor, for example, 4 opcode bits are required for this purpose. However, in one implementation of the present invention, instructions can be predicated using only the sticky bit, requiring only two opcode bits (one bit for conditional/unconditional execution and one bit for selecting sticky bit 0 or bit 1). [1300]
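  • One way to picture the saving, purely as an illustration since no instruction encoding is given here, is a two-bit predicate field gating each instruction on the sticky bit, versus a four-bit ARM-style condition field:
    /* Hypothetical predicate encoding: 2 bits suffice when only the sticky
     * bit can gate execution (vs. 4 bits for a full condition-code field). */
    struct predicate_field {
        unsigned conditional : 1;   /* 0 = always execute, 1 = predicated         */
        unsigned sticky_value : 1;  /* execute only when sticky == this bit value */
    };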
  • Although the above-described invention was described in the context of multiple compare instructions followed by a single branch instruction, it has general applicability to other types of processing instructions. Likewise, the present invention can be implemented for any number of compares in combination with any number of AND/OR operations (e.g., (a > 7 AND b == 8) OR c != 9). Generally, the invention may be applied to the generic case where an accumulative flag is set based on whether either a first or a second instruction satisfies its respective prescribed condition. Then, a third instruction performs some other operation that is conditional on the value of the accumulative flag. [1301]
  • In accordance with one embodiment of the present invention, a method for executing machine instructions in a processing device is provided. The method comprises the steps of executing a first instruction, identifying whether an outcome of the execution of the first instruction satisfies a first specified condition, and setting an accumulative flag result which reflects whether the first instruction satisfies the first specified condition. The method further comprises the steps of executing at least a second additional instruction, identifying whether an outcome of the execution of the second instruction satisfies a second specified condition, updating the accumulative flag depending on whether either the first instruction or the second instruction satisfies its respective specified condition, and executing a third instruction based on the value of the accumulative flag subsequent to the execution of the first and second instructions. The first and second instructions, in one embodiment, are compare instructions that each compare a first operand with a second operand. The third instruction, in one embodiment, is a branch instruction which bases its branching determination on the value of the accumulative flag. In another embodiment, the first and second instructions are compare instructions that each compare a first operand with a second operand, and the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag. [1302]
  • In one embodiment, the compare instructions of the above method determine whether two respective error conditions are present, and the branch instruction bases its branching determination on whether either of the two respective error conditions are present, as reflected by the value of the accumulative flag after the second compare instruction is performed. [1303]
  • In accordance with another embodiment of the present invention, a computer readable medium containing program code for execution by a processing device is provided. The medium includes a first instruction for performing a first operation, which, when executed by the processing device, generates a first outcome result, at least a second additional instruction for performing a second operation, which, when executed by the processing device, generates a second outcome result, and at least an additional third instruction for performing a third operation based on an accumulative flag, wherein the accumulative flag represents the logical OR of the first and second outcomes. In one embodiment, the first and second instructions are compare instructions that each compare a first operand with a second operand. In another embodiment, the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag. In yet another embodiment, the first and second instructions are compare instructions that each compare a first operand with a second operand, and the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag. [1304]
  • In one embodiment, the compare instructions determine whether two respective error conditions are present, and the branch instruction bases its branching determination on whether either of the two respective error conditions are present, as reflected by the value of the accumulative flag after the second compare instruction is performed. [1305]
  • In accordance with another embodiment of the present invention, an apparatus for executing machine instructions is provided. The apparatus comprises a storage for storing an accumulative flag, logic for executing instructions and for determining whether the outcomes of the instructions satisfy respective prescribed conditions, logic for setting the accumulative flag to reflect the outcomes of the instructions, wherein the logic for setting the accumulative flag includes logic for determining the value of the accumulative flag based on the logical OR of at least first and second instructions, and wherein the logic for executing instructions also includes logic for executing at least an additional third instruction based on the value of the accumulative flag stored in the storage. In one embodiment, the first and second instructions are compare instructions that each compare a first operand with a second operand. The third instruction can include a branch instruction which bases its branching determination on the value of the accumulative flag. Furthermore, the first and second instructions can include compare instructions that each compare a first operand with a second operand, and wherein the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag. The compare instructions, in one embodiment, determine whether two respective error conditions are present, and the branch instruction bases its branching determination on whether either of the two respective error conditions are present, as reflected by the value of the accumulative flag after the second compare instruction is performed. [1306]
  • In accordance with an additional embodiment of the present invention, an apparatus for executing machine instructions is provided. The apparatus comprises a storage for storing an accumulative flag, logic for executing instructions and for determining whether the outcomes of the instructions satisfy respective prescribed conditions, logic for setting the accumulative flag depending on the outcomes of the executed instructions, wherein the logic for setting the accumulative flag includes logic for determining the value of the accumulative flag based on whether at least one instruction within a group of at least two instructions had an outcome which satisfied its respective prescribed condition, and another storage for storing a program that comprises plural instructions, including: a first instruction for performing a first operation, which, when executed by the processing device, generates a first outcome result; at least a second additional instruction for performing a second operation, which, when executed by the logic for executing, generates a second outcome result; and at least an additional third instruction for performing a third operation based on an accumulative flag. [1307]
  • The first and second instructions, in one embodiment, are compare instructions that each compare a first operand with a second operand. The third instruction can include a branch instruction which bases its branching determination on the value of the accumulative flag. Furthermore, the first and second instructions can include compare instructions that each compare a first operand with a second operand, while the third instruction includes a branch instruction which bases its branching determination on the value of the accumulative flag. [1308]
  • In one embodiment, the compare instructions determine whether two respective error conditions are present, and the branch instruction bases its branching determination on whether either of the two respective error conditions are present, as reflected by the value of the accumulative flag after the second compare instruction is performed. [1309]
  • While the foregoing description includes many details and specificities, it is to be understood that these have been included for purposes of explanation only, and are not to be interpreted as limitations of the present invention. Many modifications to the embodiments described above can be made without departing from the spirit and scope of the invention. [1310]

Claims (20)

What is claimed is:
1. A method for executing machine instructions in a processing device, comprising the steps of:
executing a first instruction;
identifying whether an outcome of the execution of the first instruction satisfies a first specified condition, and setting an accumulative flag result which reflects whether the first instruction satisfies the first specified condition;
executing at least a second additional instruction;
identifying whether an outcome of the execution of the second instruction satisfies a second specified condition, and updating the accumulative flag depending on whether either the first instruction or the second instruction satisfy their respective first and second specified conditions; and
executing a third instruction based on the value of the accumulative flag subsequent to the execution of the first and second instructions.
2. The method of claim 1, wherein the first and second instructions are compare instructions that each compare a first operand with a second operand.
3. The method of claim 1, wherein the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag.
4. The method of claim 1, wherein the first and second instructions are compare instructions that each compare a first operand with a second operand, and wherein the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag.
5. The method of claim 4, wherein the compare instructions determine whether two respective error conditions are present, and the branch instruction bases its branching determination on whether either of the two respective error conditions are present, as reflected by the value of the accumulative flag after the second compare instruction is performed.
6. A computer readable medium containing program code for execution by a processing device, wherein the medium includes:
a first instruction for performing a first operation, which, when executed by the processing device, generates a first outcome result;
at least a second additional instruction for performing a second operation, which, when executed by the processing device, generates a second outcome result; and
at least an additional third instruction for performing a third operation based on an accumulative flag, wherein the accumulative flag represents the logical OR of the first and second outcomes.
7. The medium of claim 6, wherein the first and second instructions are compare instructions that each compare a first operand with a second operand.
8. The medium of claim 6, wherein the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag.
9. The medium of claim 6, wherein the first and second instructions are compare instructions that each compare a first operand with a second operand, and wherein the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag.
10. The medium of claim 9, wherein the compare instructions determine whether two respective error conditions are present, and the branch instruction bases its branching determination on whether either of the two respective error conditions are present, as reflected by the value of the accumulative flag after the second compare instruction is performed.
11. An apparatus for executing machine instructions, comprising:
a storage for storing an accumulative flag;
logic for executing instructions, and for determining whether the outcomes of the instructions satisfy respective prescribed conditions;
logic for setting the accumulative flag to reflect the outcomes of the instructions, wherein the logic for setting the accumulative flag includes logic for determining the value of the accumulative flag based on the logical OR of at least first and second instructions,
wherein the logic for executing instructions also includes logic for executing at least an additional third instruction based on the value of the accumulative flag stored in the storage.
12. The apparatus of claim 11, wherein the first and second instructions are compare instructions that each compare a first operand with a second operand.
13. The apparatus of claim 11, wherein the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag.
14. The apparatus of claim 11, wherein the first and second instructions are compare instructions that each compare a first operand with a second operand, and wherein the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag.
15. The apparatus of claim 14, wherein the compare instructions determine whether two respective error conditions are present, and the branch instruction bases its branching determination on whether either of the two respective error conditions are present, as reflected by the value of the accumulative flag after the second compare instruction is performed.
16. An apparatus for executing machine instructions, comprising:
a storage for storing an accumulative flag;
logic for executing instructions, and for determining whether the outcomes of the instructions satisfy respective prescribed conditions;
logic for setting the accumulative flag depending on the outcomes of the executed instructions, wherein the logic for setting the accumulative flag includes logic for determining the value of the accumulative flag based on whether at least one instruction within a group of at least two instructions had an outcome which satisfied its respective prescribed condition;
another storage for storing a program that comprises plural instructions, including:
a first instruction for performing a first operation, which, when executed by the processing device, generates a first outcome result;
at least a second additional instruction for performing a second operation, which, when executed by the logic for executing, generates a second outcome result; and
at least an additional third instruction for performing a third operation based on an accumulative flag.
17. The apparatus of claim 16, wherein the first and second instructions are compare instructions that each compare a first operand with a second operand.
18. The apparatus of claim 16, wherein the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag.
19. The apparatus of claim 16, wherein the first and second instructions are compare instructions that each compare a first operand with a second operand, and wherein the third instruction is a branch instruction which bases its branching determination on the value of the accumulative flag.
20. The apparatus of claim 19, wherein the compare instructions determine whether two respective error conditions are present, and the branch instruction bases its branching determination on whether either of the two respective error conditions are present, as reflected by the value of the accumulative flag after the second compare instruction is performed.
US10/064,338 2001-07-02 2002-07-02 Communications system using rings architecture Abandoned US20030196076A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/064,338 US20030196076A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US30184301P 2001-07-02 2001-07-02
US33351601P 2001-11-28 2001-11-28
US34723502P 2002-01-14 2002-01-14
US10/064,338 US20030196076A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture

Publications (1)

Publication Number Publication Date
US20030196076A1 true US20030196076A1 (en) 2003-10-16

Family

ID=27404840

Family Applications (15)

Application Number Title Priority Date Filing Date
US10/064,330 Abandoned US20030195989A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,337 Abandoned US20030212830A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,333 Abandoned US20030200342A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,328 Abandoned US20030172189A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,342 Abandoned US20030200343A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,341 Abandoned US20030204636A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,339 Abandoned US20030195991A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,336 Abandoned US20030200339A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,340 Active 2024-12-08 US7103008B2 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,332 Abandoned US20030191862A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,329 Abandoned US20030191861A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,343 Abandoned US20030191863A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,334 Abandoned US20030189940A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,338 Abandoned US20030196076A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,335 Abandoned US20030195990A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture

Family Applications Before (13)

Application Number Title Priority Date Filing Date
US10/064,330 Abandoned US20030195989A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,337 Abandoned US20030212830A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,333 Abandoned US20030200342A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,328 Abandoned US20030172189A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,342 Abandoned US20030200343A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,341 Abandoned US20030204636A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,339 Abandoned US20030195991A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,336 Abandoned US20030200339A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,340 Active 2024-12-08 US7103008B2 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,332 Abandoned US20030191862A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,329 Abandoned US20030191861A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,343 Abandoned US20030191863A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture
US10/064,334 Abandoned US20030189940A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/064,335 Abandoned US20030195990A1 (en) 2001-07-02 2002-07-02 Communications system using rings architecture

Country Status (6)

Country Link
US (15) US20030195989A1 (en)
EP (1) EP1413098A2 (en)
JP (1) JP2005516432A (en)
AU (1) AU2002327187A1 (en)
TW (1) TW589825B (en)
WO (1) WO2003005152A2 (en)

Cited By (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040236876A1 (en) * 2003-05-21 2004-11-25 Kondratiev Vladimir L. Apparatus and method of memory access control for bus masters
US20060026400A1 (en) * 2004-07-27 2006-02-02 Texas Instruments Incorporated Automatic operand load, modify and store
US20060098674A1 (en) * 2004-11-08 2006-05-11 Motoshi Hamasaki Frame transmitting apparatus and frame receiving apparatus
US20080288780A1 (en) * 2004-09-02 2008-11-20 Beukema Bruce L Low-latency data decryption interface
US20090144564A1 (en) * 2004-09-02 2009-06-04 International Business Machines Corporation Data encryption interface for reducing encrypt latency impact on standard traffic
US20100150139A1 (en) * 2008-10-01 2010-06-17 Jeffrey Lawson Telephony Web Event System and Method
US20100241826A1 (en) * 2009-03-17 2010-09-23 Canon Kabushiki Kaisha Data processing apparatus, data processing method and program
US8077974B2 (en) 2006-07-28 2011-12-13 Hewlett-Packard Development Company, L.P. Compact stylus-based input technique for indic scripts
US20120033552A1 (en) * 2009-06-23 2012-02-09 Abel Dasylva Utilizing Betweenness to Determine Forwarding State in a Routed Network
US20130225141A1 (en) * 2012-02-29 2013-08-29 Alcatel-Lucent Usa Inc. System and/or method for using mobile telephones as extensions
US20140289420A1 (en) * 2012-05-09 2014-09-25 Twilio, Inc. System and method for managing media in a distributed communication network
US8938053B2 (en) 2012-10-15 2015-01-20 Twilio, Inc. System and method for triggering on platform usage
US8948356B2 (en) 2012-10-15 2015-02-03 Twilio, Inc. System and method for routing communications
US8995641B2 (en) 2009-03-02 2015-03-31 Twilio, Inc. Method and system for a multitenancy telephone network
US9001666B2 (en) 2013-03-15 2015-04-07 Twilio, Inc. System and method for improving routing in a distributed communication platform
US9037122B2 (en) 2012-02-29 2015-05-19 Alcatel Lucent Fixed line extension for mobile telephones
US9137127B2 (en) 2013-09-17 2015-09-15 Twilio, Inc. System and method for providing communication platform metadata
US9160696B2 (en) 2013-06-19 2015-10-13 Twilio, Inc. System for transforming media resource into destination device compatible messaging format
US9210275B2 (en) 2009-10-07 2015-12-08 Twilio, Inc. System and method for running a multi-module telephony application
US9225840B2 (en) 2013-06-19 2015-12-29 Twilio, Inc. System and method for providing a communication endpoint information service
US9226217B2 (en) 2014-04-17 2015-12-29 Twilio, Inc. System and method for enabling multi-modal communication
US9247062B2 (en) 2012-06-19 2016-01-26 Twilio, Inc. System and method for queuing a communication session
US9253254B2 (en) 2013-01-14 2016-02-02 Twilio, Inc. System and method for offering a multi-partner delegated platform
US9270833B2 (en) 2012-07-24 2016-02-23 Twilio, Inc. Method and system for preventing illicit use of a telephony platform
US9282124B2 (en) 2013-03-14 2016-03-08 Twilio, Inc. System and method for integrating session initiation protocol communication in a telecommunications platform
US9306982B2 (en) 2008-04-02 2016-04-05 Twilio, Inc. System and method for processing media requests during telephony sessions
US9325624B2 (en) 2013-11-12 2016-04-26 Twilio, Inc. System and method for enabling dynamic multi-modal communication
US9338280B2 (en) 2013-06-19 2016-05-10 Twilio, Inc. System and method for managing telephony endpoint inventory
US9338064B2 (en) 2010-06-23 2016-05-10 Twilio, Inc. System and method for managing a computing cluster
US9338018B2 (en) 2013-09-17 2016-05-10 Twilio, Inc. System and method for pricing communication of a telecommunication platform
US9336500B2 (en) 2011-09-21 2016-05-10 Twilio, Inc. System and method for authorizing and connecting application developers and users
US9344573B2 (en) 2014-03-14 2016-05-17 Twilio, Inc. System and method for a work distribution service
US9350642B2 (en) 2012-05-09 2016-05-24 Twilio, Inc. System and method for managing latency in a distributed telephony network
US9398622B2 (en) 2011-05-23 2016-07-19 Twilio, Inc. System and method for connecting a communication to a client
US9455949B2 (en) 2011-02-04 2016-09-27 Twilio, Inc. Method for processing telephony sessions of a network
US9459926B2 (en) 2010-06-23 2016-10-04 Twilio, Inc. System and method for managing a computing cluster
US9459925B2 (en) 2010-06-23 2016-10-04 Twilio, Inc. System and method for managing a computing cluster
US9483328B2 (en) 2013-07-19 2016-11-01 Twilio, Inc. System and method for delivering application content
US9495227B2 (en) 2012-02-10 2016-11-15 Twilio, Inc. System and method for managing concurrent events
US9553799B2 (en) 2013-11-12 2017-01-24 Twilio, Inc. System and method for client communication in a distributed telephony network
US9588974B2 (en) 2014-07-07 2017-03-07 Twilio, Inc. Method and system for applying data retention policies in a computing platform
US9590849B2 (en) 2010-06-23 2017-03-07 Twilio, Inc. System and method for managing a computing cluster
US9596274B2 (en) 2008-04-02 2017-03-14 Twilio, Inc. System and method for processing telephony sessions
US9602586B2 (en) 2012-05-09 2017-03-21 Twilio, Inc. System and method for managing media in a distributed communication network
US9648006B2 (en) 2011-05-23 2017-05-09 Twilio, Inc. System and method for communicating with a client application
US9672185B2 (en) 2013-09-27 2017-06-06 International Business Machines Corporation Method and system for enumerating digital circuits in a system-on-a-chip (SOC)
WO2017136101A1 (en) * 2016-02-04 2017-08-10 Intel Corporation Processor extensions to protect stacks during ring transitions
US9805399B2 (en) 2015-02-03 2017-10-31 Twilio, Inc. System and method for a media intelligence platform
US9811398B2 (en) 2013-09-17 2017-11-07 Twilio, Inc. System and method for tagging and tracking events of an application platform
US9906607B2 (en) 2014-10-21 2018-02-27 Twilio, Inc. System and method for providing a micro-services communication platform
US9942394B2 (en) 2011-09-21 2018-04-10 Twilio, Inc. System and method for determining and communicating presence information
US9948703B2 (en) 2015-05-14 2018-04-17 Twilio, Inc. System and method for signaling through data storage
US9967224B2 (en) 2010-06-25 2018-05-08 Twilio, Inc. System and method for enabling real-time eventing
US10063713B2 (en) 2016-05-23 2018-08-28 Twilio Inc. System and method for programmatic device connectivity
US10116733B2 (en) 2014-07-07 2018-10-30 Twilio, Inc. System and method for collecting feedback in a multi-tenant communication platform
US10165015B2 (en) 2011-05-23 2018-12-25 Twilio Inc. System and method for real-time communication by using a client application communication protocol
US10212237B2 (en) 2014-07-07 2019-02-19 Twilio, Inc. System and method for managing media and signaling in a communication platform
US10394556B2 (en) 2015-12-20 2019-08-27 Intel Corporation Hardware apparatuses and methods to switch shadow stack pointers
US10419891B2 (en) 2015-05-14 2019-09-17 Twilio, Inc. System and method for communicating through multiple endpoints
US10659349B2 (en) 2016-02-04 2020-05-19 Twilio Inc. Systems and methods for providing secure network exchanged for a multitenant virtual private cloud
US10686902B2 (en) 2016-05-23 2020-06-16 Twilio Inc. System and method for a multi-channel notification service
US10757200B2 (en) 2014-07-07 2020-08-25 Twilio Inc. System and method for managing conferencing in a distributed communication network
US11012915B2 (en) * 2018-03-26 2021-05-18 Qualcomm Incorporated Backpressure signaling for wireless communications
US11637934B2 (en) 2010-06-23 2023-04-25 Twilio Inc. System and method for monitoring account usage on a platform
US11656805B2 (en) 2015-06-26 2023-05-23 Intel Corporation Processors, methods, systems, and instructions to protect shadow stacks
US11936609B2 (en) 2021-04-23 2024-03-19 Twilio Inc. System and method for enabling real-time eventing

Families Citing this family (315)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7436815B2 (en) * 2000-02-25 2008-10-14 Telica, Inc. Switching system and method having low, deterministic latency
US7436839B1 (en) * 2001-10-09 2008-10-14 At&T Delaware Intellectual Property, Inc. Systems and methods for providing services through an integrated digital network
US7596139B2 (en) 2000-11-17 2009-09-29 Foundry Networks, Inc. Backplane interface adapter with error control and redundant fabric
GB2370380B (en) 2000-12-19 2003-12-31 Picochip Designs Ltd Processor architecture
US6801994B2 (en) * 2000-12-20 2004-10-05 Microsoft Corporation Software management systems and methods for automotive computing devices
US20030167348A1 (en) * 2001-07-02 2003-09-04 Globespanvirata, Inc. Communications system using rings architecture
US20030039256A1 (en) * 2001-08-24 2003-02-27 Klas Carlberg Distribution of connection handling in a processor cluster
DE10147419A1 (en) * 2001-09-26 2003-04-24 Siemens Ag Method for creating a dynamic address table for a coupling node in a data network and method for transmitting a data telegram
US7227845B2 (en) * 2001-12-11 2007-06-05 Motorola, Inc. Method and apparatus for enabling a communication resource reset
US7110411B2 (en) * 2002-03-25 2006-09-19 Erlang Technology, Inc. Method and apparatus for WFQ scheduling using a plurality of scheduling queues to provide fairness, high scalability, and low computation complexity
US7400722B2 (en) * 2002-03-28 2008-07-15 Broadcom Corporation Methods and apparatus for performing hash operations in a cryptography accelerator
US7672452B2 (en) * 2002-05-03 2010-03-02 General Instrument Corporation Secure scan
US7187687B1 (en) 2002-05-06 2007-03-06 Foundry Networks, Inc. Pipeline method and system for switching packets
US20120155466A1 (en) 2002-05-06 2012-06-21 Ian Edward Davis Method and apparatus for efficiently processing data packets in a computer network
US7468975B1 (en) 2002-05-06 2008-12-23 Foundry Networks, Inc. Flexible method for processing data packets in a network routing system for enhanced efficiency and monitoring capability
US20030212937A1 (en) * 2002-05-07 2003-11-13 Marc Todd System and method for exposing state based logic signals within an electronics system over an existing network conduit
JP2004048658A (en) * 2002-05-17 2004-02-12 Yazaki Corp Optical communication system, signal relay apparatus and optical communication connector
US20030229685A1 (en) * 2002-06-07 2003-12-11 Jamie Twidale Hardware abstraction interfacing system and method
ES2246484T3 (en) * 2002-06-11 2006-02-16 Siemens Aktiengesellschaft PROCEDURE AND ACCESS MULTIPLEXER FOR QUICK ACCESS TO DATA NETWORKS.
US7453870B2 (en) * 2002-06-12 2008-11-18 Intel Corporation Backplane for switch fabric
US9003376B2 (en) * 2002-08-09 2015-04-07 Texas Instruments Incorporated Software breakpoints with tailoring for multiple processor shared memory or multiple thread systems
AU2003266876A1 (en) * 2002-09-09 2004-03-29 Nortel Networks Limited Combined layer-2 and layer-3 virtual private network
US7924828B2 (en) 2002-10-08 2011-04-12 Netlogic Microsystems, Inc. Advanced processor with mechanism for fast packet queuing operations
US7346757B2 (en) * 2002-10-08 2008-03-18 Rmi Corporation Advanced processor translation lookaside buffer management in a multithreaded system
US8478811B2 (en) 2002-10-08 2013-07-02 Netlogic Microsystems, Inc. Advanced processor with credit based scheme for optimal packet flow in a multi-processor system on a chip
US7467243B2 (en) * 2002-10-08 2008-12-16 Rmi Corporation Advanced processor with scheme for optimal packet flow in a multi-processor system on a chip
US8176298B2 (en) 2002-10-08 2012-05-08 Netlogic Microsystems, Inc. Multi-core multi-threaded processing systems with instruction reordering in an in-order pipeline
US7334086B2 (en) * 2002-10-08 2008-02-19 Rmi Corporation Advanced processor with system on a chip interconnect technology
US7984268B2 (en) 2002-10-08 2011-07-19 Netlogic Microsystems, Inc. Advanced processor scheduling in a multithreaded system
US7961723B2 (en) 2002-10-08 2011-06-14 Netlogic Microsystems, Inc. Advanced processor with mechanism for enforcing ordering between information sent on two independent networks
US9088474B2 (en) 2002-10-08 2015-07-21 Broadcom Corporation Advanced processor with interfacing messaging network to a CPU
US20050044324A1 (en) * 2002-10-08 2005-02-24 Abbas Rashid Advanced processor with mechanism for maximizing resource usage in an in-order pipeline with multiple threads
US20050033831A1 (en) * 2002-10-08 2005-02-10 Abbas Rashid Advanced processor with a thread aware return address stack optimally used across active threads
US7461213B2 (en) * 2002-10-08 2008-12-02 Rmi Corporation Advanced processor system using request, data, snoop, and response rings
US8037224B2 (en) 2002-10-08 2011-10-11 Netlogic Microsystems, Inc. Delegating network processor operations to star topology serial bus interfaces
US7627721B2 (en) * 2002-10-08 2009-12-01 Rmi Corporation Advanced processor with cache coherency
US8015567B2 (en) 2002-10-08 2011-09-06 Netlogic Microsystems, Inc. Advanced processor with mechanism for packet distribution at high line rate
US20040076204A1 (en) * 2002-10-16 2004-04-22 Kruschwitz Brian E. External cavity organic laser
US20040103086A1 (en) * 2002-11-26 2004-05-27 Bapiraju Vinnakota Data structure traversal instructions for packet processing
US7043579B2 (en) * 2002-12-05 2006-05-09 International Business Machines Corporation Ring-topology based multiprocessor data access bus
US7379429B1 (en) * 2002-12-20 2008-05-27 Foundry Networks, Inc. Optimizations and enhancements to the IEEE RSTP 802.1w implementation
AU2003300303A1 (en) * 2002-12-31 2004-07-29 Globespanvirata Incorporated System and method for providing quality of service in asynchronous transfer mode cell transmission
US7512717B2 (en) * 2003-01-21 2009-03-31 Nextio Inc. Fibre channel controller shareable by a plurality of operating system domains within a load-store architecture
US7174413B2 (en) * 2003-01-21 2007-02-06 Nextio Inc. Switching apparatus and method for providing shared I/O within a load-store fabric
US7493416B2 (en) * 2003-01-21 2009-02-17 Nextio Inc. Fibre channel controller shareable by a plurality of operating system domains within a load-store architecture
US7917658B2 (en) 2003-01-21 2011-03-29 Emulex Design And Manufacturing Corporation Switching apparatus and method for link initialization in a shared I/O environment
US7188209B2 (en) * 2003-04-18 2007-03-06 Nextio, Inc. Apparatus and method for sharing I/O endpoints within a load store fabric by encapsulation of domain information in transaction layer packets
US7046668B2 (en) * 2003-01-21 2006-05-16 Pettey Christopher J Method and apparatus for shared I/O in a load/store fabric
US7219183B2 (en) * 2003-01-21 2007-05-15 Nextio, Inc. Switching apparatus and method for providing shared I/O within a load-store fabric
US8102843B2 (en) 2003-01-21 2012-01-24 Emulex Design And Manufacturing Corporation Switching apparatus and method for providing shared I/O within a load-store fabric
US7502370B2 (en) * 2003-01-21 2009-03-10 Nextio Inc. Network controller for obtaining a plurality of network port identifiers in response to load-store transactions from a corresponding plurality of operating system domains within a load-store architecture
US7617333B2 (en) 2003-01-21 2009-11-10 Nextio Inc. Fibre channel controller shareable by a plurality of operating system domains within a load-store architecture
US7664909B2 (en) * 2003-04-18 2010-02-16 Nextio, Inc. Method and apparatus for a shared I/O serial ATA controller
US8346884B2 (en) 2003-01-21 2013-01-01 Nextio Inc. Method and apparatus for a shared I/O network interface controller
US7836211B2 (en) 2003-01-21 2010-11-16 Emulex Design And Manufacturing Corporation Shared input/output load-store architecture
US7953074B2 (en) 2003-01-21 2011-05-31 Emulex Design And Manufacturing Corporation Apparatus and method for port polarity initialization in a shared I/O device
US7457906B2 (en) 2003-01-21 2008-11-25 Nextio, Inc. Method and apparatus for shared I/O in a load/store fabric
US7698483B2 (en) 2003-01-21 2010-04-13 Nextio, Inc. Switching apparatus and method for link initialization in a shared I/O environment
US7103064B2 (en) 2003-01-21 2006-09-05 Nextio Inc. Method and apparatus for shared I/O in a load/store fabric
US8032659B2 (en) * 2003-01-21 2011-10-04 Nextio Inc. Method and apparatus for a shared I/O network interface controller
DE10311250B4 (en) * 2003-03-14 2020-02-06 Robert Bosch Gmbh Microprocessor system and method for protecting the system from the replacement of components
US7793005B1 (en) * 2003-04-11 2010-09-07 Zilker Labs, Inc. Power management system using a multi-master multi-slave bus and multi-function point-of-load regulators
US7330484B2 (en) * 2003-04-23 2008-02-12 Sun Microsystems, Inc. Method and system for transmitting packet chains
US20040223497A1 (en) * 2003-05-08 2004-11-11 Onvoy Inc. Communications network with converged services
US6901072B1 (en) 2003-05-15 2005-05-31 Foundry Networks, Inc. System and method for high speed packet transmission implementing dual transmit and receive pipelines
US7114042B2 (en) * 2003-05-22 2006-09-26 International Business Machines Corporation Method to provide atomic update primitives in an asymmetric heterogeneous multiprocessor environment
US6934612B2 (en) * 2003-06-12 2005-08-23 Motorola, Inc. Vehicle network and communication method in a vehicle network
US7509558B2 (en) * 2003-07-09 2009-03-24 Thomson Licensing Error correction method for reed-solomon product code
US9861346B2 (en) 2003-07-14 2018-01-09 W. L. Gore & Associates, Inc. Patent foramen ovale (PFO) closure device with linearly elongating petals
US7526350B2 (en) 2003-08-06 2009-04-28 Creative Technology Ltd Method and device to process digital media streams
EP1511227A1 (en) * 2003-08-27 2005-03-02 Alcatel Ring network system
US7925768B1 (en) * 2003-10-10 2011-04-12 Ciena Corporation Method and network for adapting a transaction language for network elements
EP1527777A1 (en) * 2003-10-31 2005-05-04 MERCK PATENT GmbH Composition with antioxidant properties comprising an ester of ascorbic acid and a benzoyl rest
CN1322705C (en) * 2003-12-23 2007-06-20 华为技术有限公司 A method of data plane reset for forwarding equipment
US7716314B1 (en) 2003-12-30 2010-05-11 Dinochip, Inc. Traffic management in digital signal processor
US6920586B1 (en) * 2004-01-23 2005-07-19 Freescale Semiconductor, Inc. Real-time debug support for a DMA device and method thereof
JP3835459B2 (en) * 2004-03-09 2006-10-18 セイコーエプソン株式会社 Data transfer control device and electronic device
US7474661B2 (en) * 2004-03-26 2009-01-06 Samsung Electronics Co., Ltd. Apparatus and method for distributing forwarding table lookup operations among a plurality of microengines in a high-speed routing node
US7817659B2 (en) 2004-03-26 2010-10-19 Foundry Networks, Llc Method and apparatus for aggregating input data streams
US20050220073A1 (en) * 2004-03-30 2005-10-06 Asicexpert, Llc. System and method for transmitting signals
US7525958B2 (en) * 2004-04-08 2009-04-28 Intel Corporation Apparatus and method for two-stage packet classification using most specific filter matching and transport level sharing
US8730961B1 (en) 2004-04-26 2014-05-20 Foundry Networks, Llc System and method for optimizing router lookup
US7957428B2 (en) * 2004-05-21 2011-06-07 Intel Corporation Methods and apparatuses to effect a variable-width link
US7539141B2 (en) * 2004-05-28 2009-05-26 Intel Corporation Method and apparatus for synchronous unbuffered flow control of packets on a ring interconnect
US7551564B2 (en) * 2004-05-28 2009-06-23 Intel Corporation Flow control method and apparatus for single packet arrival on a bidirectional ring interconnect
JP4397292B2 (en) * 2004-07-09 2010-01-13 富士通株式会社 Control packet loop prevention method and bridge device using the same
US20060045123A1 (en) * 2004-07-14 2006-03-02 Sundar Gopalan Method of forming a communication system, a communication card with increased bandwidth, and a method of forming a communication device
US20070266370A1 (en) * 2004-09-16 2007-11-15 Myers Glenford J Data Plane Technology Including Packet Processing for Network Processors
US7523351B2 (en) * 2004-09-27 2009-04-21 Ceva D.S.P. Ltd System and method for providing mutual breakpoint capabilities in computing device
US7330457B2 (en) * 2004-10-07 2008-02-12 Polytechnic University Cooperative wireless communications
US7363574B1 (en) * 2004-10-12 2008-04-22 Nortel Networks Limited Method and system for parallel CRC calculation
US7657703B1 (en) 2004-10-29 2010-02-02 Foundry Networks, Inc. Double density content addressable memory (CAM) lookup scheme
US20060168379A1 (en) * 2004-12-13 2006-07-27 Tim Frodsham Method, system, and apparatus for link latency management
US7706262B2 (en) * 2005-09-30 2010-04-27 Alcatel-Lucent Usa Inc. Identifying data and/or control packets in wireless communication
JP2006185000A (en) * 2004-12-27 2006-07-13 Hitachi Ltd Storage system
US20060140191A1 (en) * 2004-12-29 2006-06-29 Naik Uday R Multi-level scheduling using single bit vector
US20060168420A1 (en) * 2005-01-27 2006-07-27 Innovasic, Inc. Microcontroller cache memory
US7526579B2 (en) * 2005-01-27 2009-04-28 Innovasic, Inc. Configurable input/output interface for an application specific product
WO2006081094A2 (en) * 2005-01-27 2006-08-03 Innovasic, Inc. Deterministic microcontroller
US7680967B2 (en) * 2005-01-27 2010-03-16 Innovasic, Inc. Configurable application specific standard product with configurable I/O
US7562207B2 (en) * 2005-01-27 2009-07-14 Innovasic, Inc. Deterministic microcontroller with context manager
US7516311B2 (en) * 2005-01-27 2009-04-07 Innovasic, Inc. Deterministic microcontroller context arrangement
US7406550B2 (en) 2005-01-27 2008-07-29 Innovasic, Inc. Deterministic microcontroller with configurable input/output interface
US20060168421A1 (en) * 2005-01-27 2006-07-27 Innovasic, Inc. Method of providing microcontroller cache memory
US7676646B2 (en) * 2005-03-02 2010-03-09 Cisco Technology, Inc. Packet processor with wide register set architecture
US7792108B2 (en) * 2005-04-08 2010-09-07 Interdigital Technology Corporation Method and apparatus for transmitting concatenated frames in a wireless communication system
US7583662B1 (en) * 2005-04-12 2009-09-01 Tp Lab, Inc. Voice virtual private network
DE502005001065D1 (en) * 2005-05-02 2007-08-30 Accemic Gmbh & Co Kg Method and device for emulation of a programmable unit
US7548842B2 (en) * 2005-06-02 2009-06-16 Eve S.A. Scalable system for simulation and emulation of electronic circuits using asymmetrical evaluation and canvassing instruction processors
US20060277020A1 (en) * 2005-06-02 2006-12-07 Tharas Systems A reconfigurable system for verification of electronic circuits using high-speed serial links to connect asymmetrical evaluation and canvassing instruction processors
US7539151B2 (en) * 2005-06-30 2009-05-26 Intel Corporation Channel selection for mesh networks having nodes with multiple radios
US7487206B2 (en) * 2005-07-15 2009-02-03 International Business Machines Corporation Method for providing load diffusion in data stream correlations
EP1762943B1 (en) * 2005-09-09 2014-07-09 STMicroelectronics Srl Chip-to-chip communication system
WO2007033442A1 (en) 2005-09-21 2007-03-29 Interuniversitair Microelektronica Centrum (Imec) System with distributed analogue resources
US20070140215A1 (en) * 2005-12-15 2007-06-21 Tingting Lu Methods and systems for providing voice network services using regulated and unregulated telecommunications infrastructures
KR100730279B1 (en) * 2005-12-16 2007-06-19 삼성전자주식회사 Computer chip for connecting devices on chip utilizing star-torus topology
ES2327428T3 (en) 2005-12-23 2009-10-29 Alcatel Lucent Resource admission control for customer-triggered and network-triggered reservation requests.
US8448162B2 (en) 2005-12-28 2013-05-21 Foundry Networks, Llc Hitless software upgrades
US8224638B1 (en) 2006-01-31 2012-07-17 Xilinx, Inc. Managing programmable device configuration
US7636653B1 (en) * 2006-01-31 2009-12-22 Xilinx, Inc. Point-to-point ethernet hardware co-simulation interface
US7991139B2 (en) * 2006-02-13 2011-08-02 At&T Intellectual Property I, L.P. Methods and apparatus to limit ring trees in voice over internet protocol networks
US20070211702A1 (en) * 2006-03-08 2007-09-13 Doradla Anil K Methods and apparatus to perform parallel ringing across communication networks
US8994700B2 (en) 2006-03-23 2015-03-31 Mark J. Foster Artifact-free transitions between dual display controllers
US7548937B2 (en) * 2006-05-04 2009-06-16 International Business Machines Corporation System and method for scalable processing of multi-way data stream correlations
US20070271450A1 (en) * 2006-05-17 2007-11-22 Doshi Kshitij A Method and system for enhanced thread synchronization and coordination
US7856012B2 (en) * 2006-06-16 2010-12-21 Harris Corporation System and methods for generic data transparent rules to support quality of service
KR100819271B1 (en) * 2006-06-30 2008-04-03 삼성전자주식회사 Packet switch device and bandwidth control method thereof
US9143585B2 (en) * 2006-07-07 2015-09-22 Wi-Lan Inc. Method and system for generic multiprotocol convergence over wireless air interface
US7617421B2 (en) * 2006-07-27 2009-11-10 Sun Microsystems, Inc. Method and apparatus for reporting failure conditions during transactional execution
US7787491B2 (en) * 2006-08-25 2010-08-31 Broadcom Corporation Method and system for synchronizable E-VSB enhanced data interleaving and data expansion
KR100819104B1 (en) * 2006-09-07 2008-04-03 삼성전자주식회사 Circuit for parallel bit test and method for parallel bit test by the same
US7610517B2 (en) 2006-09-14 2009-10-27 Innovasic, Inc. Microprocessor with trace functionality
US8479201B2 (en) * 2006-09-18 2013-07-02 Innovasic, Inc. Processor with hardware solution for priority inversion
US8238255B2 (en) 2006-11-22 2012-08-07 Foundry Networks, Llc Recovering from failures without impact on data traffic in a shared bus architecture
US8130662B1 (en) * 2006-12-31 2012-03-06 At&T Intellectual Property Ii, L.P. Method and apparatus for providing transcoding in a network
US20090279441A1 (en) 2007-01-11 2009-11-12 Foundry Networks, Inc. Techniques for transmitting failure detection protocol packets
US7796594B2 (en) * 2007-02-14 2010-09-14 Marvell Semiconductor, Inc. Logical bridging system and method
JP4933932B2 (en) 2007-03-23 2012-05-16 ソニー株式会社 Information processing system, information processing apparatus, information processing method, and program
US8037399B2 (en) * 2007-07-18 2011-10-11 Foundry Networks, Llc Techniques for segmented CRC design in high speed networks
US8271859B2 (en) 2007-07-18 2012-09-18 Foundry Networks Llc Segmented CRC design in high speed networks
FR2921507B1 (en) * 2007-09-26 2011-04-15 Arteris Electronic memory device
US8149839B1 (en) 2007-09-26 2012-04-03 Foundry Networks, Llc Selection of trunk ports and paths using rotation
EP2043313B1 (en) * 2007-09-28 2013-08-14 Alcatel Lucent Circuit emulation service method and telecommunication system for implementing the method
US7916728B1 (en) 2007-09-28 2011-03-29 F5 Networks, Inc. Lockless atomic table update
GB2454865B (en) 2007-11-05 2012-06-13 Picochip Designs Ltd Power control
WO2009070280A1 (en) * 2007-11-26 2009-06-04 One Laptop Per Child Association, Inc. Method and apparatus for maintaining connectivity in a network
US9013999B1 (en) * 2008-01-02 2015-04-21 Marvell International Ltd. Method and apparatus for egress jitter pacer
US9596324B2 (en) 2008-02-08 2017-03-14 Broadcom Corporation System and method for parsing and allocating a plurality of packets to processor core threads
GB2457310B (en) * 2008-02-11 2012-03-21 Picochip Designs Ltd Signal routing in processor arrays
US20130165967A1 (en) 2008-03-07 2013-06-27 W.L. Gore & Associates, Inc. Heart occlusion devices
US9584446B2 (en) * 2008-03-18 2017-02-28 Vmware, Inc. Memory buffer management method and system having multiple receive ring buffers
JP5326314B2 (en) 2008-03-21 2013-10-30 富士通株式会社 Processor and information processing device
CN101577662B (en) 2008-05-05 2012-04-04 华为技术有限公司 Method and device for longest-prefix matching based on a tree-form data structure
WO2009136402A2 (en) * 2008-05-07 2009-11-12 Cosmologic Ltd. Register file system and method thereof for enabling a substantially direct memory access
US7971041B2 (en) * 2008-05-29 2011-06-28 Advanced Micro Devices, Inc. Method and system for register management
CN102119397A (en) * 2008-06-09 2011-07-06 佳售乐公司 Systems and methods facilitating mobile retail environments
US8306036B1 (en) 2008-06-20 2012-11-06 F5 Networks, Inc. Methods and systems for hierarchical resource allocation through bookmark allocation
US8239597B2 (en) * 2008-07-18 2012-08-07 Intersil Americas Inc. Device-to-device communication bus for distributed power management
US8120203B2 (en) * 2008-07-18 2012-02-21 Intersil Americas Inc. Intelligent management of current sharing group
US8120205B2 (en) * 2008-07-18 2012-02-21 Zilker Labs, Inc. Adding and dropping phases in current sharing
US8237423B2 (en) * 2008-07-18 2012-08-07 Intersil Americas Inc. Active droop current sharing
US8769681B1 (en) 2008-08-11 2014-07-01 F5 Networks, Inc. Methods and system for DMA based distributed denial of service protection
US8122239B1 (en) * 2008-09-11 2012-02-21 Xilinx, Inc. Method and apparatus for initializing a system configured in a programmable logic device
US8385215B1 (en) 2008-11-13 2013-02-26 Cisco Technology, Inc. System and method for providing testing in an Ethernet network environment
US8447884B1 (en) 2008-12-01 2013-05-21 F5 Networks, Inc. Methods for mapping virtual addresses to physical addresses in a network device and systems thereof
US20100135661A1 (en) * 2008-12-01 2010-06-03 Electronics And Telecommunications Research Institute Ethernet-based next generation optical transport network apparatus and traffic grooming method thereof
US20100166173A1 (en) * 2008-12-31 2010-07-01 Xun Yang Subscriber line interface circuitry with integrated serial interfaces
US8880696B1 (en) 2009-01-16 2014-11-04 F5 Networks, Inc. Methods for sharing bandwidth across a packetized bus and systems thereof
US9152483B2 (en) 2009-01-16 2015-10-06 F5 Networks, Inc. Network devices with multiple fully isolated and independently resettable direct memory access channels and methods thereof
US8112491B1 (en) 2009-01-16 2012-02-07 F5 Networks, Inc. Methods and systems for providing direct DMA
US8880632B1 (en) 2009-01-16 2014-11-04 F5 Networks, Inc. Method and apparatus for performing multiple DMA channel based network quality of service
US8103809B1 (en) 2009-01-16 2012-01-24 F5 Networks, Inc. Network devices with multiple direct memory access channels and methods thereof
EP2394378A1 (en) * 2009-02-03 2011-12-14 Corning Cable Systems LLC Optical fiber-based distributed antenna systems, components, and related methods for monitoring and configuring thereof
US9673904B2 (en) 2009-02-03 2017-06-06 Corning Optical Communications LLC Optical fiber-based distributed antenna systems, components, and related methods for calibration thereof
US8139668B2 (en) * 2009-03-31 2012-03-20 Mitsubishi Electric Research Laboratories, Inc. Unified STTC encoder for WAVE transceivers
GB2470037B (en) 2009-05-07 2013-07-10 Picochip Designs Ltd Methods and devices for reducing interference in an uplink
US8090901B2 (en) 2009-05-14 2012-01-03 Brocade Communications Systems, Inc. TCAM management approach that minimizes movements
GB2470891B (en) 2009-06-05 2013-11-27 Picochip Designs Ltd A method and device in a communication network
GB2470771B (en) 2009-06-05 2012-07-18 Picochip Designs Ltd A method and device in a communication network
US9636094B2 (en) 2009-06-22 2017-05-02 W. L. Gore & Associates, Inc. Sealing device and delivery system
US20120029556A1 (en) 2009-06-22 2012-02-02 Masters Steven J Sealing device and delivery system
JP5460143B2 (en) * 2009-06-29 2014-04-02 キヤノン株式会社 Data processing apparatus, data processing method and program
US8312088B2 (en) * 2009-07-27 2012-11-13 Sandisk Il Ltd. Device identifier selection
US8392614B2 (en) 2009-07-27 2013-03-05 Sandisk Il Ltd. Device identifier selection
US8214105B2 (en) * 2009-08-21 2012-07-03 Metra Electronics Corporation Methods and systems for automatic detection of steering wheel control signals
US8599850B2 (en) 2009-09-21 2013-12-03 Brocade Communications Systems, Inc. Provisioning single or multistage networks using ethernet service instances (ESIs)
GB2474071B (en) 2009-10-05 2013-08-07 Picochip Designs Ltd Femtocell base station
CN102474449B (en) * 2009-11-02 2016-05-04 马维尔国际贸易有限公司 Switch based on virtual interface and method
US9313047B2 (en) 2009-11-06 2016-04-12 F5 Networks, Inc. Handling high throughput and low latency network data packets in a traffic management device
KR20110064153A (en) * 2009-12-07 2011-06-15 삼성전자주식회사 Metallic organic precursor, method for preparing the same, and method for forming conductive metal layer or pattern
JP5454255B2 (en) * 2010-03-16 2014-03-26 ヤマハ株式会社 Acoustic signal processing apparatus and acoustic signal processing system
CN102200977B (en) * 2010-03-23 2014-10-29 国际商业机器公司 Method and system for extending database table under multi-tenant environment
CZ303038B6 (en) * 2010-04-06 2012-03-07 Vysoké ucení technické v Brne Data distribution system using a loop and loop distribution block for making the same
US9268664B2 (en) * 2010-04-06 2016-02-23 Paypal, Inc. Method and system for synchronous and asynchronous monitoring
US9755947B2 (en) * 2010-05-18 2017-09-05 Intel Corporation Hierarchical self-organizing classification processing in a network switch
WO2011156746A2 (en) * 2010-06-11 2011-12-15 California Institute Of Technology Systems and methods for rapid processing and storage of data
US8332546B2 (en) * 2010-07-20 2012-12-11 Lsi Corporation Fully asynchronous direct memory access controller and processor work
US8964742B1 (en) 2010-07-28 2015-02-24 Marvell Israel (M.I.S.L) Ltd. Linked list profiling and updating
GB2482869B (en) 2010-08-16 2013-11-06 Picochip Designs Ltd Femtocell access control
US8554851B2 (en) * 2010-09-24 2013-10-08 Intel Corporation Apparatus, system, and methods for facilitating one-way ordering of messages
US8909716B2 (en) 2010-09-28 2014-12-09 International Business Machines Corporation Administering truncated receive functions in a parallel messaging interface
US9569398B2 (en) 2010-09-28 2017-02-14 International Business Machines Corporation Routing data communications packets in a parallel computer
US8744920B2 (en) 2010-10-05 2014-06-03 Guestlogix, Inc. Systems and methods for integration of travel and related services and operations
US9252874B2 (en) 2010-10-13 2016-02-02 Ccs Technology, Inc Power management for remote antenna units in distributed antenna systems
US8769500B2 (en) * 2010-10-29 2014-07-01 Fujitsu Limited Node computation initialization technique for efficient parallelization of software analysis in a distributed computing environment
US9052974B2 (en) 2010-11-05 2015-06-09 International Business Machines Corporation Fencing data transfers in a parallel active messaging interface of a parallel computer
US8527672B2 (en) 2010-11-05 2013-09-03 International Business Machines Corporation Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer
US9069631B2 (en) 2010-11-05 2015-06-30 International Business Machines Corporation Fencing data transfers in a parallel active messaging interface of a parallel computer
US9075759B2 (en) 2010-11-05 2015-07-07 International Business Machines Corporation Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer
US8949453B2 (en) 2010-11-30 2015-02-03 International Business Machines Corporation Data communications in a parallel active messaging interface of a parallel computer
US8484658B2 (en) 2010-12-03 2013-07-09 International Business Machines Corporation Data communications in a parallel active messaging interface of a parallel computer
US8490112B2 (en) 2010-12-03 2013-07-16 International Business Machines Corporation Data communications for a collective operation in a parallel active messaging interface of a parallel computer
US8572629B2 (en) 2010-12-09 2013-10-29 International Business Machines Corporation Data communications in a parallel active messaging interface of a parallel computer
US8650262B2 (en) * 2010-12-09 2014-02-11 International Business Machines Corporation Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer
US8732229B2 (en) 2011-01-06 2014-05-20 International Business Machines Corporation Completion processing for data communications instructions
US8775531B2 (en) 2011-01-06 2014-07-08 International Business Machines Corporation Completion processing for data communications instructions
US8584141B2 (en) 2011-01-17 2013-11-12 International Business Machines Corporation Data communications in a parallel active messaging interface of a parallel computer
US8892850B2 (en) 2011-01-17 2014-11-18 International Business Machines Corporation Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer
US10135831B2 (en) 2011-01-28 2018-11-20 F5 Networks, Inc. System and method for combining an access control system with a traffic management system
JPWO2012108411A1 (en) 2011-02-10 2014-07-03 日本電気株式会社 Encoding / decoding processor and wireless communication apparatus
US8825983B2 (en) 2011-02-15 2014-09-02 International Business Machines Corporation Data communications in a parallel active messaging interface of a parallel computer
US8918791B1 (en) * 2011-03-10 2014-12-23 Applied Micro Circuits Corporation Method and system for queuing a request by a processor to access a shared resource and granting access in accordance with an embedded lock ID
US9003096B2 (en) 2011-03-16 2015-04-07 Texas Instruments Incorporated Serial interface
WO2012132180A1 (en) * 2011-03-29 2012-10-04 パナソニック株式会社 Transfer control device, integrated circuit thereof, transfer control method, and transfer control system
US8725923B1 (en) * 2011-03-31 2014-05-13 Emc Corporation BMC-based communication system
GB2489716B (en) 2011-04-05 2015-06-24 Intel Corp Multimode base system
GB2489919B (en) 2011-04-05 2018-02-14 Intel Corp Filter
US9026434B2 (en) * 2011-04-11 2015-05-05 Samsung Electronics Co., Ltd. Frame erasure concealment for a multi-rate speech and audio codec
EP2702780A4 (en) 2011-04-29 2014-11-12 Corning Cable Sys Llc Systems, methods, and devices for increasing radio frequency (rf) power in distributed antenna systems
GB2491098B (en) 2011-05-16 2015-05-20 Intel Corp Accessing a base station
CN102323786B (en) * 2011-07-01 2013-06-19 广西工学院 Timer device comprising an advanced RISC machine (ARM) and a field-programmable gate array (FPGA), and implementation method thereof
US8495654B2 (en) 2011-11-07 2013-07-23 International Business Machines Corporation Intranode data communications in a parallel computer
US8528004B2 (en) 2011-11-07 2013-09-03 International Business Machines Corporation Internode data communications in a parallel computer
US8732725B2 (en) 2011-11-09 2014-05-20 International Business Machines Corporation Managing internode data communications for an uninitialized process in a parallel computer
US9270373B2 (en) * 2011-11-21 2016-02-23 Samtec, Inc. Transporting data and auxiliary signals over an optical link
WO2013081580A1 (en) * 2011-11-29 2013-06-06 Intel Corporation Raw memory transaction support
US20150058524A1 (en) * 2012-01-04 2015-02-26 Kenneth C. Creta Bimodal functionality between coherent link and memory expansion
US9036822B1 (en) 2012-02-15 2015-05-19 F5 Networks, Inc. Methods for managing user information and devices thereof
US20130227190A1 (en) * 2012-02-27 2013-08-29 Raytheon Company High Data-Rate Processing System
EP2842245A1 (en) 2012-04-25 2015-03-04 Corning Optical Communications LLC Distributed antenna system architectures
US8745320B2 (en) * 2012-05-04 2014-06-03 Riverbed Technology, Inc. Ensuring write operation consistency using multiple storage devices
US8904076B2 (en) 2012-05-31 2014-12-02 Silicon Laboratories Inc. Coder with snoop mode
US8781086B2 (en) * 2012-06-26 2014-07-15 Adc Dsl Systems, Inc. System and method for circuit emulation
US9602433B2 (en) 2012-07-26 2017-03-21 Qualcomm Incorporated Systems and methods for sharing a serial communication port between a plurality of communication channels
JP5836229B2 (en) * 2012-09-04 2015-12-24 株式会社日立製作所 Stream processing device, server, and stream processing method
US10033837B1 (en) 2012-09-29 2018-07-24 F5 Networks, Inc. System and method for utilizing a data reducing module for dictionary compression of encoded data
US9270602B1 (en) 2012-12-31 2016-02-23 F5 Networks, Inc. Transmit rate pacing of large network traffic bursts to reduce jitter, buffer overrun, wasted bandwidth, and retransmissions
US20140201326A1 (en) 2013-01-16 2014-07-17 Marvell World Trade Ltd. Interconnected ring network in a multi-processor system
US10828019B2 (en) 2013-01-18 2020-11-10 W.L. Gore & Associates, Inc. Sealing device and delivery system
US10375155B1 (en) 2013-02-19 2019-08-06 F5 Networks, Inc. System and method for achieving hardware acceleration for asymmetric flow connections
US9191262B2 (en) * 2013-02-22 2015-11-17 Dell Products L.P. Network communication protocol processing optimization system
US8860457B2 (en) 2013-03-05 2014-10-14 Qualcomm Incorporated Parallel configuration of a reconfigurable instruction cell array
US9565139B2 (en) 2013-03-15 2017-02-07 Comcast Cable Communications, Llc Remote latency adjustment
US9544247B2 (en) 2013-03-15 2017-01-10 Innovasic, Inc. Packet data traffic management apparatus
US9537776B2 (en) * 2013-03-15 2017-01-03 Innovasic, Inc. Ethernet traffic management apparatus
WO2014146271A1 (en) * 2013-03-21 2014-09-25 华为技术有限公司 Transmission apparatus, connecting mechanism and method
TWI506953B (en) * 2013-04-12 2015-11-01 Via Tech Inc State machine circuit and state-adjusting method
US9136842B2 (en) * 2013-06-07 2015-09-15 Altera Corporation Integrated circuit device with embedded programmable logic
US10205666B2 (en) * 2013-07-29 2019-02-12 Ampere Computing Llc End-to-end flow control in system on chip interconnects
US9864606B2 (en) 2013-09-05 2018-01-09 F5 Networks, Inc. Methods for configurable hardware logic device reloading and devices thereof
US10819791B2 (en) * 2013-10-11 2020-10-27 Ge Aviation Systems Llc Data communications network for an aircraft
US9582356B1 (en) * 2013-11-01 2017-02-28 Marvell International Ltd. System and method for DDR memory timing acquisition and tracking
CN103744823A (en) * 2013-12-13 2014-04-23 青岛歌尔声学科技有限公司 Method, device and system for transmitting signals from a USB host device to a CPU
EP3085051A1 (en) 2013-12-16 2016-10-26 F5 Networks, Inc Methods for facilitating improved user authentication using persistent data and devices thereof
US10110700B2 (en) * 2014-03-31 2018-10-23 Oracle International Corporation Multiple on-die communication networks
US10628789B2 (en) * 2014-05-20 2020-04-21 Gimme Vending LLC Communication device for vending machine and method of using the same
US10015143B1 (en) 2014-06-05 2018-07-03 F5 Networks, Inc. Methods for securing one or more license entitlement grants and devices thereof
US9808230B2 (en) 2014-06-06 2017-11-07 W. L. Gore & Associates, Inc. Sealing device and delivery system
US11838851B1 (en) 2014-07-15 2023-12-05 F5, Inc. Methods for managing L7 traffic classification and devices thereof
EP3001618A1 (en) * 2014-09-29 2016-03-30 F5 Networks, Inc Method and apparatus for multiple DMA channel based network quality of service
US10171532B2 (en) * 2014-09-30 2019-01-01 Citrix Systems, Inc. Methods and systems for detection and classification of multimedia content in secured transactions
CN104539547B (en) * 2014-11-14 2017-10-10 中国科学院计算技术研究所 Router and routing method for a three-dimensional integrated circuit network-on-chip
US10182013B1 (en) 2014-12-01 2019-01-15 F5 Networks, Inc. Methods for managing progressive image delivery and devices thereof
US10176012B2 (en) 2014-12-12 2019-01-08 Nxp Usa, Inc. Method and apparatus for implementing deterministic response frame transmission
US10505757B2 (en) * 2014-12-12 2019-12-10 Nxp Usa, Inc. Network interface module and a method of changing network configuration parameters within a network device
US11895138B1 (en) 2015-02-02 2024-02-06 F5, Inc. Methods for improving web scanner accuracy and devices thereof
US9756106B2 (en) 2015-02-13 2017-09-05 Citrix Systems, Inc. Methods and systems for estimating quality of experience (QoE) parameters of secured transactions
US10021221B2 (en) 2015-02-24 2018-07-10 Citrix Systems, Inc. Methods and systems for detection and classification of multimedia content in secured transactions using pattern matching
US9531556B2 (en) * 2015-03-25 2016-12-27 International Business Machines Corporation Supporting low latency applications at the edge of wireless communication networks
US10122588B2 (en) 2015-03-26 2018-11-06 Hewlett Packard Enterprise Development Lp Ring network uplink designation
US10516578B2 (en) * 2015-03-31 2019-12-24 Micro Focus Llc Inferring a network topology
US9681313B2 (en) 2015-04-15 2017-06-13 Corning Optical Communications Wireless Ltd Optimizing remote antenna unit performance using an alternative data channel
US9507891B1 (en) * 2015-05-29 2016-11-29 International Business Machines Corporation Automating a microarchitecture design exploration environment
US9864604B2 (en) * 2015-06-04 2018-01-09 Oracle International Corporation Distributed mechanism for clock and reset control in a microprocessor
CN105022608B (en) * 2015-06-30 2017-12-08 广西科技大学 Timer IP core connected to a 16-bit microprocessor application system and method for implementing timer-based timing control
CN105183430B (en) * 2015-06-30 2018-01-19 广西科技大学鹿山学院 Timer IP core connected to an 8-bit microprocessor application system and method for implementing timer-based timing control
JP6525824B2 (en) * 2015-08-31 2019-06-05 国立大学法人名古屋大学 Relay device
CN105262801A (en) * 2015-09-24 2016-01-20 广东亿迅科技有限公司 Method and system for message distribution of cloud platform
CN105630726B (en) * 2016-01-29 2018-11-20 努比亚技术有限公司 Dual-channel mobile terminal multiplexing a USB port
CN105893304B (en) * 2016-03-28 2019-02-15 努比亚技术有限公司 Terminal supporting dual-network data transmission and implementation method thereof
US10587526B2 (en) * 2016-05-30 2020-03-10 Walmart Apollo, Llc Federated scheme for coordinating throttled network data transfer in a multi-host scenario
US20180012247A1 (en) * 2016-07-08 2018-01-11 365 Retail Market LLC System for monitoring a vending machine
US10628352B2 (en) 2016-07-19 2020-04-21 Nxp Usa, Inc. Heterogeneous multi-processor device and method of enabling coherent data access within a heterogeneous multi-processor device
US10120829B2 (en) * 2016-11-23 2018-11-06 Infineon Technologies Austria Ag Bus device with programmable address
US10972453B1 (en) 2017-05-03 2021-04-06 F5 Networks, Inc. Methods for token refreshment based on single sign-on (SSO) for federated identity environments and devices thereof
CN107220024B (en) * 2017-05-19 2021-08-31 郑州云海信息技术有限公司 Data processing method, device and system based on FPGA
US10922606B2 (en) 2017-06-13 2021-02-16 International Business Machines Corporation Multi-directional reduction in large scale deep-learning
US10628547B1 (en) * 2017-06-30 2020-04-21 Xilinx, Inc. Routing circuit designs for implementation using a programmable network on chip
US10573598B2 (en) * 2017-09-28 2020-02-25 Xilinx, Inc. Integration of a programmable device and a processing system in an integrated circuit package
JP6606565B2 (en) * 2018-01-18 2019-11-13 本田技研工業株式会社 Ring network and robot equipped with the same
CN108304343A (en) * 2018-02-08 2018-07-20 深圳市德赛微电子技术有限公司 On-chip communication method for a complex SoC
US11855898B1 (en) 2018-03-14 2023-12-26 F5, Inc. Methods for traffic dependent direct memory access optimization and devices thereof
TWI720345B (en) * 2018-09-20 2021-03-01 威盛電子股份有限公司 Interconnection structure of multi-core system
US11537716B1 (en) 2018-11-13 2022-12-27 F5, Inc. Methods for detecting changes to a firmware and devices thereof
TWI704494B (en) * 2018-12-28 2020-09-11 技嘉科技股份有限公司 Processor performance optimization method and motherboard using the same
US11599644B2 (en) 2019-05-17 2023-03-07 Walmart Apollo, Llc Blocking insecure code with locking
CN110780937B (en) * 2019-09-16 2023-12-08 腾讯大地通途(北京)科技有限公司 Task issuing method, device, computer readable storage medium and equipment
CN110602238B (en) * 2019-09-23 2021-09-24 Oppo广东移动通信有限公司 TCP session management method and related product
CN113051199A (en) 2019-12-26 2021-06-29 阿里巴巴集团控股有限公司 Data transmission method and device
US11243882B2 (en) * 2020-04-15 2022-02-08 International Business Machines Corporation In-array linked list identifier pool scheme
CN114302195B (en) * 2021-01-14 2023-04-14 海信视像科技股份有限公司 Display device, external device and play control method
CN113110099B (en) * 2021-03-04 2023-03-14 清华大学 Multi-mode integrated mixed real-time simulation platform
US11552882B2 (en) * 2021-03-25 2023-01-10 Mellanox Technologies, Ltd. Efficient propagation of fault routing notifications
KR102338517B1 (en) * 2021-04-28 2021-12-13 주식회사 인코어드 테크놀로지스 Remote terminal unit for solar power generation
US20230112720A1 (en) * 2021-10-12 2023-04-13 Texas Instruments Incorporated Hardware system for automatic direct memory access data formatting

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US625264A (en) * 1899-05-16 Friedrich Hoffmann and Wilhelm Ohlsen
US3577189A (en) * 1969-01-15 1971-05-04 Ibm Apparatus and method in a digital computer for allowing improved program branching with branch anticipation reduction of the number of branches, and reduction of branch delays
US4672607A (en) * 1985-07-15 1987-06-09 Hitachi, Ltd. Local area network communication system
US4760571A (en) * 1984-07-25 1988-07-26 Siegfried Schwarz Ring network for communication between one chip processors
US4933938A (en) * 1989-03-22 1990-06-12 Hewlett-Packard Company Group address translation through a network bridge
US5229993A (en) * 1991-02-25 1993-07-20 Old Dominion University Control of access through local carrier sensing for high data rate networks and control of access of synchronous messages through circulating reservation packets
US5341369A (en) * 1992-02-11 1994-08-23 Vitesse Semiconductor Corp. Multichannel self-routing packet switching network architecture
US5371862A (en) * 1991-02-27 1994-12-06 Kabushiki Kaisha Toshiba Program execution control system
US5485578A (en) * 1993-03-08 1996-01-16 Apple Computer, Inc. Topology discovery in a multiple-ring network
US5594728A (en) * 1994-01-26 1997-01-14 International Business Machines Corporation Realtime addressing for high speed serial bit stream
US5850553A (en) * 1996-11-12 1998-12-15 Hewlett-Packard Company Reducing the number of executed branch instructions in a code sequence
US6049824A (en) * 1997-11-21 2000-04-11 Adc Telecommunications, Inc. System and method for modifying an information signal in a telecommunications system
US6069514A (en) * 1998-04-23 2000-05-30 Sun Microsystems, Inc. Using asynchronous FIFO control rings for synchronous systems
US6111859A (en) * 1997-01-16 2000-08-29 Advanced Micro Devices, Inc. Data transfer network on a computer chip utilizing combined bus and ring topologies
US6222409B1 (en) * 1999-07-16 2001-04-24 University Of Utah Research Foundation Variable analog delay line for analog signal processing on a single integrated circuit chip
US6311212B1 (en) * 1998-06-27 2001-10-30 Intel Corporation Systems and methods for on-chip storage of virtual connection descriptors
US6309915B1 (en) * 1998-02-05 2001-10-30 Tessera, Inc. Semiconductor chip package with expander ring and method of making same
US6339788B1 (en) * 1998-06-12 2002-01-15 International Business Machines Corporation Method for encapsulating hardware to allow multi-tasking of microcode
US6429536B1 (en) * 2000-07-12 2002-08-06 Advanced Semiconductor Engineering, Inc. Semiconductor device
US6437618B2 (en) * 2000-06-30 2002-08-20 Hyundai Electronics Industries Co., Ltd. Delay locked loop incorporating a ring type delay and counting elements
US6452931B1 (en) * 1994-02-28 2002-09-17 Sprint Communications Company L.P. Synchronous optical network using a ring architecture
US6456407B1 (en) * 1998-02-13 2002-09-24 Nokia Networks Oy Optical telecommunications networks
US6463497B1 (en) * 1999-07-30 2002-10-08 International Business Machines Corporation Communication method for integrated circuit chips on a multi-chip module
US6463033B1 (en) * 1998-05-04 2002-10-08 Lucent Technologies Inc. Dual hubbed tree architecture for a communication network
US6647489B1 (en) * 2000-06-08 2003-11-11 Ip-First, Llc Compare branch instruction pairing within a single integer pipeline

Family Cites Families (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4318182A (en) * 1974-04-19 1982-03-02 Honeywell Information Systems Inc. Deadlock detection and prevention mechanism for a computer system
DE3241402A1 (en) * 1982-11-09 1984-05-10 Siemens AG, 1000 Berlin und 8000 München Method for controlling the data transfer between a data transmitter and a data receiver via a bus with the aid of a control unit connected to the bus
BE895438A (en) * 1982-12-22 1983-06-22 Bell Telephone Mfg Communication system with multiple rings
US4567590A (en) * 1983-12-27 1986-01-28 International Business Machines Corp. Message stripping protocol for a ring communication network
US4621362A (en) * 1984-06-04 1986-11-04 International Business Machines Corp. Routing architecture for a multi-ring local area network
AU622208B2 (en) * 1988-08-12 1992-04-02 Digital Equipment Corporation Frame removal mechanism for token ring networks
US4982400A (en) * 1988-12-29 1991-01-01 Intel Corporation Ring bus hub for a star local area network
US5371887A (en) * 1989-09-05 1994-12-06 Matsushita Electric Industrial Co., Ltd. Time-shared multitask execution device
US5165024A (en) * 1990-04-12 1992-11-17 Apple Computer, Inc. Information transfer and receiving system with a ring interconnect architecture using voucher and ticket signals
US5410652A (en) * 1990-09-28 1995-04-25 Texas Instruments, Incorporated Data communication control by arbitrating for a data transfer control token with facilities for halting a data transfer by maintaining possession of the token
CA2104133A1 (en) * 1992-08-17 1994-02-18 Tsutomu Tanaka Data transmission system with packets having occupied, idle, released, and reset states
US5504747A (en) * 1993-03-03 1996-04-02 Apple Computer, Inc. Economical payload stream routing in a multiple-ring network
US5457683A (en) * 1993-05-07 1995-10-10 Apple Computer, Inc. Link and discovery protocols for a ring interconnect architecture
JP2576762B2 (en) * 1993-06-30 1997-01-29 日本電気株式会社 Information collection method between nodes in ring network
US5597728A (en) * 1993-07-23 1997-01-28 Wyatt; Caryl H. Methods for biodegradation separation of natural fibers to release particulate contamination
US5864554A (en) * 1993-10-20 1999-01-26 Lsi Logic Corporation Multi-port network adapter
US5613114A (en) * 1994-04-15 1997-03-18 Apple Computer, Inc System and method for custom context switching
EP0701346A3 (en) * 1994-09-09 2000-04-12 ABBPATENT GmbH Method for consistent data transmission
KR0157248B1 (en) * 1994-12-31 1998-11-16 이계철 Media connection control method by using dispersed cycle reset protocol
US5886992A (en) * 1995-04-14 1999-03-23 Valtion Teknillinen Tutkimuskeskus Frame synchronized ring system and method
US5907685A (en) * 1995-08-04 1999-05-25 Microsoft Corporation System and method for synchronizing clocks in distributed computer nodes
US5760571A (en) * 1995-08-16 1998-06-02 Signal Restoration Technologies I Limited Partnership Power supply damping circuit and method
JPH09200239A (en) * 1996-01-19 1997-07-31 Hitachi Ltd Data transfer method using ring connection and information processing system
US5925123A (en) * 1996-01-24 1999-07-20 Sun Microsystems, Inc. Processor for executing instruction sets received from a network or from a local memory
EP0802655A3 (en) * 1996-04-17 1999-11-24 Matsushita Electric Industrial Co., Ltd. Communication network
US5822568A (en) * 1996-05-20 1998-10-13 Advanced Micro Devices, Inc. System for improving the real-time functionality of a personal computer which employs an interrupt servicing DMA controller
JPH1049369A (en) * 1996-08-07 1998-02-20 Ricoh Co Ltd Data processor
US6266797B1 (en) * 1997-01-16 2001-07-24 Advanced Micro Devices, Inc. Data transfer network on a computer chip using a re-configurable path multiple ring topology
US5978882A (en) * 1997-04-25 1999-11-02 Novell, Inc. Real-mode, 32-bit, flat-model execution apparatus and method
US6611537B1 (en) * 1997-05-30 2003-08-26 Centillium Communications, Inc. Synchronous network for digital media streams
US6233599B1 (en) * 1997-07-10 2001-05-15 International Business Machines Corporation Apparatus and method for retrofitting multi-threaded operations on a computer by partitioning and overlapping registers
US6134240A (en) * 1997-09-10 2000-10-17 Voloshin; Moshe Chip address allocation through a serial data ring on a stackable repeater
US6128641A (en) * 1997-09-12 2000-10-03 Siemens Aktiengesellschaft Data processing unit with hardware assisted context switching capability
US6065114A (en) * 1998-04-21 2000-05-16 Idea Corporation Cover instruction and asynchronous backing store switch
US6226680B1 (en) * 1997-10-14 2001-05-01 Alacritech, Inc. Intelligent network interface system method for protocol processing
US6151654A (en) * 1997-12-24 2000-11-21 Intel Corporation Method and apparatus for encoded DMA acknowledges
US6552264B2 (en) * 1998-03-11 2003-04-22 International Business Machines Corporation High performance chip packaging and method
JPH11313094A (en) * 1998-04-27 1999-11-09 Yazaki Corp Supervisory system of ring type network
US6108346A (en) * 1998-08-27 2000-08-22 Xiox Corporation Combined synchronous and asynchronous message transmission
US6286052B1 (en) * 1998-12-04 2001-09-04 Cisco Technology, Inc. Method and apparatus for identifying network data traffic flows and for applying quality of service treatments to the flows
US6496516B1 (en) * 1998-12-07 2002-12-17 Pmc-Sierra, Ltd. Ring interface and ring network bus flow control system
AUPQ005099A0 (en) * 1999-04-29 1999-05-20 Canon Kabushiki Kaisha Sequential bus architecture
KR100296556B1 (en) * 1999-05-20 2001-07-12 김덕중 A circuit for driving 3-phase brushless direct current motor
US6359859B1 (en) * 1999-06-03 2002-03-19 Fujitsu Network Communications, Inc. Architecture for a hybrid STM/ATM add-drop multiplexer
IL130796A (en) * 1999-07-05 2003-07-06 Brightcom Technologies Ltd Packet processor
US6252264B1 (en) * 1999-07-30 2001-06-26 International Business Machines Corporation Integrated circuit chip with features that facilitate a multi-chip module having a number of the chips
US6768742B1 (en) * 1999-10-08 2004-07-27 Advanced Micro Devices, Inc. On-chip local area network
JP3780153B2 (en) * 1999-10-25 2006-05-31 富士通株式会社 Optical transmission device for ring transmission system and optical transmission method for ring transmission system
US6848006B1 (en) * 2000-05-30 2005-01-25 Nortel Networks Limited Ring-mesh networks
US20030120822A1 (en) * 2001-04-19 2003-06-26 Langrind Nicholas A. Isolated control plane addressing
US6820142B2 (en) * 2000-12-14 2004-11-16 International Business Machines Corporation Token based DMA
US6469497B2 (en) * 2001-01-09 2002-10-22 Delphi Technologies, Inc. Magnetic position sensor system composed of two reference magnetoresistors and a linear displacement sensing magnetoresistor
US6665742B2 (en) * 2001-01-31 2003-12-16 Advanced Micro Devices, Inc. System for reconfiguring a first device and/or a second device to use maximum compatible communication parameters based on transmitting a communication to the first and second devices of a point-to-point link
US20020191601A1 (en) * 2001-06-15 2002-12-19 Alcatel, Societe Anonyme On-chip communication architecture and method

Cited By (207)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6922740B2 (en) * 2003-05-21 2005-07-26 Intel Corporation Apparatus and method of memory access control for bus masters
US20040236876A1 (en) * 2003-05-21 2004-11-25 Kondratiev Vladimir L. Apparatus and method of memory access control for bus masters
US20060026400A1 (en) * 2004-07-27 2006-02-02 Texas Instruments Incorporated Automatic operand load, modify and store
US7533250B2 (en) * 2004-07-27 2009-05-12 Texas Instruments Incorporated Automatic operand load, modify and store
US20080288780A1 (en) * 2004-09-02 2008-11-20 Beukema Bruce L Low-latency data decryption interface
US20090144564A1 (en) * 2004-09-02 2009-06-04 International Business Machines Corporation Data encryption interface for reducing encrypt latency impact on standard traffic
US8069353B2 (en) * 2004-09-02 2011-11-29 International Business Machines Corporation Low-latency data decryption interface
US20060098674A1 (en) * 2004-11-08 2006-05-11 Motoshi Hamasaki Frame transmitting apparatus and frame receiving apparatus
US8238348B2 (en) * 2004-11-08 2012-08-07 Fujitsu Limited Frame transmitting apparatus and frame receiving apparatus
US8077974B2 (en) 2006-07-28 2011-12-13 Hewlett-Packard Development Company, L.P. Compact stylus-based input technique for indic scripts
US11706349B2 (en) 2008-04-02 2023-07-18 Twilio Inc. System and method for processing telephony sessions
US10893079B2 (en) 2008-04-02 2021-01-12 Twilio Inc. System and method for processing telephony sessions
US10986142B2 (en) 2008-04-02 2021-04-20 Twilio Inc. System and method for processing telephony sessions
US10694042B2 (en) 2008-04-02 2020-06-23 Twilio Inc. System and method for processing media requests during telephony sessions
US11575795B2 (en) 2008-04-02 2023-02-07 Twilio Inc. System and method for processing telephony sessions
US11611663B2 (en) 2008-04-02 2023-03-21 Twilio Inc. System and method for processing telephony sessions
US11856150B2 (en) 2008-04-02 2023-12-26 Twilio Inc. System and method for processing telephony sessions
US9596274B2 (en) 2008-04-02 2017-03-14 Twilio, Inc. System and method for processing telephony sessions
US11444985B2 (en) 2008-04-02 2022-09-13 Twilio Inc. System and method for processing telephony sessions
US9906571B2 (en) 2008-04-02 2018-02-27 Twilio, Inc. System and method for processing telephony sessions
US11843722B2 (en) 2008-04-02 2023-12-12 Twilio Inc. System and method for processing telephony sessions
US9306982B2 (en) 2008-04-02 2016-04-05 Twilio, Inc. System and method for processing media requests during telephony sessions
US9591033B2 (en) 2008-04-02 2017-03-07 Twilio, Inc. System and method for processing media requests during telephony sessions
US11831810B2 (en) 2008-04-02 2023-11-28 Twilio Inc. System and method for processing telephony sessions
US10893078B2 (en) 2008-04-02 2021-01-12 Twilio Inc. System and method for processing telephony sessions
US11283843B2 (en) 2008-04-02 2022-03-22 Twilio Inc. System and method for processing telephony sessions
US11765275B2 (en) 2008-04-02 2023-09-19 Twilio Inc. System and method for processing telephony sessions
US9906651B2 (en) 2008-04-02 2018-02-27 Twilio, Inc. System and method for processing media requests during telephony sessions
US11722602B2 (en) 2008-04-02 2023-08-08 Twilio Inc. System and method for processing media requests during telephony sessions
US10560495B2 (en) 2008-04-02 2020-02-11 Twilio Inc. System and method for processing telephony sessions
US8964726B2 (en) 2008-10-01 2015-02-24 Twilio, Inc. Telephony web event system and method
US11665285B2 (en) 2008-10-01 2023-05-30 Twilio Inc. Telephony web event system and method
US11641427B2 (en) 2008-10-01 2023-05-02 Twilio Inc. Telephony web event system and method
US11005998B2 (en) 2008-10-01 2021-05-11 Twilio Inc. Telephony web event system and method
US9407597B2 (en) 2008-10-01 2016-08-02 Twilio, Inc. Telephony web event system and method
US11632471B2 (en) 2008-10-01 2023-04-18 Twilio Inc. Telephony web event system and method
US10455094B2 (en) 2008-10-01 2019-10-22 Twilio Inc. Telephony web event system and method
US20100150139A1 (en) * 2008-10-01 2010-06-17 Jeffrey Lawson Telephony Web Event System and Method
US9807244B2 (en) 2008-10-01 2017-10-31 Twilio, Inc. Telephony web event system and method
US10187530B2 (en) 2008-10-01 2019-01-22 Twilio, Inc. Telephony web event system and method
US11785145B2 (en) 2009-03-02 2023-10-10 Twilio Inc. Method and system for a multitenancy telephone network
US9894212B2 (en) 2009-03-02 2018-02-13 Twilio, Inc. Method and system for a multitenancy telephone network
US9621733B2 (en) 2009-03-02 2017-04-11 Twilio, Inc. Method and system for a multitenancy telephone network
US10708437B2 (en) 2009-03-02 2020-07-07 Twilio Inc. Method and system for a multitenancy telephone network
US10348908B2 (en) 2009-03-02 2019-07-09 Twilio, Inc. Method and system for a multitenancy telephone network
US8995641B2 (en) 2009-03-02 2015-03-31 Twilio, Inc. Method and system for a multitenancy telephone network
US11240381B2 (en) 2009-03-02 2022-02-01 Twilio Inc. Method and system for a multitenancy telephone network
US9225547B2 (en) * 2009-03-17 2015-12-29 Canon Kabushiki Kaisha Apparatus, method, and medium for controlling transmission of data
US20100241826A1 (en) * 2009-03-17 2010-09-23 Canon Kabushiki Kaisha Data processing apparatus, data processing method and program
US8605628B2 (en) * 2009-06-23 2013-12-10 Rockstar Consortium Us Lp Utilizing betweenness to determine forwarding state in a routed network
US20120033552A1 (en) * 2009-06-23 2012-02-09 Abel Dasylva Utilizing Betweenness to Determine Forwarding State in a Routed Network
US9210275B2 (en) 2009-10-07 2015-12-08 Twilio, Inc. System and method for running a multi-module telephony application
US10554825B2 (en) 2009-10-07 2020-02-04 Twilio Inc. System and method for running a multi-module telephony application
US11637933B2 (en) 2009-10-07 2023-04-25 Twilio Inc. System and method for running a multi-module telephony application
US9338064B2 (en) 2010-06-23 2016-05-10 Twilio, Inc. System and method for managing a computing cluster
US11637934B2 (en) 2010-06-23 2023-04-25 Twilio Inc. System and method for monitoring account usage on a platform
US9590849B2 (en) 2010-06-23 2017-03-07 Twilio, Inc. System and method for managing a computing cluster
US9459926B2 (en) 2010-06-23 2016-10-04 Twilio, Inc. System and method for managing a computing cluster
US9459925B2 (en) 2010-06-23 2016-10-04 Twilio, Inc. System and method for managing a computing cluster
US9967224B2 (en) 2010-06-25 2018-05-08 Twilio, Inc. System and method for enabling real-time eventing
US11088984B2 (en) 2010-06-25 2021-08-10 Twilio Inc. System and method for enabling real-time eventing
US9455949B2 (en) 2011-02-04 2016-09-27 Twilio, Inc. Method for processing telephony sessions of a network
US11848967B2 (en) 2011-02-04 2023-12-19 Twilio Inc. Method for processing telephony sessions of a network
US10230772B2 (en) 2011-02-04 2019-03-12 Twilio, Inc. Method for processing telephony sessions of a network
US11032330B2 (en) 2011-02-04 2021-06-08 Twilio Inc. Method for processing telephony sessions of a network
US10708317B2 (en) 2011-02-04 2020-07-07 Twilio Inc. Method for processing telephony sessions of a network
US9882942B2 (en) 2011-02-04 2018-01-30 Twilio, Inc. Method for processing telephony sessions of a network
US11399044B2 (en) 2011-05-23 2022-07-26 Twilio Inc. System and method for connecting a communication to a client
US10165015B2 (en) 2011-05-23 2018-12-25 Twilio Inc. System and method for real-time communication by using a client application communication protocol
US10560485B2 (en) 2011-05-23 2020-02-11 Twilio Inc. System and method for connecting a communication to a client
US9648006B2 (en) 2011-05-23 2017-05-09 Twilio, Inc. System and method for communicating with a client application
US10819757B2 (en) 2011-05-23 2020-10-27 Twilio Inc. System and method for real-time communication by using a client application communication protocol
US10122763B2 (en) 2011-05-23 2018-11-06 Twilio, Inc. System and method for connecting a communication to a client
US9398622B2 (en) 2011-05-23 2016-07-19 Twilio, Inc. System and method for connecting a communication to a client
US9336500B2 (en) 2011-09-21 2016-05-10 Twilio, Inc. System and method for authorizing and connecting application developers and users
US9942394B2 (en) 2011-09-21 2018-04-10 Twilio, Inc. System and method for determining and communicating presence information
US10841421B2 (en) 2011-09-21 2020-11-17 Twilio Inc. System and method for determining and communicating presence information
US10212275B2 (en) 2011-09-21 2019-02-19 Twilio, Inc. System and method for determining and communicating presence information
US10686936B2 (en) 2011-09-21 2020-06-16 Twilio Inc. System and method for determining and communicating presence information
US10182147B2 (en) 2011-09-21 2019-01-15 Twilio Inc. System and method for determining and communicating presence information
US11489961B2 (en) 2011-09-21 2022-11-01 Twilio Inc. System and method for determining and communicating presence information
US10467064B2 (en) 2012-02-10 2019-11-05 Twilio Inc. System and method for managing concurrent events
US11093305B2 (en) 2012-02-10 2021-08-17 Twilio Inc. System and method for managing concurrent events
US9495227B2 (en) 2012-02-10 2016-11-15 Twilio, Inc. System and method for managing concurrent events
US9037122B2 (en) 2012-02-29 2015-05-19 Alcatel Lucent Fixed line extension for mobile telephones
US9042874B2 (en) * 2012-02-29 2015-05-26 Alcatel Lucent System and/or method for using mobile telephones as extensions
US20130225141A1 (en) * 2012-02-29 2013-08-29 Alcatel-Lucent Usa Inc. System and/or method for using mobile telephones as extensions
US9240941B2 (en) * 2012-05-09 2016-01-19 Twilio, Inc. System and method for managing media in a distributed communication network
US20140289420A1 (en) * 2012-05-09 2014-09-25 Twilio, Inc. System and method for managing media in a distributed communication network
US10637912B2 (en) 2012-05-09 2020-04-28 Twilio Inc. System and method for managing media in a distributed communication network
US9350642B2 (en) 2012-05-09 2016-05-24 Twilio, Inc. System and method for managing latency in a distributed telephony network
US9602586B2 (en) 2012-05-09 2017-03-21 Twilio, Inc. System and method for managing media in a distributed communication network
US10200458B2 (en) 2012-05-09 2019-02-05 Twilio, Inc. System and method for managing media in a distributed communication network
US11165853B2 (en) 2012-05-09 2021-11-02 Twilio Inc. System and method for managing media in a distributed communication network
US10320983B2 (en) 2012-06-19 2019-06-11 Twilio Inc. System and method for queuing a communication session
US11546471B2 (en) 2012-06-19 2023-01-03 Twilio Inc. System and method for queuing a communication session
US9247062B2 (en) 2012-06-19 2016-01-26 Twilio, Inc. System and method for queuing a communication session
US11882139B2 (en) 2012-07-24 2024-01-23 Twilio Inc. Method and system for preventing illicit use of a telephony platform
US9270833B2 (en) 2012-07-24 2016-02-23 Twilio, Inc. Method and system for preventing illicit use of a telephony platform
US9948788B2 (en) 2012-07-24 2018-04-17 Twilio, Inc. Method and system for preventing illicit use of a telephony platform
US10469670B2 (en) 2012-07-24 2019-11-05 Twilio Inc. Method and system for preventing illicit use of a telephony platform
US11063972B2 (en) 2012-07-24 2021-07-13 Twilio Inc. Method and system for preventing illicit use of a telephony platform
US9614972B2 (en) 2012-07-24 2017-04-04 Twilio, Inc. Method and system for preventing illicit use of a telephony platform
US11595792B2 (en) 2012-10-15 2023-02-28 Twilio Inc. System and method for triggering on platform usage
US10257674B2 (en) 2012-10-15 2019-04-09 Twilio, Inc. System and method for triggering on platform usage
US9654647B2 (en) 2012-10-15 2017-05-16 Twilio, Inc. System and method for routing communications
US10757546B2 (en) 2012-10-15 2020-08-25 Twilio Inc. System and method for triggering on platform usage
US8938053B2 (en) 2012-10-15 2015-01-20 Twilio, Inc. System and method for triggering on platform usage
US11246013B2 (en) 2012-10-15 2022-02-08 Twilio Inc. System and method for triggering on platform usage
US9319857B2 (en) 2012-10-15 2016-04-19 Twilio, Inc. System and method for triggering on platform usage
US9307094B2 (en) 2012-10-15 2016-04-05 Twilio, Inc. System and method for routing communications
US10033617B2 (en) 2012-10-15 2018-07-24 Twilio, Inc. System and method for triggering on platform usage
US11689899B2 (en) 2012-10-15 2023-06-27 Twilio Inc. System and method for triggering on platform usage
US8948356B2 (en) 2012-10-15 2015-02-03 Twilio, Inc. System and method for routing communications
US9253254B2 (en) 2013-01-14 2016-02-02 Twilio, Inc. System and method for offering a multi-partner delegated platform
US10051011B2 (en) 2013-03-14 2018-08-14 Twilio, Inc. System and method for integrating session initiation protocol communication in a telecommunications platform
US9282124B2 (en) 2013-03-14 2016-03-08 Twilio, Inc. System and method for integrating session initiation protocol communication in a telecommunications platform
US11032325B2 (en) 2013-03-14 2021-06-08 Twilio Inc. System and method for integrating session initiation protocol communication in a telecommunications platform
US10560490B2 (en) 2013-03-14 2020-02-11 Twilio Inc. System and method for integrating session initiation protocol communication in a telecommunications platform
US11637876B2 (en) 2013-03-14 2023-04-25 Twilio Inc. System and method for integrating session initiation protocol communication in a telecommunications platform
US9001666B2 (en) 2013-03-15 2015-04-07 Twilio, Inc. System and method for improving routing in a distributed communication platform
US9225840B2 (en) 2013-06-19 2015-12-29 Twilio, Inc. System and method for providing a communication endpoint information service
US10057734B2 (en) 2013-06-19 2018-08-21 Twilio Inc. System and method for transmitting and receiving media messages
US9338280B2 (en) 2013-06-19 2016-05-10 Twilio, Inc. System and method for managing telephony endpoint inventory
US9160696B2 (en) 2013-06-19 2015-10-13 Twilio, Inc. System for transforming media resource into destination device compatible messaging format
US9240966B2 (en) 2013-06-19 2016-01-19 Twilio, Inc. System and method for transmitting and receiving media messages
US9992608B2 (en) 2013-06-19 2018-06-05 Twilio, Inc. System and method for providing a communication endpoint information service
US9483328B2 (en) 2013-07-19 2016-11-01 Twilio, Inc. System and method for delivering application content
US10439907B2 (en) 2013-09-17 2019-10-08 Twilio Inc. System and method for providing communication platform metadata
US11539601B2 (en) 2013-09-17 2022-12-27 Twilio Inc. System and method for providing communication platform metadata
US9137127B2 (en) 2013-09-17 2015-09-15 Twilio, Inc. System and method for providing communication platform metadata
US9959151B2 (en) 2013-09-17 2018-05-01 Twilio, Inc. System and method for tagging and tracking events of an application platform
US10671452B2 (en) 2013-09-17 2020-06-02 Twilio Inc. System and method for tagging and tracking events of an application
US11379275B2 (en) 2013-09-17 2022-07-05 Twilio Inc. System and method for tagging and tracking events of an application
US9811398B2 (en) 2013-09-17 2017-11-07 Twilio, Inc. System and method for tagging and tracking events of an application platform
US9338018B2 (en) 2013-09-17 2016-05-10 Twilio, Inc. System and method for pricing communication of a telecommunication platform
US9853872B2 (en) 2013-09-17 2017-12-26 Twilio, Inc. System and method for providing communication platform metadata
US9672185B2 (en) 2013-09-27 2017-06-06 International Business Machines Corporation Method and system for enumerating digital circuits in a system-on-a-chip (SOC)
US10628376B2 (en) 2013-09-27 2020-04-21 International Business Machines Corporation Method and system for enumerating digital circuits in a system-on-a-chip (SOC)
US10423570B2 (en) 2013-09-27 2019-09-24 International Business Machines Corporation Method and system for enumerating digital circuits in a system-on-a-chip (SOC)
US10628375B2 (en) 2013-09-27 2020-04-21 International Business Machines Corporation Method and system for enumerating digital circuits in a system-on-a-chip (SOC)
US10394752B2 (en) 2013-09-27 2019-08-27 International Business Machines Corporation Method and system for enumerating digital circuits in a system-on-a-chip (SOC)
US11831415B2 (en) 2013-11-12 2023-11-28 Twilio Inc. System and method for enabling dynamic multi-modal communication
US9325624B2 (en) 2013-11-12 2016-04-26 Twilio, Inc. System and method for enabling dynamic multi-modal communication
US11394673B2 (en) 2013-11-12 2022-07-19 Twilio Inc. System and method for enabling dynamic multi-modal communication
US10686694B2 (en) 2013-11-12 2020-06-16 Twilio Inc. System and method for client communication in a distributed telephony network
US9553799B2 (en) 2013-11-12 2017-01-24 Twilio, Inc. System and method for client communication in a distributed telephony network
US11621911B2 (en) 2013-11-12 2023-04-04 Twilio Inc. System and method for client communication in a distributed telephony network
US10063461B2 (en) 2013-11-12 2018-08-28 Twilio, Inc. System and method for client communication in a distributed telephony network
US10069773B2 (en) 2013-11-12 2018-09-04 Twilio, Inc. System and method for enabling dynamic multi-modal communication
US9628624B2 (en) 2014-03-14 2017-04-18 Twilio, Inc. System and method for a work distribution service
US11330108B2 (en) 2014-03-14 2022-05-10 Twilio Inc. System and method for a work distribution service
US11882242B2 (en) 2014-03-14 2024-01-23 Twilio Inc. System and method for a work distribution service
US10291782B2 (en) 2014-03-14 2019-05-14 Twilio, Inc. System and method for a work distribution service
US10003693B2 (en) 2014-03-14 2018-06-19 Twilio, Inc. System and method for a work distribution service
US9344573B2 (en) 2014-03-14 2016-05-17 Twilio, Inc. System and method for a work distribution service
US10904389B2 (en) 2014-03-14 2021-01-26 Twilio Inc. System and method for a work distribution service
US10440627B2 (en) 2014-04-17 2019-10-08 Twilio Inc. System and method for enabling multi-modal communication
US9226217B2 (en) 2014-04-17 2015-12-29 Twilio, Inc. System and method for enabling multi-modal communication
US11653282B2 (en) 2014-04-17 2023-05-16 Twilio Inc. System and method for enabling multi-modal communication
US9907010B2 (en) 2014-04-17 2018-02-27 Twilio, Inc. System and method for enabling multi-modal communication
US10873892B2 (en) 2014-04-17 2020-12-22 Twilio Inc. System and method for enabling multi-modal communication
US11755530B2 (en) 2014-07-07 2023-09-12 Twilio Inc. Method and system for applying data retention policies in a computing platform
US10757200B2 (en) 2014-07-07 2020-08-25 Twilio Inc. System and method for managing conferencing in a distributed communication network
US10116733B2 (en) 2014-07-07 2018-10-30 Twilio, Inc. System and method for collecting feedback in a multi-tenant communication platform
US10212237B2 (en) 2014-07-07 2019-02-19 Twilio, Inc. System and method for managing media and signaling in a communication platform
US11341092B2 (en) 2014-07-07 2022-05-24 Twilio Inc. Method and system for applying data retention policies in a computing platform
US10229126B2 (en) 2014-07-07 2019-03-12 Twilio, Inc. Method and system for applying data retention policies in a computing platform
US11768802B2 (en) 2014-07-07 2023-09-26 Twilio Inc. Method and system for applying data retention policies in a computing platform
US9588974B2 (en) 2014-07-07 2017-03-07 Twilio, Inc. Method and system for applying data retention policies in a computing platform
US9858279B2 (en) 2014-07-07 2018-01-02 Twilio, Inc. Method and system for applying data retention policies in a computing platform
US10747717B2 (en) 2014-07-07 2020-08-18 Twilio Inc. Method and system for applying data retention policies in a computing platform
US10637938B2 (en) 2014-10-21 2020-04-28 Twilio Inc. System and method for providing a micro-services communication platform
US11019159B2 (en) 2014-10-21 2021-05-25 Twilio Inc. System and method for providing a micro-services communication platform
US9906607B2 (en) 2014-10-21 2018-02-27 Twilio, Inc. System and method for providing a micro-services communication platform
US11544752B2 (en) 2015-02-03 2023-01-03 Twilio Inc. System and method for a media intelligence platform
US10853854B2 (en) 2015-02-03 2020-12-01 Twilio Inc. System and method for a media intelligence platform
US9805399B2 (en) 2015-02-03 2017-10-31 Twilio, Inc. System and method for a media intelligence platform
US10467665B2 (en) 2015-02-03 2019-11-05 Twilio Inc. System and method for a media intelligence platform
US11265367B2 (en) 2015-05-14 2022-03-01 Twilio Inc. System and method for signaling through data storage
US10419891B2 (en) 2015-05-14 2019-09-17 Twilio, Inc. System and method for communicating through multiple endpoints
US9948703B2 (en) 2015-05-14 2018-04-17 Twilio, Inc. System and method for signaling through data storage
US11272325B2 (en) 2015-05-14 2022-03-08 Twilio Inc. System and method for communicating through multiple endpoints
US10560516B2 (en) 2015-05-14 2020-02-11 Twilio Inc. System and method for signaling through data storage
US11656805B2 (en) 2015-06-26 2023-05-23 Intel Corporation Processors, methods, systems, and instructions to protect shadow stacks
US11029952B2 (en) 2015-12-20 2021-06-08 Intel Corporation Hardware apparatuses and methods to switch shadow stack pointers
US10394556B2 (en) 2015-12-20 2019-08-27 Intel Corporation Hardware apparatuses and methods to switch shadow stack pointers
US11663006B2 (en) 2015-12-20 2023-05-30 Intel Corporation Hardware apparatuses and methods to switch shadow stack pointers
US11176243B2 (en) 2016-02-04 2021-11-16 Intel Corporation Processor extensions to protect stacks during ring transitions
US10430580B2 (en) 2016-02-04 2019-10-01 Intel Corporation Processor extensions to protect stacks during ring transitions
TWI715704B (en) * 2016-02-04 2021-01-11 美商英特爾股份有限公司 Processor and method for processor extensions to protect stacks during ring transitions
US10659349B2 (en) 2016-02-04 2020-05-19 Twilio Inc. Systems and methods for providing secure network exchanges for a multitenant virtual private cloud
WO2017136101A1 (en) * 2016-02-04 2017-08-10 Intel Corporation Processor extensions to protect stacks during ring transitions
US11171865B2 (en) 2016-02-04 2021-11-09 Twilio Inc. Systems and methods for providing secure network exchanges for a multitenant virtual private cloud
TWI796031B (en) * 2016-02-04 2023-03-11 美商英特爾股份有限公司 Apparatus for processor extensions to protect stacks during ring transitions
CN113836523A (en) * 2016-02-04 2021-12-24 英特尔公司 Processor extensions for protecting a stack during ring transitions
US11762982B2 (en) 2016-02-04 2023-09-19 Intel Corporation Processor extensions to protect stacks during ring transitions
TWI749999B (en) * 2016-02-04 2021-12-11 美商英特爾股份有限公司 Apparatus, method and machine-readable medium for processor extensions to protect stacks during ring transitions
US10686902B2 (en) 2016-05-23 2020-06-16 Twilio Inc. System and method for a multi-channel notification service
US11627225B2 (en) 2016-05-23 2023-04-11 Twilio Inc. System and method for programmatic device connectivity
US11622022B2 (en) 2016-05-23 2023-04-04 Twilio Inc. System and method for a multi-channel notification service
US10063713B2 (en) 2016-05-23 2018-08-28 Twilio Inc. System and method for programmatic device connectivity
US11076054B2 (en) 2016-05-23 2021-07-27 Twilio Inc. System and method for programmatic device connectivity
US11265392B2 (en) 2016-05-23 2022-03-01 Twilio Inc. System and method for a multi-channel notification service
US10440192B2 (en) 2016-05-23 2019-10-08 Twilio Inc. System and method for programmatic device connectivity
US11012915B2 (en) * 2018-03-26 2021-05-18 Qualcomm Incorporated Backpressure signaling for wireless communications
US11936609B2 (en) 2021-04-23 2024-03-19 Twilio Inc. System and method for enabling real-time eventing

Also Published As

Publication number Publication date
US20030195990A1 (en) 2003-10-16
US20030195989A1 (en) 2003-10-16
TW589825B (en) 2004-06-01
US20030189940A1 (en) 2003-10-09
US20030172257A1 (en) 2003-09-11
WO2003005152A3 (en) 2004-02-26
US7103008B2 (en) 2006-09-05
US20030212830A1 (en) 2003-11-13
US20030195991A1 (en) 2003-10-16
US20030172189A1 (en) 2003-09-11
US20030204636A1 (en) 2003-10-30
US20030191862A1 (en) 2003-10-09
US20030200339A1 (en) 2003-10-23
US20030191861A1 (en) 2003-10-09
EP1413098A2 (en) 2004-04-28
US20030200343A1 (en) 2003-10-23
AU2002327187A1 (en) 2003-01-21
US20030200342A1 (en) 2003-10-23
US20030191863A1 (en) 2003-10-09
JP2005516432A (en) 2005-06-02
WO2003005152A2 (en) 2003-01-16

Similar Documents

Publication Publication Date Title
US7103008B2 (en) Communications system using rings architecture
US20030167348A1 (en) Communications system using rings architecture
US20030172190A1 (en) Communications system using rings architecture
JP3832816B2 (en) Network processor, memory configuration and method
US6735773B1 (en) Method and apparatus for issuing commands to a network processor configured to provide a plurality of APIs
JP4066382B2 (en) Network switch and component and method of operation
JP3872342B2 (en) Device for network and scalable network processor
US5802287A (en) Single chip universal protocol multi-function ATM network interface
US5625825A (en) Random number generating apparatus for an interface unit of a carrier sense with multiple access and collision detect (CSMA/CD) ethernet data network
US5446726A (en) Error detection and correction apparatus for an asynchronous transfer mode (ATM) network device
US5668809A (en) Single chip network hub with dynamic window filter
JP3817477B2 (en) VLSI network processor and method
Schmidt et al. Transport system architecture services for high-performance communications systems
US20100158005A1 (en) System-On-a-Chip and Multi-Chip Systems Supporting Advanced Telecommunication Functions
JP3807980B2 (en) Network processor processing complex and method
US20100191911A1 (en) System-On-A-Chip Having an Array of Programmable Processing Elements Linked By an On-Chip Network with Distributed On-Chip Shared Memory and External Shared Memory
US20100162265A1 (en) System-On-A-Chip Employing A Network Of Nodes That Utilize Logical Channels And Logical Mux Channels For Communicating Messages Therebetween
US20100158023A1 (en) System-On-a-Chip and Multi-Chip Systems Supporting Advanced Telecommunication Functions
WO2000001116A1 (en) Method and apparatus for controlling a network processor
US20100161938A1 (en) System-On-A-Chip Supporting A Networked Array Of Configurable Symmetric Multiprocessing Nodes
JP2000295289A (en) Large coupling wide band or narrow band exchange
US20040103086A1 (en) Data structure traversal instructions for packet processing
Chiueh et al. Suez: A cluster-based scalable real-time packet router
Sundström et al. An Interface Architecture for a Low-Latency Network of Workstations using 10 GBIT/S Switched LAN Technology.

Legal Events

Date Code Title Description
AS Assignment

Owner name: GLOBESPANVIRATA INCORPORATED, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZABARSKI, BORIS;TARRAB, MOSHE;NORMAN, ODED;REEL/FRAME:013284/0868

Effective date: 20020819

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION