US20080133489A1 - Run-time performance verification system - Google Patents

Run-time performance verification system

Info

Publication number
US20080133489A1
US20080133489A1 (application Ser. No. US 12/018,329)
Authority
US
United States
Prior art keywords
performance
events
bus
program
performance metrics
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/018,329
Inventor
Thomas M. Armstead
Lance R. Meyer
Paul E. Schardt
Robert A. Shearer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US 12/018,329
Publication of US20080133489A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F 11/3414 Workload generation, e.g. scripts, playback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3466 Performance evaluation by tracing or monitoring
    • G06F 11/349 Performance evaluation by tracing or monitoring for interfaces, buses
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/16 Threshold monitoring
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3452 Performance evaluation by statistical analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3466 Performance evaluation by tracing or monitoring
    • G06F 11/3476 Data logging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/86 Event-based monitoring


Abstract

A method and apparatus that allow packet based communication transactions between devices over an interconnect bus to be captured to measure performance. Performance metrics may be determined by capturing events at various locations as they pass through the system. Performance may be verified at run time by computing performance metrics for captured events and comparing such metrics to predefined performance ranges and/or self learned performance ranges. Furthermore, embodiments of the present invention provide for dynamic tailoring of bus traffic to generate potential failing conditions. For some embodiments, performance verification as described herein may be performed in a simulation environment.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This is a continuation of co-pending U.S. patent application Ser. No. 11/259,294 filed Oct. 26, 2005, which is herein incorporated by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention generally relates to exchanging packets of data on an interconnect bus connecting two devices, and more particularly, to measuring and verifying the performance of such an exchange.
  • 2. Description of the Related Art
  • A system on a chip (SOC) generally includes one or more integrated processor cores, some type of embedded memory such as a cache shared between the processor cores, and peripheral interfaces such as an external bus interface, on a single chip to form a complete (or nearly complete) system. The external bus interface is often used to pass data in packets over an external bus between these systems and an external device such as an external memory controller, Input/Output (I/O) controller, or graphics processing unit (GPU).
  • The performance of such a system may depend on several factors which may include device characteristics, characteristics of interconnect buses, memory hierarchy, operating system, and various other factors. A reasonable prediction of ranges for system performance can still be made after considering such factors. However, it is generally desirable to verify that performance falls within these ranges during simulation. For example, it may be desirable to verify that the throughput (or bandwidth) and the latency (or response time) of communication over an interconnect bus between a transmitting and receiving device fall within their predicted range.
  • Conventionally, simulation involves running predefined test cases modeled to emulate normal system operation. During simulation, bus traffic is monitored, interesting events on the bus are captured, and performance is measured based on the captured events. The captured events and their performance metrics are recorded in a simulation log. It is only after simulation that a user can view all the bus events in the simulation log and identify categories of events that fall outside the predicted performance range. However, because the information contained in the simulation logs is rather cryptic, significant effort will be required to manually analyze, identify and parse those categories of events that do not fall within their performance range. Another problem with conventional simulation is that predefined test cases may not adequately test a given category of bus events. For example, a test case may not contain a sufficient number of read operations. As a result, the performance measurements for the read operation may not be statistically significant.
  • Yet another problem with the conventional testing method is that degradations in performance are unlikely to be detected, without tedious manual analysis, when the predicted range of performance is too lenient. For example, if the average latency associated with a particular transaction between two devices is predicted to be 1 second, but the measured average latency is only 0.2 seconds, then a degradation of the average latency from 0.2 seconds to 0.8 seconds is unlikely to be caught even though there is a significant, undesired change in performance.
  • Accordingly, improved methods and apparatus are needed for measuring and verifying the performance of packet based data exchanges between devices connected by an interconnect bus.
  • SUMMARY OF THE INVENTION
  • Embodiments of the present invention generally provide methods, computer readable storage media, and systems for measuring and verifying performance of packet based communication transactions between devices over an interconnect bus.
  • One embodiment provides a method for determining performance characteristics of a system. The method generally includes executing a program to cause data to be exchanged between at least two devices of the system via a bus, capturing events indicative of data exchanged between the at least two devices by at least one interface monitor, calculating one or more performance metrics based on the captured events during execution of the program, storing the calculated performance metrics in a database, and determining whether the calculated performance metrics fall within a determined performance range.
  • Another embodiment provides a computer readable storage medium containing a program for determining performance characteristics of a system. When executed by a processor, the program performs operations generally including generating data to be exchanged between at least two devices of the system via a bus, capturing events indicative of data exchanged between the at least two devices by at least one interface monitor, calculating one or more performance metrics based on the captured events during execution of the program, storing the calculated performance metrics in a database, and determining whether the calculated performance metrics fall within a determined performance range.
  • Yet another embodiment provides a system generally including a first processing device, a second processing device coupled with the first processing device via a bus, at least one interface monitor for capturing events indicative of data exchanged between the at least two processing devices via the bus, and a performance monitor configured to calculate one or more performance metrics based on the captured events, store the one or more calculated performance metrics in a database, and determine whether the calculated performance metrics fall within a determined performance range.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • So that the manner in which the above recited features, advantages and objects of the present invention are attained and can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof which are illustrated in the appended drawings.
  • It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
  • FIG. 1 illustrates an exemplary test environment in accordance with one embodiment of the present invention.
  • FIG. 2 is a flow diagram of exemplary operations for capturing bus events and calculating performance metrics for those events.
  • FIG. 3 is a flow diagram of exemplary operations for verifying that captured bus events fall within the predefined performance ranges.
  • FIG. 4 is a flow diagram of operations performed for verifying that captured events fall within the self learned performance ranges.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Embodiments of the present invention allow packet based communication transactions between devices over an interconnect bus to be captured to measure performance. Performance metrics may be determined by capturing events at various nodes as they pass through the system. Performance may be verified at run time by computing performance metrics for captured events and comparing such metrics to predefined performance ranges and/or self learned performance ranges. Furthermore, embodiments of the present invention provide for dynamic tailoring of bus traffic to generate potential failing conditions.
  • In the following, reference is made to embodiments of the invention. However, it should be understood that the invention is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the invention. Furthermore, in various embodiments the invention provides numerous advantages over the prior art. However, although embodiments of the invention may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the invention. Thus, the following aspects, features, embodiments and advantages are merely illustrative and not considered elements or limitations of the appended claims except where explicitly recited in the claim(s). Likewise, reference to “the invention” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).
  • An Exemplary Test System
  • FIG. 1 illustrates an exemplary testing system in which a Performance Monitor 100 monitors performance between two devices (or nodes) 120 and 130 over an Interconnect Bus 180 (e.g., commonly referred to as a front side bus). The two devices 120 and 130, for example, may be a central processing unit (CPU) and a graphics processing unit (GPU). For some embodiments, the Bus 180 may be a bi-directional multi-bit bus, for example, having eight or more lines for communication from the CPU to the GPU and another eight or more lines for communication from the GPU to the CPU.
  • Communication between the devices 120 and 130 may be monitored by a Link Interface Monitor (IM) 140. Link IM 140 may be any combination of hardware and/or software configured to sample data lines of the Interconnect Bus 180 in conjunction with a clock signal. The Link IM may be further configured to examine the sampled data and recognize predefined categories of events. If a known event is captured, the Link IM may notify the performance monitor that the event is presented on the Interconnect Bus 180. For example, a CPU may perform a read operation on a specific location in the GPU by sending a read packet over an interconnect bus connecting the CPU and the GPU. The Link IM for the interconnect bus may capture the read packet when it is presented on the bus and notify the performance monitor that a read packet is found on the bus.
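  • As a loose illustration of the Link IM behavior described above, the following Python sketch models an interface monitor that samples the bus each clock, recognizes a few predefined event categories, and notifies a performance monitor through a callback. The class names, packet layout, and opcode values are assumptions made for this sketch, not details taken from the patent.

```python
# Hypothetical sketch of a link interface monitor (names and field layout are assumptions).
from dataclasses import dataclass
from typing import Callable, Optional

# Illustrative mapping from an opcode field to the event categories the monitor recognizes.
KNOWN_EVENTS = {0x1: "read_packet", 0x2: "write_packet", 0x3: "read_response"}

@dataclass
class BusEvent:
    category: str      # e.g. "read_packet"
    tag: int           # transaction tag used to pair requests with responses
    timestamp: float   # simulation time at which the event was sampled
    source: str        # which monitor captured it, e.g. "link_im"

class LinkInterfaceMonitor:
    """Samples the interconnect bus on each clock and reports recognized events."""

    def __init__(self, notify: Callable[[BusEvent], None]):
        self.notify = notify  # callback into the performance monitor

    def on_clock(self, sim_time: float, data_lines: int) -> Optional[BusEvent]:
        opcode = (data_lines >> 60) & 0xF      # assumed packet layout
        tag = (data_lines >> 48) & 0xFFF
        category = KNOWN_EVENTS.get(opcode)
        if category is None:
            return None                        # not a predefined event category
        event = BusEvent(category, tag, sim_time, "link_im")
        self.notify(event)                     # tell the performance monitor
        return event
```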
  • In some embodiments of the invention the Link IM may be configured to inject noise on to the Interconnect Bus 180. Such noise injection may be performed to simulate actual noise on the interconnect bus during normal operation of the system. In other embodiments, the Link IM may also be configured to introduce errors into an event captured on the bus before the event is dispatched to the destination device. For example, the Link IM may toggle some bits in the packet. As with noise injection, the introduction of errors may be performed to simulate actual errors that may occur while transferring packets during normal operation of the system. The goal of introducing such errors may be to verify that the destination device properly determines an error in the packet, for example by using a Cyclic Redundancy Check (CRC), and performs error correcting steps which may include correcting erroneous bits or requesting that the packet be sent again. While the above mentioned embodiments describe noise and error injection performed by the Link IM, those skilled in the art will recognize that such noise and error injection may be performed by a separate and independent device, such as an irritator device.
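  • A minimal sketch of the kind of error injection and CRC check described above. The helper names are hypothetical, and zlib.crc32 merely stands in for whatever CRC the real link protocol would use.

```python
# Hypothetical sketch: inject bit errors into a packet, then verify it at the destination.
import random
import zlib

def inject_bit_errors(payload, n_bits=1, seed=None):
    """Toggle n_bits distinct bits in the payload, as the Link IM (or an
    irritator device) might before forwarding a packet."""
    rng = random.Random(seed)
    data = bytearray(payload)
    for pos in rng.sample(range(len(data) * 8), n_bits):
        data[pos // 8] ^= 1 << (pos % 8)
    return bytes(data)

def packet_with_crc(payload):
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def destination_accepts(packet):
    payload, crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    # A mismatch would trigger error handling: correct the bits or request a resend.
    return zlib.crc32(payload) == crc

good = packet_with_crc(b"read response data")
bad = inject_bit_errors(good[:-4], n_bits=2, seed=1) + good[-4:]
assert destination_accepts(good) and not destination_accepts(bad)
```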
  • Devices 120 and 130 may be driven by Unit Drivers 160 and 161, respectively. Each Unit Driver may be software configured to cause its associated device to perform a series of functions, including sending packets to another device. For example, Unit Driver 160 may instruct Device 120 to send a read packet to Device 130 over the Interconnect Bus 180. Such instructions from Unit Drivers 160 and 161 to Devices 120 and 130 may be monitored by Application Interface Monitors (IMs) 150 and 151, respectively. Each Application IM may be any combination of hardware and/or software configured to sample the data lines connecting the Application IM and an associated device in conjunction with a clock signal. Furthermore, each Application IM may be configured to examine the sampled data and recognize predefined categories of instructions. As with the Link IM, if a known instruction is captured, the Application IM may notify the Performance Monitor that the event has been presented to the associated device.
  • The events captured by the Link IM 140 and the Application IMs 150 and 151 may be received by the Performance Monitor 100 and stored in a shared Database 170. In some embodiments of the invention, the Performance Monitor may store in the Database 170 a timestamp associated with each captured event. For example, the Performance Monitor may store in the database the simulation time at which each event was captured by the interface monitors.
  • The Performance Monitor 100 may be configured to calculate several performance metrics for the system based on the captured events. For example, to compute the latency of a read operation across Device 120, the Performance Monitor may subtract the timestamp of a read instruction issued by Unit Driver 160 and captured by Application IM 150 from the timestamp of the associated read packet captured by Link IM 140. Similarly, the Performance Monitor may compute the latency of read responses between Devices 120 and 130 over the Interconnect Bus 180 by subtracting the timestamps of a read packet and the associated data packet captured by Link IM 140. Several other similar performance metrics may be defined to measure latencies and throughput for the system.
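  • The latency computation described above amounts to pairing timestamped events by transaction tag and differencing their capture times. The sketch below shows one way this could look; the event fields and category names are assumptions carried over from the earlier monitor sketch.

```python
# Hypothetical sketch: derive read-operation latency from paired, timestamped events.
def read_latencies_through_device(events):
    """For each transaction tag, subtract the time a read instruction was captured
    at the application interface from the time the matching read packet appeared
    on the interconnect bus. Field names (category, source, tag, timestamp) are
    assumptions, not the patent's data model."""
    issued = {e.tag: e.timestamp for e in events
              if e.category == "read_instruction" and e.source == "app_im"}
    latencies = {}
    for e in events:
        if e.category == "read_packet" and e.source == "link_im" and e.tag in issued:
            latencies[e.tag] = e.timestamp - issued[e.tag]
    return latencies
```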
  • The Performance Monitor may be further configured to store the calculated performance metrics in the shared Database 170. For example, the Performance Monitor may store the latencies of write and read operations in Database 170. A user may be allowed to query Database 170 to generate graphs that illustrate performance results for various bus events. Such graphs may allow a user to easily compare results between bus events in the same test run and/or different test runs.
  • The Performance Monitor 100 may be configured to fail a simulation test based on predefined or self learned performance ranges 101. The performance ranges 101 may define upper and lower range limits or an upper or lower threshold value. A predefined range may be defined by a user before running a test on the system. The predefined ranges may be chosen arbitrarily or according to ideal performance metrics calculated considering factors such as device characteristics, system architecture, system software, and the like. The self learned ranges, on the other hand, may be calculated based on historic system performance data contained in Database 170. For example, the self-learned ranges may be determined by computing an average of previously obtained performance metrics or by selecting values at or near the peak of a bell curve representing historic performance results. Any other reasonable method for calculating performance ranges may be used to determine expected performance based on historic performance.
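  • One plausible reading of the self-learned range computation is a band centred on the historic mean; the sketch below uses the mean widened by a few standard deviations, which is only one of the reasonable methods the text allows.

```python
# Hypothetical sketch: compute a self-learned performance range from historic metrics.
import statistics

def self_learned_range(historic_values, k=3.0):
    """Centre the range on the historic mean and widen it by k standard deviations."""
    mean = statistics.fmean(historic_values)
    sigma = statistics.pstdev(historic_values) or 1e-9  # guard against a zero-width range
    return (mean - k * sigma, mean + k * sigma)

# Example with made-up historic read latencies (arbitrary time units):
low, high = self_learned_range([12.1, 11.8, 12.4, 12.0, 11.9])
```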
  • FIG. 2 is a flow diagram for exemplary operations performed to capture and store bus events in accordance with embodiments of the present invention. The operations may be performed, for example, by components illustrated in FIG. 1, while executing a specific program designed to emulate normal system operation (and produce typical bus traffic). However, those skilled in the art will recognize that the operations of FIG. 2 may be performed by other components and, further, that the components illustrated in FIG. 1 may be capable of performing other operations.
  • The operations begin, at step 201, by capturing events on the bus. As previously described, a Link IM or an Application IM may detect events indicating a transaction between devices or between a unit driver and an associated device, capture such an event, and send it to the Performance Monitor. In some embodiments of the invention the Link IM and Application IM may be part of the Performance Monitor, in which case the events may be captured by the Performance Monitor directly. Captured events may be stored in a shared Database 204, as illustrated.
  • At step 202, the Performance Monitor may interpret the captured event and calculate Performance metrics for that event. This may require the Performance Monitor to query the database to find other events associated with the captured event. For example, when the Performance Monitor captures a read packet on the Interconnect Bus 180, it may query Database 170 for a read instruction issued from the Unit Driver 160 in order to calculate the latency of the read operation through Device 120. Several other performance metrics may also be computed at this time.
  • At step 203, the Performance Monitor may store the calculated performance metrics in the shared database. The performance metrics stored in the database may be used later to compute self learned ranges for system performance.
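  • As a rough illustration of the shared database's role in FIG. 2, the sketch below stores calculated metrics in a SQLite table and reads them back for a later self-learned range computation. SQLite and the table layout are arbitrary choices for the sketch, not the patent's storage mechanism.

```python
# Hypothetical sketch of the shared metrics database used in steps 202-203.
import sqlite3

db = sqlite3.connect(":memory:")  # a file path would persist metrics across runs
db.execute("""CREATE TABLE IF NOT EXISTS metrics (
                  run_id TEXT, category TEXT, value REAL, sim_time REAL)""")

def store_metric(run_id, category, value, sim_time):
    db.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
               (run_id, category, value, sim_time))
    db.commit()

def historic_values(category):
    """Input for a later self-learned range calculation."""
    cur = db.execute("SELECT value FROM metrics WHERE category = ?", (category,))
    return [row[0] for row in cur]

store_metric("run_001", "read_latency", 12.3, sim_time=4521.0)
```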
  • FIG. 3 is a flow diagram for exemplary operations performed to verify, during run time, that performance of a system falls within predefined ranges. The operations begin at step 301 by getting the user defined ranges. At this step, the user may be prompted to define ranges for one or more performance metrics. Alternatively, the user may also be allowed to select predefined ranges used in previous simulations. Sets of predefined ranges may also be organized into test profiles. Each test profile may contain a unique combination of performance range settings. A user may be prompted to select one of these profiles at the outset of simulation. In one embodiment of the invention, the predefined ranges may be selected for a plurality of simulation tests to facilitate batch testing with the same predefined parameters.
  • At step 302, simulation begins with the Unit Drivers generating stimulus to the devices in order to emulate normal system operation and produce typical bus traffic. As simulation continues, the Performance Monitor performs the steps outlined in FIG. 2 to capture events and measure performance. In some embodiments of the invention, the Performance Monitor may compute performance results only after the simulation has run for a predetermined period of time. As each event is captured and its performance metrics are calculated, the test in step 303 determines whether the calculated performance metrics fall within the predefined ranges. If a calculated performance metric for a captured event falls outside of its predefined range, simulation may be stopped and a system failure message may be generated at step 306. In some embodiments of the invention, simulation may be stopped only if a certain threshold number of events fall outside the predefined range. Stopping simulation on the occurrence of a failing condition may save valuable simulation time and make performance verification more efficient.
  • If, on the other hand, the performance metric is deemed to fall within the predefined range, the Performance Monitor continues to capture and calculate performance metrics for events until another performance metric falls outside the predefined range or an end-of-test is detected in step 304. If an end-of-test is detected and all performance metrics fall within the predefined ranges, then the simulation run is deemed successful at step 305.
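  • The FIG. 3 flow reduces to a run-time check of each new metric against its predefined range, with an optional violation threshold before the run is failed. The sketch below is one way to express that; setting max_violations to 1 reproduces the stop-on-first-failure behavior.

```python
# Hypothetical sketch of the FIG. 3 run-time check against predefined ranges.
class RangeChecker:
    def __init__(self, predefined_ranges, max_violations=1):
        self.ranges = predefined_ranges          # e.g. {"read_latency": (0.0, 15.0)}
        self.max_violations = max_violations
        self.violations = []

    def check(self, category, value):
        """Return False when simulation should stop and report a failure (step 306)."""
        low, high = self.ranges[category]
        if not (low <= value <= high):
            self.violations.append((category, value))
            if len(self.violations) >= self.max_violations:
                print(f"FAIL: {category}={value} outside [{low}, {high}]")
                return False
        return True

checker = RangeChecker({"read_latency": (0.0, 15.0)}, max_violations=3)
keep_running = checker.check("read_latency", 17.2)
```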
  • FIG. 4 is a flow diagram for exemplary operations performed to verify, during run time, that performance of a system falls within ranges determined by the system (self learned ranges). The operations begin in step 401 by determining the ranges that will be used to verify performance metrics. The ranges may be determined by querying the Database 170 for performance metrics stored from previously run simulations and computing the self learned ranges based on such historic data. As discussed earlier, any method such as computing averages and normal curve peaks may be used to determine an expected performance range based on historic data.
  • In step 402, the simulation may begin once the self learned performance ranges are determined. As in the description for FIG. 3, the Performance Monitor may monitor and calculate the performance metrics for events as they are captured during run time. These calculated performance metrics may be stored for later calculations of self learned ranges. In some embodiments of the invention, however, the Performance monitor may use the calculated performance metric for a captured event to dynamically update the self learned ranges being applied in the current simulation.
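  • Dynamically updating the self-learned range as each metric arrives can be done with running statistics, so no full re-query of the database is needed. The sketch below uses Welford's online mean/variance update; this is an illustrative choice, not something the patent specifies.

```python
# Hypothetical sketch: keep a self-learned range up to date as metrics stream in.
class RunningRange:
    """Welford's online algorithm for mean/variance; range is mean +/- k*sigma."""
    def __init__(self, k=3.0):
        self.k, self.n, self.mean, self.m2 = k, 0, 0.0, 0.0

    def update(self, value):
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def bounds(self):
        sigma = (self.m2 / self.n) ** 0.5 if self.n else 0.0
        return (self.mean - self.k * sigma, self.mean + self.k * sigma)
```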
  • In step 403, if a calculated performance metric for a captured event falls outside of the self learned range, simulation may be stopped and a system failure message may be generated at step 406. In some embodiments of the invention, simulation may be stopped only if a certain threshold number of events fall outside the self learned range. If, on the other hand, the performance metric is deemed to fall within the self learned range, the Performance Monitor continues to capture and calculate performance metrics for events until another performance metric falls outside the self learned range or an end-of-test is detected in step 404. If an end-of-test is detected and all performance metrics fall within the self learned ranges, then the simulation run is deemed successful at step 405.
  • In some embodiments of the invention, the user may be allowed to configure the Performance Monitor to compare the performance metrics for a captured event with predefined ranges, self learned ranges, or both the predefined ranges and self learned ranges. For example, a user may choose to run simulation according to user defined ranges when the Database 170 does not contain sufficient information to calculate statistically significant self learned ranges. On the other hand, a user may run simulation according to the self learned ranges in order to detect any drastic changes in performance when the predefined ranges are suspected to be too lenient. Alternatively, a user may elect to run simulation according to both the predefined and self learned ranges to obtain the benefits of both approaches to verifying performance.
  • Dynamic Command Weighting
  • One common problem with using predefined test cases to generate traffic during simulation is that a problem-causing event may not be adequately tested by the test case. For example, a test case may have only a few read operations, which may be insufficient to bring about a failing condition. Therefore, another test case must be written that has sufficient read operations. However, under this approach a prohibitively large number of test cases would have to be written to account for all the various permutations and combinations of failing conditions.
  • The present invention provides for dynamically tailoring the events generated by the Unit Drivers by weighting commands based on run time results. For example, if a write operation latency is deemed to be approaching a failing condition, a weight parameter associated with the write operation may be dynamically adjusted so that the write operation is generated more frequently. One method for determining whether a performance metric is approaching a failing condition may be to determine if the performance metric falls outside a threshold range within the predefined range and/or self-learned range.
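  • A small sketch of the dynamic command weighting idea: when a metric drifts to within some margin of its limit, the weight of the corresponding command is boosted so the unit driver issues it more often. The margin, boost factor, and weighted-choice dispatch are all assumptions made for illustration.

```python
# Hypothetical sketch of dynamic command weighting driven by run-time results.
import random

def adjust_weights(weights, category, metric, low, high, margin=0.1, boost=2.0):
    """Boost a command's weight when its metric is within `margin` of a range limit."""
    span = high - low
    near_limit = metric < low + margin * span or metric > high - margin * span
    if near_limit:
        weights[category] *= boost
    return weights

def next_command(weights, rng=random):
    categories = list(weights)
    return rng.choices(categories, weights=[weights[c] for c in categories], k=1)[0]

weights = {"read": 1.0, "write": 1.0}
# A write latency of 0.78 against a range of (0.0, 0.8) is close to failing:
weights = adjust_weights(weights, "write", metric=0.78, low=0.0, high=0.8)
# "write" commands are now dispatched about twice as often by the driver model.
```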
  • Referring back to FIG. 1, the Performance Monitor 100 may contain the necessary logic to compute weights for different categories of events based on run time results and provide feedback to the Unit Drivers 160 and 161. In response to this feedback, the Unit Drivers may dispatch instructions to reflect the dynamically adjusted weights for the instructions.
  • CONCLUSION
  • By monitoring key performance metrics in real time during simulation, and then using that information along with predefined and/or self learned performance ranges and dynamic command weighting based on real-time results to fail the simulation, the present invention may notify a user that there is a potential problem and identify the offending event. As a result, a more efficient and effective verification of system performance may be achieved.
  • While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (6)

1. A computer readable storage medium containing a program which generates data exchanges for determining bus performance characteristics of a system bus which, when executed, performs operations, comprising, during execution of the program:
(a) measuring bus performance of the system bus, comprising:
(i) capturing events indicative of data exchanges by at least one interface monitor between at least two devices of the system via a system bus of the system;
(ii) interpreting the captured events and calculating performance metrics for those captured events;
(iii) storing the calculated performance metrics in a database; and
(b) verifying bus performance of the system bus, comprising:
(i) determining whether the calculated performance metrics fall within a determined performance range, wherein the determined performance range comprises at least a self-learned performance range; wherein the self-learned performance range is generated by:
(1) querying the database to receive a sample of previously stored performance metrics, wherein the sample of previously stored performance metrics comprises at least some performance metrics stored prior to the execution of the program and during the execution of the program; and
(2) calculating the self-learned performance range based on values of the sample;
(c) in response to the determining, varying a rate at which one or more events occur on the bus during the execution of the program, resulting in the generation of potential events which fall outside the predetermined performance range and the generation of those events more frequently.
2. The computer readable storage medium of claim 1, wherein whether to vary the rate is determined by querying the database.
3. A system, comprising:
a first processing device;
a second processing device coupled with the first processing device via a system bus;
at least one interface monitor for capturing events indicative of data exchanged between the at least two processing devices via the system bus during execution of a program which generates the data exchanges;
a performance monitor configured to, during execution of a program:
calculate one or more performance metrics based on the captured events;
store the one or more calculated performance metrics in a database;
determine whether the calculated performance metrics fall within a determined performance range, wherein the determined performance range comprises at least a self-learned performance range; wherein the performance monitor is further configured to generate the self-learned performance range by:
(1) querying the database to receive a sample of previously stored performance metrics, wherein the sample of previously stored performance metrics comprises at least some performance metrics stored prior to the execution of the program and during the execution of the program; and
(2) calculating the self-learned performance range based on values of the sample; and
in response to the determining, vary a rate at which one or more events occur on the bus during the execution of the program, resulting in the generation of potential events which fall outside the predetermined performance range and the generation of those events more frequently.
4. The system of claim 3, wherein whether to vary the rate is determined by querying the database.
5. The system of claim 3, wherein the first processing device is a central processing unit (CPU) and the second processing device is a graphics processing unit (GPU).
6. The system of claim 3, wherein the first processing device is an Input/Output (I/O) bridge chip.
US12/018,329 2005-10-26 2008-01-23 Run-time performance verification system Abandoned US20080133489A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/018,329 US20080133489A1 (en) 2005-10-26 2008-01-23 Run-time performance verification system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/259,294 US7324922B2 (en) 2005-10-26 2005-10-26 Run-time performance verification system
US12/018,329 US20080133489A1 (en) 2005-10-26 2008-01-23 Run-time performance verification system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/259,294 Continuation US7324922B2 (en) 2005-10-26 2005-10-26 Run-time performance verification system

Publications (1)

Publication Number Publication Date
US20080133489A1 true US20080133489A1 (en) 2008-06-05

Family

ID=37986348

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/259,294 Active US7324922B2 (en) 2005-10-26 2005-10-26 Run-time performance verification system
US11/947,636 Active 2026-07-05 US7747414B2 (en) 2005-10-26 2007-11-29 Run-Time performance verification system
US12/018,329 Abandoned US20080133489A1 (en) 2005-10-26 2008-01-23 Run-time performance verification system

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US11/259,294 Active US7324922B2 (en) 2005-10-26 2005-10-26 Run-time performance verification system
US11/947,636 Active 2026-07-05 US7747414B2 (en) 2005-10-26 2007-11-29 Run-Time performance verification system

Country Status (2)

Country Link
US (3) US7324922B2 (en)
CN (1) CN100428173C (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090150857A1 (en) * 2007-12-07 2009-06-11 Krishnan Srinivasan Performance software instrumentation and analysis for electronic design automation
US20090190674A1 (en) * 2008-01-28 2009-07-30 Ibm Corporation Method and apparatus to inject noise in a network system
US20090254525A1 (en) * 2008-04-07 2009-10-08 Krishnan Srinivasan Method and system for a database to monitor and analyze performance of an electronic design
US20100057400A1 (en) * 2008-09-04 2010-03-04 Sonics, Inc. Method and system to monitor, debug, and analyze performance of an electronic design
US20110067114A1 (en) * 2002-11-05 2011-03-17 Sonics Inc Methods and apparatus for a configurable protection architecture for on-chip systems
US8972995B2 (en) 2010-08-06 2015-03-03 Sonics, Inc. Apparatus and methods to concurrently perform per-thread as well as per-tag memory access scheduling within a thread and across two or more threads
WO2015184801A1 (en) * 2014-11-06 2015-12-10 中兴通讯股份有限公司 Method and apparatus for determining performance of virtual machine
US20190286363A1 (en) * 2018-03-14 2019-09-19 Western Digital Technologies, Inc. Storage System and Method for Determining Ecosystem Bottlenecks and Suggesting Improvements
US11475008B2 (en) * 2020-04-28 2022-10-18 Capital One Services, Llc Systems and methods for monitoring user-defined metrics

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7324922B2 (en) * 2005-10-26 2008-01-29 International Business Machines Corporation Run-time performance verification system
US8253748B1 (en) * 2005-11-29 2012-08-28 Nvidia Corporation Shader performance registers
US7809928B1 (en) 2005-11-29 2010-10-05 Nvidia Corporation Generating event signals for performance register control using non-operative instructions
US7970746B2 (en) * 2006-06-13 2011-06-28 Microsoft Corporation Declarative management framework
US7730068B2 (en) * 2006-06-13 2010-06-01 Microsoft Corporation Extensible data collectors
US8572295B1 (en) 2007-02-16 2013-10-29 Marvell International Ltd. Bus traffic profiling
US20090112932A1 (en) * 2007-10-26 2009-04-30 Microsoft Corporation Visualizing key performance indicators for model-based applications
US8826242B2 (en) * 2007-11-27 2014-09-02 Microsoft Corporation Data driven profiling for distributed applications
US20120317266A1 (en) * 2011-06-07 2012-12-13 Research In Motion Limited Application Ratings Based On Performance Metrics
US9280437B2 (en) * 2012-11-20 2016-03-08 Bank Of America Corporation Dynamically scalable real-time system monitoring
GB2523865B (en) * 2013-07-05 2021-03-24 Pismo Labs Technology Ltd Methods and systems for sending and receiving information data
CN103744771A (en) * 2014-01-28 2014-04-23 中国工商银行股份有限公司 Method, equipment and system for monitoring host performance benchmark deviations
US10089307B2 (en) 2014-12-31 2018-10-02 International Business Machines Corporation Scalable distributed data store
US9952956B2 (en) * 2015-07-06 2018-04-24 International Business Machines Corporation Calculating the clock frequency of a processor
CN105045716B (en) * 2015-07-31 2018-09-21 小米科技有限责任公司 Permission management method and device
US11153152B2 (en) * 2018-11-21 2021-10-19 Cisco Technology, Inc. System and methods to validate issue detection and classification in a network assurance system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6278959B1 (en) * 1999-03-19 2001-08-21 International Business Machines Corporation Method and system for monitoring the performance of a data processing system
US6449660B1 (en) * 1995-07-31 2002-09-10 International Business Machines Corporation Object-oriented I/O device interface framework mechanism
US20030061006A1 (en) * 2001-09-24 2003-03-27 Richards Kevin T. Evaluating performance data describing a relationship between a provider and a client
US20040019457A1 (en) * 2002-07-29 2004-01-29 Arisha Khaled A. Performance management using passive testing
US6701363B1 (en) * 2000-02-29 2004-03-02 International Business Machines Corporation Method, computer program product, and system for deriving web transaction performance metrics
US6975963B2 (en) * 2002-09-30 2005-12-13 Mcdata Corporation Method and system for storing and reporting network performance metrics using histograms
US20060059568A1 (en) * 2004-09-13 2006-03-16 Reactivity, Inc. Metric-based monitoring and control of a limited resource
US7076397B2 (en) * 2002-10-17 2006-07-11 Bmc Software, Inc. System and method for statistical performance monitoring
US7324922B2 (en) * 2005-10-26 2008-01-29 International Business Machines Corporation Run-time performance verification system

Family Cites Families (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4011009A (en) * 1975-05-27 1977-03-08 Xerox Corporation Reflection diffraction grating having a controllable blaze angle
US3986020A (en) * 1975-09-25 1976-10-12 Bell Telephone Laboratories, Incorporated Common medium optical multichannel exchange and switching system
US4450554A (en) * 1981-08-10 1984-05-22 International Telephone And Telegraph Corporation Asynchronous integrated voice and data communication system
US4797879A (en) * 1987-06-05 1989-01-10 American Telephone And Telegraph Company At&T Bell Laboratories Packet switched interconnection protocols for a star configured optical lan
US4873681A (en) * 1988-01-26 1989-10-10 Bell Communications Research, Inc. Hybrid optical and electronic packet switch
US4900119A (en) * 1988-04-01 1990-02-13 Canadian Patents & Development Ltd. Wavelength selective optical devices using optical directional coupler
US5023845A (en) * 1988-10-31 1991-06-11 The United States Of America As Represented By The Secretary Of The Navy Embedded fiber optic beam displacement sensor
US4970714A (en) * 1989-01-05 1990-11-13 International Business Machines Corp. Adaptive data link protocol
US5005167A (en) * 1989-02-03 1991-04-02 Bell Communications Research, Inc. Multicast packet switching method
US5103340A (en) * 1989-02-21 1992-04-07 International Business Machines Corporation Multiple-cavity optical filter using change of cavity length
US4926299A (en) * 1989-05-30 1990-05-15 Gilson Warren E Portable flashlight
EP0439646B1 (en) * 1990-01-30 1995-03-15 Hewlett-Packard Company Optical star network protocol and system with minimised delay between consecutive packets
US5140655A (en) * 1990-09-04 1992-08-18 At&T Bell Laboratories Optical star coupler utilizing fiber amplifier technology
US5539559A (en) * 1990-12-18 1996-07-23 Bell Communications Research Inc. Apparatus and method for photonic contention resolution in a large ATM switch
US5093743A (en) * 1990-12-28 1992-03-03 At&T Bell Laboratories Optical packet switch
FR2672169B1 (en) * 1991-01-24 1993-04-09 Alcatel Nv COMMUNICATION METHOD AND NETWORK ON OPTICAL FIBERS WITH FREQUENCY MULTIPLEXING.
US5191626A (en) * 1991-04-22 1993-03-02 The Trustees Of Columbia University In The City Of New York Optical communications system and method
EP0533391A3 (en) * 1991-09-16 1993-08-25 American Telephone And Telegraph Company Packet switching apparatus using pipeline controller
US5257113A (en) * 1991-09-20 1993-10-26 International Business Machines Corporation Video mixing technique using JPEG compressed data
US5212743A (en) * 1992-02-12 1993-05-18 At&T Bell Laboratories Automatic polarization controller having broadband, reset-free operation
US5915054A (en) * 1992-03-05 1999-06-22 Fuji Xerox Co., Ltd. Star coupler for an optical communication network
US5311360A (en) * 1992-04-28 1994-05-10 The Board Of Trustees Of The Leland Stanford, Junior University Method and apparatus for modulating a light beam
FR2696891B1 (en) * 1992-10-09 1994-11-04 Alcatel Nv Optical switching matrix.
US5519526A (en) * 1992-10-21 1996-05-21 California Institute Of Technology Optical protocols for communication networks
US5343542A (en) * 1993-04-22 1994-08-30 International Business Machines Corporation Tapered fabry-perot waveguide optical demultiplexer
JP3516972B2 (en) * 1993-04-22 2004-04-05 株式会社東芝 Communications system
JPH06350646A (en) * 1993-06-08 1994-12-22 Nec Corp Optical wavelength selection control system
DE69434263T2 (en) * 1993-07-14 2006-01-12 Nippon Telegraph And Telephone Corp. Photonic coupling field with frequency routing for time division multiplex links
DE69424311T2 (en) * 1993-11-08 2000-12-14 British Telecomm CROSS-CONNECTING SYSTEM FOR AN OPTICAL NETWORK
US5455699A (en) * 1993-12-21 1995-10-03 At&T Corp. Large capacity multi-access wavelength division multiplexing packet network
US5864414A (en) * 1994-01-26 1999-01-26 British Telecommunications Public Limited Company WDM network with control wavelength
US5500761A (en) * 1994-01-27 1996-03-19 At&T Corp. Micromechanical modulator
US5487120A (en) * 1994-02-09 1996-01-23 International Business Machines Corporation Optical wavelength division multiplexer for high speed, protocol-independent serial data sources
US5530575A (en) * 1994-09-09 1996-06-25 The Trustees Of Columbia University Systems and methods for employing a recursive mesh network with extraplanar links
US5680234A (en) * 1994-10-20 1997-10-21 Lucent Technologies Inc. Passive optical network with bi-directional optical spectral slicing and loop-back
US5500858A (en) * 1994-12-20 1996-03-19 The Regents Of The University Of California Method and apparatus for scheduling cells in an input-queued switch
US5515361A (en) * 1995-02-24 1996-05-07 International Business Machines Corporation Link monitoring and management in optical star networks
US5661592A (en) * 1995-06-07 1997-08-26 Silicon Light Machines Method of making and an apparatus for a flat diffraction grating light valve
US5781537A (en) * 1995-07-07 1998-07-14 International Business Machines Corporation Setting up, taking down and maintaining connections in a communications network
US6041071A (en) * 1995-09-29 2000-03-21 Coretek, Inc. Electro-optically tunable external cavity mirror for a narrow linewidth semiconductor laser
US5739945A (en) * 1995-09-29 1998-04-14 Tayebati; Parviz Electrically tunable optical filter utilizing a deformable multi-layer mirror
US5631758A (en) * 1995-10-26 1997-05-20 Lucent Technologies Inc. Chirped-pulse multiple wavelength telecommunications system
US5739935A (en) * 1995-11-14 1998-04-14 Telefonaktiebolaget Lm Ericsson Modular optical cross-connect architecture with optical wavelength switching
US5825528A (en) * 1995-12-26 1998-10-20 Lucent Technologies Inc. Phase-mismatched fabry-perot cavity micromechanical modulator
FR2743233B1 (en) * 1995-12-28 1998-01-23 Alcatel Nv OPTICAL SIGNAL DISTRIBUTION SYSTEM
US5729527A (en) * 1995-12-29 1998-03-17 Tellabs Operations, Inc. Fault management in a multichannel transmission system
US5751469A (en) * 1996-02-01 1998-05-12 Lucent Technologies Inc. Method and apparatus for an improved micromechanical modulator
US5659418A (en) * 1996-02-05 1997-08-19 Lucent Technologies Inc. Structure for membrane damping in a micromechanical modulator
US5796504A (en) * 1996-03-13 1998-08-18 Hughes Electronics Fiber-optic telemetry system and method for large arrays of sensors
US5793746A (en) * 1996-04-29 1998-08-11 International Business Machines Corporation Fault-tolerant multichannel multiplexer ring configuration
US6108311A (en) * 1996-04-29 2000-08-22 Tellabs Operations, Inc. Multichannel ring and star networks with limited channel conversion
US6212182B1 (en) * 1996-06-27 2001-04-03 Cisco Technology, Inc. Combined unicast and multicast scheduling
JP2002515134A (en) * 1996-07-26 2002-05-21 イタルテル ソシエタ ペル アチオニ Tunable add/drop optical device
US5745271A (en) * 1996-07-31 1998-04-28 Lucent Technologies, Inc. Attenuation device for wavelength multiplexed optical fiber communications
US5923644A (en) * 1996-10-03 1999-07-13 The Board Of Trustees Of The Leland Stanford Junior University Apparatus and method for processing multicast cells in an input-queued multicast switch
US5912749A (en) * 1997-02-11 1999-06-15 Lucent Technologies Inc. Call admission control in cellular networks
GB9704587D0 (en) * 1997-03-05 1997-04-23 Fujitsu Ltd Wavelength-division multiplexing in passive optical networks
JPH10262000A (en) * 1997-03-19 1998-09-29 Fujitsu Ltd Failure restoring method and device in passive optical network
US6025944A (en) * 1997-03-27 2000-02-15 Mendez R&D Associates Wavelength division multiplexing/code division multiple access hybrid
US5870221A (en) * 1997-07-25 1999-02-09 Lucent Technologies, Inc. Micromechanical modulator having enhanced performance
US5943454A (en) * 1997-08-15 1999-08-24 Lucent Technologies, Inc. Freespace optical bypass-exchange switch
JPH1187812A (en) * 1997-09-12 1999-03-30 Fujitsu Ltd Gain equalizer and optical transmission system provided therewith
US6025950A (en) * 1997-09-22 2000-02-15 Coretek, Inc. Monolithic all-semiconductor optically addressed spatial light modulator based on low-photoconductive semiconductors
US6097533A (en) * 1997-10-21 2000-08-01 Antec Corporation Optical amplifier for CATV system with forward and reverse paths
US5974207A (en) * 1997-12-23 1999-10-26 Lucent Technologies, Inc. Article comprising a wavelength-selective add-drop multiplexer
US5914804A (en) * 1998-01-28 1999-06-22 Lucent Technologies Inc Double-cavity micromechanical optical modulator with plural multilayer mirrors
US6301274B1 (en) * 1998-03-30 2001-10-09 Coretek, Inc. Tunable external cavity laser
US6188477B1 (en) * 1998-05-04 2001-02-13 Cornell Research Foundation, Inc. Optical polarization sensing apparatus and method
US5943158A (en) * 1998-05-05 1999-08-24 Lucent Technologies Inc. Micro-mechanical, anti-reflection, switched optical modulator array and fabrication method
US6417944B1 (en) * 1998-05-28 2002-07-09 3Com Corporation Asynchronous transfer mode switch utilizing optical wave division multiplexing
US6216237B1 (en) * 1998-06-19 2001-04-10 Lucent Technologies Inc. Distributed indirect software instrumentation
US6525850B1 (en) * 1998-07-17 2003-02-25 The Regents Of The University Of California High-throughput, low-latency next generation internet networks using optical label switching and high-speed optical header generation, detection and reinsertion
US5949801A (en) * 1998-07-22 1999-09-07 Coretek, Inc. Tunable laser and method for operating the same
US7110669B2 (en) * 1998-07-22 2006-09-19 Synchrodyne Networks, Inc. Time driven wavelength conversion-based switching with common time reference
US5949571A (en) * 1998-07-30 1999-09-07 Lucent Technologies Mars optical modulators
US5943155A (en) * 1998-08-12 1999-08-24 Lucent Technologies Inc. Mars optical modulators
KR100271210B1 (en) * 1998-09-01 2000-11-01 윤덕용 Optical cross-connect with layered modularity
US6356544B1 (en) * 1999-05-03 2002-03-12 Fujitsu Network Communications, Inc. SONET add/drop multiplexer with packet over SONET capability
US6192173B1 (en) * 1999-06-02 2001-02-20 Nortel Networks Limited Flexible WDM network architecture
WO2001013549A1 (en) * 1999-08-13 2001-02-22 Fujitsu Limited Optical communication system and terminal device
US6222954B1 (en) * 1999-09-17 2001-04-24 Light Bytes, Inc. Fault-tolerant fiber-optical beam control modules
US6407851B1 (en) * 2000-08-01 2002-06-18 Mohammed N. Islam Micromechanical optical switch
US6532090B1 (en) * 2000-02-28 2003-03-11 Lucent Technologies Inc. Wavelength selective cross-connect with reduced complexity
US6870836B1 (en) * 2000-03-31 2005-03-22 Nortel Networks Limited System and method for transfer of IP data in an optical communication network
EP1162860A3 (en) * 2000-06-08 2006-01-11 Alcatel Scalable WDM optical IP router architecture
EP1172681A3 (en) * 2000-07-13 2004-06-09 Creo IL. Ltd. Blazed micro-mechanical light modulator and array thereof
US6920287B1 (en) * 2000-08-01 2005-07-19 Nortel Networks Limited Smart connect
US6401851B1 (en) * 2000-09-14 2002-06-11 Deere & Company Hood assembly
US6925259B2 (en) * 2000-10-12 2005-08-02 At&T Corp. MAC protocol for optical packet-switched ring network
US6721475B1 (en) * 2000-12-22 2004-04-13 Cheetah Omni, Llc Apparatus and method for providing gain equalization
US7000026B2 (en) * 2000-12-22 2006-02-14 Nortel Networks Limited Multi-channel sharing in a high-capacity network
US6721473B1 (en) * 2001-02-02 2004-04-13 Cheetah Omni, Llc Variable blazed grating based signal processing
US7260655B1 (en) * 2001-12-03 2007-08-21 Cheetah Omni, Llc Optical routing using star switching fabric with reduced effective switching time
US7209657B1 (en) * 2001-12-03 2007-04-24 Cheetah Omni, Llc Optical routing using a star switching fabric
US7110671B1 (en) * 2001-12-03 2006-09-19 Cheetah Omni, Llc Method and apparatus for scheduling communication using a star switching fabric
US6937961B2 (en) * 2002-09-26 2005-08-30 Freescale Semiconductor, Inc. Performance monitor and method therefor
US7624174B2 (en) * 2003-05-22 2009-11-24 Microsoft Corporation Self-learning method and system for detecting abnormalities

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6449660B1 (en) * 1995-07-31 2002-09-10 International Business Machines Corporation Object-oriented I/O device interface framework mechanism
US6278959B1 (en) * 1999-03-19 2001-08-21 International Business Machines Corporation Method and system for monitoring the performance of a data processing system
US6701363B1 (en) * 2000-02-29 2004-03-02 International Business Machines Corporation Method, computer program product, and system for deriving web transaction performance metrics
US20030061006A1 (en) * 2001-09-24 2003-03-27 Richards Kevin T. Evaluating performance data describing a relationship between a provider and a client
US20040019457A1 (en) * 2002-07-29 2004-01-29 Arisha Khaled A. Performance management using passive testing
US6975963B2 (en) * 2002-09-30 2005-12-13 Mcdata Corporation Method and system for storing and reporting network performance metrics using histograms
US7076397B2 (en) * 2002-10-17 2006-07-11 Bmc Software, Inc. System and method for statistical performance monitoring
US20060059568A1 (en) * 2004-09-13 2006-03-16 Reactivity, Inc. Metric-based monitoring and control of a limited resource
US7324922B2 (en) * 2005-10-26 2008-01-29 International Business Machines Corporation Run-time performance verification system

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110067114A1 (en) * 2002-11-05 2011-03-17 Sonics Inc Methods and apparatus for a configurable protection architecture for on-chip systems
US8443422B2 (en) 2002-11-05 2013-05-14 Sonics, Inc. Methods and apparatus for a configurable protection architecture for on-chip systems
US8229723B2 (en) 2007-12-07 2012-07-24 Sonics, Inc. Performance software instrumentation and analysis for electronic design automation
US20090150857A1 (en) * 2007-12-07 2009-06-11 Krishnan Srinivasan Performance software instrumentation and analysis for electronic design automation
US8225143B2 (en) * 2008-01-28 2012-07-17 International Business Machines Corporation Method and apparatus to inject noise in a network system
US20090190674A1 (en) * 2008-01-28 2009-07-30 IBM Corporation Method and apparatus to inject noise in a network system
US8073820B2 (en) * 2008-04-07 2011-12-06 Sonics, Inc. Method and system for a database to monitor and analyze performance of an electronic design
US20090254525A1 (en) * 2008-04-07 2009-10-08 Krishnan Srinivasan Method and system for a database to monitor and analyze performance of an electronic design
US20100057400A1 (en) * 2008-09-04 2010-03-04 Sonics, Inc. Method and system to monitor, debug, and analyze performance of an electronic design
US8032329B2 (en) 2008-09-04 2011-10-04 Sonics, Inc. Method and system to monitor, debug, and analyze performance of an electronic design
US8972995B2 (en) 2010-08-06 2015-03-03 Sonics, Inc. Apparatus and methods to concurrently perform per-thread as well as per-tag memory access scheduling within a thread and across two or more threads
WO2015184801A1 (en) * 2014-11-06 2015-12-10 中兴通讯股份有限公司 Method and apparatus for determining performance of virtual machine
CN105630645A (en) * 2014-11-06 2016-06-01 中兴通讯股份有限公司 Method and device for determining performance of virtual machine
US20190286363A1 (en) * 2018-03-14 2019-09-19 Western Digital Technologies, Inc. Storage System and Method for Determining Ecosystem Bottlenecks and Suggesting Improvements
US11126367B2 (en) * 2018-03-14 2021-09-21 Western Digital Technologies, Inc. Storage system and method for determining ecosystem bottlenecks and suggesting improvements
US11475008B2 (en) * 2020-04-28 2022-10-18 Capital One Services, Llc Systems and methods for monitoring user-defined metrics

Also Published As

Publication number Publication date
CN100428173C (en) 2008-10-22
CN1955935A (en) 2007-05-02
US20070093986A1 (en) 2007-04-26
US7324922B2 (en) 2008-01-29
US7747414B2 (en) 2010-06-29
US20080071499A1 (en) 2008-03-20

Similar Documents

Publication Publication Date Title
US7324922B2 (en) Run-time performance verification system
US7890813B2 (en) Method and apparatus for identifying a failure mechanism for a component in a computer system
US8627158B2 (en) Flash array built in self test engine with trace array and flash metric reporting
US7610526B2 (en) On-chip circuitry for bus validation
US7003698B2 (en) Method and apparatus for transport of debug events between computer system components
US6609221B1 (en) Method and apparatus for inducing bus saturation during operational testing of busses using a pattern generator
WO2004003748A1 (en) Method and system to implement a system event log for improved system manageability
CN107992410B (en) Software quality monitoring method and device, computer equipment and storage medium
US20080276129A1 (en) Software tracing
CN115348159B Microservice fault localization method and device based on autoencoder and service dependency graph
CN104615477A Cycle-accurate replay and debugging of running FPGA systems
Singh et al. Verification of safety critical and control systems of Nuclear Power Plants using Petri nets
US8799608B1 (en) Techniques involving flaky path detection
US11105854B2 (en) System, apparatus and method for inter-die functional testing of an integrated circuit
CN112100067B (en) Regression analysis-based test method, system and storage medium
CN106502887A Stability test method, test controller and system
US20240036111A1 (en) Chip verification method and apparatus, electronic device, and storage medium
CN112035996A (en) Equipment testability integrated design and evaluation system
US20200334094A1 (en) Self-verification of operating system memory management
US7788546B2 (en) Method and system for identifying communication errors resulting from reset skew
CN113947049A Self-feedback chip verification platform for improving functional coverage
Oppermann et al. Anomaly Detection Approaches for Secure Cloud Reference Architectures in Legal Metrology.
Zhao et al. Modeling for early fault detection of intermittent connections on controller area networks
CN113917385A (en) Self-detection method and system for electric energy meter
US7599688B2 (en) Methods and apparatus for passive mid-stream monitoring of real-time properties

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE