CA2242377A1 - Method and apparatus for network assessment - Google Patents

Method and apparatus for network assessment

Info

Publication number
CA2242377A1
Authority
CA
Canada
Prior art keywords
network
mission
server
sentries
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002242377A
Other languages
French (fr)
Inventor
Paul G. Czarnik
Carl J. Schroeder
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LanQuest Group
Original Assignee
Lanquest Group
Paul G. Czarnik
Carl J. Schroeder
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lanquest Group, Paul G. Czarnik, Carl J. Schroeder filed Critical Lanquest Group
Publication of CA2242377A1 publication Critical patent/CA2242377A1/en
Abandoned legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5041Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
    • H04L41/5054Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components

Abstract

A system and method are disclosed for acquiring network performance data. A mission server (120) is connected to a network (115), and is operative to interface with clients (110) to define and receive requests for a mission. The mission as defined includes operations that require participation in the network (115) by devices connected to a plurality of segments (131, 132, 133) at a plurality of locations within the network. A plurality of sentries (141, 142, 143) are provided on devices connected to the segments of the network (115) at locations within the network (115) so that the devices are operative to participate in the network from the segments of the network (115) at their locations. The sentries (141, 142, 143) are then operative to support the mission by participating in the network through the devices. A request for a mission is received at the mission server (120) and the mission is communicated from the mission server (120) to the sentries required to execute the mission. The operations of the mission are executed by the sentries and the results of the operations are communicated from the sentries to the mission server. The result of the mission is determined from the results of the operations.

Description

METHOD AND APPARATUS FOR NETWORK ASSESSMENT

BACKGROUND OF THE INVENTION
The present invention relates generally to a method and apparatus for network assessment.
The present invention may be used with a local area network ("LAN") or a wide area network ("WAN") which extends over an entire enterprise and includes a number of LAN's connected together in an "intranet." The present invention may also be used over the Internet. The present invention provides a set of software agents or sentries on devices which are physically located on segments of the network which generate and observe network traffic, together with a central controller which receives information from the sentries and assesses network performance based on that information.

In order to efficiently manage and troubleshoot a network it is of utmost importance that the network administrator be able to obtain information about the network traffic at various destinations or nodes along various segments of the network. To effectively monitor the network, the network administrator needs to know information regarding the source of the packets being routed by the network, where the packets are being routed, the capacity and level of utilization of the network, and the conditions under which the network will become unstable or overloaded.
Such detailed information about the network capacity is important for management of the network so that the network configuration can be modified or additional switches, routers or other network devices can be added to the network as needed. Ideally, these changes would occur based on forecasts of network utilization before problems occur.

Information about individual packets and their routing is also necessary for troubleshooting and network optimization. For example, information about the source of a packet may show that a certain device connected to the network is inadvertently broadcasting messages or broadcasting such messages in triplicate. Information about the routing of packets may indicate that excessive traffic on one segment of the network is actually caused by traffic generated and transmitted by other segments.

WO 98/21844 PCT/US97/20503

Such detailed information about packets is currently obtained by using a product known as a "network sniffer" or analyzer. A network sniffer is inserted into a segment of the network where it can observe the packets which are traveling along the communication line into which it has been inserted. Network sniffers are effective to record the traffic along single lines but they are generally too costly to permanently install in the lines of a network. A common procedure is to dedicate a pool of sniffers to probing a large network and then transport the sniffers to trouble spots as such spots are detected. Part of the network must be taken down so that the sniffer can be inserted into the line for which traffic is to be measured. To minimize disruption to network users, this procedure must often occur during light traffic times such as weekends or very late at night. When installed, the sniffer collects data. In some cases data must be collected for an extended period of time before the traffic phenomena to be observed occurs.

Network analysis using a sniffer is complicated by the fact that a sniffer must be physically inserted into the network at some point. As it is not naturally a part of the network, a sniffer must collect and store its own data or transmit that data along a different communication channel than the network itself. Often it is not immediately apparent where the sniffer should be inserted to effectively diagnose a network problem. For example, the cause of excessive traffic or other network problem may be located on a different segment of the network than the segment where the problem is first observed. Thus, a costly and time consuming trial and error process must be employed in which the sniffer is physically connected to a segment, monitored, disconnected from the segment, moved to another segment, inserted there, etc. In the end, several sniffers may be required to isolate and analyze the problem. Furthermore, since the sniffer is only inserted into the network when a problem occurs, there is no early warning before problems occur or estimation of when network capacity will be exceeded or which segments are likely to fail.

What is needed is a way of using existing network hardware already installed in network segments to perform real time traffic assessment, trace individual packets, determine the source of network traffic load, estimate the network capacity, and forecast the effect of additional network traffic on network performance. Such a system would provide needed information for network management as well as information needed to locate portions of the network where a sniffer should be inserted to obtain further specific troubleshooting information.

SUMMARY OF THE INVENTION

Accordingly, the present invention provides a system and method for employing agents or sentries which generate and observe traffic on the network. The sentries are installed on existing network assets so that it is not necessary to transport additional hardware or install additional hardware in network segments. The sentries participate in the network and are capable of generating traffic on the network as well as sending data over the network. A mission control center is provided for sending mission orders to the sentries, obtaining data from the sentries, and analyzing the data collected from the sentries. The system enables the network administrator to perform real time traffic assessment, trace individual packets, map routing paths, determine the source of network traffic load, estimate the network capacity, and forecast the effect of additional network traffic on that capacity.

In one aspect, the present invention provides a system and method for acquiring network performance data. A mission server is connected to a network, and is operative to interface with Clients to define and receive requests for a mission. The mission as defined includes operations that require participation in the network by devices connected to a plurality of segments at a plurality of locations within the network. A plurality of sentries are provided on devices connected to the segments of the network at locations within the network so that the devices are operative to participate in the network from the segments of the network at their locations. The sentries are then operative to support the mission by participating in the network through the devices. A request for a mission is received at the mission server and the mission is communicated from the mission server to the sentries required to execute the mission. The operations of the mission are executed by the sentries and the results of the operations are communicated from the sentries to the mission server. The result of the mission is determined from the results of the operations.

These and other features and advantages of the present invention will be presented in more detail in the following specification of the invention and the figures.

FIGURE 1A is a high level diagram illustrating an embodiment of the present invention.

FIGURE 1B illustrates one embodiment of the system generally depicted in FIGURE 1A.

FIGURE 2 illustrates how a mission server uses a set of sentries located on segments of a network to analyze network traffic or troubleshoot network malfunctions or delays.

FIGURE 3 is a flow diagram which illustrates the process implemented on a Client.

FIGURE 4 is a flow diagram of a process implemented on the mission server.

FIGURE 5 is a flow diagram of the process implemented on each sentry.

FIGURE 6 illustrates the process by which a server initiates the execution of the mission.

FIGURE 7 illustrates a Packet Descriptor file.

FIGURE 8 illustrates how the Packet Descriptor file affects IP packet fields and IPX packet fields.

FIGURE 9 illustrates the composition of a template file.

DETAILED DESCRIPTION OF THE INVENTION

FIGURE 1A is a high level diagram illustrating an embodiment of the present invention.
In this embodiment, a Client 110 desires information about a network 115. The information desired can include packet routing data, traffic volume data, packet sources, packet destinations, packet content, etc. Client 110 may be directly connected to network 115, in which case it will typically reside on a single segment of network 115. Alternatively, Client 110 may be a remote Client which is not directly connected to network 115. In order to obtain information about remote segments, Client 110 first establishes a connection with a server 120.

Client 110 communicates the network information it desires in the form of a "mission" to server 120. In one embodiment, this is accomplished as follows: server 120 provides various "mission choices" to Client 110, preferably via World Wide Web pages or Java Applets. From the mission choices, Client 110 fashions a "mission request" which it transmits to server 120 for distribution. After server 120 has distributed the mission to sentries for execution and received the results back from the sentries for the completed missions, it communicates "mission results" back to Client 110.

Server 120 is part of network 115. The connection between server 120 and other entities of network 115 may be via TCP/IP, local Ethernet, token ring, dial-up, or via any suitable network protocol which enables server 120 to participate in network 115 so that it may send and receive messages to and from devices located on a network segment 131, a network segment 132, and a network segment 133.

A sentry 141, a sentry 142, and a sentry 143 are connected to network segment 131, network segment 132, and network segment 133, respectively. Each sentry is a software agent which resides on a device connected to a network segment. The device may be a bridge, router, host PC, etc. In a preferred embodiment, each device which supports a sentry runs Microsoft Windows NT, available from Microsoft Corporation of Redmond, Washington. Preferably, server 120 also supports Windows NT.

Each sentry is designed to run in the background on the device on which it is implemented, so that the device continues to perform whatever function it is intended to perform in a manner which is unaffected by the presence of the sentry. Because the sentries are implemented on devices which already function as part of the network segments, no additional hardware is required. The sentries perform numerous network assessment tasks such as sending messages, receiving messages, tracing message routes, and sampling messages on the network segments. Although server 120 is not physically connected to every segment of network 115, server 120 cooperates with sentry 141, sentry 142, and sentry 143 to obtain information about the network traffic on the segments to which the sentries are connected. In this way, server 120 is a more powerful and versatile tool than a network sniffer, because it need not be physically connected to the segments of the network which are being analyzed. Furthermore, because the sentries are implemented on devices already participating in the network segments, and server 120 communicates with those devices over the network, the system does not require additional hardware or communication lines to gather network data.

Server 120 includes a processor and memory which keeps track of which sentries are available to it and what possible missions may be carried out by those sentries. Server 120 presents Client 110 with mission choices, depending on which sentries are available to it. Client 110 selects a mission or defines a mission including a combination of operations which may be performed by sentries available to server 120. In one embodiment, the level of detail which must be specified by the Client is reduced by making the mission choices high level selections which the server translates into the individual operations which make up the selected mission. The available sentry operations preferably include: send, receive, trace route, and sample. Each operation is performed at a sentry connected to a segment of network 115. Client 110 may also specify a periodic monitoring mission to be executed by server 120 that causes server 120 to periodically generate operations that require a sentry or set of sentries to sample the network traffic. Server 120 then periodically requests sentries to execute the operations. Server 120 may collect data and generate reports for Client 110 periodically or as requested by Client 110.

Complex network analysis missions are accomplished by performing operations at different sentries and gathering the resulting data. In one embodiment, not only is the mission definition provided to server 120 by Client 110 as a concise high level statement, but the instructions sent by server 120 to each sentry are also high level. Each sentry stores information regarding the operations it is required to perform for each mission. Once the mission is identified by a mission statement, the sentry performs the operations on schedule. Certain missions require operations to be performed periodically or continuously over long periods of time. A single mission request from server 120 could indicate thousands of operations to be performed by a large number of sentries over days, weeks, or months. The advantage of such high-level mission definitions is that network traffic between server 120 and the sentries is minimized because only relatively short mission statements are transmitted.
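The translation of a high-level mission choice into per-sentry operations can be sketched as data. The following is a minimal illustration only: the four operation names (send, receive, trace route, sample) come from the text, while the class, field, and function names are hypothetical assumptions.

```python
# Hypothetical sketch of the high-level mission statements described above.
# Only the operation names (send, receive, trace_route, sample) come from
# the patent text; everything else here is an illustrative assumption.

from dataclasses import dataclass, field

@dataclass
class Operation:
    kind: str          # "send", "receive", "trace_route", or "sample"
    sentry: str        # identifier of the sentry that performs it
    params: dict = field(default_factory=dict)

@dataclass
class Mission:
    name: str
    operations: list

def expand_sample_all(sentries, every_nth=10, max_packets=2000):
    """Translate one high-level choice ("sample all segments") into the
    individual per-sentry operations that make up the mission."""
    return Mission(
        name="sample-all-segments",
        operations=[
            Operation("sample", s, {"every_nth": every_nth,
                                    "max_packets": max_packets})
            for s in sentries
        ],
    )

mission = expand_sample_all(["sentry141", "sentry142", "sentry143"])
print(len(mission.operations))  # 3 — one sampling operation per sentry
```

Because the server ships only the short mission statement and each sentry expands it locally, the traffic between server and sentries stays small, as the text notes.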

Once operations supporting a mission are carried out by a sentry, network data is obtained. The sentry sends the data over network 115 to server 120. In one embodiment, each sentry tabulates data, stores the tabulated data, and transmits it to server 120 in summary form.
Again, this minimizes the network traffic caused by server 120. Server 120 itself also includes a processor and memory so that server 120 can tabulate the data it receives from the sentries and provide mission results to Client 110.

In the system described, Client 110 need not be physically connected to any device on network segments which it wishes to analyze. Because server 120 communicates with network 115 and sentries are running on devices already connected to the segments of network 115, Client 110 need merely instruct server 120 to perform a mission. Server 120 can then carry out the mission by instructing appropriate sentries to perform the operations supported by the sentries (e.g., send, receive, trace route, and sample).

FIGURE 1B illustrates one embodiment of the system generally depicted in FIGURE 1A.
A computer 150 hosts mission server 120 and includes a World Wide Web server. In its capacity as a World Wide Web server, computer 150 connects with a Client 152 and a Client 154 via the Internet. Specifically, a TCP/IP connection is established with each of the Clients to facilitate mission definition. In the embodiment depicted, both Client 152 and Client 154 function as World Wide Web browsers. In addition, Client 154 is capable of running Java Applets. Thus, computer 150 also serves as an Applet server and provides Java services.

In operation, Client 152 or Client 154 establishes a connection with the mission server residing on computer 150 in order to direct the server to perform a desired mission. Because the mission server provides Web pages and/or Java Applets, the Clients need contain no specialized software aside from the software necessary to act as a World Wide Web browser and/or run Applets.

Normally, a Client identifies itself and requests analysis of a network segment. The mission server then determines if the request is legitimate. If the requesting Client is a valid Client, the server then establishes a connection to that Client. Once a direct TCP/IP connection is established between the Client and the mission server, the server provides either Web pages or Java Applets to the Client.

To direct the mission, computer 150 is connected to an Intranet 156 including multiple LAN segments. Devices on Intranet 156 include a Client World Wide Web browser 158, a hardware probe 160, a hardware probe 162, a sentry 166, and a sentry 168. Sentry 166 and sentry 168 are implemented on different devices (preferably on different LAN segments). Each of the devices is an existing asset connected to the network 115 capable of performing various sentry operations. The results of the network analysis mission are formatted by the mission server and presented to the appropriate Client(s).

As noted, the mission request may be received from and the mission results may be presented to the Clients via Java Applets provided by the mission server and running on the Clients. In the early 1990s, a team at Sun Microsystems developed a new language, "Java," to address the issues of software distribution on the Internet. Java is a simple, object-oriented language which supports multi-thread processing and garbage collection. Although the language is based on C++, a superset of C, it is much simpler. More importantly, Java programs are "compiled" into a binary format that can be executed on many different platforms without recompilation. The language includes built-in mechanisms for verifying and executing Java "binaries" in a controlled environment, protecting the user's computer from potential viruses and security violations.

A typical Java system includes the following set of interrelated technologies: a language specification; a compiler for the Java language that produces bytecodes for an abstract, stack-oriented machine; a virtual machine (VM) program that interprets the bytecodes at runtime; a set of class libraries; a runtime environment that includes bytecode verification, multi-threading, and garbage collection; supporting development tools, such as a bytecode disassembler; and a browser (e.g., Sun's "Hot Java" browser).

Java is designed for creating applications that will be deployed into heterogeneous networked environments. Such environments are characterized by a variety of hardware architectures. Further, applications in such environments execute atop a variety of different operating systems and interoperate with a multitude of different programming language interfaces.
To accommodate such diversity, the Java compiler generates platform-neutral "bytecodes" -- an architecturally neutral, intermediate format designed for deploying application code efficiently to multiple platforms.

Java bytecodes are designed to be easy to interpret on any machine. Bytecodes are essentially high-level, machine-independent instructions for a hypothetical or "virtual" machine that is implemented by the Java interpreter and runtime system. The virtual machine, which is actually a specification of an abstract machine for which a Java language compiler generates bytecode, must be available for the various hardware/software platforms on which an application is to run. The Java interpreter executes Java bytecode directly on any machine for which the interpreter and runtime system of Java have been ported. In this manner, the same Java language bytecode runs on any platform supported by Java.

Compiling Java into platform-neutral bytecodes is advantageous. Once the Java language interpreter and runtime support are available on a given hardware and operating system platform, any Java language application can be executed. The bytecodes are portable since they do not require a particular processor, architecture, or other proprietary hardware support. Further, the bytecodes are byte-order independent, so that programs can be executed on both big-endian machines (e.g., Motorola architecture) and little-endian machines (e.g., Intel architecture). Since Java bytecodes are typed, each specifies the exact type of its operands, thereby allowing verification that the bytecodes obey language constraints. All told, the interpreted bytecode approach of compiled Java language programs provides portability of programs to any system on which the Java interpreter and runtime system have been implemented.

Further description of the Java Language environment can be found in Gosling, J. et al., The Java Language Environment: A White Paper, Sun Microsystems Computer Company, October 1995, the disclosure of which is hereby incorporated by reference for all purposes.

A distinct advantage of a network assessment system utilizing Java is that a Client need not store any information about the mission server in order to request a mission. The server provides mission choices through Java applets which provide the software necessary to define and select a mission. A Client need only store or be capable of looking up the URL for the mission server. A description of how connections are made via the World Wide Web is contained in The Whole Internet User's Guide and Catalog, by Ed Krol, copyright 1992, published by O'Reilly and Associates, which is herein incorporated by reference. Once a connection with the server is made, the server supplies the mission definition software to the connected Client.

FIGURE 2 illustrates how a mission server uses a set of sentries located on segments of a network 210 to analyze network traffic or troubleshoot network malfunctions or delays. As shown, network 210 includes seven segments: a segment 211, a segment 212, a segment 213, a segment 214, a segment 215, a segment 216, and a segment 217. Each segment has at least one sentry and may have one or more additional devices residing on it. Sentry 221 is connected to segment 211; sentry 222A and sentry 222B are connected to segment 212; sentry 223A and sentry 223B are connected to segment 213; sentry 224 is connected to segment 214; sentry 225 is connected to segment 215; sentry 226 is connected to segment 216; and sentry 227 is connected to segment 217.

One mission the sentries may perform is sampling network traffic on all segments. A mission server 200 is shown connected to segment 21~. To execute this mission, it sends a mission statement to a portion of the sentries on network 210. The statement instructs at least one sentry per segment to sample packets at a certain frequency. Each sentry so instructed reports results to mission server 200 so that network traffic information is obtained without physically connecting to any of the segments. As mentioned above, the sentries run on preexisting network entities that perform network functions (such as router, bridge, host, etc.) which are independent from the sentry function.

Mission server 200 may also use the sentries to isolate the source of problems. For example, a malfunction in segment 213 may delay messages transmitted from segment 211 to segment 215 or cause those messages to be lost. To determine the source of the problem, server 200 defines a mission that instructs each sentry to send a message to every other sentry. Each receiving sentry is notified when each message will be sent to it and is instructed to note the delivery time, and whether errors occurred. Additionally, each transmitting sentry is instructed to trace the route of the messages it sends. This is accomplished by setting the Record Route flag in the IP header of the outgoing packet so that each node to which the packet bounces is reported back to the transmitting sentry.

As mission server 200 collects data from the sentries, the location of the problem will quickly become evident. Errors do not occur, for example, in messages sent from sentry 221 to sentry 225 which are not routed through segment 213. On the other hand, errors occur in other messages routed through segment 213, and in messages between sentry 223A and sentry 223B which never leave segment 213. This information localizes the problem in segment 213. Server 200 can then instruct sentry 223A and sentry 223B to sample packets regularly, and from this it may be discovered, for example, that a device on segment 213 is inadvertently broadcasting and causing the traffic on segment 213 to exceed capacity. In some cases, it may also be useful to physically connect a network sniffer to a malfunctioning segment. In this manner, mission server 200 saves considerable time by determining the proper segment to be analyzed.
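The localization reasoning described above — a segment is implicated when it lies on the route of every failing message and on no successful one — can be sketched as a small set computation. The data shapes and function name here are illustrative assumptions, not the patent's format:

```python
# Illustrative sketch of the fault-localization reasoning described above.
# A segment is a suspect if it appears on the route of every failed message
# but on no successful one. The (route, ok) tuple shape is an assumption.

def localize_fault(results):
    """results: iterable of (route, ok) pairs, where route is the list of
    segments a message traversed and ok is True when it arrived cleanly."""
    failed = [set(route) for route, ok in results if not ok]
    succeeded = [set(route) for route, ok in results if ok]
    if not failed:
        return set()                      # nothing went wrong
    suspects = set.intersection(*failed)  # on every failing route
    for route in succeeded:
        suspects -= route                 # but on no successful route
    return suspects

results = [
    (["211", "212", "215"], True),    # avoids segment 213: no errors
    (["211", "213", "215"], False),   # crosses segment 213: errors
    (["213"], False),                 # stays inside segment 213: errors
]
print(localize_fault(results))  # {'213'}
```

With the route data gathered via the Record Route option, this kind of intersection quickly narrows the fault to segment 213 in the example, after which a sniffer can be attached to the right segment directly.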

A sentry can be instructed to sample/examine the contents of packets that it receives to gather a variety of information. For example, a sentry may be instructed to examine every 10th packet that it receives and determine where the packet originated (the source IP address), where the packet is going, the size of the packet, and what the contents of the first 12 bytes of data past the packet header are. The sentry may do this until it has examined 2000 packets and then send the summary results of this information back to the server for storing and further processing. The sentry can be instructed to conduct this type of sampling for a number of packets as in the above example, or for a relatively short period of time (e.g., less than one hour) regardless of the number of packets that it receives, or both, whichever occurs first. These sampling/monitoring activities are of comparatively short duration and do not require the iterative interaction of the server that long running sampling demands. If longer running sampling is required, then the server breaks up the sampling into shorter term missions which it iteratively sends to the sentries.
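The sampling behavior just described (every 10th packet; source, destination, size, and first 12 bytes; stop at 2000 packets or a time limit, whichever comes first) might be sketched as follows. The packet representation is an assumed simplification, not the sentry's actual internal format:

```python
# Sketch of the sentry sampling rule described above, under an assumed
# packet shape: each packet is a (source_ip, dest_ip, payload_bytes) tuple.
# The stopping rule (2000 packets or a time limit, whichever occurs first)
# follows the text; names here are illustrative.

import time

def sample_packets(packet_stream, every_nth=10, max_examined=2000,
                   time_limit_s=3600.0):
    summary = []
    examined = 0
    deadline = time.monotonic() + time_limit_s
    for i, (src, dst, data) in enumerate(packet_stream):
        if time.monotonic() >= deadline:
            break                 # time limit reached first
        if i % every_nth != 0:
            continue              # examine only every Nth packet received
        summary.append({
            "source": src,            # where the packet originated
            "destination": dst,       # where the packet is going
            "size": len(data),
            "head": data[:12],        # first 12 bytes past the header
        })
        examined += 1
        if examined >= max_examined:
            break                 # packet budget reached first
    return summary                # sent back to the server in summary form

packets = [("10.0.0.%d" % (i % 5), "10.0.1.1", bytes(range(20)))
           for i in range(100)]
records = sample_packets(packets, every_nth=10)
print(len(records))  # 10 — every 10th of the 100 packets
```

Keeping only the tabulated summary, rather than forwarding raw packets, is what keeps the sentry-to-server traffic small.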

In addition to detecting errors which occur in messages sent along certain network segments, delays in message transmission are also assessed. Three major factors are involved in network delay as perceived by a user:

1. The workstation being used.
2. The server on which the requested files or applications reside.
3. The network infrastructure between the server and the workstation. This includes the physical media, the network topology, and any devices through which communications pass.

Quantifying the third factor is accomplished in one embodiment by replicating the Client/server "conversation" in content, sending each message included in the conversation through the network infrastructure, and measuring the time that each message takes to reach its destination. Each packet included in the conversations should traverse the network before the next packet is sent. A time stamp is placed on the packet upon transmission and another is attached to the packet upon receipt.

The clocks at the sending and receiving machines may be offset with respect to each other so that a time stamp from the sending machine is not accurate to determine the transit time of the message to the receiving machine because of the time offset. However, by playing the conversation in both directions (Client and server sides), differences in local clocks on the sending and receiving machines can be made to cancel each other. Thus, the amount of time each packet takes to traverse the network is derived. A minimum conversation time is calculated by multiplying this time by the number of packets in the conversation.

The impact of a firewall or router filters is determined by first playing the conversation without filtering, and then playing the conversation with filtering. The difference in transit times is noted and the delay caused by the firewall or router filters is determined.

Transit times are calculated in one embodiment according to the method described below.
Each packet is time stamped twice with a 64 bit integer. The granularity of the time stamps is smaller than 1 microsecond. The first time stamp occurs in the NDIS driver immediately before the packet is sent. This time stamp is contained in the data area of the packet. The second time stamp is applied upon notification that the packet has arrived. It is contained in the Buffer Descriptor Element that tracks the packet's size and location in the memory buffer.

A time stamp has the following makeup:

TS: TimeStamp CA 02242377 l998-07-07 W O 98/21844 PCT~US97/20503 GMT: Greenwich Mean Time Offsetx: Difference between m~rhine X local time and GMT

TS_X = GMT + Offset_X

Thus the time stamp is offset from Greenwich Mean Time by Offset_X. Sending the packet in one direction yields the following result, where Q is the measured time for transit in one direction, Q' is the measured time for transit in the other direction, R is the real transit time for a packet traveling one direction between two machines, and R' is the real transit time for a packet traveling the other direction:

Q = TS_X - TS_Y
Q = (GMT_X + Offset_X) + R - (GMT_Y + Offset_Y)

Q' = TS_Y - TS_X
Q' = (GMT_Y + Offset_Y) + R' - (GMT_X + Offset_X)

Adding Q + Q' causes the Offsets to cancel and enables an average transit time for both directions, (Q + Q')/2 = (R + R')/2, to be calculated. The tests in each direction should be run sequentially to minimize clock drift, since separate clocks with extremely fine resolution tend to drift closer or farther apart. A verified method of transit time determination is also used in certain embodiments, in which the route each packet takes is recorded in the Options field of the IP header.
The RecordRoute option flag is used so that each node on the way to the destination places its IP address in the Options Data area, up to and including 9 nodes. This allows verification that Route Q and Route Q' are identical.
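The offset cancellation described above can be checked numerically. The sketch below is an illustration only; R, R', Offset_X, and Offset_Y follow the notation in the text, and the concrete values are invented for the example:

```python
# Numerical sketch of the bidirectional time-stamp exchange: the clock
# offsets appear in each measured time Q and Q', but cancel in the sum.
R, R_prime = 0.004, 0.006        # real one-way transit times (seconds)
offset_x, offset_y = 2.5, -1.3   # each machine's clock offset from GMT
gmt = 1000.0                     # GMT instant at which each packet is sent

# Direction 1: machine Y sends, machine X stamps the arrival.
ts_y_send = gmt + offset_y
ts_x_recv = (gmt + R) + offset_x
Q = ts_x_recv - ts_y_send        # measured transit, includes both offsets

# Direction 2: machine X sends, machine Y stamps the arrival.
ts_x_send = gmt + offset_x
ts_y_recv = (gmt + R_prime) + offset_y
Q_prime = ts_y_recv - ts_x_send

# The offsets cancel in the sum: (Q + Q')/2 == (R + R')/2.
average_transit = (Q + Q_prime) / 2
print(average_transit, (R + R_prime) / 2)
```

Note that Q alone (3.804 s here) is useless because it absorbs the 3.8 s difference between the clocks; only the two-direction average recovers the 5 ms mean transit time.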
Mission server 200 is also operative to determine the capacity and utilization of the network. By defining missions which require different sentries to generate volumes of network traffic and then monitoring the resulting performance, server 200 can simulate a network usage increase and observe the network performance which results. In this manner, mission server 200 provides data which forecasts problems that will occur when network usage increases. Mission server 200 can also identify bottlenecks and determine where additional resources should be deployed to handle increased traffic.

FIGURE 3 is a flow diagram which illustrates the process implemented on a Client 110, such as Client 152 or Client 154 in FIGURE 1. The process begins at step 300. In step 302, the Client 110 connects to the mission server and downloads information needed to select a mission. The mission is defined in step 304 in cooperation with the mission server. In one embodiment, this is accomplished simply by making selections from web pages provided by the mission server. In another embodiment, the Client 110 downloads Java applets and uses those applets to assist the user in defining the mission. The mission is requested in step 306.

In step 308, the Client 110 receives data from the mission server. In one embodiment, a direct TCP/IP connection is set up with the mission server for data transmission. In other embodiments, the TCP/IP connection is established prior to mission definition. In step 310, the Client 110 formats the data and reports it to the user. In embodiments in which the Client 110 receives Java applets, the applets may be used to format and report the data.

FIGURE 4 is a flow diagram of a process implemented on the mission server. The process begins at 400, and in a step 404, the mission server listens on a communication channel for a request from a Client. When a request is received, a step 406 transfers control to a step 408, where the mission server determines the content of the request. If the server has the needed information to fulfill the request, then a step 410 transfers control to a step 412, the mission server responds to the request, and control is transferred back to step 404. This would be the case if the request were a request for mission definition information in the form of a Web page or for data already obtained from sentries by the mission server.

If the mission server does not have the information requested, then control is transferred to a step 414. The mission server determines missions for the sentries and then sends the mission requests to the sentries in a step 416. Data relating to the mission results is received and compiled in a step 418. In a step 420, the mission server responds to the request from a Client, and control is transferred back to step 404.

FIGURE 5 is a flow diagram of the process implemented on each sentry. The process starts at 500, at which point the sentry awaits a mission request in a step 502. As long as no request is received, the sentry maintains a mission table which includes all current missions which the sentry is executing and continues to generate reports in a step 504.

When a mission request is received, the sentry checks the mission table in a step 506 to determine if the table is full. In one embodiment, each sentry is configured to run only 9 missions at once in order to limit the impact of the sentry on the operation of the device on which it is installed. If the mission table is full, then the sentry is already executing its maximum number of missions. The mission request is then denied in a step 508 and control is transferred back to step 502.

If the mission table is not full, then the sentry accepts the mission, sends an ID to the mission server, and updates the mission table in a step 510. The sentry then waits for a mission statement in a step 512. If no statement is received after a certain time, then the sentry controller deletes the mission in a step 514 and control is transferred back to step 502.

When a mission statement is received, the sentry controller decodes the mission statement and parses it to ensure that it is valid in a step 516. If the statement is invalid, the sentry sends an "invalid" message in a step 518 and control is transferred to step 502. If the mission statement is valid, then the sentry sends a "valid" statement in a step 520, and then waits for an execute command at a step 522. If no execute command is received within a certain period of time, then the sentry controller deletes the mission in a step 524, and control is transferred back to step 502. If an execute command is received, then the sentry executes the mission steps at a step 526, and the results are reported to the mission server at step 528. Control is then transferred to step 502.

The sentry thus runs in the background of the device on which it is implemented. The sentry waits for a mission request, maintains a table of the missions which it is currently executing, validates the requests, and notifies the mission server whether the request is accepted.
The sentry executes the mission when it receives an execute command and reports the results to the mission server. The sentry operates independently of the mission server, executing operations, and sending reports.
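The sentry flow of FIGURE 5 can be pictured as a small state machine. The sketch below is a hypothetical illustration, not the patent's implementation: the class, the dictionary mission table, and the dict-based validity check are invented; only the steps themselves (deny when full, accept with an ID, validate or delete) come from the text above.

```python
# Illustrative sketch of the sentry decision flow in FIGURE 5.
MAX_MISSIONS = 9   # the embodiment described limits each sentry to 9 missions

class Sentry:
    def __init__(self):
        self.mission_table = {}   # mission ID -> mission statement (or None)
        self.next_id = 0

    def handle_request(self):
        """Steps 506-510: deny if the table is full, else accept and return an ID."""
        if len(self.mission_table) >= MAX_MISSIONS:
            return None           # step 508: mission request denied
        mission_id, self.next_id = self.next_id, self.next_id + 1
        self.mission_table[mission_id] = None   # step 510: awaiting statement
        return mission_id

    def handle_statement(self, mission_id, statement):
        """Steps 516-520: parse and validate the statement, or delete the mission."""
        if not isinstance(statement, dict) or "steps" not in statement:
            del self.mission_table[mission_id]  # invalid: drop it (cf. steps 514/518)
            return "invalid"
        self.mission_table[mission_id] = statement
        return "valid"

sentry = Sentry()
mid = sentry.handle_request()
print(sentry.handle_statement(mid, {"steps": ["send", "report"]}))
```

Keeping the table bounded is what lets the sentry run in the background without overwhelming its host device, as the preceding paragraphs describe.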

The process of sending the mission statement to the sentries, validating the statement, and verifying that the sentry has room in its mission statement table for another mission enables the mission server to determine that each sentry required for the mission is available to execute the mission. If certain sentries deny the mission request, then the mission server can select other sentries or else terminate missions being executed by the sentries which denied the mission request to make room for the new mission. Thus, the mission is not executed until all sentries are available to execute the mission and have accepted the mission.

FIGURE 6 illustrates the process by which server 600 initiates the execution of the mission. The sentries have already accepted the mission request and are waiting for the command to execute the mission. Because certain operations such as send and receive require synchronization between certain sentries, server 600 selects a controlling sentry 602 and authorizes it to execute the mission when ready. In one embodiment, controlling sentry 602 sends go codes to server 600 for distribution to the other sentries that will execute the mission. Controlling sentry 602 is then able to synchronize the mission with the other sentries. In one embodiment, for a send and receive mission, the controlling sentry 602 is a receiving sentry. In the embodiment shown in FIGURE 6, controlling sentry 602 sends go codes to server 600 for a sentry 604, a sentry 606, and a sentry 608. Controlling sentry 602 is thus able to synchronize the mission execution by sending go codes to the server to be distributed to the other sentries at appropriate times when it is ready to receive messages.
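The go-code relay can be sketched minimally as follows. Modeling the server as a queue and the function and message names are assumptions made for this illustration; the text specifies only that the controlling sentry emits go codes and the server distributes them.

```python
# Minimal sketch of FIGURE 6's synchronization: the controlling (receiving)
# sentry queues one go code per sending sentry when it is ready to receive,
# and server 600 forwards each code to the addressed sentry, which then
# begins transmitting.
from collections import deque

def synchronize(controlling_ready, sending_sentries):
    """Return the order in which sending sentries are released to transmit."""
    relay = deque()                       # server 600's distribution queue
    for sentry in sending_sentries:
        if controlling_ready:             # controlling sentry 602 emits a go code
            relay.append(("go", sentry))
    released = []
    while relay:                          # server forwards each go code in turn
        _, sentry = relay.popleft()
        released.append(sentry)           # addressed sentry starts its mission
    return released

print(synchronize(True, ["sentry 604", "sentry 606", "sentry 608"]))
```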

As noted above, missions may include the sending and receiving of packets by sentries. Once the server defines the mission, it communicates the mission to the sentries by sending the sentries a series of files including Packet Descriptors, Environments, and Templates. These files determine what traffic will be generated by the sending sentries and define what packets are expected to be received by the receiving sentries. A Packet Descriptor file defines what packet groups are to be used during a traffic generation test. The components of a Packet Descriptor file determine the construction of the protocol packets which are sent out during a traffic generation mission. Environment files define how the sentry sends packets during a mission. Environment files determine packet quantity, duration, continuity, looping, input and output board, and packets per second used during testing. Several individual environment files may be created, and then a particular environment file may be selected by the Client for each mission. Template files define packet formats for use in generating various types of packets for network traffic.

Typically, new Packet Descriptors and Environment files are created for the purpose of assigning specialized variables for send or receive missions. The Packet Descriptor configuration affects the construction of the protocol traffic that is created. Predefined Packet Descriptor files are also provided to the Client so that the Client may simply select a predefined Packet Descriptor file for the purpose of generating traffic.

FIGURE 7 illustrates a Packet Descriptor file 700. Packet Descriptor file 700 includes a destination MAC address 702 and a source MAC address 704. A corresponding destination IP address 706 and a corresponding source IP address 708 are also included. Packet Descriptor file 700 also specifies a Template 710 as well as fill data 712 and a packet size 714. The test unit MAC address is specified in 716. Thus, both the MAC addresses and the IP addresses are specified for both the source and destination of the packet. Template 710 provides a standard packet format that may already contain fill data. If different or additional fill data is required, then fill data 712 is also provided and packet size 714 is specified. Packet Descriptor file 700 thus provides all the information necessary for traffic generation. The use of Template 710 enables a substantial portion of packet information to be generated from a stored format that need not be re-transmitted to each sentry for each mission. Predefined Packet Descriptor files are provided to the Client for selection, or the Client may compose new Packet Descriptor files.

The test unit MAC address and the source and destination addresses are specified in Packet Descriptor file 700 so that a specific device, for example a router, may be tested with traffic which appears to come from a source other than the sentry and which appears to the router to be sent to the destination address specified. Based on the addresses given in Packet Descriptor file 700, a sentry may be instructed to send a message to the router and to include a source address which is different than the actual source address of the sentry. A destination address is also specified, but the message is sent directly to the test unit address, which may be, for example, the MAC address of the router. (If the test unit address were not specified, then there would be no assurance that a message from a given sentry would in fact be sent through the test unit.) Thus, it is possible to specify a mission that sends messages to a router or other device and determine when the router becomes overwhelmed. The sentries are not only capable of sending packets to each other, but can also send packets to or through a specific test unit which may or may not have a sentry controller. Additionally, the sentries can specify arbitrary sources and destinations for those packets.
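The fields of Packet Descriptor file 700 can be pictured as a simple record. The dataclass and padding helper below are illustrative assumptions (the patent does not give a file syntax), and the 34-byte header length stands in for an Ethernet plus IP header; the addresses are invented examples showing a spoofed source directed at a test unit.

```python
# Sketch of the fields of Packet Descriptor file 700 (reference numerals
# in comments match FIGURE 7).
from dataclasses import dataclass

@dataclass
class PacketDescriptor:
    dest_mac: str          # 702: destination MAC address
    source_mac: str        # 704: source MAC address (may be spoofed)
    dest_ip: str           # 706: destination IP address
    source_ip: str         # 708: source IP address
    template: str          # 710: template providing the base packet format
    fill_data: bytes       # 712: fill pattern, repeated to pad the packet
    packet_size: int       # 714: total packet size in bytes
    test_unit_mac: str     # 716: device the frame is physically sent to

def build_fill(desc: PacketDescriptor, header_len: int = 34) -> bytes:
    """Repeat the fill pattern until the packet reaches its declared size."""
    need = desc.packet_size - header_len
    pattern = desc.fill_data or b"\x00"
    return (pattern * (need // len(pattern) + 1))[:need]

desc = PacketDescriptor(
    dest_mac="00:00:0c:aa:bb:cc", source_mac="00:a0:24:11:22:33",
    dest_ip="10.1.2.3", source_ip="10.9.8.7", template="ip_basic",
    fill_data=b"\xab", packet_size=64,
    test_unit_mac="00:00:0c:01:02:03")   # e.g. the router under test
print(len(build_fill(desc)))  # -> 30 bytes of fill for a 64-byte packet
```

Note how `test_unit_mac` is distinct from `dest_mac`: the frame is physically delivered to the test unit even though it names another destination, which is what guarantees the traffic actually traverses the device under test.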

FIGURE 8 illustrates how the Packet Descriptor file affects IP packet fields 802 and IPX packet fields 804. IP packet fields 802 are the Internet Protocol packet fields and IPX packet fields 804 are the Novell packet fields. The Packet Descriptor file specifies a destination MAC address 806 and a source MAC address 808, as well as a destination IP address 810 and a source IP address 812. These affect protocol length 814 and TCP checksum 816. Data 818 varies by packet length. Likewise, for IPX packet fields 804, the Packet Descriptor file specifies a destination MAC address 856 and a source MAC address 858, as well as a destination network and node address 860 and a source network and node address 862. These affect protocol length 864 and 802.3 length 865. A destination node 866 and source node 868 are also specified. Again, data 870 varies by packet length. Traffic is thus directed from the source whose MAC address is given to the node at the destination MAC address. Packet data is determined by the template and Packet Descriptor files.

FIGURE 9 illustrates the composition of a template file 900. A begin section 902 identifies the network protocol for the packet template. A declaration section 904 defines the packet field names, offset locations within the packet, and field lengths. Declaration section 904 determines where the different types of information relating to the packet are located in the template file. Each field may be defined as occurring at a certain offset byte location in the template file or as occurring immediately before or after certain other fields. An assignment section 906 assigns field values from the Packet Descriptor, from constant values, or from expressions. For example, an assignment may assign a certain fill pattern which takes a constant value parameter and fills the assigned field with that value repeated as necessary to fill the field. An assertion section 908 is optionally included. Assertion section 908 includes a list of one or more assertion definitions. An assertion definition consists of a field reference, a comparison operator, and then a field expression. The assertion definition is evaluated by the receiver and, if the comparison returns a false result, a packet assertion error is logged. An end section 910 specifies the end of template file 900.
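The receiver-side assertion check of section 908 can be sketched as follows. The tuple encoding of an assertion definition and the operator table are assumptions made for illustration; only the field-reference / comparison-operator / field-expression structure, and the logging of a packet assertion error on a false comparison, come from the text.

```python
# Sketch of assertion-section (908) evaluation at a receiving sentry.
import operator

OPS = {"==": operator.eq, "!=": operator.ne,
       "<": operator.lt, ">": operator.gt}

def check_assertions(fields, assertions):
    """Return the assertion-error log for one received packet.

    fields:     decoded packet field values, keyed by field name
    assertions: (field reference, comparison operator, field expression) tuples
    """
    errors = []
    for field_ref, op, expected in assertions:
        if not OPS[op](fields[field_ref], expected):
            errors.append(
                f"packet assertion error: {field_ref} {op} {expected}")
    return errors

packet_fields = {"protocol": 17, "length": 64}
assertions = [("protocol", "==", 17), ("length", ">", 100)]
print(check_assertions(packet_fields, assertions))
```

Here the protocol assertion passes while the length assertion fails, so a single packet assertion error is logged for the packet.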

Upon receipt of the mission statement and acceptance of the mission, the sentry controllers at each sentry included in the mission use the information in the Template files, the Packet Descriptor files, and the Environment files to generate traffic in the case of the sending sentries, or to determine what packets from other sentries are expected by the receiving sentries. The receiving sentries then can check the messages to determine errors and can also determine transmission times using the time stamping procedure described above. Sentries also sample network traffic generated by other devices. The sentries store the data thus obtained as directed by the mission statement and then report data back to the sentry server as directed by the mission statement. The sentry server receives data from the various sentries and generates reports of the mission results for the Client according to the mission statement. Thus, the mission is completed and the Client is able to generate traffic on the network which contains the sentries, monitor network traffic, and generate reports of performance data.

Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. It should be noted that there are many alternative ways of implementing both the process and apparatus of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the spirit and scope of the present invention.

Claims (18)

1. A mission server for network analysis comprising:
first interface means for connecting said mission server to a remote Client; means for defining a mission based on information received from said Client;

second interface means for connecting said server to a network which is to be analyzed so that said server can send commands to and receive data relating to the execution of said commands from a sentry which is connected to a segment of said network at a location on said network;

command means for directing said sentry which is connected to said segment of said network to perform operations requiring participation in said network at a location on said segment of said network to support said mission; and processing means for determining the outcome of said mission based on the outcome of said operations requiring participation in said network at said location on said segment of said network.
2. The mission server of claim 1 wherein said command means further includes means for directing a first sentry which is connected to a first segment of said network to send a message to a second sentry which is connected to a second segment of said network.
3. The mission server of claim 2 wherein said first segment is the same as said second segment.
4. The mission server of claim 1 wherein said command means further includes means for notifying a first sentry which is connected to a first segment of said network that it should receive a message from a second sentry which is connected to a second segment of said network.
5. The mission server of claim 4 wherein said first segment is the same as said second segment.
6. The mission server of claim 1 wherein said command means further includes means for directing a sentry device which is connected to a first segment of said network to send a message to a second device which is connected to a second segment of said network and to trace the route of said message.
7. The mission server of claim 6 wherein said first segment is the same as said second segment.
8. The mission server of claim 1 wherein said command means further includes means for directing a sentry which is connected to a segment of said network to sample packets which are traversing said segment of said network.
9. A method of acquiring network performance data comprising:

providing a mission server which is connected to a network, said mission server being operative to interface with Clients to define and receive requests for a mission;

defining a mission which includes operations that require participation in said network from a plurality of segments at a plurality of locations within said network;

providing a plurality of sentries on devices connected to said segments of said network at said plurality of locations within said network so that said devices are operative to participate in said network from said segments of said network at said plurality of locations, said sentries being operative to support said mission by participating in said network through said devices;
receiving a request for said mission at said mission server; communicating said mission from said mission server to said plurality of sentries on devices connected to said segments of said network at said plurality of locations within said network;

executing the operations of said mission with said plurality of sentries on devices connected to said segments of said network at said plurality of locations within said network;

communicating the results of said operations from said plurality of sentries to said mission server; and determining the results of said mission at said mission server.
10. The method of claim 9 wherein executing the operations of said mission with said plurality of sentries includes sending a packet from a first sentry to a second sentry.
11. The method of claim 9 wherein executing the operations of said mission with said plurality of sentries includes tracing the route of a message sent from one of said plurality of sentries to one of said segments of said network.
12. The method of claim 9 wherein executing the operations of said mission with said plurality of sentries includes sampling the packets which are traversing one of said segments of said network.
13. The method of claim 9 further including transmitting data gathered by said sentries to said mission server over said network.
14. The method of claim 9 wherein said Clients are remote Clients.
15. The method of claim 9 wherein defining a mission which includes operations that require participation in said network from a plurality of segments at a plurality of locations within said network further includes selecting a mission from a set of predefined missions.
16. The method of claim 9 further including communicating the results of said mission from said mission server to said Client.
17. The method of claim 9 wherein communicating the results of said operations from said plurality of sentries to said mission server further includes communicating the results of said operations from said plurality of sentries to said mission server using said network.
18. The method of claim 9 wherein communicating said mission from said mission server to said plurality of sentries on devices connected to said segments of said network at said plurality of locations within said network further includes communicating said mission from said mission server to said plurality of sentries on devices connected to said segments of said network at said plurality of locations within said network using said network.
CA002242377A 1996-11-12 1997-11-12 Method and apparatus for network assessment Abandoned CA2242377A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/747,376 US5812529A (en) 1996-11-12 1996-11-12 Method and apparatus for network assessment
US08/747,376 1996-11-12

Publications (1)

Publication Number Publication Date
CA2242377A1 true CA2242377A1 (en) 1998-05-22

Family

ID=25004803

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002242377A Abandoned CA2242377A1 (en) 1996-11-12 1997-11-12 Method and apparatus for network assessment

Country Status (5)

Country Link
US (1) US5812529A (en)
EP (1) EP0878069A1 (en)
AU (1) AU5507498A (en)
CA (1) CA2242377A1 (en)
WO (1) WO1998021844A1 (en)

Families Citing this family (139)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6018343A (en) 1996-09-27 2000-01-25 Timecruiser Computing Corp. Web calendar architecture and uses thereof
US6421726B1 (en) * 1997-03-14 2002-07-16 Akamai Technologies, Inc. System and method for selection and retrieval of diverse types of video data on a computer network
US5987608A (en) * 1997-05-13 1999-11-16 Netscape Communications Corporation Java security mechanism
IL121348A0 (en) * 1997-07-21 1998-04-05 Bio Rad Lab Israel Inc System and method for device monitoring
US7043537B1 (en) * 1997-09-05 2006-05-09 Cisco Technology, Inc System and method for remote device management
US6216169B1 (en) * 1997-10-06 2001-04-10 Concord Communications Incorporated Generating reports using distributed workstations
IL121898A0 (en) 1997-10-07 1998-03-10 Cidon Israel A method and apparatus for active testing and fault allocation of communication networks
GB2344265B (en) * 1997-11-20 2003-07-16 Xacct Technologies Inc Network accounting and billing system and method
US6351474B1 (en) 1998-01-14 2002-02-26 Skystream Networks Inc. Network distributed remultiplexer for video program bearing transport streams
US6246701B1 (en) 1998-01-14 2001-06-12 Skystream Corporation Reference time clock locking in a remultiplexer for video program bearing transport streams
US6351471B1 (en) * 1998-01-14 2002-02-26 Skystream Networks Inc. Brandwidth optimization of video program bearing transport streams
US6195368B1 (en) 1998-01-14 2001-02-27 Skystream Corporation Re-timing of video program bearing streams transmitted by an asynchronous communication link
US6292490B1 (en) * 1998-01-14 2001-09-18 Skystream Corporation Receipts and dispatch timing of transport packets in a video program bearing stream remultiplexer
US6125363A (en) * 1998-03-30 2000-09-26 Buzzeo; Eugene Distributed, multi-user, multi-threaded application development method
US6085237A (en) 1998-05-01 2000-07-04 Cisco Technology, Inc. User-friendly interface for setting expressions on an SNMP agent
US6363421B2 (en) * 1998-05-31 2002-03-26 Lucent Technologies, Inc. Method for computer internet remote management of a telecommunication network element
US6304905B1 (en) * 1998-09-16 2001-10-16 Cisco Technology, Inc. Detecting an active network node using an invalid protocol option
US6119160A (en) 1998-10-13 2000-09-12 Cisco Technology, Inc. Multiple-level internet protocol accounting
US6381646B2 (en) 1998-11-03 2002-04-30 Cisco Technology, Inc. Multiple network connections from a single PPP link with partial network address translation
US6819655B1 (en) * 1998-11-09 2004-11-16 Applied Digital Access, Inc. System and method of analyzing network protocols
US7739159B1 (en) 1998-11-23 2010-06-15 Cisco Technology, Inc. Aggregation of user usage data for accounting systems in dynamically configured networks
US7370102B1 (en) 1998-12-15 2008-05-06 Cisco Technology, Inc. Managing recovery of service components and notification of service errors and failures
US6718376B1 (en) 1998-12-15 2004-04-06 Cisco Technology, Inc. Managing recovery of service components and notification of service errors and failures
US6871224B1 (en) 1999-01-04 2005-03-22 Cisco Technology, Inc. Facility to transmit network management data to an umbrella management system
US6654801B2 (en) 1999-01-04 2003-11-25 Cisco Technology, Inc. Remote system administration and seamless service integration of a data communication network management system
US6678250B1 (en) * 1999-02-19 2004-01-13 3Com Corporation Method and system for monitoring and management of the performance of real-time networks
US6915466B2 (en) 1999-04-19 2005-07-05 I-Tech Corp. Method and system for multi-user channel allocation for a multi-channel analyzer
US6507923B1 (en) 1999-04-19 2003-01-14 I-Tech Corporation Integrated multi-channel fiber channel analyzer
US6657968B1 (en) * 1999-05-18 2003-12-02 International Business Machines Corporation Glitcher system and method for interfaced or linked architectures
US6591304B1 (en) 1999-06-21 2003-07-08 Cisco Technology, Inc. Dynamic, scaleable attribute filtering in a multi-protocol compatible network access environment
US6789116B1 (en) 1999-06-30 2004-09-07 Hi/Fn, Inc. State processor for pattern matching in a network monitor device
US6771646B1 (en) 1999-06-30 2004-08-03 Hi/Fn, Inc. Associative cache structure for lookups and updates of flow records in a network monitor
CN1293478C (en) * 1999-06-30 2007-01-03 倾向探测公司 Method and apparatus for monitoring traffic in a network
US7342897B1 (en) * 1999-08-07 2008-03-11 Cisco Technology, Inc. Network verification tool
US6487208B1 (en) 1999-09-09 2002-11-26 International Business Machines Corporation On-line switch diagnostics
US6601195B1 (en) 1999-09-09 2003-07-29 International Business Machines Corporation Switch adapter testing
US6560720B1 (en) 1999-09-09 2003-05-06 International Business Machines Corporation Error injection apparatus and method
US7263558B1 (en) 1999-09-15 2007-08-28 Narus, Inc. Method and apparatus for providing additional information in response to an application server request
US7272649B1 (en) * 1999-09-30 2007-09-18 Cisco Technology, Inc. Automatic hardware failure detection and recovery for distributed max sessions server
US6704289B1 (en) * 1999-10-01 2004-03-09 At&T Corp. Method for monitoring service availability and maintaining customer bandwidth in a connectionless (IP) data network
US6654796B1 (en) 1999-10-07 2003-11-25 Cisco Technology, Inc. System for managing cluster of network switches using IP address for commander switch and redirecting a managing request via forwarding an HTTP connection to an expansion switch
US6718282B1 (en) 1999-10-20 2004-04-06 Cisco Technology, Inc. Fault tolerant client-server environment
US6480977B1 (en) * 1999-10-29 2002-11-12 Worldcom, Inc. Multi-protocol monitor
US7130807B1 (en) * 1999-11-22 2006-10-31 Accenture Llp Technology sharing during demand and supply planning in a network-based supply chain environment
US8032409B1 (en) 1999-11-22 2011-10-04 Accenture Global Services Limited Enhanced visibility during installation management in a network-based supply chain environment
US7124101B1 (en) 1999-11-22 2006-10-17 Accenture Llp Asset tracking in a network-based supply chain environment
US7716077B1 (en) 1999-11-22 2010-05-11 Accenture Global Services Gmbh Scheduling and planning maintenance and service in a network-based supply chain environment
US8271336B2 (en) 1999-11-22 2012-09-18 Accenture Global Services Gmbh Increased visibility during order management in a network-based supply chain environment
US6917626B1 (en) * 1999-11-30 2005-07-12 Cisco Technology, Inc. Apparatus and method for automatic cluster network device address assignment
US6636499B1 (en) 1999-12-02 2003-10-21 Cisco Technology, Inc. Apparatus and method for cluster network device discovery
US7031263B1 (en) * 2000-02-08 2006-04-18 Cisco Technology, Inc. Method and apparatus for network management system
US6725264B1 (en) 2000-02-17 2004-04-20 Cisco Technology, Inc. Apparatus and method for redirection of network management messages in a cluster of network devices
AU2001238429A1 (en) * 2000-02-18 2001-08-27 Cedere Corporation Method of automatically baselining business bandwidth
US6990616B1 (en) * 2000-04-24 2006-01-24 Attune Networks Ltd. Analysis of network performance
US6958977B1 (en) 2000-06-06 2005-10-25 Viola Networks Ltd Network packet tracking
US9800671B1 (en) * 2000-06-28 2017-10-24 Intel Corporation Repeatedly accessing a storage resource normally accessed through a web page without accessing the web page
US20030018769A1 (en) * 2000-07-26 2003-01-23 Davis Foulger Method of backtracing network performance
US7058707B1 (en) 2000-08-01 2006-06-06 Qwest Communications International, Inc. Performance modeling in a VDSL network
US7464164B2 (en) * 2000-08-01 2008-12-09 Qwest Communications International, Inc. Linking order entry process to realtime network inventories and capacities
US6920498B1 (en) 2000-08-31 2005-07-19 Cisco Technology, Inc. Phased learning approach to determining closest content serving sites
US6771665B1 (en) 2000-08-31 2004-08-03 Cisco Technology, Inc. Matching of RADIUS request and response packets during high traffic volume
US7411981B1 (en) 2000-08-31 2008-08-12 Cisco Technology, Inc. Matching of radius request and response packets during high traffic volume
US6820123B1 (en) 2000-09-28 2004-11-16 Cisco Technology, Inc. Method and apparatus for assigning hot objects to server load balancer
US7047563B1 (en) 2000-12-07 2006-05-16 Cisco Technology, Inc. Command authorization via RADIUS
US7389354B1 (en) 2000-12-11 2008-06-17 Cisco Technology, Inc. Preventing HTTP server attacks
US6856591B1 (en) 2000-12-15 2005-02-15 Cisco Technology, Inc. Method and system for high reliability cluster management
US6985935B1 (en) 2000-12-20 2006-01-10 Cisco Technology, Inc. Method and system for providing network access to PPP clients
US6988148B1 (en) 2001-01-19 2006-01-17 Cisco Technology, Inc. IP pool management utilizing an IP pool MIB
US7577701B1 (en) 2001-01-22 2009-08-18 Insightete Corporation System and method for continuous monitoring and measurement of performance of computers on network
US20020133575A1 (en) * 2001-02-22 2002-09-19 Viola Networks Ltd. Troubleshooting remote internet users
CN1493130A (en) * 2001-02-22 2004-04-28 Communication quality management system, communication quality management method, program, and recording medium
US7197549B1 (en) 2001-06-04 2007-03-27 Cisco Technology, Inc. On-demand address pools
US7788345B1 (en) 2001-06-04 2010-08-31 Cisco Technology, Inc. Resource allocation and reclamation for on-demand address pools
US7228566B2 (en) * 2001-07-10 2007-06-05 Core Sdi, Incorporated Automated computer system security compromise
US6950405B2 (en) * 2001-07-16 2005-09-27 Agilent Technologies, Inc. Traffic stream generator having a non-consecutive addressing mechanism
US8880709B2 (en) * 2001-09-12 2014-11-04 Ericsson Television Inc. Method and system for scheduled streaming of best effort data
US7200613B2 (en) * 2001-11-09 2007-04-03 Xerox Corporation Asset management system for network-based and non-network-based assets and information
US20030104344A1 (en) * 2001-12-03 2003-06-05 Sable Paula H. Structured observation system for early literacy assessment
US7640335B1 (en) * 2002-01-11 2009-12-29 Mcafee, Inc. User-configurable network analysis digest system and method
DE10208902B4 (en) * 2002-02-27 2005-07-07 Siemens Ag Method and arrangement for evaluating a set of communication data
US20030177160A1 (en) * 2002-03-14 2003-09-18 International Business Machines Corporation Predictive system for self-managed e-business infrastructures
US7599293B1 (en) 2002-04-25 2009-10-06 Lawrence Michael Bain System and method for network traffic and I/O transaction monitoring of a high speed communications network
US20040039772A1 (en) * 2002-04-25 2004-02-26 De Miguel Angel Boveda Methods and arrangements in a telecommunication network
US20050226195A1 (en) * 2002-06-07 2005-10-13 Paris Matteo N Monitoring network traffic
AU2003243433A1 (en) * 2002-06-07 2003-12-22 Ember Corporation Ad hoc wireless network using gradient routing
US20050249185A1 (en) * 2002-06-07 2005-11-10 Poor Robert D Routing in wireless networks
US7277937B2 (en) * 2002-07-17 2007-10-02 Core Sdi, Incorporated Distributed computing using syscall proxying
US7228348B1 (en) 2002-08-13 2007-06-05 Finisar Corporation System and method for triggering communications data capture
US6941482B2 (en) * 2002-09-10 2005-09-06 Finisar Corporation Systems and methods for synchronizing time stamps
US7124134B2 (en) 2003-05-08 2006-10-17 Eugene Buzzeo Distributed, multi-user, multi-threaded application development system and method
US7848229B2 (en) 2003-05-16 2010-12-07 Siemens Enterprise Communications, Inc. System and method for virtual channel selection in IP telephony systems
US7827248B2 (en) * 2003-06-13 2010-11-02 Randy Oyadomari Discovery and self-organization of topology in multi-chassis systems
US8190722B2 (en) * 2003-06-30 2012-05-29 Randy Oyadomari Synchronization of timestamps to compensate for communication latency between devices
WO2005006144A2 (en) * 2003-06-30 2005-01-20 Finisar Corporation Propagation of signals between devices for triggering capture of network data
US7693222B2 (en) * 2003-08-13 2010-04-06 Ericsson Television Inc. Method and system for re-multiplexing of content-modified MPEG-2 transport streams using PCR interpolation
US20050102390A1 (en) * 2003-10-22 2005-05-12 Peterson Eric M. System and method of network usage analyzer
US20070011334A1 (en) * 2003-11-03 2007-01-11 Steven Higgins Methods and apparatuses to provide composite applications
US7376083B2 (en) * 2003-12-09 2008-05-20 International Business Machines Corporation Apparatus and method for modeling queueing systems with highly variable traffic arrival rates
US8214447B2 (en) * 2004-06-08 2012-07-03 Bose Corporation Managing an audio network
US7779340B2 (en) * 2005-03-17 2010-08-17 Jds Uniphase Corporation Interpolated timestamps in high-speed data capture and analysis
US7525922B2 (en) * 2005-04-01 2009-04-28 Cisco Technology, Inc. Duplex mismatch testing
US20070016824A1 (en) * 2005-07-14 2007-01-18 International Business Machines Corporation Methods and apparatus for global systems management
US7835293B2 (en) * 2005-09-13 2010-11-16 Cisco Technology, Inc. Quality of service testing of communications networks
US20070106797A1 (en) * 2005-09-29 2007-05-10 Nortel Networks Limited Mission goal statement to policy statement translation
US8782201B2 (en) * 2005-10-28 2014-07-15 Bank Of America Corporation System and method for managing the configuration of resources in an enterprise
US8239498B2 (en) * 2005-10-28 2012-08-07 Bank Of America Corporation System and method for facilitating the implementation of changes to the configuration of resources in an enterprise
US20070100647A1 (en) * 2005-11-03 2007-05-03 International Business Machines Corporation Eligibility list management in a distributed group membership system
US7904759B2 (en) * 2006-01-11 2011-03-08 Amazon Technologies, Inc. System and method for service availability management
US7990887B2 (en) * 2006-02-22 2011-08-02 Cisco Technology, Inc. Sampling test of network performance
JP2007226398A (en) * 2006-02-22 2007-09-06 Hitachi Ltd Database connection management method and computer system
US9037698B1 (en) 2006-03-14 2015-05-19 Amazon Technologies, Inc. Method and system for collecting and analyzing time-series data
US7979439B1 (en) 2006-03-14 2011-07-12 Amazon Technologies, Inc. Method and system for collecting and analyzing time-series data
US8601112B1 (en) * 2006-03-14 2013-12-03 Amazon Technologies, Inc. Method and system for collecting and analyzing time-series data
US8868660B2 (en) * 2006-03-22 2014-10-21 Cellco Partnership Electronic communication work flow manager system, method and computer program product
US7630385B2 (en) * 2006-08-04 2009-12-08 Oyadomari Randy I Multiple domains in a multi-chassis system
US7764695B2 (en) * 2006-09-25 2010-07-27 Oyadomari Randy I Arm and rollback in a multi-chassis system
JP5129499B2 (en) * 2007-04-11 2013-01-30 キヤノン株式会社 Image forming apparatus, image forming apparatus control method, program, and storage medium
US20090192818A1 (en) * 2008-01-29 2009-07-30 International Business Machines Corporation Systems and method for continuous health monitoring
US9723070B2 (en) * 2008-01-31 2017-08-01 International Business Machines Corporation System to improve cluster machine processing and associated methods
US20110022692A1 (en) * 2009-07-24 2011-01-27 Jeyhan Karaoguz Method and system for determining and controlling user experience in a network
US8189487B1 (en) 2009-07-28 2012-05-29 Sprint Communications Company L.P. Determination of application latency in a network node
EP2529508A4 (en) * 2010-01-31 2017-06-28 Hewlett-Packard Enterprise Development LP Method and system for management of sampled traffic data
US9356941B1 (en) * 2010-08-16 2016-05-31 Symantec Corporation Systems and methods for detecting suspicious web pages
US20130111008A1 (en) * 2011-10-28 2013-05-02 Chuck A. Black Network service monitoring at edge network device
KR101164029B1 (en) * 2011-11-15 2012-07-18 주식회사 좋은친구 Method for managing pc on network based on calculation velocity information of pc, and web-server used therein
US9749257B2 (en) * 2013-10-24 2017-08-29 Kcura Llc Method and apparatus for dynamically deploying software agents
US10366337B2 (en) 2016-02-24 2019-07-30 Bank Of America Corporation Computerized system for evaluating the likelihood of technology change incidents
US10216798B2 (en) 2016-02-24 2019-02-26 Bank Of America Corporation Technical language processor
US10275183B2 (en) 2016-02-24 2019-04-30 Bank Of America Corporation System for categorical data dynamic decoding
US10430743B2 (en) 2016-02-24 2019-10-01 Bank Of America Corporation Computerized system for simulating the likelihood of technology change incidents
US10275182B2 (en) 2016-02-24 2019-04-30 Bank Of America Corporation System for categorical data encoding
US10223425B2 (en) 2016-02-24 2019-03-05 Bank Of America Corporation Operational data processor
US10067984B2 (en) 2016-02-24 2018-09-04 Bank Of America Corporation Computerized system for evaluating technology stability
US10387230B2 (en) 2016-02-24 2019-08-20 Bank Of America Corporation Technical language processor administration
US10366338B2 (en) 2016-02-24 2019-07-30 Bank Of America Corporation Computerized system for evaluating the impact of technology change incidents
US10019486B2 (en) 2016-02-24 2018-07-10 Bank Of America Corporation Computerized system for analyzing operational event data
US10366367B2 (en) 2016-02-24 2019-07-30 Bank Of America Corporation Computerized system for evaluating and modifying technology change events
US10776535B2 (en) 2016-07-11 2020-09-15 Keysight Technologies Singapore (Sales) Pte. Ltd. Methods, systems and computer readable media for testing network devices using variable traffic burst profiles
US11388078B1 (en) 2019-06-10 2022-07-12 Keysight Technologies, Inc. Methods, systems, and computer readable media for generating and using statistically varying network traffic mixes to test network devices

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5218697A (en) * 1990-04-18 1993-06-08 Microsoft Corporation Method and system for networking computers having varying file architectures
DE4321458A1 (en) * 1993-06-29 1995-01-12 Alcatel Network Services Network management support method and network management facility therefor
US5673393A (en) * 1993-11-24 1997-09-30 Intel Corporation Managing bandwidth over a computer network having a management computer that allocates bandwidth to client computers upon request
US5680461A (en) * 1995-10-26 1997-10-21 Sun Microsystems, Inc. Secure network protocol system and method

Also Published As

Publication number Publication date
EP0878069A1 (en) 1998-11-18
AU5507498A (en) 1998-06-03
US5812529A (en) 1998-09-22
WO1998021844A1 (en) 1998-05-22

Similar Documents

Publication Publication Date Title
US5812529A (en) Method and apparatus for network assessment
WO1998021844A9 (en) Method and apparatus for network assessment
US7031895B1 (en) Apparatus and method of generating network simulation model, and storage medium storing program for realizing the method
US6128644A (en) Load distribution system for distributing load among plurality of servers on www system
US6144961A (en) Method and system for non-intrusive measurement of transaction response times on a network
US5961594A (en) Remote node maintenance and management method and system in communication networks using multiprotocol agents
US5838919A (en) Methods, systems and computer program products for endpoint pair based communications network performance testing
US7555421B1 (en) Device emulation for testing data network configurations
US5881237A (en) Methods, systems and computer program products for test scenario based communications network performance testing
US5440719A (en) Method simulating data traffic on network in accordance with a client/server paradigm
US7539769B2 (en) Automated deployment and management of network devices
CN101056218B (en) A network performance measurement method and system
US6683856B1 (en) Method and apparatus for measuring network performance and stress analysis
KR20020089400A (en) Server monitoring using virtual points of presence
CN107391239B (en) Scheduling method and device based on container service
JP2004086904A (en) System and method for remotely controlling testing device on network
Blum Network performance open source toolkit: using Netperf, tcptrace, NISTnet, and SSFNet
JP2002007232A (en) Performance testing method and server testing device for www server
CN1432231A (en) Method and apparatus for measuring internet router traffic
Marshall et al. Application-level programmable internetwork environment
US20080281969A1 (en) Controlling access to versions of application software by a server, based on site ID
Zheng et al. Empower: A cluster architecture supporting network emulation
US20050091378A1 (en) Method and system for using mobile code to measure quality of service over a network
Osterhage Computer Performance Optimization
Rubinstein et al. Evaluating the performance of a network management application based on mobile agents

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued