US20020103631A1 - Traffic engineering system and method - Google Patents

Traffic engineering system and method

Info

Publication number
US20020103631A1
US20020103631A1 (application US09/876,384)
Authority
US
United States
Prior art keywords
network
traffic
links
information
readable medium
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/876,384
Inventor
Anja Feldmann
Albert Greenberg
Carsten Lund
Nicholas Reingold
Jennifer Rexford
Frederick True
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
AT&T Corp
Application filed by AT&T Corp
Priority to US09/876,384
Assigned to AT&T CORP. Assignors: TRUE, FREDERICK D.; LUND, CARSTEN; GREENBERG, ALBERT GORDON; REINGOLD, NICHOLAS; REXFORD, JENNIFER LYNN; FELDMANN, ANJA
Publication of US20020103631A1
Legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 Discovery or management of network topologies
    • H04L 41/22 Arrangements comprising specially adapted graphical user interfaces [GUI]
    • H04L 41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L 41/508 Network service management based on type of value added network service under agreement
    • H04L 41/5083 Network service management wherein the managed service relates to web hosting
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/12 Shortest path evaluation
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS

Definitions

  • the present invention relates generally to packet-switched networks. More particularly, the present invention relates to traffic engineering in a packet-switched network.
  • the Internet is divided into a collection of autonomous systems, each autonomous system (“AS”) managed by an Internet Service Provider (“ISP”) who operates a backbone network that connects to customers and other service providers.
  • Large ISPs have few software systems and tools to support traffic measurement and network modeling, the underpinnings of effective traffic engineering. Seemingly simple questions about topology, traffic, and routing are surprisingly hard to answer in today's packet-switched networks.
  • a tremendous amount of work has gone into developing mechanisms and protocols for controlling traffic. By comparison, little work has gone into supporting traffic measurement and network modeling in operational networks. Unfortunately, unless control mechanisms are driven by the appropriate measurements and understanding from well-tested models, the benefit of the controls will be limited.
  • the present invention is directed to a novel system and method for traffic engineering in a packet-switched network, such as an Internet Protocol (“IP”) based backbone network.
  • a global view of the network is constructed utilizing a network data model that can be readily constructed from the balkanized network information associated locally with the individual elements in the network.
  • the data model can be utilized to support useful traffic engineering tools such as routing modeling and visualization.
  • basic concepts of traffic engineering such as a traffic matrix or an offered-load are simply not present in IP networks and must be estimated and/or derived. Moreover, they must be derived in a manner that takes into account the dynamic set of intra-domain and inter-domain routing protocols.
  • FIG. 1 sets forth an abstract software architecture, in accordance with a preferred embodiment of the present invention.
  • FIG. 2 sets forth a diagram of an IP backbone network.
  • FIG. 3 and FIG. 4 set forth pseudo-code for computing traffic flows, in accordance with a preferred embodiment of the present invention.
  • FIG. 5 sets forth pseudo-code for a disambiguation process, in accordance with a preferred embodiment of the present invention.
  • FIG. 6 sets forth a diagram illustrating routing traffic division, in accordance with a preferred embodiment of the present invention.
  • FIG. 7 through FIG. 11 set forth examples of graphical user interfaces for visualization of the network, in accordance with a preferred embodiment of the present invention.
  • FIG. 1 sets forth an abstract architecture of an advanced traffic engineering software tool configured in accordance with a preferred embodiment of the present invention.
  • the software is advantageously divided into separate modules, each of which can be implemented in any of a number of suitable programming languages capable of execution on a digital computer, such as C, Tcl/Tk, or Perl.
  • An advantageous manner of program code development is to use a software package, such as “Obj2C”, to convert data object descriptions into generated programming code for standard manipulation of the data objects and a prototype graphical user interface.
  • the software entitled by the inventors “NetScope” comprises three modules: a data model module 110 responsible for construction and manipulation of the network data model, a routing model module 120 responsible for construction and manipulation of the network routing model, and a visualization module 130 responsible for visualization, display, and correlation of multiple views of the network and usage information.
  • the data model module 110 receives, as input, network topology information and network traffic demand information in order to properly populate the network data model.
  • NetScope is advantageously separated and modularized (as shown abstractly by the dotted line) from the sources of the network topology and traffic demand information. This architecture permits the parts above the dotted line to be unaware of changes to modules below the dotted line. The decomposition into modules and the design of the underlying modules localize possible changes to the network, allowing for the simple evolution and extension of the software.
  • One way of obtaining the configuration and traffic data is shown in FIG. 1: namely, two other modules 140 and 150, entitled by the inventors “NetBuild” and “DataBuild” respectively.
  • the modules 140 and 150 provide input to the data model module 110; these two modules take as input the raw data regarding network configuration and measurement, and provide as output higher-level abstractions and information for the data model. This is highly advantageous in that an operational network is under continuous change.
  • the inputs to the data model module 110 need not come from an operational network.
  • configuration data can come from an operational network or from a proposed/projected topology design.
  • the traffic demands could come from measurements of the operational network, from estimates or projections, or from customer subscriptions (e.g., for a virtual leased line service).
  • the particular source of the network configuration and traffic demand information for the traffic engineering software is not a limitation of the present invention.
  • the present invention advantageously combines diverse network configuration information with diverse network measurements in a joint data model.
  • the following two subsections describe (a) a preferred network topology model as well as practical ways of obtaining information to populate the topology model; and (b) a preferred traffic demand model and various ways of obtaining network traffic measurements to populate the traffic matrix.
  • a preferred data model includes data objects for network nodes and links in both a “pure” IP router layer (i.e. routers and layer three links) and in a physical transport layer (i.e. devices and trunks). It is advantageous to include layer two devices and trunks in the data model because some networking technologies, such as FDDI and ATM, introduce an intermediate switching fabric at layer two, e.g. multiple layer three links may share a single trunk, or a single layer-three link may correspond to a permanent virtual circuit (“PVC”) that traverses one or more ATM switches. This introduces layers of connectivity and capacity, which has implications for traffic engineering and reliability.
  • FIG. 2 sets forth a diagram of an IP backbone network 200 which highlights the different elements of the data model in the main router layer.
  • IP routers and bi-directional layer-three links are represented as nodes and edges in FIG. 2.
  • each router terminates a mixture of “access”, “peering”, and “backbone” links.
  • An access link 225 connects directly to customers, e.g. to a modem bank for dial-up users, a web-hosting complex, or a particular business or university campus.
  • customers can have two or more access links for higher capacity, load balancing or fault tolerance (such customers are referred to in the art as “multi-homed customers”).
  • Peering links 215 connect the backbone network to neighboring service providers, e.g. to a public Internet exchange point or directly to a private peer or transit provider.
  • a typical ISP has multiple peering links to each neighboring provider, typically in different geographic locations.
  • Backbone links connect routers inside the ISP backbone.
  • the network in FIG. 2 has been simplified in that all access links terminate at Access Routers (“ARs” e.g. 221), all peering links terminate at Internet Gateway Routers (“IGRs” e.g. 211), and all remaining routers are characterized as Backbone Routers (“BRs” e.g. 201) that only terminate backbone links.
  • ARs provide high port density to connect to a large number of customers with various access speeds and technologies
  • BRs provide high packet-forwarding performance
  • IGRs can isolate peer traffic and simplify management of inter-domain routing policies.
  • Each router is represented by a data object with attributes including the router name, the loopback IP address of the router, the type of the router (e.g. AR, BR, IGR), and the geographic location of the router in terms of city and latitude/longitude.
  • each router includes information about which links it originates.
  • For example, a router data object can have the parameters set forth in the detailed description below.
  • Each layer-three link is represented by a data object with attributes containing general information about the router originating the link, the name of the router card, the IP address of the interface, whether the link is shutdown or not, a textual description of its purpose, its capacity, and its OSPF weight.
  • Some attributes can be associated with both directions of a link.
  • each bidirectional link can be classified as an access, backbone, or peering link.
  • Backbone links also belong to a particular OSPF area, which must be the same for both unidirectional links.
  • Peering links are associated with a particular BGP peer, identified by its AS number and annotated by the IP address of the BGP peer in the remote domain.
  • a link data object can model a router interface, with the parameters set forth in the detailed description below. Note that only interfaces that terminate at another router are included, e.g. interfaces that correspond to a PVC but not the interfaces that terminate on an ATM switch.
  • Each device or physical node is represented by a data object that has parameters identifying what type of device it is, e.g. a router or an ATM switch, where the node is located, and a list of trunks that originate at the device. Trunks describe the connectivity between routers and devices, and include the information about which links traverse a given trunk.
  • each physical node (“pnode”) and each physical link (“plink”) can have the parameters set forth in the detailed description below.
  • the above model is very general and its objects can be populated in a number of different ways, such as modifying an existing data model, constructing an artificial network, or extracting the information from the real network.
  • End-to-end mechanisms such as “ping” and “traceroute” can be used for basic network topology discovery—but are cumbersome and provide only basic connectivity information.
  • SNMP queries or traps can also be utilized, but require active querying of all network elements. See, e.g., “A Simple Network Management Protocol (SNMP),” IETF RFC 1157, Network Working Group, May 1990.
  • An alternative approach is to extract the information from the router configuration files for the operational network. This has the advantage of capitalizing on all the additional information contained in the router configuration files, including customer and peer information.
  • a perhaps less obvious advantage is that the router configuration files are routinely logged for backup purposes and easily accessible without accessing the live operational network.
  • the disadvantages are that the information is not updated continuously and that the configuration files reflect the state of the network absent failures. Nevertheless, router or link failures or physically disconnected links can be taken into account by a separate data feed.
  • Populating the data objects in the data model using the router configuration files is made much easier by using a packet-switched network configuration debugger and database, as described in co-pending utility patent application, “Netdb: IP Network Configuration Debugger/Database,” U.S. Patent and Trademark Office application Ser. No. 60/160,446.
  • Traffic engineering requires information about the IP addresses reachable from each access and peering link. Reachability information can be obtained from a number of different sources in the network, e.g., as described below, forwarding tables, BGP tables in general, and route reflector BGP tables in particular. The inventors have found it advantageous to rely on the forwarding tables, although the same information could come from other sources as well, such as the BGP tables, configuration files, etc.
  • the packet-forwarding tables at each of the Access Routers may be used to extract customer IP addresses, when not listed in a router configuration file.
  • the forwarding table is, in a way, the ultimate authority for how the backbone forwards packets to a set of customer IP addresses.
  • the forwarding table can be logged periodically (e.g., with the IOS command “show ip cef”) and post-processed (e.g. using a Perl script) to extract the set of network addresses associated with each access link.
  • the table includes three main fields—the network address, the next-hop IP address (when known), and the card name of the outgoing link.
  • the network address can be associated with the appropriate access link based on the card name, which is part of the topology model that is extracted from the router configuration files.
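To make the post-processing step concrete, the sketch below parses a simplified forwarding-table dump and groups prefixes by outgoing card name. The three-field line format and the card names are illustrative assumptions; real "show ip cef" output is more varied, and the Perl script mentioned above would need to handle that variety.

```python
# Sketch only: associate forwarding-table prefixes with access links.
# Assumes a simplified dump with three whitespace-separated fields per line
# (prefix, next hop, card name); real "show ip cef" output is messier.
from collections import defaultdict

def prefixes_per_access_link(cef_lines, access_cards):
    """Map each access-link card name to the set of prefixes routed over it."""
    prefix_sets = defaultdict(set)
    for line in cef_lines:
        fields = line.split()
        if len(fields) != 3:
            continue  # skip headers and malformed lines
        prefix, _next_hop, card = fields
        if card in access_cards:  # card names come from the topology model
            prefix_sets[card].add(prefix)
    return prefix_sets

dump = ["10.1.2.0/24  192.0.2.1  Serial1/0",
        "10.3.0.0/16  192.0.2.9  Serial2/1"]
print(prefixes_per_access_link(dump, access_cards={"Serial1/0"}))
# defaultdict(<class 'set'>, {'Serial1/0': {'10.1.2.0/24'}})
```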
  • the BGP routing table may also be processed to determine which peering links are used to reach each external IP address.
  • An ISP has limited control over the external IP addresses that connect to the Internet through other service providers. Routing of traffic from these external addresses depends on the policies other service providers employ for selecting paths and propagating route advertisements. Routing of traffic from customers to these external addresses depends on the advertisements the ISP receives and how they are processed. Applying local policy to the route advertisements results in a BGP routing table that indicates the chosen AS path for each external network address. See, e.g., “A Border Gateway Protocol (BGP-4),” IETF RFC 1771, Network Working Group, March 1995. Based on this information, the set of peering links that can be used to reach each external network address can be determined.
  • each peering link can be associated with a set of external network addresses (it should be noted that, in a preferred embodiment of the present invention, this information is used to study routing of traffic destined for that network address and not how traffic from that network address enters the network).
  • the BGP routing table from a single route reflector in the backbone can also be utilized to determine the set of peering links associated with each external network address (e.g., using the IOS command “show ip bgp”).
  • the ARs and BRs receive advertisements of the AS paths selected by the IGRs. Given the potentially significant fluctuations in BGP routing information, it is advantageous to incorporate a continuous feed of BGP information into the model.
  • Each entry in the BGP routing table corresponds to a single IGR that can be used to reach a particular network address.
  • An entry in the table indicates the network address, the loopback address of the associated IGR, and the AS path.
  • a simple Perl script may be used to process all of the entries in the BGP table to determine the set of network addresses associated with each peering link.
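As a hedged illustration of that processing step, the sketch below maps each external prefix to the set of peering links reachable through the advertising IGR. The entry format (prefix, IGR loopback, AS path) and the igr_to_peering_links lookup are assumptions standing in for the parsed "show ip bgp" output and the topology model.

```python
# Sketch: derive reachability sets from simplified BGP table entries.
from collections import defaultdict

def reachability_sets(bgp_entries, igr_to_peering_links):
    """bgp_entries: iterable of (prefix, igr_loopback, as_path) tuples.
    igr_to_peering_links: IGR loopback -> peering links at that IGR
    (an assumed by-product of the topology model)."""
    reach = defaultdict(set)
    for prefix, igr_loopback, _as_path in bgp_entries:
        reach[prefix].update(igr_to_peering_links.get(igr_loopback, ()))
    return reach

entries = [("192.0.2.0/24", "10.255.0.1", "7018 701"),
           ("192.0.2.0/24", "10.255.0.2", "7018 1239 701")]
links = {"10.255.0.1": ["peer-nyc-1"], "10.255.0.2": ["peer-chi-1"]}
print(reachability_sets(entries, links)["192.0.2.0/24"])
# {'peer-nyc-1', 'peer-chi-1'} (set order may vary)
```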
  • IP traffic could be represented at the level of individual source-destination pairs, possibly aggregating sources and destinations to the network address or AS level. Representing all hosts or network addresses, however, would result in an overly large traffic matrix, virtually impossible to populate since no single ISP is likely to see all of the traffic to and from each network address.
  • IP traffic demands might be aggregated to point-to-point demands between edge links or routers in the ISP backbone. This approach, however, has fundamental difficulties in dealing with interdomain traffic (traffic whose ultimate destination belongs to another domain).
  • Inter-domain traffic which constitutes a large fraction of traffic in operational IP networks today, may exit the ISP backbone from any of a set of egress links, determined by interdomain routing policies. Modeling interdomain traffic as point-to-point would couple the demand model to internal routing configuration, making it highly problematic to predict how changing internal routing configuration would influence network load.
  • the preferred model of traffic demand consists of an ingress link, a set of egress links, and a volume of load.
  • the traffic demands between routers can be represented as data objects with the following attributes:
    x                  An identifier corresponding to the source router data object.
    y                  A sequence of router data objects representing the set of
                       potential destination routers. The pair x, y must uniquely
                       identify this demand.
    Kbytes (double)    The amount of data for this demand.
    packets (double)   Alternative measurement of demand. Does not need to be used.
    arrivals (double)  Alternative measurement of demand. Does not need to be used.
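The attribute table above maps naturally onto a small record type. The sketch below is one possible rendering; field names follow the table, and nothing beyond the listed attributes is implied.

```python
# Sketch of the demand data object described in the table above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Demand:
    x: str                 # source router identifier
    y: frozenset           # set of potential destination routers
    kbytes: float = 0.0    # volume of load for this demand
    packets: float = 0.0   # alternative measure; need not be used
    arrivals: float = 0.0  # alternative measure; need not be used

d = Demand(x="ar-chicago-3", y=frozenset({"igr-nyc-1", "igr-dc-2"}), kbytes=1.5e6)
```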
  • the path traveled by an IP packet depends on the interplay between interdomain routing protocols (e.g. BGP) and intradomain routing protocols (e.g. OSPF, IS-IS, or MPLS).
  • the ISP network lies in the middle of the Internet and may not have any direct connection to the sender or the receiver of any particular flow of packets.
  • a particular destination prefix may be reachable via multiple egress links from the ISP: e.g. a multi-homed customer may receive traffic on multiple links that connect to different points in the backbone or an ISP may have multiple links connecting to a neighboring provider.
  • the ultimate decision of which route to use depends on the BGP route-selection process. By associating each traffic demand with a set of egress links that could carry the traffic, the set basically represents the outcome of the early stages of the BGP route selection process before the consideration of the intradomain protocol.
  • the set of peer links can be represented by a logical node Xi, and, similarly, a set of access links can be represented by a logical node Yj, as illustrated in FIG. 2.
  • the matching process can draw on the list of customer network addresses from the Access Router forwarding tables and the external network addresses from the BGP table.
  • the source and destination addresses can be aggregated by performing a longest-prefix match on these lists of network addresses.
  • the network addresses can then be used to associate traffic measurements with the appropriate sets of access or peering links.
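A minimal longest-prefix match over such lists can be written with the standard library, as in the sketch below; production code would use a trie for speed, but the semantics are the same.

```python
# Sketch: longest-prefix match of an address against a list of prefixes.
import ipaddress

def longest_prefix_match(addr, prefixes):
    """Return the most specific prefix containing addr, or None."""
    ip = ipaddress.ip_address(addr)
    best = None
    for net in map(ipaddress.ip_network, prefixes):
        if ip in net and (best is None or net.prefixlen > best.prefixlen):
            best = net
    return best

print(longest_prefix_match("10.1.2.3", ["10.0.0.0/8", "10.1.2.0/24"]))
# 10.1.2.0/24
```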
  • Traffic Measurement. It is advantageous to collect traffic measurements at all ingress links to compute traffic demands and identify the traffic as it enters the ISP backbone. Collecting packet-level traces at each ingress link, however, would be prohibitively expensive. Instead, flow-level statistics can be collected by each ingress router, a “flow” being defined in the art as a set of packets that match in the key IP and TCP/UDP header fields (such as the source and destination address, and port numbers) and arrive on the same ingress link. For example, routers manufactured by Cisco have a Netflow™ feature that, when enabled, permits the router to keep track of the amount of traffic in each active flow. The router can summarize the traffic statistics on a regular basis, either after the flow has become inactive or after an extended period of activity. Sampling the flow measurements may also be performed to reduce the total amount of data.
  • FIG. 3 sets forth an algorithm, in pseudo-code, for computing the traffic demands upon receiving a flow record with the following information: an input link and a destination IP address dest to identify the end-points of the demand, the start and finish times of the flow, and the total number of bytes in the flow. Additional information in the measurement records, such as TCP/UDP port numbers or type-of-service bits, could be used to compute separate traffic demands for each quality-of-service class or application. Aggregating the flow-level measurements into traffic demands requires information about the destination prefixes associated with each egress link. For example, the aggregation process draws on a list, dest_prefix_set, of network addresses, each consisting of an IP address and mask length.
  • Each destination prefix, dest_prefix, can be associated with a set of egress links, reachability(dest_prefix). For example, in an operational network, these prefixes could be determined from the entries in the forwarding tables at routers that terminate egress links. (Note that the forwarding table at a router connected to an ingress link could have a different set of prefixes, particularly if the IP routing protocols have been configured to aggregate subnet addresses). In particular, each forwarding table entry identifies the next-hop link(s) for a particular prefix. This enables identification of the prefixes associated with each egress link.
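Since the pseudo-code of FIG. 3 is not reproduced here, the following sketch shows only the general shape of that aggregation, under simplified assumptions: each flow record is a dict with in_link, dest, and bytes fields; the prefix matcher is a function such as the longest_prefix_match sketch above; and the time-binning step of FIG. 3 is omitted (see the next sketch).

```python
# Sketch of aggregating flow records into point-to-multipoint demands.
from collections import defaultdict

def compute_demands(flow_records, match_prefix, reachability):
    """match_prefix: dest address -> dest_prefix (or None).
    reachability: dest_prefix -> set of egress links."""
    volume = defaultdict(float)  # (ingress link, egress link set) -> bytes
    for rec in flow_records:
        dest_prefix = match_prefix(rec["dest"])
        if dest_prefix is None:
            continue  # destination not covered by any known prefix
        egress_set = frozenset(reachability(dest_prefix))
        volume[(rec["in_link"], egress_set)] += rec["bytes"]
    return volume
```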
  • computing traffic demands across a collection of flows at different routers introduces a number of timing challenges.
  • the flow records do not capture the timing of the individual packets within a flow.
  • traffic engineering occurs on a time scale larger than most flow durations, thus permitting time to be divided into consecutive width-second bins in which most flows will start and finish.
  • the traffic can be subdivided in proportion to the fraction of time spent in each time period.
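A sketch of that proportional subdivision, assuming a flow record carries start and finish timestamps in seconds and bins are width seconds wide:

```python
# Sketch: spread a flow's bytes across consecutive width-second time bins
# in proportion to the time the flow spent in each bin.
def bin_flow(start, finish, nbytes, width):
    bins = {}
    duration = max(finish - start, 1e-9)  # guard against zero-length flows
    t = start
    while t < finish:
        bin_id = int(t // width)
        bin_end = min((bin_id + 1) * width, finish)
        bins[bin_id] = bins.get(bin_id, 0.0) + nbytes * (bin_end - t) / duration
        t = bin_end
    return bins

print(bin_flow(start=5.0, finish=25.0, nbytes=1000, width=10))
# {0: 250.0, 1: 500.0, 2: 250.0}
```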
  • An alternative to measuring traffic demand at each ingress link is to collect measurements at a much smaller number of edge links, e.g. the links connecting the ISP to neighboring providers. This is advantageous in that it frees access routers, which often may not be capable of collecting fine-grain measurements, from the additional measurement overhead.
  • the small number of high-end routers that connect neighboring providers typically have a much smaller number of links, with substantial functionality (including measurement functions) implemented directly on the interface cards that connect each link to the router.
  • Traffic flows in the IP backbone can be characterized as “inbound” traffic (i.e. packets traveling from a peering link to an access link), “transit” traffic (traveling between two peering links), “outbound” traffic (traveling from an access link to a peering link) and “internal” traffic (traveling between two access links).
  • the flow can be classified at a peering link based on the input and output links as follows:
    Input      Output     Classification                Action
    Peer       Backbone   Inbound or multi-hop transit  Point-to-multipoint demand
    Peer       Peer       Single-hop transit            Point-to-multipoint demand
    Backbone   Backbone   Backbone traffic              Omit flow
    Backbone   Peer       Outbound or multi-hop         Identify possible ingress link(s).
                          transit                       Omit flow or compute demand.
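The table translates directly into a lookup, as in this small sketch (the string labels are illustrative, not from the patent):

```python
# Sketch of the classification rules in the table above.
RULES = {
    ("peer", "backbone"): ("inbound or multi-hop transit", "point-to-multipoint demand"),
    ("peer", "peer"): ("single-hop transit", "point-to-multipoint demand"),
    ("backbone", "backbone"): ("backbone traffic", "omit flow"),
    ("backbone", "peer"): ("outbound or multi-hop transit",
                           "identify possible ingress link(s)"),
}

def classify_flow(in_link_type, out_link_type):
    return RULES.get((in_link_type, out_link_type), ("unknown", "omit flow"))

print(classify_flow("peer", "backbone"))
# ('inbound or multi-hop transit', 'point-to-multipoint demand')
```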
  • FIG. 4 sets forth the process of aggregating the flow records, skipping the details from FIG. 3 regarding dividing the bytes of the flow across multiple time_bins.
  • Transit Traffic falls into two categories—single-hop and multiple-hop.
  • a single-hop flow enters and exits the ISP backbone at the same edge router, without traversing any backbone links: in this case, the flow can be measured once, at this router.
  • a multi-hop flow enters at one router, traverses one or more backbone links, and exits at another router. Measuring both ingress and egress traffic at the peering links, thus, results in duplicate measurements of transit traffic that travels from one provider to another; special attention is required to avoid double-counting this traffic. The best place to capture a transit flow is at its ingress link, where the above methodology can be applied.
  • the flow records generated by multi-hop transit flows as they leave the network need to be discarded. This requires distinguishing outbound flows (introduced by an access link) from transit flows (introduced by a peering link).
  • the algorithm in FIG. 4 attempts to match the source IP address with a customer prefix at an access link. For transit flows, this matching process would fail, and the associated flow record would be properly excluded.
  • Outbound Traffic. Computing the outbound traffic demands that travel from access links to peering links is more difficult, since flow-level measurements are not available at the ingress links.
  • the flow measurements provide two pieces of information that help to infer the access link responsible for the outbound traffic: (1) the source IP address and (2) the input/output links that observed the flow at the egress router.
  • the source IP address indicates which customer generated the traffic (assuming the sender has not spoofed the source address).
  • the source IP address should be matched with a customer prefix which, in turn, should be matched with a set of possible access links that could have generated the traffic.
  • the pseudocode in FIG. 4 draws on a list, src_access_prefix_set, of the network addresses introducing traffic at access links.
  • Each source prefix, src_prefix, can be associated with a set of ingress links, sendability(src_prefix). It should be noted that the routing forwarding tables are not sufficient for identifying the source addresses that might generate traffic on an access link. This is because Internet routing is not symmetric: traffic to and from a customer does not necessarily leave or enter the backbone on the same link. Fortunately, an ISP typically knows the IP addresses of its directly-connected customers, and, in fact, may assign IP prefixes from a larger address block belonging to the ISP.
  • Packet filters are often used by ISPs to remove traffic with bogus source IP addresses. These packet filters are specified in the router's configuration file, which may be accessed and parsed to determine which source prefixes to associate with each access link. From this information, the set of access links associated with each source prefix can be determined. (Where customers connect to other service providers or have downstream customers of their own, it may be preferable to perform flow-level measurements at the ingress links rather than depending on knowing the set of links where these sources could enter the ISP backbone.)
  • FIG. 4 results in a point-to-multipoint demand for inbound and transit flows. Each outbound flow, however, is associated with a set of ingress links, resulting in a multipoint-to-multipoint aggregate. Computing point-to-multipoint demands for outbound traffic requires an additional step to determine which access link initiated the traffic.
  • FIG. 5 sets forth a “disambiguation” process which attempts to determine whether an outbound flow could have entered the network at a given ingress link based on knowledge of the backbone topology and intradomain routing configuration at the time the flow was measured.
  • Information on the possible paths from each ingress link to each egress link is obtained from a routing model that is further described below in Section 2.
  • Knowing the path(s) from the ingress link to the egress link provides additional information: where the path of the flow from the ingress link does not include both of the links that observed the flow (i.e. the input backbone link and the output peering link), the ingress link should again be excluded from consideration.
  • the process should be repeated for each of the possible ingress links, as shown in FIG. 5. The process has three possible outcomes.
  • a single ingress link could have generated the traffic, resulting in the ideal situation of a single point-to-multipoint demand.
  • more than one of the candidate ingress links could have generated the traffic, in which case the disambiguation process generates multiple demands, each with an equal fraction of the traffic. Finally, none of the candidate ingress links may be consistent with the observed flow (e.g., if routing changed between the time the flow was measured and the time the routing model was constructed), in which case the flow record can be discarded.
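A sketch of that disambiguation step, assuming paths_between(ingress, egress) queries the routing model of Section 2 for the shortest path(s) and observed_links is the pair of links that recorded the flow:

```python
# Sketch: keep candidate ingress links whose route traverses both observed
# links, then split the flow's bytes evenly across the survivors.
def disambiguate(candidates, egress, observed_links, nbytes, paths_between):
    feasible = [ingress for ingress in candidates
                if any(observed_links <= set(path)
                       for path in paths_between(ingress, egress))]
    if not feasible:
        return {}  # no consistent ingress link; discard the flow record
    share = nbytes / len(feasible)
    return {ingress: share for ingress in feasible}
```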
  • a feature of the preferred embodiment of the present invention is that it combines the network model and the traffic measurements with an accurate model of path selection.
  • a routing module determines the path(s) chosen by the relevant routing protocol for each traffic demand, and the load imparted on each link as the traffic flows through the network.
  • the routing module captures the selection of shortest paths to/from multi-homed customers and peers, the splitting of traffic across multiple shortest-path routes, and the multiplexing of layer-three links over layer-two trunks.
  • OSPF protocol defines how routers within an area exchange link-state information and compute shortest paths based on the sum of the link weights. See “OSPF Version 2”, IETF RFC 2328, Network Working Group, April 1998.
  • the link weights are static and are typically configured based on the link capacity, physical distance, and some notion of the expected traffic load.
  • the chosen paths do not change unless a link or router failure occurs, or the OSPF parameters are reconfigured. These are rare events, particularly for the backbone links that participate in the routing protocol.
  • the routing module can consider a single instance of the network topology and OSPF configuration and need not simulate the details of the OSPF protocol, such as the flooding of link-state advertisements or the exchange of “hello” messages.
  • the routing module can be verified by comparing the resulting paths with the router forwarding tables or traceroute experiments on an operational network. Performing the path selection computation inside the tool, rather than using the forwarding tables or traceroute results directly, facilitates experimentation with alternate OSPF configurations and different topologies.
  • path selection simply involves computing the shortest paths between each pair of routers, based on the link weights.
  • traffic between two routers in the same area follows a shortest path within the area, even if the network has a shorter path that involves links in other areas.
  • the path depends on how much information each area has about its neighbors.
  • the routing module can assume that the network does not summarize routing information at area boundaries. In the absence of route summarization, each border router reports the cost of the shortest path(s) to each of the other routers in the area, and the traffic between routers in different areas simply follows a shortest path without regard to the area boundaries.
  • the routes can be computed using, for example, Dijkstra's shortest-path-first algorithm, which is well-known in the art.
  • an implementation of the routing module can operate on a reduced network graph that collapses equivalent edges and nodes, and avoids recomputing distances and paths by caching intermediate results.
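For reference, a textbook version of the shortest-path computation is sketched below on a graph given as an adjacency map of (neighbor, OSPF weight) pairs; the reduced-graph and caching optimizations mentioned above are omitted.

```python
# Sketch: Dijkstra's shortest-path-first algorithm with a binary heap.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} -> {node: distance}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, ()):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

g = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 2}
```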
  • Path selection becomes more complex when there are multiple shortest paths between a pair of routers. Such ties arise very naturally when the network topology has parallel links between adjacent routers for additional capacity. Ties also surface when many of the links in the network have similar weights. This is sometimes done intentionally to increase the effective capacity between two endpoints.
  • the presence of multiple shortest paths allows for load-balancing of the traffic between the two endpoints. This is achieved by allowing the IP forwarding table to have multiple outgoing links associated with a single entry. Rather than alternating between these links at the packet level, routers typically attempt to forward packets for the same source-destination pair along a single path; this reduces the likelihood that packets from the same TCP connection arrive out-of-order at the receiver. Load-balancing is typically achieved by performing a hash function on the source and destination IP addresses of each packet. The value of the hash function determines which outgoing link should carry the packet.
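The sketch below illustrates the idea only; as the next bullets note, real routers use vendor-specific hash functions, so CRC32 here is purely a stand-in.

```python
# Sketch: hash-based load balancing that keeps packets of one
# source/destination pair on a single outgoing link.
import zlib

def pick_next_hop(src_ip, dst_ip, next_hops):
    h = zlib.crc32(f"{src_ip}|{dst_ip}".encode())
    return next_hops[h % len(next_hops)]

print(pick_next_hop("10.0.0.1", "192.0.2.7", ["link-a", "link-b"]))
```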
  • the details of the “tie-breaking” function can be modeled in the routing module. This, however, significantly complicates the path selection computation and would require computing traffic demands at a significantly finer level of granularity.
  • the details of the hashing function, and how the outputs of the hash function map to particular outgoing links, are not specified by the OSPF protocol and, as such, depend on the vendor's implementation. Fortunately, these details are not important.
  • the hash function is designed to support an even splitting of the traffic across the multiple outgoing links, especially for backbone links that carry a diverse mixture of traffic with different source and destination addresses. As such, the routing module advantageously splits traffic evenly across each of the outgoing links along a shortest path. For example, with regard to FIG. 6, where a router has two outgoing links along shortest paths, each link would carry 50% of the traffic.
  • the division of traffic is recursive, with the downstream routers dividing the traffic across each of their outgoing links, as set forth in FIG. 6.
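A sketch of this recursive division, assuming next_hops maps each node to its shortest-path successors toward the destination (e.g. as derived from the Dijkstra sketch above):

```python
# Sketch: recursively split a demand's volume evenly over shortest-path
# successors, accumulating the load imparted on each directed link.
from collections import defaultdict

def split_demand(source, dest, volume, next_hops):
    link_load = defaultdict(float)

    def push(node, vol):
        if node == dest:
            return
        succs = next_hops[node]
        for nxt in succs:
            link_load[(node, nxt)] += vol / len(succs)
            push(nxt, vol / len(succs))

    push(source, volume)
    return link_load

hops = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
print(dict(split_demand("A", "D", 100.0, hops)))
# {('A', 'B'): 50.0, ('B', 'D'): 50.0, ('A', 'C'): 50.0, ('C', 'D'): 50.0}
```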
  • the routing module could assume that each outgoing link carries a little more than its fair share of the traffic by applying a multiplicative factor.
  • the routing module can operate on a set of demands, each traveling from one peering or access link to a set of access or peering links.
  • the module computes the set of shortest-path routes based on the topology and the OSPF configuration, and determines how the demand splits across the multiple paths. Repeating this process for each demand results in an estimate of the load imparted on each link. Then, the routing module determines the load on each trunk (layer-two link) by summing across the associated layer three links.
  • the generality of the routing model facilitates experiments with alternate topologies and OSPF configurations, as illustrated in the next section. It also supports experimentation with the BGP policies for outbound traffic, by changing the sets of peering links associated with external network addresses.
  • a graphical user interface, such as those set forth in FIGS. 7 through 11, can be used to provide an efficient visualization environment with many ways to explore the data in the data model.
  • each router and link is modeled with a data object.
  • FIG. 7 sets forth an example of an information panel which displays attributes for objects of a given type. The information panel in FIG. 7 permits a user to quickly scroll through a list of links and see the attributes of the selected link in the bottom part of the panel, along with the corresponding physical links in the right-hand box of the panel.
  • Each statistic need be no more than a value for each object of some type.
  • For example, a link utilization statistic, which is a percentage associated with each link, can be calculated and displayed as set forth in FIG. 8.
  • link utilization statistics can be stored for periods of time and can be used to create histograms, scatter plots, tables, etc. The color or size of the object when displayed can be utilized to reflect the statistic, thereby providing a visual representation of the statistic, e.g. coloring links with high utilization as thick and red.
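One simple way to realize the color/size encoding is a threshold map from the utilization percentage to display attributes; the thresholds below are illustrative, not taken from the patent.

```python
# Sketch: map a link utilization percentage to (color, line width)
# so that highly utilized links render thick and red.
def link_style(utilization):
    if utilization >= 90:
        return ("red", 4)
    if utilization >= 60:
        return ("orange", 3)
    if utilization >= 30:
        return ("yellow", 2)
    return ("green", 1)

print(link_style(95))  # ('red', 4)
```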
  • FIGS. 10 and 11 set forth the basic display of the router and link data objects graphically superimposed over a map of the relevant geography. Given the large number of nodes and links that may need to be displayed, it is helpful to permit the user to choose which sets of objects to display as well as to define different layers of aggregation and abstraction, e.g. combining routers into complexes, aggregating parallel links, etc.
  • the display in FIGS. 10 and 11 has a user interface that recognizes the notion of a current object for each type. An object becomes the current object either when it is selected or when the mouse is moved on top of it graphically. For example, when a user's mouse pointer hovers over a particular link displayed in FIG. 10, the associated Link Panel window and Link Statistics window change to show information regarding that particular link.
  • the visualization module can permit changes to the data model “on the fly” such as modifications to an OSPF weight of a link in the network. Then the software can use the routing module to automatically recalculate all routes for all active traffic demands, and update all relevant statistics that are based upon the traffic including link load and utilization. It is also helpful to maintain at least two different sets of weights, one that can be manipulated and one that can act as an anchor or baseline.

Abstract

The present invention is directed to a novel system and method for traffic engineering in a packet-switched network, such as an Internet Protocol (“IP”) based backbone network. A global view of the network is constructed utilizing a network data model that can be readily constructed from the balkanized network information associated locally with the individual elements in the network. The data model, in turn, can be utilized to support useful traffic engineering tools such as routing modeling and visualization.

Description

    FIELD OF THE INVENTION
  • The present invention relates generally to packet-switched networks. More particularly, the present invention relates to traffic engineering in a packet-switched network. [0001]
  • BACKGROUND OF THE INVENTION
  • The Internet is divided into a collection of autonomous systems, each autonomous system (“AS”) managed by an Internet Service Provider (“ISP”) who operates a backbone network that connects to customers and other service providers. Large ISPs have few software systems and tools to support traffic measurement and network modeling, the underpinnings of effective traffic engineering. Seemingly simple questions about topology, traffic, and routing are surprisingly hard to answer in today's packet-switched networks. A tremendous amount of work has gone into developing mechanisms and protocols for controlling traffic. By comparison, little work has gone into supporting traffic measurement and network modeling in operational networks. Unfortunately, unless control mechanisms are driven by the appropriate measurements and understanding from well-tested models, the benefit of the controls will be limited. [0002]
  • Accordingly, there is a need for new systems and methods of measuring and modeling a packet-switched network that permit effective traffic engineering. [0003]
  • SUMMARY OF THE INVENTION
  • It is an object of the present invention to enable network management tools in a packet-switched network that are capable of efficient reporting, capacity planning, provisioning, configuration debugging, performance debugging, and allowing the investigation of the impact of evolutionary changes to the network. [0004]
  • Thus, the present invention is directed to a novel system and method for traffic engineering in a packet-switched network, such as an Internet Protocol (“IP”) based backbone network. A global view of the network is constructed utilizing a network data model that can be readily constructed from the balkanized network information associated locally with the individual elements in the network. The data model, in turn, can be utilized to support useful traffic engineering tools such as routing modeling and visualization. Unlike conventional circuit-switching, Frame Relay or ATM networks, in which global views of topology and traffic are either given or trivial to derive, basic concepts of traffic engineering such as a traffic matrix or an offered-load are simply not present in IP networks and must be estimated and/or derived. Moreover, they must be derived in a manner that takes into account the dynamic set of intra-domain and inter-domain routing protocols. [0005]
  • These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.[0006]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 sets forth an abstract software architecture, in accordance with a preferred embodiment of the present invention. [0007]
  • FIG. 2 sets forth a diagram of an IP backbone network. [0008]
  • FIG. 3 and FIG. 4 set forth pseudo-code for computing traffic flows, in accordance with a preferred embodiment of the present invention. [0009]
  • FIG. 5 sets forth pseudo-code for a disambiguation process, in accordance with a preferred embodiment of the present invention. [0010]
  • FIG. 6 sets forth a diagram illustrating routing traffic division, in accordance with a preferred embodiment of the present invention. [0011]
  • FIG. 7 through FIG. 11 set forth examples of graphical user interfaces for visualization of the network, in accordance with a preferred embodiment of the present invention.[0012]
  • DETAILED DESCRIPTION
  • FIG. 1 sets forth an abstract architecture of an advanced traffic engineering software tool configured in accordance with a preferred embodiment of the present invention. The software is advantageously divided into separate modules, each of which can be implemented in any of a number of suitable programming languages capable of execution on a digital computer, such as C, Tcl/Tk, or Perl. An advantageous manner of program code development is to use a software package, such as “Obj2C”, to convert data object descriptions into generated programming code for standard manipulation of the data objects and a prototype graphical user interface. [0013]
  • As is further described in the following sections below, the software entitled by the inventors “NetScope” comprises three modules: a data model module 110 responsible for construction and manipulation of the network data model, a routing model module 120 responsible for construction and manipulation of the network routing model, and a visualization module 130 responsible for visualization, display, and correlation of multiple views of the network and usage information. The data model module 110 receives, as input, network topology information and network traffic demand information in order to properly populate the network data model. NetScope is advantageously separated and modularized (as shown abstractly by the dotted line) from the sources of the network topology and traffic demand information. This architecture permits the parts above the dotted line to be unaware of changes to modules below the dotted line. The decomposition into modules and the design of the underlying modules localize possible changes to the network, allowing for the simple evolution and extension of the software. [0014]
  • One way of obtaining the configuration and traffic data is shown in FIG. 1: namely, two other modules 140 and 150, entitled by the inventors “NetBuild” and “DataBuild” respectively. The modules 140 and 150 provide input to the data model module 110; these two modules take as input the raw data regarding network configuration and measurement, and provide as output higher-level abstractions and information for the data model. This is highly advantageous in that an operational network is under continuous change. It should be noted that the inputs to the data model module 110 need not come from an operational network. For example, configuration data can come from an operational network or from a proposed/projected topology design. Likewise, the traffic demands could come from measurements of the operational network, from estimates or projections, or from customer subscriptions (e.g., for a virtual leased line service). The particular source of the network configuration and traffic demand information for the traffic engineering software is not a limitation of the present invention. [0015]
  • 1. Data Model [0016]
  • The present invention advantageously combines diverse network configuration information with diverse network measurements in a joint data model. The following two subsections describe (a) a preferred network topology model as well as practical ways of obtaining information to populate the topology model; and (b) a preferred traffic demand model and various ways of obtaining network traffic measurements to populate the traffic matrix. [0017]
  • A. Topology [0018]
  • Traffic engineering requires a network-wide view of the underlying layer-three and layer-two topology. In accordance with a preferred embodiment of the present invention, a topology model is presented that advantageously captures backbone connectivity, connections to customers and peers, link capacity, and OSPF configuration. A preferred data model includes data objects for network nodes and links in both a “pure” IP router layer (i.e. routers and layer three links) and in a physical transport layer (i.e. devices and trunks). It is advantageous to include layer two devices and trunks in the data model because some networking technologies, such as FDDI and ATM, introduce an intermediate switching fabric at layer two, e.g. multiple layer three links may share a single trunk, or a single layer-three link may correspond to a permanent virtual circuit (“PVC”) that traverses one or more ATM switches. This introduces layers of connectivity and capacity, which has implications for traffic engineering and reliability. [0019]
  • FIG. 2 sets forth a diagram of an IP backbone network 200 which highlights the different elements of the data model in the main router layer. IP routers and bi-directional layer-three links are represented as nodes and edges in FIG. 2. In a typical IP backbone network, each router terminates a mixture of “access”, “peering”, and “backbone” links. An access link 225 connects directly to customers, e.g. to a modem bank for dial-up users, a web-hosting complex, or a particular business or university campus. As shown in FIG. 2, some customers can have two or more access links for higher capacity, load balancing or fault tolerance (such customers are referred to in the art as “multi-homed customers”). Peering links 215 connect the backbone network to neighboring service providers, e.g. to a public Internet exchange point or directly to a private peer or transit provider. A typical ISP has multiple peering links to each neighboring provider, typically in different geographic locations. Backbone links connect routers inside the ISP backbone. The network in FIG. 2 has been simplified in that all access links terminate at Access Routers (“ARs” e.g. 221), all peering links terminate at Internet Gateway Routers (“IGRs” e.g. 211), and all remaining routers are characterized as Backbone Routers (“BRs” e.g. 201) that only terminate backbone links. In fact, in an operational network, this split in functionality simplifies the requirements for each router: ARs provide high port density to connect to a large number of customers with various access speeds and technologies; BRs provide high packet-forwarding performance; IGRs can isolate peer traffic and simplify management of inter-domain routing policies. (The meaning of the groupings X1, X2, . . . and Y1, Y2, . . . will be explained in the section below on traffic demand). [0020]
  • Each router is represented by a data object with attributes including the router name, the loopback IP address of the router, the type of the router (e.g. AR, BR, IGR), and the geographic location of the router in terms of city and latitude/longitude. In addition, each router includes information about which links it originates. For example, a router data object can have the following parameters: [0021]
    routerName (string) Any uniquely identifying string.
    type (string) The type of router: e.g. BR = Backbone router;
    AR = Access router; IGR = Internet Gateway router;
    CR = Customer router; and PR = Peering router.
    count (int) useful router index, e.g. the third access router at
    Chicago will have a count equal to 3.
    complex An identifier for a data object representing a central
    office housing the instant router.
    IP (string) The loopback interface of the router.
    dlatitude (double) The displacement from the complex location for this
    router.
    dlongitude (double) The displacement from the complex location for this
    router.
    pnode An identifier for a data object representing the
    physical node that corresponds to this router.
    networks A sequence of IP masks. The IP networks that are
    directly connected to this router. Can be empty.
  • Each layer-three link is represented by a data object with attributes containing general information about the router originating the link, the name of the router card, the IP address of the interface, whether the link is shut down or not, a textual description of its purpose, its capacity, and its OSPF weight. Some attributes can be associated with both directions of a link. For example, each bidirectional link can be classified as an access, backbone, or peering link. Backbone links also belong to a particular OSPF area, which must be the same for both unidirectional links. Peering links are associated with a particular BGP peer, identified by its AS number and annotated by the IP address of the BGP peer in the remote domain. [0022]
  • For example, a link data object can model a router interface and have the following parameters. Note that only interfaces that terminate at another router are included, e.g. interfaces that correspond to a PVC but not the interfaces that terminate on an ATM switch: [0023]
    x An identifier corresponding to the router data object
    with this interface.
    CardName (string) Name of the interface.
    y An identifier corresponding to the router data object
    where this link terminates.
    linkid (int) This gives an identifier on the full-duplex link. This
    can be used to find the link in the opposite direction.
    type (string) The type of the link, e.g. BACKBONE,
    INTERNAL, or ACCESS.
    IP (string) The IP address of the interface.
    Capacity (int) The bit rate of the interface in Kbytes. Note that for
    some links, e.g. PVCs, this number is not very
    meaningful, since the bandwidth is shared between
    multiple interfaces.
    OSPF (int) The OSPF weight of the interface.
    OSPFarea The OSPF area of the interface.
    Status (int) Status of the interface.
    Desc (string) The description field from the router configuration
    file.
    plinks A sequence of references to physical link data
    objects. This is a list of the physical links that this
    link lays out to.
  • Each device or physical node is represented by a data object that has parameters identifying what type of device it is, e.g. a router or an ATM switch, where the node is located, and a list of trunks that originate at the device. Trunks describe the connectivity between routers and devices, and include the information about which links traverse a given trunk. For example, each physical node (“pnode”) can have the following parameters: [0024]
    name (string)  An identifying string for the physical node.
    complex  An identifier for a data object representing the central office housing the physical node.
    dlatitude (double)  The displacement in latitude from the complex location for this pnode.
    dlongitude (double)  The displacement in longitude from the complex location for this pnode.
    type (string)  The type of pnode, e.g. router or ATM.
  • And each physical link (“plink”) can have the following parameters: [0025]
    x  An identifier corresponding to the pnode data object where this link originates.
    CardName (string)  Any string which, together with x, uniquely identifies the plink.
    y  An identifier corresponding to the pnode data object where this link terminates.
    trunkid (int)  An identifier for the trunk.
    type (string)
    capacity (int)
    Desc (string)
  • The above model is very general and its objects can be populated in a number of different ways, such as modifying an existing data model, constructing an artificial network, or extracting the information from the real network. [0026]
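  • By way of illustration, the router and link objects above map naturally onto simple record types. The following is a minimal sketch in Python (chosen here for concreteness; the original tool is described only in terms of Perl scripts and data objects). Field names follow the parameter tables above, except that complex is renamed complex_id because it shadows a Python builtin:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Router:
        """Router data object; fields mirror the parameter table above."""
        routerName: str   # any uniquely identifying string
        type: str         # e.g. "BR", "AR", "IGR"
        count: int        # e.g. 3 for the third access router at a city
        complex_id: str   # central office housing this router
        IP: str           # loopback interface address
        dlatitude: float = 0.0    # displacement from the complex location
        dlongitude: float = 0.0
        pnode: Optional[str] = None                        # corresponding physical node
        networks: List[str] = field(default_factory=list)  # directly connected IP masks

    @dataclass
    class Link:
        """Layer-three link data object; fields mirror the table above."""
        x: str            # identifier of the router with this interface
        CardName: str     # interface name
        y: str            # identifier of the router where the link terminates
        linkid: int       # identifies the full-duplex link
        type: str         # "BACKBONE", "INTERNAL", or "ACCESS"
        IP: str           # interface IP address
        Capacity: int     # nominal bit rate
        OSPF: int         # OSPF weight
        OSPFarea: str = "0"
        Status: int = 1
        Desc: str = ""
        plinks: List[str] = field(default_factory=list)    # underlying physical links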
  • Extracting Network Topology. Unfortunately, there is no single place within a typical IP network that would allow extraction of the information necessary to populate the above preferred model. Rather, the information is distributed among many routers in the Internet. Even within an ISP network, information is decentralized. For example, even OSPF link state information may be insufficient to extract the topology of an IP backbone, especially since OSPF areas hide information. In addition, OSPF link state certainly does not contain information about access links and peering links. [0027]
  • End-to-end mechanisms such as “ping” and “traceroute” can be used for basic network topology discovery—but are cumbersome and provide only basic connectivity information. SNMP queries or traps can also be utilized, but require active querying of all network elements. See, e.g., “A Simple Network Management Protocol (SNMP),” IETF RFC 1157, Network Working Group, May 1990. [0028]
  • An alternative approach is to extract the information from the router configuration files for the operational network. This has the advantage of capitalizing on all the additional information contained in the router configuration files, including customer and peer information. A perhaps less obvious advantage is that the router configuration files are routinely logged for backup purposes and are easily accessible without querying the live operational network. The disadvantages are that the information is not updated continuously and that the configuration files reflect the state of the network absent failures. Nevertheless, router or link failures or physically disconnected links can be accounted for by a separate data feed. Populating the data objects in the data model using the router configuration files is made much easier by using a packet-switched network configuration debugger and database, as described in co-pending utility patent application, “Netdb: IP Network Configuration Debugger/Database,” U.S. Patent and Trademark Office application Ser. No. 60/160,446. [0029]
  • Traffic engineering requires information about the IP addresses reachable from each access and peering link. Reachability information can be obtained from a number of different sources in the network, e.g., as described below, forwarding tables, BGP tables in general, and route reflector BGP tables in particular. The inventors have found it advantageous to rely on the forwarding tables, although the same information could come from other sources as well, such as the BGP tables, configuration files, etc. [0030]
  • The packet-forwarding tables at each of the Access Routers may be used to extract customer IP addresses, when not listed in a router configuration file. The forwarding table is, in a way, the ultimate authority for how the backbone forwards packets to a set of customer IP addresses. The forwarding table can be logged periodically (e.g., with the IOS command “show ip cef”) and post-processed (e.g. using a Perl script) to extract the set of network addresses associated with each access link. The table includes three main fields—the network address, the next-hop IP address (when known), and the card name of the outgoing link. The network address can be associated with the appropriate access link based on the card name, which is part of the topology model that is extracted from the router configuration files. [0031]
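  • A minimal sketch of such post-processing follows, in Python for concreteness. It assumes each forwarding-table entry has already been reduced to the three main fields named above; the exact output format of “show ip cef” varies by IOS version, so the parsing shown is a simplification:

    from collections import defaultdict

    def prefixes_per_access_link(cef_lines, access_cards):
        """Group network prefixes by the access link (card) that forwards them.

        Each element of cef_lines is assumed pre-reduced to three
        whitespace-separated fields: network prefix, next-hop (or "-"),
        and outgoing card name. access_cards is the set of card names
        belonging to access links, taken from the topology model.
        """
        result = defaultdict(set)
        for line in cef_lines:
            fields = line.split()
            if len(fields) < 3:
                continue  # skip headers and incomplete entries
            prefix, _next_hop, card = fields[:3]
            if card in access_cards:
                result[card].add(prefix)
        return result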
  • The BGP routing table may also be processed to determine which peering links are used to reach each external IP address. An ISP has limited control over the external IP addresses that connect to the Internet through other service providers. Routing of traffic from these external addresses depends on the policies other service providers employ for selecting paths and propagating route advertisements. Routing of traffic from customers to these external addresses depends on the advertisements the ISP receives and how they are processed. Applying local policy to the route advertisements results in a BGP routing table that indicates the chosen AS path for each external network address. See, e.g., “A Border Gateway Protocol (BGP-4),” IETF RFC 1771, Network Working Group, March 1995. Based on this information, the set of peering links that can be used to reach each external network address can be determined. Similar to the customer addresses associated with each access link, each peering link can be associated with a set of external network addresses (it should be noted that, in a preferred embodiment of the present invention, this information is used to study routing of traffic destined for that network address and not how traffic from that network address enters the network). [0032]
  • The BGP routing table from a single route reflector in the backbone can also be utilized to determine the set of peering links associated with each external network address (e.g., using the IOS command “show ip bgp”). The ARs and BRs receive advertisements of the AS paths selected by the IGRs. Given the potentially significant fluctuations in BGP routing information, it is advantageous to incorporate a continuous feed of BGP information into the model. Each entry in the BGP routing table corresponds to a single IGR that can be used to reach a particular network address. An entry in the table indicates the network address, the loopback address of the associated IGR, and the AS path. A simple Perl script may be used to process all of the entries in the BGP table to determine the set of network addresses associated with each peering link. [0033]
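  • The following sketch, under the same assumptions, shows how pre-parsed BGP entries (prefix, IGR loopback, AS path) might be folded into a map from each external prefix to its candidate peering links; the helper name peering_links_by_igr is hypothetical and stands in for the topology model:

    from collections import defaultdict

    def egress_sets_from_bgp(bgp_entries, peering_links_by_igr):
        """Associate each external prefix with the peering links that can reach it.

        bgp_entries holds pre-parsed (prefix, igr_loopback, as_path) tuples;
        peering_links_by_igr maps an IGR loopback address to the peering
        links it terminates, taken from the topology model.
        """
        reachability = defaultdict(set)
        for prefix, igr_loopback, _as_path in bgp_entries:
            reachability[prefix].update(peering_links_by_igr.get(igr_loopback, ()))
        return reachability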
  • B. Traffic Demand [0034]
  • Effective traffic engineering requires not just a view of the topology but also an accurate estimate of the offered load between various points in the backbone. How should traffic demands be modeled and inferred from operational measurements? At one extreme, IP traffic could be represented at the level of individual source-destination pairs, possibly aggregating sources and destinations to the network address or AS level. Representing all hosts or network addresses, however, would result in an overly large traffic matrix, virtually impossible to populate since no single ISP is likely to see all of the traffic to and from each network address. Alternatively, IP traffic demands might be aggregated to point-to-point demands between edge links or routers in the ISP backbone. This approach, however, has fundamental difficulties in dealing with interdomain traffic (traffic whose ultimate destination belongs to another domain). Interdomain traffic, which constitutes a large fraction of traffic in operational IP networks today, may exit the ISP backbone from any of a set of egress links, determined by interdomain routing policies. Modeling interdomain traffic as point-to-point would couple the demand model to the internal routing configuration, making it highly problematic to predict how changing the internal routing configuration would influence network load. [0035]
  • In accordance with an aspect of the present invention, an alternative model is described which effectively handles interdomain traffic and advantageously is invariant to changes in the internal routing configuration. The preferred model of traffic demand consists of an ingress link, a set of egress links, and a volume of load. For example, the traffic demands between routers can be represented as data objects with the following attributes: [0036]
    x  An identifier corresponding to the source router data object.
    y  A sequence of router data objects representing the set of potential destination routers. The string x, y must uniquely identify this demand.
    Kbytes (double)  The amount of data for this demand.
    packets (double)  An alternative measure of demand; need not be used.
    arrivals (double)  An alternative measure of demand; need not be used.
  • The path traveled by an IP packet depends on the interplay between interdomain routing protocols (e.g. BGP) and intradomain routing protocols (e.g. OSPF, IS-IS, or MPLS). The ISP network lies in the middle of the Internet and may not have any direct connection to the sender or the receiver of any particular flow of packets. As such, a particular destination prefix may be reachable via multiple egress links from the ISP: e.g. a multi-homed customer may receive traffic on multiple links that connect to different points in the backbone or an ISP may have multiple links connecting to a neighboring provider. The ultimate decision of which route to use depends on the BGP route-selection process. By associating each traffic demand with a set of egress links that could carry the traffic, the set basically represents the outcome of the early stages of the BGP route selection process before the consideration of the intradomain protocol. [0037]
  • The set of peering links can be represented by a logical node Xi, and, similarly, a set of access links can be represented by a logical node Yj, as illustrated in FIG. 2. The matching process can draw on the list of customer network addresses from the Access Router forwarding tables and the external network addresses from the BGP table. The source and destination addresses can be aggregated by performing a longest-prefix match on these lists of network addresses. The network addresses can then be used to associate traffic measurements with the appropriate sets of access or peering links. [0038]
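  • A longest-prefix match can be sketched directly with the Python standard library; a production implementation would use a trie rather than the linear scan shown here:

    import ipaddress

    def longest_prefix_match(addr, prefix_list):
        """Return the most specific prefix in prefix_list covering addr.

        prefix_list holds strings like "192.0.2.0/24" (host bits zero);
        returns None when no prefix matches.
        """
        ip = ipaddress.ip_address(addr)
        best = None
        for p in prefix_list:
            net = ipaddress.ip_network(p)
            if ip in net and (best is None or net.prefixlen > best.prefixlen):
                best = net  # longer mask wins
        return best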
  • Traffic Measurement. It is advantageous to collect traffic measurements at all ingress links to compute traffic demands and identify the traffic as it enters the ISP backbone. Collecting packet-level traces at each ingress link, however, would be prohibitively expensive. Instead, flow-level statistics can be collected by each ingress router, a “flow” being defined in the art as a set of packets that match in the key IP and TCP/UDP header fields (such as the source and destination address, and port numbers) and arrive on the same ingress link. For example, routers manufactured by Cisco have a Netflow™ feature that, when enabled, permits the router to keep track of the amount of traffic in each active flow. The router can summarize the traffic statistics on a regular basis, either after the flow has become inactive or after an extended period of activity. Sampling the flow measurements may also be performed to reduce the total amount of data. [0039]
  • FIG. 3 sets forth an algorithm, in pseudo-code, for computing the traffic demands upon receiving a flow record with the following information: an input link and a destination IP address dest to identify the end-points of the demand, the start and finish times of the flow, and the total number of bytes in the flow. Additional information in the measurement records, such as TCP/UDP port numbers or type-of-service bits, could be used to compute separate traffic demands for each quality-of-service class or application. Aggregating the flow-level measurements into traffic demands requires information about the destination prefixes associated with each egress link. For example, the aggregation process draws on a list, dest_prefix_set, of network addresses, each consisting of an IP address and mask length. Each destination prefix, dest_prefix, can be associated with a set of egress links, reachability (dest_prefix). For example, in an operational network, these prefixes could be determined from the entries in the forwarding tables at routers that terminate egress links. (Note that the forwarding table at a router connected to an ingress link could have a different set of prefixes, particularly if the IP routing protocols have been configured to aggregate subnet addresses.) In particular, each forwarding table entry identifies the next-hop link(s) for a particular prefix. This enables identification of the prefixes associated with each egress link. [0040]
  • As reflected in FIG. 3, computing traffic demands across a collection of flows at different routers introduces a number of timing challenges. The flow records do not capture the timing of the individual packets within a flow. Nevertheless, traffic engineering occurs on a time scale larger than most flow durations, thus permitting time to be divided into consecutive width-second bins in which most flows will start and finish. When a flow spans multiple bins, the traffic can be subdivided in proportion to the fraction of time spent in each time period. [0041]
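  • The following sketch illustrates the aggregation step in the spirit of FIG. 3 (the figure itself is not reproduced here): each flow record's bytes are spread across the width-second bins it overlaps and accumulated into a demand keyed by ingress link, egress-link set, and time bin. The record layout is an assumption for illustration:

    from collections import defaultdict, namedtuple

    # Hypothetical flow-record layout: ingress link, matched destination
    # prefix, start/finish times in seconds, and byte count.
    Flow = namedtuple("Flow", "input_link dest_prefix start finish bytes")

    def add_flow(demands, flow, reachability, width):
        """Accumulate one flow record into the demand table.

        reachability maps a destination prefix to its set of egress links;
        demands is keyed by (ingress link, egress-link set, time bin), and a
        flow spanning several width-second bins is divided among them in
        proportion to the time spent in each.
        """
        egress = frozenset(reachability[flow.dest_prefix])
        if flow.finish <= flow.start:  # instantaneous flow: charge one bin
            demands[(flow.input_link, egress, int(flow.start // width))] += flow.bytes
            return
        duration = flow.finish - flow.start
        for b in range(int(flow.start // width), int(flow.finish // width) + 1):
            lo = max(flow.start, b * width)        # overlap of the flow with bin b
            hi = min(flow.finish, (b + 1) * width)
            demands[(flow.input_link, egress, b)] += flow.bytes * (hi - lo) / duration

    demands = defaultdict(float)  # e.g. add_flow(demands, record, reachability, width=300)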
  • An alternative to measuring traffic demand at each ingress link is to collect measurements at a much smaller number of edge links, e.g. the links connecting the ISP to neighboring providers. This is advantageous in that it frees access routers, which often may not be capable of collecting fine-grain measurements, from the additional measurement overhead. In contrast, the small number of high-end routers that connect to neighboring providers typically terminate a much smaller number of links, with substantial functionality (including measurement functions) implemented directly on the interface cards that connect each link to the router. By monitoring both the ingress and egress links at these locations, it is possible to capture a large fraction of the traffic in the ISP backbone—but this introduces new complications for measuring traffic. [0042]
  • Traffic flows in the IP backbone can be characterized as “inbound” traffic (i.e. packets travelling from a peering link to an access link), “transit” traffic (travelling between two peering links), “outbound” traffic (travelling from an access link to a peering link), and “internal” traffic (travelling between two access links). The characterization of the traffic flow affects how the flow should be handled. As further described below, the flow can be classified at a peering link based on the input and output links as follows: [0043]
    Input      Output     Classification               Action
    Peer       Backbone   Inbound or multi-hop         Point-to-multipoint demand
                          transit
    Peer       Peer       Single-hop transit           Point-to-multipoint demand
    Backbone   Backbone   Backbone traffic             Omit flow
    Backbone   Peer       Outbound or multi-hop        Identify possible ingress link(s);
                          transit                      omit flow or compute demand
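  • In code, this classification rule reduces to a small lookup, sketched here in Python for concreteness:

    # Classification of a flow observed at a router terminating peering links,
    # keyed by (input link type, output link type) as in the table above.
    CLASSIFY = {
        ("Peer", "Backbone"): ("inbound or multi-hop transit", "point-to-multipoint demand"),
        ("Peer", "Peer"): ("single-hop transit", "point-to-multipoint demand"),
        ("Backbone", "Backbone"): ("backbone traffic", "omit flow"),
        ("Backbone", "Peer"): ("outbound or multi-hop transit",
                               "identify possible ingress link(s); omit flow or compute demand"),
    }

    def classify_flow(input_type, output_type):
        """Return (classification, action) for the observed link types."""
        return CLASSIFY[(input_type, output_type)]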
  • a. Internal Traffic. It should be noted that monitoring the peering links does not capture internal traffic sent from one access link to another. For customer traffic to and from particularly important access links (e.g., to the ISP's e-mail, Web, and DNS services), this can be addressed by enabling flow-level measurements—effectively treating these connections like peering links. [0044]
  • b. Inbound Traffic. For inbound flows, traveling from a peering link to a backbone link, the above measurement methodology can be directly applied, since flow-level measurements are available from the ingress link. FIG. 4 sets forth the process of aggregating the flow records, skipping the details from FIG. 3 regarding dividing the bytes of the flow across multiple time_bins. [0045]
  • c. Transit Traffic. Transit traffic falls into two categories—single-hop and multiple-hop. A single-hop flow enters and exits the ISP backbone at the same edge router, without traversing any backbone links: in this case, the flow can be measured once, at this router. A multi-hop flow enters at one router, traverses one or more backbone links, and exits at another router. Measuring both ingress and egress traffic at the peering links, thus, results in duplicate measurements of transit traffic that travels from one provider to another; special attention is required to avoid double-counting this traffic. The best place to capture a transit flow is at its ingress link, where the above methodology can be applied. To avoid double-counting the flow, the flow records generated by multi-hop transit flows as they leave the network need to be discarded. This requires distinguishing outbound flows (introduced by an access link) from transit flows (introduced by a peering link). For a flow leaving the ISP network, the algorithm in FIG. 4 attempts to match the source IP address with a customer prefix at an access link. For transit flows, this matching process would fail, and the associated flow record would be properly excluded. [0046]
  • d. Outbound Traffic. Computing the outbound traffic demands that travel from access links to peering links is more difficult, since flow-level measurements are not available at the ingress links. The flow measurements provide two pieces of information that help to infer the access link responsible for the outbound traffic: (1) the source IP address and (2) the input/output links that observed the flow at the egress router. The source IP address indicates which customer generated the traffic (assuming the sender has not spoofed the source address). The source IP address should be matched with a customer prefix which, in turn, should be matched with a set of possible access links that could have generated the traffic. The pseudocode in FIG. 4 draws on a list, src_access_prefix_set, of the network addresses introducing traffic at access links. Each source prefix, src_prefix, can be associated with a set of ingress links, sendability (src_prefix). It should be noted that the routing forwarding tables are not sufficient for identifying the source addresses that might generate traffic on an access link. This is because Internet routing is not symmetric: traffic to and from a customer does not necessarily leave or enter the backbone on the same link. Fortunately, an ISP typically knows the IP addresses of its directly-connected customers, and, in fact, may assign IP prefixes from a larger address block belonging to the ISP. Packet filters are often used by ISPs to remove traffic with bogus source IP addresses, and these packet filters are specified in the router's configuration file, which may be accessed and parsed to determine which source prefixes to associate with each access link. From this information, the set of access links associated with each source prefix can be determined. (Where customers connect to other service providers or have downstream customers of their own, it may be preferable to perform flow-level measurements at the ingress links rather than depending on knowing the set of links where these sources could enter the ISP backbone.) [0047]
  • Information about the input and output links that measured the flow should be maintained, as this information is useful to help infer which of the access links could have generated the traffic. The algorithm in FIG. 4 results in a point-to-multipoint demand for inbound and transit flows. Each outbound flow, however, is associated with a set of ingress links, resulting in a multipoint-to-multipoint aggregate. Computing point-to-multipoint demands for outbound traffic requires an additional step to determine which access link initiated the traffic. FIG. 5 sets forth a “disambiguation” process which attempts to determine whether an outbound flow could have entered the network at a given ingress link, based on knowledge of the backbone topology and intradomain routing configuration at the time the flow was measured. Information on the possible paths from each ingress link to each egress link is obtained from a routing model that is further described below in Section 2. For a given ingress link and set of egress links, it is determined on which egress link the flow would exit the network. If this was not the egress link where the flow was observed, then this ingress link can be eliminated from consideration. Knowing the path(s) from the ingress link to the egress link provides additional information: where the path of the flow from the ingress link does not include both of the links that observed the flow (i.e. the input backbone link and the output peering link), the ingress link should again be excluded from consideration. The process should be repeated for each of the possible ingress links, as shown in FIG. 5. The process has three possible outcomes. First, a single ingress link could have generated the traffic, resulting in the ideal situation of a single point-to-multipoint demand. Second, more than one of the candidate ingress links could have generated the traffic, in which case the disambiguation process generates multiple demands, each with an equal fraction of the traffic. Third, if none of the candidate ingress links could have generated the traffic, the disambiguation process has failed and the flow record is discarded. This provides a useful consistency check on the initial processing of the flow-level data. [0048]
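  • A sketch of this disambiguation loop follows; the route(ingress, egress_links) callable is a stand-in for the routing model of Section 2 and is assumed to return the chosen egress link and the set of links on the resulting path:

    def disambiguate(candidate_ingresses, egress_links, observed_in, observed_out, route):
        """Narrow the candidate ingress links for one outbound flow (cf. FIG. 5).

        A candidate survives only if the flow it would produce exits on the
        observed egress link and traverses both links that actually measured
        the flow.
        """
        feasible = []
        for ingress in candidate_ingresses:
            chosen_egress, path_links = route(ingress, egress_links)
            if chosen_egress != observed_out:
                continue  # flow from this ingress would have exited elsewhere
            if observed_in not in path_links or observed_out not in path_links:
                continue  # path would not cross both observation points
            feasible.append(ingress)
        if not feasible:
            return []     # inconsistent record: discard the flow
        share = 1.0 / len(feasible)
        return [(ingress, share) for ingress in feasible]  # equal split when ambiguous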
  • 2. Routing Model [0049]
  • A feature of the preferred embodiment of the present invention is that it combines the network model and the traffic measurements with an accurate model of path selection. Specifically, a routing module determines the path(s) chosen by the relevant routing protocol for each traffic demand, and the load imparted on each link as the traffic flows through the network. The routing module captures the selection of shortest paths to/from multi-homed customers and peers, the splitting of traffic across multiple shortest-path routes, and the multiplexing of layer-three links over layer-two trunks. These capabilities allow a user to explore the impact of changes in the traffic demands or in the underlying network topology. [0050]
  • There are a variety of routing protocols that may be utilized with the present invention, e.g., OSPF, IS-IS, etc. For example, the OSPF protocol defines how routers within an area exchange link-state information and compute shortest paths based on the sum of the link weights. See “OSPF Version 2”, IETF RFC 2328, Network Working Group, April 1998. The link weights are static and are typically configured based on the link capacity, physical distance, and some notion of the expected traffic load. The chosen paths do not change unless a link or router failure occurs, or the OSPF parameters are reconfigured. These are rare events, particularly for the backbone links that participate in the routing protocol. As such, the routing module can consider a single instance of the network topology and OSPF configuration and need not simulate the details of the OSPF protocol, such as the flooding of link-state advertisements or the exchange of “hello” messages. The routing module can be verified by comparing the resulting paths with the router forwarding tables or traceroute experiments on an operational network. Performing the path selection computation inside the tool, rather than using the forwarding tables or traceroute results directly, facilitates experimentation with alternate OSPF configurations and different topologies. [0051]
  • When all of the backbone links reside in a single OSPF area, path selection simply involves computing the shortest paths between each pair of routers, based on the link weights. In a hierarchical network, traffic between two routers in the same area follows a shortest path within the area, even if the network has a shorter path that involves links in other areas. When traffic must travel between routers in different areas, the path depends on how much information each area has about its neighbors. The routing module can assume that the network does not summarize routing information at area boundaries. In the absence of route summarization, each border router reports the cost of the shortest path(s) to each of the other routers in the area, and the traffic between routers in different areas simply follows a shortest path without regard to the area boundaries. The routes can be computed using, for example, Dijkstra's shortest-path-first algorithm, which is well-known in the art. To limit the computational overheads, an implementation of the routing module can operate on a reduced network graph that collapses equivalent edges and nodes, and avoids recomputing distances and paths by caching intermediate results. [0052]
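  • A sketch of the shortest-path computation follows, again in Python for concreteness; it keeps every shortest-path predecessor rather than a single parent, which preserves the equal-cost ties needed for the traffic-splitting step described below:

    import heapq

    def shortest_path_dag(graph, source):
        """Dijkstra over OSPF weights, keeping all shortest-path predecessors.

        graph maps node -> list of (neighbor, weight) pairs. Recording every
        predecessor (rather than one parent) captures the equal-cost ties
        that drive the even splitting of traffic.
        """
        dist, preds = {source: 0}, {source: []}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in graph.get(u, ()):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], preds[v] = nd, [u]
                    heapq.heappush(heap, (nd, v))
                elif nd == dist[v]:
                    preds[v].append(u)  # equal-cost path: record the tie
        return dist, preds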
  • Path selection becomes more complex when there are multiple shortest paths between a pair of routers. Such ties arise very naturally when the network topology has parallel links between adjacent routers for additional capacity. Ties also surface when many of the links in the network have similar weights. This is sometimes done intentionally to increase the effective capacity between two endpoints. The presence of multiple shortest paths allows for load-balancing of the traffic between the two endpoints. This is achieved by allowing the IP forwarding table to have multiple outgoing links associated with a single entry. Rather than alternating between these links at the packet level, routers typically attempt to forward packets for the same source-destination pair along a single path; this reduces the likelihood that packets from the same TCP connection arrive out-of-order at the receiver. Load-balancing is typically achieved by performing a hash function on the source and destination IP addresses of each packet. The value of the hash function determines which outgoing link should carry the packet. [0053]
  • The details of the “tie-breaking” function can be modeled in the routing module. This, however, significantly complicates the path selection computation and would require computing traffic demands at a significantly finer level of granularity. In addition, the details of the hashing function, and how the outputs of the hash function map to particular outgoing links, are not specified by the OSPF protocol and, as such, depend on the vendor's implementation. Fortunately, these details are not important. The hash function is designed to support an even splitting of the traffic across the multiple outgoing links, especially for backbone links that carry a diverse mixture of traffic with different source and destination addresses. As such, the routing module advantageously splits traffic evenly across each of the outgoing links along a shortest path. For example, with regard to FIG. 6, if a router has two outgoing links on shortest paths, each link would carry 50% of the traffic. The division of traffic is recursive, with the downstream routers dividing the traffic across each of their outgoing links, as set forth in FIG. 6. For a more conservative estimate of the load on each link, the routing module could assume that each outgoing link carries a little more than its fair share of the traffic, by applying a multiplicative factor. [0054]
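  • The recursive even split can be sketched as follows; next_hops is assumed to hold, for each node, the outgoing links that lie on shortest paths toward the destination (derivable from the predecessor sets above), and a production version would memoize per-node totals rather than recurse per path:

    def split_load(node, demand, next_hops, load):
        """Divide demand evenly over the shortest-path outgoing links at each hop.

        next_hops maps a node to the (link, neighbor) pairs on shortest paths
        toward the destination; load accumulates traffic per link. Mirrors the
        recursive 50/50 division of FIG. 6 for two outgoing links.
        """
        hops = next_hops.get(node, [])
        if not hops:
            return  # destination reached
        share = demand / len(hops)
        for link, neighbor in hops:
            load[link] = load.get(link, 0.0) + share
            split_load(neighbor, share, next_hops, load)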
  • Using the traffic demands described in the previous section, the routing module can operate on a set of demands, each traveling from one peering or access link to a set of access or peering links. The module computes the set of shortest-path routes based on the topology and the OSPF configuration, and determines how the demand splits across the multiple paths. Repeating this process for each demand results in an estimate of the load imparted on each link. Then, the routing module determines the load on each trunk (layer-two link) by summing across the associated layer-three links. The generality of the routing model facilitates experiments with alternate topologies and OSPF configurations, as illustrated in the next section. It also supports experimentation with the BGP policies for outbound traffic, by changing the sets of peering links associated with external network addresses. [0055]
  • 3. Visualization [0056]
  • A graphical user interface, such as the one set forth in FIGS. 7 through 11, can be used to provide an efficient visualization environment with many ways to explore the data in the data model. As set forth above, each router and link is modeled with a data object. FIG. 7 sets forth an example of an information panel which displays attributes for objects of a given type. The information panel in FIG. 7 permits a user to quickly scroll through a list of links and see the corresponding attributes to the selected link in the bottom part of the panel, along with corresponding physical links in the right hand box in the panel. [0057]
  • It is useful to permit the data model to associate statistical information with objects. Each statistic need be no more than simply a value for each object of some type. For example, a link utilization statistic, which is a percentage associated with each link, can be calculated and displayed as set forth in FIG. 8. There should be no restriction on how many statistics can be associated with an object type. Thus, link utilization statistics can be stored for periods of time and can be used to create histograms, scatter plots, tables, etc. The color or size of the object when displayed can be utilized to reflect the statistic, thereby providing a visual representation of the statistic, e.g. coloring links with high utilization as thick and red. [0058]
  • It is advantageous to include some search facility permitting queries on objects, as set forth in FIG. 9. The “Find Link” user interface in FIG. 9 permits arbitrary expression searches involving statistics, object fields, special filters, etc. [0059]
  • FIGS. 10 and 11 set forth the basic display of the router and link data objects graphically superimposed over a map of the relevant geography. Given the large numbers of nodes and links that may need to be displayed, it is helpful to permit the user to choose which sets of objects to display as well as to define different layers of aggregation and abstraction, e.g. combining routers into complexes, aggregating parallel links, etc. The display in FIGS. 10 and 11 has a user interface that recognizes the notion of a current object for each type. An object becomes the current object either when it is selected or when the mouse is moved on top of it graphically. For example, a user whose mouse pointer hovers over a particular link displayed in FIG. 10 would cause the associated Link Panel window and Link Statistics window to change to information regarding that particular link. [0060]
  • It is advantageous for the visualization module to permit changes to the data model “on the fly” such as modifications to an OSPF weight of a link in the network. Then the software can use the routing module to automatically recalculate all routes for all active traffic demands, and update all relevant statistics that are based upon the traffic including link load and utilization. It is also helpful to maintain at least two different sets of weights, one that can be manipulated and one that can act as an anchor or baseline. [0061]
  • The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. For example, the detailed description described the present invention in the context of the Internet and IP-based backbone networks. However, the principles of the present invention could be extended to other types of packet-switched networks. Such an extension could be readily implemented by one of ordinary skill in the art given the above disclosure. [0062]

Claims (34)

What is claimed is:
1. A computer readable medium containing executable program instructions for performing a method on a computer connected to a network comprising the steps of:
receiving network topology information as an input;
receiving network traffic demand information as an input;
constructing a data model of a packet-switched network from the network topology information and network traffic demand information wherein the data model further comprises data objects for network nodes, network links, and for network traffic demands; and
constructing a routing model wherein the data objects for network nodes, network links, and for network traffic demands are utilized to simulate network traffic in the packet-switched network.
2. The computer readable medium of claim 1 wherein the network topology information is derived from data obtained from an operational packet-switched network.
3. The computer readable medium of claim 2 wherein the data is extracted from router configuration files.
4. The computer readable medium of claim 2 wherein the data is extracted utilizing end-to-end query mechanisms.
5. The computer readable medium of claim 1 wherein the network topology information is derived from a proposed topology design.
6. The computer readable medium of claim 1 wherein the network traffic demand information is derived from data obtained from an operational packet-switched network.
7. The computer readable medium of claim 6 wherein the data is extracted from traffic measurements collected at ingress routers.
8. The computer readable medium of claim 7 wherein the traffic measurements are made between an ingress link and a set of egress links.
9. The computer readable medium of claim 8 wherein the traffic measurements are collected by associating one or more destination network addresses with the set of egress links.
10. The computer readable medium of claim 9 wherein the set of egress links is identified by extracting reachability information from network forwarding tables.
11. The computer readable medium of claim 9 wherein the set of egress links is identified by extracting reachability information from BGP tables.
12. The computer readable medium of claim 9 wherein the set of egress links is identified by extracting reachability information from network configuration files.
13. The computer readable medium of claim 1 wherein the network traffic demand information is derived from estimates of projected network traffic demand.
14. The computer readable medium of claim 1 wherein the network traffic demand information is derived from customer subscription information.
15. The computer readable medium of claim 1 further comprising the step of providing an interface to the data model that graphically displays the network nodes, network links and network traffic calculated by the routing model.
16. The computer readable medium of claim 1 wherein the routing model simulates the OSPF routing protocol.
17. The computer readable medium of claim 1 wherein the routing model simulates the IS-IS routing protocol.
18. A method of traffic engineering in a packet-switched network comprising the steps of:
retrieving network topology information;
retrieving traffic measurement information;
constructing a data model of a packet-switched network from the network topology information and network traffic information wherein the data model further comprises data objects for network nodes, network links, and for network traffic demands; and
constructing a routing model wherein the data objects for network nodes, network links, and for network traffic demands are utilized to simulate network traffic in the packet-switched network.
19. The method of claim 18 wherein the network topology information is derived from data obtained from an operational packet-switched network.
20. The method of claim 19 wherein the data is extracted from router configuration files.
21. The method of claim 19 wherein the data is extracted utilizing end-to-end query mechanisms.
22. The method of claim 18 wherein the network topology information is derived from a proposed topology design.
23. The method of claim 18 wherein the network traffic demand information is derived from data obtained from an operational packet-switched network.
24. The method of claim 23 wherein the data is extracted from traffic measurements collected at ingress routers.
25. The method of claim 24 wherein the traffic measurements are made between an ingress link and a set of egress links.
26. The method of claim 25 wherein the traffic measurements are collected by associating one or more destination network addresses with the set of egress links.
27. The method of claim 26 wherein the set of egress links is identified by extracting reachability information from network forwarding tables.
28. The method of claim 26 wherein the set of egress links is identified by extracting reachability information from BGP tables.
29. The method of claim 26 wherein the set of egress links is identified by extracting reachability information from network configuration files.
30. The method of claim 18 wherein the network traffic demand information is derived from estimates of projected network traffic demand.
31. The method of claim 18 wherein the network traffic demand information is derived from customer subscription information.
32. The method of claim 18 further comprising the step of providing an interface to the data model that graphically displays the network nodes, network links and network traffic calculated by the routing model.
33. The method of claim 18 wherein the routing model simulates the OSPF routing protocol.
34. The method of claim 18 wherein the routing model simulates the IS-IS routing protocol.
US09/876,384 2000-04-21 2001-06-07 Traffic engineering system and method Abandoned US20020103631A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/876,384 US20020103631A1 (en) 2000-04-21 2001-06-07 Traffic engineering system and method

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US19909100P 2000-04-21 2000-04-21
US66152700A 2000-09-13 2000-09-13
US09/876,384 US20020103631A1 (en) 2000-04-21 2001-06-07 Traffic engineering system and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US66152700A Continuation 2000-04-21 2000-09-13

Publications (1)

Publication Number Publication Date
US20020103631A1 true US20020103631A1 (en) 2002-08-01

Family

ID=26894461

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/876,384 Abandoned US20020103631A1 (en) 2000-04-21 2001-06-07 Traffic engineering system and method
US09/876,383 Expired - Fee Related US7027448B2 (en) 2000-04-21 2001-06-07 System and method for deriving traffic demands for a packet-switched network
US11/235,491 Expired - Lifetime US7796619B1 (en) 2000-04-21 2005-09-26 System and method for deriving traffic demands for a packet-switched network

Family Applications After (2)

Application Number Title Priority Date Filing Date
US09/876,383 Expired - Fee Related US7027448B2 (en) 2000-04-21 2001-06-07 System and method for deriving traffic demands for a packet-switched network
US11/235,491 Expired - Lifetime US7796619B1 (en) 2000-04-21 2005-09-26 System and method for deriving traffic demands for a packet-switched network

Country Status (1)

Country Link
US (3) US20020103631A1 (en)

Cited By (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020052937A1 (en) * 2000-11-02 2002-05-02 Microsoft Corporation Method and apparatus for verifying the contents of a global configuration file
US20020184362A1 (en) * 2001-05-31 2002-12-05 International Business Machines Corporation System and method for extending server security through monitored load management
US20030002441A1 (en) * 2001-06-27 2003-01-02 International Business Machines Corporation Reduction of server overload
US20030084157A1 (en) * 2001-10-26 2003-05-01 Hewlett Packard Company Tailorable optimization using model descriptions of services and servers in a computing environment
US20030084156A1 (en) * 2001-10-26 2003-05-01 Hewlett-Packard Company Method and framework for generating an optimized deployment of software applications in a distributed computing environment using layered model descriptions of services and servers
US20030128668A1 (en) * 2002-01-04 2003-07-10 Yavatkar Rajendra S. Distributed implementation of control protocols in routers and switches
US20030236822A1 (en) * 2002-06-10 2003-12-25 Sven Graupner Generating automated mappings of service demands to server capacites in a distributed computer system
US20040013113A1 (en) * 2002-07-17 2004-01-22 Ranjeeta Singh Technique to improve network routing using best-match and exact-match techniques
US20040032856A1 (en) * 2002-02-11 2004-02-19 Sandstrom Mark Henrik Transparent, look-up-free packet forwarding method for optimizing global network throughput based on real-time route status
US20040039839A1 (en) * 2002-02-11 2004-02-26 Shivkumar Kalyanaraman Connectionless internet traffic engineering framework
WO2004023719A2 (en) * 2002-09-09 2004-03-18 Sheer Networks Inc. Root cause correlation in connectionless networks
WO2004040858A1 (en) * 2002-11-01 2004-05-13 Nokia Corporation Dynamic load distribution using local state information
WO2004051932A2 (en) 2002-11-29 2004-06-17 Marconi Intellectual Property (Ringfence) Inc. Route objects in telecommunications networks
US20050154758A1 (en) * 2004-01-08 2005-07-14 International Business Machines Corporation Method and apparatus for supporting transactions
US20050154776A1 (en) * 2004-01-08 2005-07-14 International Business Machines Corporation Method and apparatus for non-invasive discovery of relationships between nodes in a network
US20060080462A1 (en) * 2004-06-04 2006-04-13 Asnis James D System for Meta-Hop routing
US20060087989A1 (en) * 2004-10-22 2006-04-27 Cisco Technology, Inc., A Corporation Of California Network device architecture for consolidating input/output and reducing latency
US7039705B2 (en) * 2001-10-26 2006-05-02 Hewlett-Packard Development Company, L.P. Representing capacities and demands in a layered computing environment using normalized values
US20060101140A1 (en) * 2004-10-22 2006-05-11 Cisco Technology, Inc. Ethernet extension for the data center
US20060098589A1 (en) * 2004-10-22 2006-05-11 Cisco Technology, Inc. Forwarding table reduction and multipath network forwarding
US20060129672A1 (en) * 2001-03-27 2006-06-15 Redseal Systems, Inc., A Corporation Of Delaware Method and apparatus for network wide policy-based analysis of configurations of devices
US20060129670A1 (en) * 2001-03-27 2006-06-15 Redseal Systems, Inc. Method and apparatus for network wide policy-based analysis of configurations of devices
US20060174154A1 (en) * 2005-01-28 2006-08-03 Cariden Technologies, Inc. Method and system for communicating predicted network behavior between interconnected networks
US20060198302A1 (en) * 2005-03-03 2006-09-07 Sofman Lev B Traffic dimensioning in a metro area with IPTV architecture
US20060251067A1 (en) * 2004-10-22 2006-11-09 Cisco Technology, Inc., A Corporation Of California Fibre channel over ethernet
US20060271857A1 (en) * 2005-05-12 2006-11-30 David Rosenbluth Imaging system for network traffic data
US20060288296A1 (en) * 2005-05-12 2006-12-21 David Rosenbluth Receptor array for managing network traffic data
US20060291445A1 (en) * 2005-06-14 2006-12-28 Luca Martini Method for auto-routing of multi-hop pseudowires
US20070076615A1 (en) * 2005-10-03 2007-04-05 The Hong Kong University Of Science And Technology Non-Blocking Destination-Based Routing Networks
US20070081454A1 (en) * 2005-10-11 2007-04-12 Cisco Technology, Inc. A Corporation Of California Methods and devices for backward congestion notification
EP1777875A1 (en) 2005-10-21 2007-04-25 Hewlett-Packard Development Company, L.P. Graphical arrangement of IT network components
US20070195701A1 (en) * 2003-09-03 2007-08-23 Michael Menth Simple And Resource-Efficient Resilient Network Systems
WO2007111814A2 (en) 2006-03-23 2007-10-04 Cisco Technology, Inc. Method and application tool for dynamically navigating a user customizable representation of a network device configuration
US20070250640A1 (en) * 2006-04-24 2007-10-25 Cisco Technology, Inc. Method and apparatus for assigning Ipv6 link state identifiers
US20070280113A1 (en) * 2006-06-02 2007-12-06 Opnet Technologies, Inc. Traffic flow inference based on link loads and gravity measures
US20090147703A1 (en) * 2005-10-26 2009-06-11 James Wesley Bemont Method for Efficiently Retrieving Topology-Specific Data for Point-to-Point Networks
US20100131672A1 (en) * 2008-11-25 2010-05-27 Jeyhan Karaoguz MULTIPLE PATHWAY SESSION SETUP TO SUPPORT QoS SERVICES
US7805287B1 (en) 2003-06-05 2010-09-28 Verizon Laboratories Inc. Node emulator
WO2010115096A2 (en) * 2009-04-02 2010-10-07 University Of Florida Research Foundation Inc. System, method, and media for network traffic measurement on high-speed routers
US7844432B1 (en) * 2003-06-05 2010-11-30 Verizon Laboratories Inc. Node emulator
US20110126108A1 (en) * 2001-12-13 2011-05-26 Luc Beaudoin Overlay View Method and System for Representing Network Topology
US8121038B2 (en) 2007-08-21 2012-02-21 Cisco Technology, Inc. Backward congestion notification
US8149710B2 (en) 2007-07-05 2012-04-03 Cisco Technology, Inc. Flexible and hierarchical dynamic buffer allocation
US8160094B2 (en) 2004-10-22 2012-04-17 Cisco Technology, Inc. Fibre channel over ethernet
US8259720B2 (en) * 2007-02-02 2012-09-04 Cisco Technology, Inc. Triple-tier anycast addressing
US8289878B1 (en) 2007-05-09 2012-10-16 Sprint Communications Company L.P. Virtual link mapping
US8301762B1 (en) * 2009-06-08 2012-10-30 Sprint Communications Company L.P. Service grouping for network reporting
US8355316B1 (en) 2009-12-16 2013-01-15 Sprint Communications Company L.P. End-to-end network monitoring
US20130125235A1 (en) * 2011-11-14 2013-05-16 Kddi Corporation Method, Apparatus and Program for Detecting Spoofed Network Traffic
US20130132542A1 (en) * 2011-11-18 2013-05-23 Telefonktiebolaget L M Ericsson (Publ) Method and System for Effective BGP AS-Path Pre-pending
US8458323B1 (en) 2009-08-24 2013-06-04 Sprint Communications Company L.P. Associating problem tickets based on an integrated network and customer database
US20130159864A1 (en) * 2006-07-06 2013-06-20 John Kei Smith System for Network Flow Visualization through Network Devices within Network Topology
US8644146B1 (en) 2010-08-02 2014-02-04 Sprint Communications Company L.P. Enabling user defined network change leveraging as-built data
US8891379B2 (en) 2006-06-02 2014-11-18 Riverbed Technology, Inc. Traffic flow inference based on link loads and gravity measures
US20150304206A1 (en) * 2014-04-17 2015-10-22 Cisco Technology, Inc. Segment routing - egress peer engineering (sp-epe)
US9305029B1 (en) 2011-11-25 2016-04-05 Sprint Communications Company L.P. Inventory centric knowledge management
WO2016095410A1 (en) * 2014-12-19 2016-06-23 中兴通讯股份有限公司 Link traffic distributing method and device
US9467362B1 (en) 2014-02-10 2016-10-11 Google Inc. Flow utilization metrics
US9553794B1 (en) * 2013-01-10 2017-01-24 Google Inc. Traffic engineering for network usage optimization
US10212076B1 (en) 2012-12-27 2019-02-19 Sitting Man, Llc Routing methods, systems, and computer program products for mapping a node-scope specific identifier
US10367737B1 (en) 2012-12-27 2019-07-30 Sitting Man, Llc Routing methods, systems, and computer program products
US10374938B1 (en) 2012-12-27 2019-08-06 Sitting Man, Llc Routing methods, systems, and computer program products
US10397101B1 (en) 2012-12-27 2019-08-27 Sitting Man, Llc Routing methods, systems, and computer program products for mapping identifiers
US10397100B1 (en) 2012-12-27 2019-08-27 Sitting Man, Llc Routing methods, systems, and computer program products using a region scoped outside-scope identifier
US10404582B1 (en) 2012-12-27 2019-09-03 Sitting Man, Llc Routing methods, systems, and computer program products using an outside-scope indentifier
US10402765B1 (en) 2015-02-17 2019-09-03 Sprint Communications Company L.P. Analysis for network management using customer provided information
US10404583B1 (en) 2012-12-27 2019-09-03 Sitting Man, Llc Routing methods, systems, and computer program products using multiple outside-scope identifiers
US10411997B1 (en) 2012-12-27 2019-09-10 Sitting Man, Llc Routing methods, systems, and computer program products for using a region scoped node identifier
US10411998B1 (en) 2012-12-27 2019-09-10 Sitting Man, Llc Node scope-specific outside-scope identifier-equipped routing methods, systems, and computer program products
US20190277652A1 (en) * 2017-06-12 2019-09-12 Duel S.R.L. Data processing method for synthesizing in real time customized traffic information
US10419335B1 (en) 2012-12-27 2019-09-17 Sitting Man, Llc Region scope-specific outside-scope indentifier-equipped routing methods, systems, and computer program products
US10419334B1 (en) 2012-12-27 2019-09-17 Sitting Man, Llc Internet protocol routing methods, systems, and computer program products
US10447575B1 (en) 2012-12-27 2019-10-15 Sitting Man, Llc Routing methods, systems, and computer program products
US10462017B2 (en) * 2017-10-13 2019-10-29 Fujitsu Limited Network property verification in hybrid networks
US10476787B1 (en) 2012-12-27 2019-11-12 Sitting Man, Llc Routing methods, systems, and computer program products
US10587505B1 (en) 2012-12-27 2020-03-10 Sitting Man, Llc Routing methods, systems, and computer program products
US20210152462A1 (en) * 2017-09-21 2021-05-20 Silver Peak Systems, Inc. Selective routing
US11127079B2 (en) * 2018-07-06 2021-09-21 Michael Arthur Brown Computer simulated monetary allocation using network of operator data objects
US20210352021A1 (en) * 2015-12-10 2021-11-11 Microsoft Technology Licensing, Llc Data driven automated provisioning of telecommunication applications
CN114745174A (en) * 2022-04-11 2022-07-12 中国南方电网有限责任公司 Access verification system and method for power grid equipment

Families Citing this family (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7529856B2 (en) * 1997-03-05 2009-05-05 At Home Corporation Delivering multimedia services
US6370571B1 (en) 1997-03-05 2002-04-09 At Home Corporation System and method for delivering high-performance online multimedia services
US6912222B1 (en) * 1997-09-03 2005-06-28 Internap Network Services Corporation Private network access point router for interconnecting among internet route providers
US6985963B1 (en) * 2000-08-23 2006-01-10 At Home Corporation Sharing IP network resources
US8677016B1 (en) * 2000-10-16 2014-03-18 Packet Design, Llc System and method for identifying network topology information
US7349994B2 (en) 2000-10-17 2008-03-25 Avaya Technology Corp. Method and apparatus for coordinating routing parameters via a back-channel communication medium
US8023421B2 (en) * 2002-07-25 2011-09-20 Avaya Inc. Method and apparatus for the assessment and optimization of network traffic
US7756032B2 (en) * 2000-10-17 2010-07-13 Avaya Inc. Method and apparatus for communicating data within measurement traffic
AU2002213287A1 (en) 2000-10-17 2002-04-29 Routescience Technologies Inc Method and apparatus for performance and cost optimization in an internetwork
US7080161B2 (en) * 2000-10-17 2006-07-18 Avaya Technology Corp. Routing information exchange
US7720959B2 (en) * 2000-10-17 2010-05-18 Avaya Inc. Method and apparatus for characterizing the quality of a network path
US6900822B2 (en) * 2001-03-14 2005-05-31 Bmc Software, Inc. Performance and flow analysis method for communication networks
US7139242B2 (en) * 2001-03-28 2006-11-21 Proficient Networks, Inc. Methods, apparatuses and systems facilitating deployment, support and configuration of network routing policies
EP1271844B1 (en) * 2001-06-21 2009-12-09 SK Telecom Co.,Ltd. Route determining method in a multi protocol label switching network
US6744729B2 (en) * 2001-08-17 2004-06-01 Interactive Sapience Corp. Intelligent fabric
US8199647B2 (en) * 2001-09-20 2012-06-12 Nokia Siemens Networks Gmbh & Co. Kg Data transmission in a packet-oriented communication network
US7743139B1 (en) * 2001-10-30 2010-06-22 At&T Intellectual Property Ii, L.P. Method of provisioning a packet network for handling incoming traffic demands
US7617302B1 (en) * 2001-11-02 2009-11-10 Nortel Networks Limited Communication networks
US7126970B2 (en) * 2001-12-20 2006-10-24 Tropic Networks Inc. Communication system with balanced transmission bandwidth
US7027396B1 (en) 2002-02-13 2006-04-11 At&T Corp. Traffic matrix computation for a backbone network supporting virtual private networks
US7047496B2 (en) * 2002-03-20 2006-05-16 Tropic Networks Inc. Method for visualization of optical network topology
ITTO20020762A1 (en) * 2002-09-02 2004-03-03 Telecom Italia Lab Spa PROCEDURE AND SYSTEM FOR REALIZING CONNECTIVITY ESTIMATES
US7307961B2 (en) * 2002-09-25 2007-12-11 At&T Knowledge Ventures, L.P. Traffic modeling for packet data communications system dimensioning
WO2004107677A1 (en) * 2003-06-03 2004-12-09 Siemens Aktiengesellschaft Method for distributing traffic using hash-codes corresponding to a desired traffic distribution in a packet-oriented network comprising multipath routing
US8782654B2 (en) 2004-03-13 2014-07-15 Adaptive Computing Enterprises, Inc. Co-allocating a reservation spanning different compute resources types
US20070266388A1 (en) 2004-06-18 2007-11-15 Cluster Resources, Inc. System and method for providing advanced reservations in a compute environment
US8176490B1 (en) 2004-08-20 2012-05-08 Adaptive Computing Enterprises, Inc. System and method of interfacing a workload manager and scheduler with an identity manager
US7912055B1 (en) * 2004-08-25 2011-03-22 Emc Corporation Method and apparatus for configuration and analysis of network multicast routing protocols
WO2006029399A2 (en) 2004-09-09 2006-03-16 Avaya Technology Corp. Methods of and systems for network traffic security
US7222149B2 (en) * 2004-09-17 2007-05-22 Microsoft Corporation Ordering decision nodes in distributed decision making
US7400585B2 (en) * 2004-09-23 2008-07-15 International Business Machines Corporation Optimal interconnect utilization in a data processing network
CA2827035A1 (en) 2004-11-08 2006-05-18 Adaptive Computing Enterprises, Inc. System and method of providing system jobs within a compute environment
US20060106919A1 (en) * 2004-11-12 2006-05-18 David Watkinson Communication traffic control rule generation methods and systems
US7333501B2 (en) * 2005-01-14 2008-02-19 Cisco Technology, Inc. Techniques for determining network nodes to represent, multiple subnetworks for a routing protocol
US9075657B2 (en) 2005-04-07 2015-07-07 Adaptive Computing Enterprises, Inc. On-demand access to compute resources
US8863143B2 (en) 2006-03-16 2014-10-14 Adaptive Computing Enterprises, Inc. System and method for managing a hybrid compute environment
US9231886B2 (en) 2005-03-16 2016-01-05 Adaptive Computing Enterprises, Inc. Simple integration of an on-demand compute environment
US9015324B2 (en) 2005-03-16 2015-04-21 Adaptive Computing Enterprises, Inc. System and method of brokering cloud computing resources
WO2008036058A2 (en) 2005-03-16 2008-03-27 Cluster Resources, Inc. On-demand computing environment
US8782120B2 (en) 2005-04-07 2014-07-15 Adaptive Computing Enterprises, Inc. Elastic management of compute resources between a web server and an on-demand compute environment
US8155126B1 (en) * 2005-06-03 2012-04-10 AT&T Intellectual Property II, L.P. Method and apparatus for inferring network paths
US7889655B2 (en) * 2006-01-17 2011-02-15 Cisco Technology, Inc. Techniques for detecting loop-free paths that cross routing information boundaries
US7716586B2 (en) * 2006-02-17 2010-05-11 International Business Machines Corporation Apparatus, system, and method for progressively disclosing information in support of information technology system visualization and management
US7609672B2 (en) * 2006-08-29 2009-10-27 Cisco Technology, Inc. Method and apparatus for automatic sub-division of areas that flood routing information
US7899005B2 (en) * 2006-09-12 2011-03-01 Cisco Technology, Inc. Method and apparatus for passing routing information among mobile routers
US8009591B2 (en) * 2006-11-30 2011-08-30 Cisco Technology, Inc. Automatic overlapping areas that flood routing information
US8345565B2 (en) * 2007-01-16 2013-01-01 NXP B.V. Method and system for operating a wireless access point in the presence of bursty interference
US7733798B2 (en) * 2007-08-28 2010-06-08 Cisco Technology, Inc. Evaluation of network data aggregation
US8041773B2 (en) 2007-09-24 2011-10-18 The Research Foundation Of State University Of New York Automatic clustering for self-organizing grids
US7936732B2 (en) * 2007-09-27 2011-05-03 Cisco Technology, Inc. Selecting aggregation nodes in a network
US8477772B2 (en) 2008-12-16 2013-07-02 AT&T Intellectual Property I, L.P. System and method for determination of routing information in a network
US11720290B2 (en) 2009-10-30 2023-08-08 III Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US10877695B2 (en) 2009-10-30 2020-12-29 III Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US8767799B2 (en) * 2011-04-12 2014-07-01 Alcatel Lucent Method and apparatus for determining signal-to-noise ratio
CN104519509A (en) * 2013-09-29 2015-04-15 Sony Corporation Wireless network monitoring device in wireless communication system, method used in wireless communication system and device in wireless communication system
US9641411B1 (en) 2013-12-12 2017-05-02 Google Inc. Estimating latent demand with user prioritization
US9853882B2 (en) * 2014-07-23 2017-12-26 Cisco Technology, Inc. Dynamic path switchover decision override based on flow characteristics
WO2016118498A1 (en) * 2015-01-20 2016-07-28 Tata Communications (America) Inc. Service dependent ip addresses
US10917334B1 (en) * 2017-09-22 2021-02-09 Amazon Technologies, Inc. Network route expansion
US11323374B2 (en) 2019-09-25 2022-05-03 Juniper Networks, Inc. Visualizing network traffic plans based on egress peer engineering
TWI739635B (en) * 2020-10-20 2021-09-11 National Yang Ming Chiao Tung University Reliability evaluation method of multi-state distributed network system

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5737525A (en) * 1992-05-12 1998-04-07 Compaq Computer Corporation Network packet switch using shared memory for repeating and bridging packets at media rate
US5881268A (en) * 1996-03-14 1999-03-09 International Business Machines Corporation Comparative performance modeling for distributed object oriented applications
US6014697A (en) * 1994-10-25 2000-01-11 Cabletron Systems, Inc. Method and apparatus for automatically populating a network simulator tool
US6061331A (en) * 1998-07-28 2000-05-09 GTE Laboratories Incorporated Method and apparatus for estimating source-destination traffic in a packet-switched communications network
US6128214A (en) * 1999-03-29 2000-10-03 Hewlett-Packard Molecular wire crossbar memory
US6259679B1 (en) * 1996-02-22 2001-07-10 MCI Communications Corporation Network management system
US6363056B1 (en) * 1998-07-15 2002-03-26 International Business Machines Corporation Low overhead continuous monitoring of network performance
US6459517B1 (en) * 1999-02-02 2002-10-01 International Business Machines Corporation Enhanced electromagnetic interference shield
US6477572B1 (en) * 1998-12-17 2002-11-05 International Business Machines Corporation Method for displaying a network topology for a task deployment service
US6560204B1 (en) * 1998-05-13 2003-05-06 Telcordia Technologies, Inc. Method of estimating call level traffic intensity based on channel link measurements
US6614763B1 (en) * 1999-02-04 2003-09-02 Fujitsu Limited Method of and apparatus for measuring network communication performances, as well as computer readable record medium having network communication performance measuring program stored therein
US6665714B1 (en) * 1999-06-30 2003-12-16 EMC Corporation Method and apparatus for determining an identity of a network device
US6728214B1 (en) * 1999-07-28 2004-04-27 Lucent Technologies Inc. Testing of network routers under given routing protocols
US6738349B1 (en) * 2000-03-01 2004-05-18 Tektronix, Inc. Non-intrusive measurement of end-to-end network properties
US6795399B1 (en) * 1998-11-24 2004-09-21 Lucent Technologies Inc. Link capacity computation methods and apparatus for designing IP networks with performance guarantees
US6810211B1 (en) * 1999-09-08 2004-10-26 Alcatel Preferred WDM packet-switched router architecture and method for generating same

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5274625A (en) * 1992-09-10 1993-12-28 International Business Machines Corporation Traffic measurements in packet communications networks
US6209033B1 (en) * 1995-02-01 2001-03-27 Cabletron Systems, Inc. Apparatus and method for network capacity evaluation and planning
US6058102A (en) * 1997-11-07 2000-05-02 Visual Networks Technologies, Inc. Method and apparatus for performing service level analysis of communications network performance metrics
US6363077B1 (en) * 1998-02-13 2002-03-26 Broadcom Corporation Load balancing in link aggregation and trunking
US6631136B1 (en) * 1998-08-26 2003-10-07 Hypercom Corporation Methods and apparatus for data communication using a hybrid transport switching protocol
US6549517B1 (en) * 1998-12-11 2003-04-15 Nortel Networks Limited Explicit rate computation for flow control in computer networks
US6711172B1 (en) * 1999-08-02 2004-03-23 Nortel Networks Corp. Network packet routing
US7151775B1 (en) * 1999-09-23 2006-12-19 Pluris, Inc. Apparatus and method for forwarding data on multiple label-switched data paths
US6954739B1 (en) * 1999-11-16 2005-10-11 Lucent Technologies Inc. Measurement-based management method for packet communication networks
US6738350B1 (en) * 2000-03-16 2004-05-18 Hughes Electronics Corporation Congestion avoidance approach for a switching communication system with transmission constraints

Cited By (158)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020052937A1 (en) * 2000-11-02 2002-05-02 Microsoft Corporation Method and apparatus for verifying the contents of a global configuration file
US6892231B2 (en) * 2000-11-02 2005-05-10 Microsoft Corporation Method and apparatus for verifying the contents of a global configuration file
US20060129670A1 (en) * 2001-03-27 2006-06-15 Redseal Systems, Inc. Method and apparatus for network wide policy-based analysis of configurations of devices
US8135815B2 (en) 2001-03-27 2012-03-13 Redseal Systems, Inc. Method and apparatus for network wide policy-based analysis of configurations of devices
US20060129672A1 (en) * 2001-03-27 2006-06-15 Redseal Systems, Inc., A Corporation Of Delaware Method and apparatus for network wide policy-based analysis of configurations of devices
US20020184362A1 (en) * 2001-05-31 2002-12-05 International Business Machines Corporation System and method for extending server security through monitored load management
US20030002441A1 (en) * 2001-06-27 2003-01-02 International Business Machines Corporation Reduction of server overload
US7009938B2 (en) * 2001-06-27 2006-03-07 International Business Machines Corporation Reduction of server overload
US7039705B2 (en) * 2001-10-26 2006-05-02 Hewlett-Packard Development Company, L.P. Representing capacities and demands in a layered computing environment using normalized values
US7054934B2 (en) * 2001-10-26 2006-05-30 Hewlett-Packard Development Company, L.P. Tailorable optimization using model descriptions of services and servers in a computing environment
US7035930B2 (en) * 2001-10-26 2006-04-25 Hewlett-Packard Development Company, L.P. Method and framework for generating an optimized deployment of software applications in a distributed computing environment using layered model descriptions of services and servers
US20030084156A1 (en) * 2001-10-26 2003-05-01 Hewlett-Packard Company Method and framework for generating an optimized deployment of software applications in a distributed computing environment using layered model descriptions of services and servers
US20030084157A1 (en) * 2001-10-26 2003-05-01 Hewlett Packard Company Tailorable optimization using model descriptions of services and servers in a computing environment
US20110126108A1 (en) * 2001-12-13 2011-05-26 Luc Beaudoin Overlay View Method and System for Representing Network Topology
US20030128668A1 (en) * 2002-01-04 2003-07-10 Yavatkar Rajendra S. Distributed implementation of control protocols in routers and switches
US7254138B2 (en) * 2002-02-11 2007-08-07 Optimum Communications Services, Inc. Transparent, look-up-free packet forwarding method for optimizing global network throughput based on real-time route status
US20040039839A1 (en) * 2002-02-11 2004-02-26 Shivkumar Kalyanaraman Connectionless internet traffic engineering framework
US20040032856A1 (en) * 2002-02-11 2004-02-19 Sandstrom Mark Henrik Transparent, look-up-free packet forwarding method for optimizing global network throughput based on real-time route status
US7072960B2 (en) * 2002-06-10 2006-07-04 Hewlett-Packard Development Company, L.P. Generating automated mappings of service demands to server capacities in a distributed computer system
US20030236822A1 (en) * 2002-06-10 2003-12-25 Sven Graupner Generating automated mappings of service demands to server capacities in a distributed computer system
US20040013113A1 (en) * 2002-07-17 2004-01-22 Ranjeeta Singh Technique to improve network routing using best-match and exact-match techniques
US7039018B2 (en) * 2002-07-17 2006-05-02 Intel Corporation Technique to improve network routing using best-match and exact-match techniques
US7373563B2 (en) 2002-09-09 2008-05-13 Sheer Networks Inc. Root cause correlation in connectionless networks
WO2004023719A3 (en) * 2002-09-09 2004-05-06 Sheer Networks Inc Root cause correlation in connectionless networks
WO2004023719A2 (en) * 2002-09-09 2004-03-18 Sheer Networks Inc. Root cause correlation in connectionless networks
US20040203827A1 (en) * 2002-11-01 2004-10-14 Andreas Heiner Dynamic load distribution using local state information
WO2004040858A1 (en) * 2002-11-01 2004-05-13 Nokia Corporation Dynamic load distribution using local state information
US7280482B2 (en) 2002-11-01 2007-10-09 Nokia Corporation Dynamic load distribution using local state information
EP1570602A2 (en) * 2002-11-29 2005-09-07 Marconi Intellectual Property (Ringfence) Inc. Route objects in telecommunications networks
WO2004051932A2 (en) 2002-11-29 2004-06-17 Marconi Intellectual Property (Ringfence) Inc. Route objects in telecommunications networks
US20060069741A1 (en) * 2002-11-29 2006-03-30 Morris Stephen B Route objects in telecommunications networks
US7844432B1 (en) * 2003-06-05 2010-11-30 Verizon Laboratories Inc. Node emulator
US7805287B1 (en) 2003-06-05 2010-09-28 Verizon Laboratories Inc. Node emulator
US20100074103A1 (en) * 2003-09-03 2010-03-25 Nokia Siemens Networks GmbH & Co. KG Simple and resource-efficient resilient network systems
US8279750B2 (en) * 2003-09-03 2012-10-02 Nokia Siemens Networks GmbH & Co. KG Simple and resource-efficient resilient network systems
US20070195701A1 (en) * 2003-09-03 2007-08-23 Michael Menth Simple And Resource-Efficient Resilient Network Systems
US20080002596A1 (en) * 2004-01-08 2008-01-03 Childress Rhonda L Method and apparatus for non-invasive discovery of relationships between nodes in a network
US8738804B2 (en) * 2004-01-08 2014-05-27 International Business Machines Corporation Supporting transactions in a data network using router information
US20050154758A1 (en) * 2004-01-08 2005-07-14 International Business Machines Corporation Method and apparatus for supporting transactions
US20050154776A1 (en) * 2004-01-08 2005-07-14 International Business Machines Corporation Method and apparatus for non-invasive discovery of relationships between nodes in a network
US7733806B2 (en) 2004-01-08 2010-06-08 International Business Machines Corporation Method and apparatus for non-invasive discovery of relationships between nodes in a network
US8578016B2 (en) 2004-01-08 2013-11-05 International Business Machines Corporation Non-invasive discovery of relationships between nodes in a network
US20060080462A1 (en) * 2004-06-04 2006-04-13 Asnis James D System for Meta-Hop routing
US7730294B2 (en) * 2004-06-04 2010-06-01 Nokia Corporation System for geographically distributed virtual routing
US7801125B2 (en) 2004-10-22 2010-09-21 Cisco Technology, Inc. Forwarding table reduction and multipath network forwarding
US8160094B2 (en) 2004-10-22 2012-04-17 Cisco Technology, Inc. Fibre channel over ethernet
US8842694B2 (en) 2004-10-22 2014-09-23 Cisco Technology, Inc. Fibre Channel over Ethernet
US7969971B2 (en) 2004-10-22 2011-06-28 Cisco Technology, Inc. Ethernet extension for the data center
US20060098589A1 (en) * 2004-10-22 2006-05-11 Cisco Technology, Inc. Forwarding table reduction and multipath network forwarding
US20110007741A1 (en) * 2004-10-22 2011-01-13 Cisco Technology, Inc. Forwarding table reduction and multipath network forwarding
US8565231B2 (en) 2004-10-22 2013-10-22 Cisco Technology, Inc. Ethernet extension for the data center
US8532099B2 (en) 2004-10-22 2013-09-10 Cisco Technology, Inc. Forwarding table reduction and multipath network forwarding
US20060101140A1 (en) * 2004-10-22 2006-05-11 Cisco Technology, Inc. Ethernet extension for the data center
US20060087989A1 (en) * 2004-10-22 2006-04-27 Cisco Technology, Inc., A Corporation Of California Network device architecture for consolidating input/output and reducing latency
US20060251067A1 (en) * 2004-10-22 2006-11-09 Cisco Technology, Inc., A Corporation Of California Fibre channel over ethernet
US7830793B2 (en) 2004-10-22 2010-11-09 Cisco Technology, Inc. Network device architecture for consolidating input/output and reducing latency
US9246834B2 (en) 2004-10-22 2016-01-26 Cisco Technology, Inc. Fibre channel over ethernet
US8238347B2 (en) 2004-10-22 2012-08-07 Cisco Technology, Inc. Fibre channel over ethernet
US7734813B2 (en) * 2005-01-28 2010-06-08 Cariden Technologies, Inc. Method and system for communicating predicted network behavior between interconnected networks
US20060174154A1 (en) * 2005-01-28 2006-08-03 Cariden Technologies, Inc. Method and system for communicating predicted network behavior between interconnected networks
US20060198302A1 (en) * 2005-03-03 2006-09-07 Sofman Lev B Traffic dimensioning in a metro area with IPTV architecture
US20060271857A1 (en) * 2005-05-12 2006-11-30 David Rosenbluth Imaging system for network traffic data
US20060288296A1 (en) * 2005-05-12 2006-12-21 David Rosenbluth Receptor array for managing network traffic data
US20060291445A1 (en) * 2005-06-14 2006-12-28 Luca Martini Method for auto-routing of multi-hop pseudowires
US7408941B2 (en) * 2005-06-14 2008-08-05 Cisco Technology, Inc. Method for auto-routing of multi-hop pseudowires
US20070076615A1 (en) * 2005-10-03 2007-04-05 The Hong Kong University Of Science And Technology Non-Blocking Destination-Based Routing Networks
US7898957B2 (en) * 2005-10-03 2011-03-01 The Hong Kong University Of Science And Technology Non-blocking destination-based routing networks
US20070081454A1 (en) * 2005-10-11 2007-04-12 Cisco Technology, Inc. A Corporation Of California Methods and devices for backward congestion notification
US7961621B2 (en) 2005-10-11 2011-06-14 Cisco Technology, Inc. Methods and devices for backward congestion notification
US8792352B2 (en) 2005-10-11 2014-07-29 Cisco Technology, Inc. Methods and devices for backward congestion notification
US8199678B2 (en) 2005-10-21 2012-06-12 Hewlett-Packard Development Company, L.P. Graphical arrangement of IT network components
EP1777875A1 (en) 2005-10-21 2007-04-25 Hewlett-Packard Development Company, L.P. Graphical arrangement of IT network components
US20090147703A1 (en) * 2005-10-26 2009-06-11 James Wesley Bemont Method for Efficiently Retrieving Topology-Specific Data for Point-to-Point Networks
US8411591B2 (en) * 2005-10-26 2013-04-02 Sanmina Corporation Method for efficiently retrieving topology-specific data for point-to-point networks
EP1997025A4 (en) * 2006-03-23 2010-08-11 Cisco Tech Inc Method and application tool for dynamically navigating a user customizable representation of a network device configuration
EP1997025A2 (en) * 2006-03-23 2008-12-03 Cisco Technology, Inc. Method and application tool for dynamically navigating a user customizable representation of a network device configuration
US20110072352A1 (en) * 2006-03-23 2011-03-24 Cisco Technology, Inc. Method and application tool for dynamically navigating a user customizable representation of a network device configuration
WO2007111814A2 (en) 2006-03-23 2007-10-04 Cisco Technology, Inc. Method and application tool for dynamically navigating a user customizable representation of a network device configuration
US8161185B2 (en) * 2006-04-24 2012-04-17 Cisco Technology, Inc. Method and apparatus for assigning IPv6 link state identifiers
US20070250640A1 (en) * 2006-04-24 2007-10-25 Cisco Technology, Inc. Method and apparatus for assigning IPv6 link state identifiers
US8891379B2 (en) 2006-06-02 2014-11-18 Riverbed Technology, Inc. Traffic flow inference based on link loads and gravity measures
US8312139B2 (en) 2006-06-02 2012-11-13 Opnet Technologies, Inc. Traffic flow inference based on link loads and gravity measures
US20070280113A1 (en) * 2006-06-02 2007-12-06 Opnet Technologies, Inc. Traffic flow inference based on link loads and gravity measures
US8095645B2 (en) 2006-06-02 2012-01-10 Opnet Technologies, Inc. Traffic flow inference based on link loads and gravity measures
US9246772B2 (en) 2006-07-06 2016-01-26 LiveAction, Inc. System and method for network topology and flow visualization
US9240930B2 (en) * 2006-07-06 2016-01-19 LiveAction, Inc. System for network flow visualization through network devices within network topology
US9350622B2 (en) 2006-07-06 2016-05-24 LiveAction, Inc. Method and system for real-time visualization of network flow within network device
US20130159864A1 (en) * 2006-07-06 2013-06-20 John Kei Smith System for Network Flow Visualization through Network Devices within Network Topology
US8743738B2 (en) 2007-02-02 2014-06-03 Cisco Technology, Inc. Triple-tier anycast addressing
US8259720B2 (en) * 2007-02-02 2012-09-04 Cisco Technology, Inc. Triple-tier anycast addressing
US8289878B1 (en) 2007-05-09 2012-10-16 Sprint Communications Company L.P. Virtual link mapping
US8149710B2 (en) 2007-07-05 2012-04-03 Cisco Technology, Inc. Flexible and hierarchical dynamic buffer allocation
US8804529B2 (en) 2007-08-21 2014-08-12 Cisco Technology, Inc. Backward congestion notification
US8121038B2 (en) 2007-08-21 2012-02-21 Cisco Technology, Inc. Backward congestion notification
US20100131672A1 (en) * 2008-11-25 2010-05-27 Jeyhan Karaoguz Multiple pathway session setup to support QoS services
US8959245B2 (en) * 2008-11-25 2015-02-17 Broadcom Corporation Multiple pathway session setup to support QoS services
US8842690B2 (en) 2009-04-02 2014-09-23 University Of Florida Research Foundation, Incorporated System, method, and media for network traffic measurement on high-speed routers
WO2010115096A3 (en) * 2009-04-02 2011-01-13 University Of Florida Research Foundation Inc. System, method, and media for network traffic measurement on high-speed routers
WO2010115096A2 (en) * 2009-04-02 2010-10-07 University Of Florida Research Foundation Inc. System, method, and media for network traffic measurement on high-speed routers
US8301762B1 (en) * 2009-06-08 2012-10-30 Sprint Communications Company L.P. Service grouping for network reporting
US8458323B1 (en) 2009-08-24 2013-06-04 Sprint Communications Company L.P. Associating problem tickets based on an integrated network and customer database
US8355316B1 (en) 2009-12-16 2013-01-15 Sprint Communications Company L.P. End-to-end network monitoring
US8644146B1 (en) 2010-08-02 2014-02-04 Sprint Communications Company L.P. Enabling user defined network change leveraging as-built data
US8925079B2 (en) * 2011-11-14 2014-12-30 Telcordia Technologies, Inc. Method, apparatus and program for detecting spoofed network traffic
US20130125235A1 (en) * 2011-11-14 2013-05-16 KDDI Corporation Method, Apparatus and Program for Detecting Spoofed Network Traffic
US20130132542A1 (en) * 2011-11-18 2013-05-23 Telefonaktiebolaget L M Ericsson (Publ) Method and System for Effective BGP AS-Path Pre-pending
US9305029B1 (en) 2011-11-25 2016-04-05 Sprint Communications Company L.P. Inventory centric knowledge management
US10367737B1 (en) 2012-12-27 2019-07-30 Sitting Man, Llc Routing methods, systems, and computer program products
US10498642B1 (en) 2012-12-27 2019-12-03 Sitting Man, Llc Routing methods, systems, and computer program products
US11784914B1 (en) 2012-12-27 2023-10-10 Morris Routing Technologies, Llc Routing methods, systems, and computer program products
US11196660B1 (en) 2012-12-27 2021-12-07 Sitting Man, Llc Routing methods, systems, and computer program products
US11012344B1 (en) 2012-12-27 2021-05-18 Sitting Man, Llc Routing methods, systems, and computer program products
US10212076B1 (en) 2012-12-27 2019-02-19 Sitting Man, Llc Routing methods, systems, and computer program products for mapping a node-scope specific identifier
US10862791B1 (en) 2012-12-27 2020-12-08 Sitting Man, Llc DNS methods, systems, and computer program products
US10374938B1 (en) 2012-12-27 2019-08-06 Sitting Man, Llc Routing methods, systems, and computer program products
US10382327B1 (en) 2012-12-27 2019-08-13 Sitting Man, Llc Methods, systems, and computer program products for routing using headers including a sequence of node scope-specific identifiers
US10389624B1 (en) 2012-12-27 2019-08-20 Sitting Man, Llc Scoped identifier space routing methods, systems, and computer program products
US10389625B1 (en) 2012-12-27 2019-08-20 Sitting Man, Llc Routing methods, systems, and computer program products for using specific identifiers to transmit data
US10397101B1 (en) 2012-12-27 2019-08-27 Sitting Man, Llc Routing methods, systems, and computer program products for mapping identifiers
US10397100B1 (en) 2012-12-27 2019-08-27 Sitting Man, Llc Routing methods, systems, and computer program products using a region scoped outside-scope identifier
US10404582B1 (en) 2012-12-27 2019-09-03 Sitting Man, Llc Routing methods, systems, and computer program products using an outside-scope identifier
US10841198B1 (en) 2012-12-27 2020-11-17 Sitting Man, Llc Routing methods, systems, and computer program products
US10404583B1 (en) 2012-12-27 2019-09-03 Sitting Man, Llc Routing methods, systems, and computer program products using multiple outside-scope identifiers
US10411997B1 (en) 2012-12-27 2019-09-10 Sitting Man, Llc Routing methods, systems, and computer program products for using a region scoped node identifier
US10411998B1 (en) 2012-12-27 2019-09-10 Sitting Man, Llc Node scope-specific outside-scope identifier-equipped routing methods, systems, and computer program products
US10805204B1 (en) 2012-12-27 2020-10-13 Sitting Man, Llc Routing methods, systems, and computer program products
US10419335B1 (en) 2012-12-27 2019-09-17 Sitting Man, Llc Region scope-specific outside-scope identifier-equipped routing methods, systems, and computer program products
US10419334B1 (en) 2012-12-27 2019-09-17 Sitting Man, Llc Internet protocol routing methods, systems, and computer program products
US10447575B1 (en) 2012-12-27 2019-10-15 Sitting Man, Llc Routing methods, systems, and computer program products
US10785143B1 (en) 2012-12-27 2020-09-22 Sitting Man, Llc Routing methods, systems, and computer program products
US10476788B1 (en) 2012-12-27 2019-11-12 Sitting Man, Llc Outside-scope identifier-equipped routing methods, systems, and computer program products
US10476787B1 (en) 2012-12-27 2019-11-12 Sitting Man, Llc Routing methods, systems, and computer program products
US10764171B1 (en) 2012-12-27 2020-09-01 Sitting Man, Llc Routing methods, systems, and computer program products
US10574562B1 (en) 2012-12-27 2020-02-25 Sitting Man, Llc Routing methods, systems, and computer program products
US10587505B1 (en) 2012-12-27 2020-03-10 Sitting Man, Llc Routing methods, systems, and computer program products
US10594594B1 (en) 2012-12-27 2020-03-17 Sitting Man, Llc Routing methods, systems, and computer program products
US10652134B1 (en) 2012-12-27 2020-05-12 Sitting Man, Llc Routing methods, systems, and computer program products
US10652150B1 (en) 2012-12-27 2020-05-12 Sitting Man, Llc Routing methods, systems, and computer program products
US10652133B1 (en) 2012-12-27 2020-05-12 Sitting Man, Llc Routing methods, systems, and computer program products
US10708168B1 (en) 2012-12-27 2020-07-07 Sitting Man, Llc Routing methods, systems, and computer program products
US10721164B1 (en) 2012-12-27 2020-07-21 Sitting Man, Llc Routing methods, systems, and computer program products with multiple sequences of identifiers
US10735306B1 (en) 2012-12-27 2020-08-04 Sitting Man, Llc Routing methods, systems, and computer program products
US10757020B2 (en) 2012-12-27 2020-08-25 Sitting Man, Llc Routing methods, systems, and computer program products
US10757010B1 (en) 2012-12-27 2020-08-25 Sitting Man, Llc Routing methods, systems, and computer program products
US9553794B1 (en) * 2013-01-10 2017-01-24 Google Inc. Traffic engineering for network usage optimization
US9467362B1 (en) 2014-02-10 2016-10-11 Google Inc. Flow utilization metrics
US9912577B2 (en) * 2014-04-17 2018-03-06 Cisco Technology, Inc. Segment routing—egress peer engineering (SP-EPE)
US20150304206A1 (en) * 2014-04-17 2015-10-22 Cisco Technology, Inc. Segment routing - egress peer engineering (SP-EPE)
WO2016095410A1 (en) * 2014-12-19 2016-06-23 ZTE Corporation Link traffic distributing method and device
US10402765B1 (en) 2015-02-17 2019-09-03 Sprint Communications Company L.P. Analysis for network management using customer provided information
US20210352021A1 (en) * 2015-12-10 2021-11-11 Microsoft Technology Licensing, Llc Data driven automated provisioning of telecommunication applications
US10921152B2 (en) * 2017-06-12 2021-02-16 Duel S.R.L. Data processing method for synthesizing in real time customized traffic information
US20190277652A1 (en) * 2017-06-12 2019-09-12 Duel S.R.L. Data processing method for synthesizing in real time customized traffic information
US20210152462A1 (en) * 2017-09-21 2021-05-20 Silver Peak Systems, Inc. Selective routing
US11805045B2 (en) * 2017-09-21 2023-10-31 Hewlett Packard Enterprise Development LP Selective routing
US10462017B2 (en) * 2017-10-13 2019-10-29 Fujitsu Limited Network property verification in hybrid networks
US11127079B2 (en) * 2018-07-06 2021-09-21 Michael Arthur Brown Computer simulated monetary allocation using network of operator data objects
CN114745174A (en) * 2022-04-11 2022-07-12 China Southern Power Grid Co., Ltd. Access verification system and method for power grid equipment

Also Published As

Publication number Publication date
US7796619B1 (en) 2010-09-14
US20020101821A1 (en) 2002-08-01
US7027448B2 (en) 2006-04-11

Similar Documents

Publication Title
US7027448B2 (en) System and method for deriving traffic demands for a packet-switched network
US7769019B2 (en) Efficient discovery and verification of paths through a meshed overlay network
Quoitin et al. Modeling the routing of an autonomous system with C-BGP
Haddadi et al. Network topologies: inference, modeling, and generation
US7903573B2 (en) Method and system for network traffic matrix analysis
US7376154B2 (en) Non-intrusive method for routing policy discovery
EP2984798B1 (en) Identification of paths taken through a network of interconnected devices
US20020021675A1 (en) System and method for packet network configuration debugging and database
Zhang et al. Scaling IP Routing with the Core Router-Integrated Overlay.
US20060056328A1 (en) Identifying network routers and paths
US20140029443A1 (en) Interprovider virtual private network path identification
US9531598B2 (en) Querying a traffic forwarding table
Krähenbühl et al. Deployment and scalability of an inter-domain multi-path routing infrastructure
US7457244B1 (en) System and method for generating a traffic matrix in a network environment
Cerav-Erbas et al. The interaction of IGP weight optimization with BGP
Wang et al. Towards an aggregation-aware internet routing
Swinnen et al. An evaluation of BGP-based traffic engineering techniques
KR20120002176A (en) Server for managing MPLS VPN routing information and method thereof
Raza et al. Effective iBGP operation without a full mesh topology
Aoki et al. Flow analysis system for multi-layer service networks
Nozaki Tiered based addressing in internetwork routing protocols for the future internet
Quoitin et al. A BGP solver for hot-potato routing sensitivity analysis
Alam A Proportional Performance Investigation Route Relocation Among Three Different Routing Protocols Using OPNET Simulation
Yannuzzi Strategies for internet route control: past, present and future
Raman et al. Reducing power consumption using the Border Gateway Protocol

Legal Events

Date Code Title Description
AS Assignment

Owner name: AT&T CORP., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FELDMANN, ANJA;GREENBERG, ALBERT GORDON;LUND, CARSTEN;AND OTHERS;REEL/FRAME:012395/0343;SIGNING DATES FROM 20011004 TO 20011106

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION