US20170063604A1 - Method and apparatus for sve redundancy - Google Patents

Method and apparatus for sve redundancy

Info

Publication number
US20170063604A1
US20170063604A1 (application US 15/347,115)
Authority
US
United States
Prior art keywords
sve
service
scl
virtual
sves
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/347,115
Inventor
Chao Feng
Samar Sharma
Sriram Chidambaram
Raghavendra J. Rao
Sanjay Hemant Sane
Murali Basavaiah
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cisco Technology Inc filed Critical Cisco Technology Inc
Priority to US 15/347,115
Assigned to CISCO TECHNOLOGY, INC. reassignment CISCO TECHNOLOGY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: RAO, RAGHAVENDRA J., BASAVAIAH, MURALI, SHARMA, SAMAR, SANE, SANJAY HEMANT, FENG, CHAO, CHIDAMBARAM, SRIRAM
Publication of US20170063604A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
        • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
        • H04L 41/06: Management of faults, events, alarms or notifications
        • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
        • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements
        • H04L 43/00: Arrangements for monitoring or testing data switching networks
        • H04L 43/02: Capturing of monitoring data
        • H04L 43/028: Capturing of monitoring data by filtering
        • H04L 45/00: Routing or path finding of packets in data switching networks
        • H04L 45/58: Association of routers
        • H04L 45/586: Association of routers of virtual routers
        • H04L 67/00: Network arrangements or protocols for supporting network services or applications
        • H04L 67/01: Protocols
        • H04L 67/10: Protocols in which an application is distributed across nodes in the network
        • H04L 67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • TCO: Total Cost of Ownership
  • SIA: Service Insertion Architecture
  • SVE: Services Virtualization Endpoint
  • CCN-CP: Cloud-Centric-Network Control Point
  • SN: Service Node
  • SCL: Service Classifier
  • VPC/vPC: Virtual PortChannel
  • FHRP: First Hop Redundancy Protocol
  • HSRP: Hot Standby Router Protocol
  • VIP: virtual IP address
  • SVI: switch virtual interface
  • TCAM: Ternary Content Addressable Memory
  • RBH: result-based hash
  • API: application programming interface

Abstract

Systems and methods are disclosed for providing service virtualization endpoint (SVE) redundancy in a two-node, active-standby form. An active-standby pair of SVEs registers with a cloud-centric-network control point (CCN-CP) as a single service node (SN), using a virtual IP address for both the control-plane and the data-plane. At any given time, only the active SVE hosts the control-plane and the data-plane. When a failover happens, the hosting operation is taken over by the standby SVE, so the failover is transparent to the CCN-CP and the SN.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of, claims priority to, and incorporates entirely by reference co-pending U.S. patent application Ser. No. 13/297,568, now U.S. Pat. No. 9,503,366.
  • BACKGROUND
  • Approximately a third of all IT spending is consumed in the data center. With such a large share of IT Total Cost of Ownership (TCO) concentrated in the data center, changes in architecture can materially impact IT spend and corporate competitiveness. While the trends of virtualization and cloud computing offer data center architecture opportunities, there are also challenges. High-end data center design is challenged by increasing complexity, the need for greater workload mobility, and reduced energy consumption. Traffic patterns have also shifted significantly, from primarily client-server (north-to-south) flows to a combination of client-server and server-server (north-to-south plus east-to-west) streams. These shifts have wreaked havoc on application response time and end-user experience, since the network is not designed for these Brownian-motion-type flows.
  • High availability in the data center refers not only to device-specific availability and uptime, but also to network design and features that prevent downtime in the case of a catastrophic event. Uptime in this context refers to availability of the switch to direct traffic. As more and more equipment is added to the data center network, the high availability of the network may be undermined. Network architects need to consider design best practices to reduce single points of failure and achieve network uptime goals in the data center.
  • SUMMARY
  • Systems and methods are disclosed for providing service virtualization endpoint (SVE) redundancy in a two-node, active-standby form. An active-standby pair of SVEs registers with a cloud-centric-network control point (CCN-CP) as a single service node (SN), using a virtual IP address for both the control-plane and the data-plane. At any given time, only the active SVE hosts the control-plane and the data-plane. When a failover happens, the hosting operation is taken over by the standby SVE, so the failover is transparent to the CCN-CP and the Service Node.
  • In accordance with a method of the present disclosure, redundancy in a service insertion architecture may include: providing a service classifier (SCL) that performs traffic classification and service header insertion; providing a first services virtualization endpoint (SVE) and a second SVE at a virtual IP address, the first SVE and the second SVE each providing access to service nodes; replicating service chaining information from the first SVE to the second SVE; redirecting packets received at the SCL to the first SVE at the virtual IP address; directing the packets in accordance with a mapping for processing; and returning the packets to the SCL.
  • In accordance with a service insertion architecture of the present disclosure, there is provided a cloud-centric-network (CCN) control point that maintains an ordered list of service nodes and a path connecting each element in the order; a service classifier (SCL); one or more services virtualization endpoints (SVEs); and one or more service nodes (SNs) that provide services within the service insertion architecture. One SVE registers with the CCN at a virtual IP address; a packet enters the service insertion architecture at the SCL and is directed to the SNs via that SVE at the virtual IP address.
  • Other systems, methods, features and/or advantages will be or may become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features and/or advantages be included within this description and be protected by the accompanying claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The components in the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding parts throughout the several views.
  • FIG. 1 illustrates an exemplary Service Insertion Architecture (SIA) infrastructure having a Services Virtualization Endpoint (SVE) with redundancy;
  • FIG. 2 illustrates an exemplary operational flow diagram for providing SVEs within the SIA Infrastructure of FIG. 1 where the SVEs have redundancy; and
  • FIGS. 3-7 illustrate data flows within the SIA Infrastructure during the execution of the operational flow of FIG. 2.
  • DETAILED DESCRIPTION
  • As will be described in detail herein with reference to several exemplary implementations, systems and methods for providing Services Virtualization Endpoint (SVE) redundancy and load-sharing in a Service Insertion Architecture (SIA) infrastructure are provided. With reference to FIG. 1, the SIA 100 may consist of a control plane and a data plane. The SIA control-plane functionality may include a Cloud-Centric-Network (CCN) Control Point, a Service Classifier (SCL), one or more Services Virtualization Endpoints (SVEs), and a Service Node (SN). The SIA data-plane functionality may include the Service Classifier (SCL), the one or more Services Virtualization Endpoints (SVEs), and the Service Node (SN).
  • The Cloud Centric Network (CCN) Control Point (CP) is the central control-plane entity in the SIA domain. The CCN CP maintains the mapping between the classification context and the service paths. Each service path may contain one classification context, an ordered list of service nodes, and a path connecting each element in the order defined in the policy. In addition to maintaining information about active services, the CCN CP aids in the creation and binding of service paths, thus facilitating the setup of the data path. For high availability, CCN CPs implemented on multiple chassis may be clustered using an NX-OS Cluster Infrastructure. The configuration of all CCN CPs is kept consistent throughout the cluster using the Reliable Group Communication facility of the NX-OS Cluster Infrastructure.
  • The Service Classifier (SCL) performs initial traffic classification and service header insertion. SCL is the entry point in the data path to SIA domain and is typically present at the edge of a network. The SCL is the first (head) node in a service-path. The SCL mainly interacts with the CCN CP in the control-plane and with SVE in the data plane.
  • The Services Virtualization Endpoint (SVE) front-ends service nodes and off-loads the complex SIA data-plane functions, such as service chaining, transport encapsulation, decapsulation, and routing, from the actual service. For high availability, SVEs running on two different chassis may be grouped together using Virtual PortChannel (vPC). In this configuration, one of the SVEs functions as the active SVE while the other is in standby mode from a Control Path perspective. The CCN CP duplicates configuration between the two SVEs. This aspect of the disclosure is discussed in further detail below.
  • The Service Node (SN) delivers the actual service in the SIA service-path. The service node communicates with the CCN-CP in the control plane and with the SVE in the data plane. In the data plane, the SVE bridges data packets between the SCL (or another SVE) and the service node. In other words, in the data plane, the SCL intercepts the interested traffic and redirects it into the service path comprising the ordered service nodes. Each Service Node (SN) is front-ended by an SVE. After the traffic reaches an SVE, it flows from one service node to another in the order provisioned until it reaches the last service node; when a packet flows to another service node, it always does so via the SVE. The last service node returns the traffic to the SVE, which decides whether to redirect it back to the SCL or forward it. A Service Node is always Layer-2 adjacent to its SVE. When more than one SVE is present, two SVEs can be either Layer-2 or Layer-3 adjacent to each other.
  • The SIA data plane functions may include classification and SIA context tagging, redirection, and service selection. With respect to classification and SIA context tagging, the classifier intercepts the interested traffic and inserts a shared context into the packet. The shared context primarily constitutes a unique id (e.g., a traffic classification id) and service ordering information for next-service selection in the service path into which the tagged traffic is redirected. This id conveys the classification context, i.e., the result of classification, to the service nodes in the service path. Service nodes may use this id to apply service-specific policies to the packets. The id remains constant across a service path and also represents the service path the traffic is flowing in within an SIA domain. If the path is linear, it is often referred to as a chain. The id is used to derive the path to the next hop at each service node. The id may also be used to virtualize services in the network, meaning that irrespective of the service device location, a packet tagged with a given classification id will always undergo the same set of services in the SIA domain.
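The shared-context mechanism above can be sketched in a few lines. The following Python is purely illustrative: the names `SIAContext`, `tag_packet`, and `next_hop` are invented for this sketch and do not appear in the disclosure; it only shows a classification id staying constant along the path while the ordering state selects the next service.

```python
from dataclasses import dataclass

@dataclass
class SIAContext:
    classification_id: int   # result of classification; constant along the path
    next_service_index: int  # position of the next service in the ordered list

def tag_packet(payload: bytes, ctx: SIAContext) -> tuple:
    """Classifier inserts the shared context ahead of the payload."""
    return (ctx, payload)

def next_hop(ctx: SIAContext, service_path: list) -> str:
    """Each node derives the next hop from the service-ordering state."""
    return service_path[ctx.next_service_index]

path = ["Service-1", "Service-2"]
ctx, _ = tag_packet(b"user data", SIAContext(classification_id=42, next_service_index=0))
print(next_hop(ctx, path))  # Service-1
```

Because `classification_id` never changes along the path, every node can apply the same classification-specific policy regardless of where the service device sits.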
  • With respect to redirection, each SIA physical device at the data-plane level may redirect the tagged packets to the next-hop physical device in the service path. The redirection is done using the available transport mechanisms of the underlying network; for example, a GRE tunnel may be used for this purpose. This redirection tunnel may be shared between two physically or logically adjacent SIA devices and carries the entire SIA traffic for all service paths that flow between the two devices.
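As a rough sketch of the shared redirection tunnel, the snippet below wraps tagged packets in the 4-byte GRE base header (flags/version plus protocol type). The `0x88B7` protocol value is a placeholder assumption for "SIA traffic", not a real allocation; the point is only that packets from multiple service paths share one encapsulation between adjacent devices.

```python
import struct

# GRE base header: 2 bytes flags/version (all zero: no checksum, version 0),
# then a 2-byte protocol type. 0x88B7 here is a hypothetical value.
SIA_PROTO = 0x88B7
GRE_HEADER = struct.pack("!HH", 0x0000, SIA_PROTO)

def encapsulate(sia_packet: bytes) -> bytes:
    """Wrap a tagged SIA packet for the shared redirection tunnel."""
    return GRE_HEADER + sia_packet

def decapsulate(frame: bytes) -> bytes:
    """Strip the GRE header at the next-hop SIA device."""
    flags, proto = struct.unpack("!HH", frame[:4])
    assert proto == SIA_PROTO, "not SIA traffic"
    return frame[4:]

# Packets belonging to different service paths share the same tunnel:
for pkt in (b"path-1 pkt", b"path-2 pkt"):
    assert decapsulate(encapsulate(pkt)) == pkt
```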
  • In accordance with some implementations, SVE functionality on SVE1 and SVE2 may be clustered together so that they act as though there is only one SVE from the CCN's perspective. During bootstrap configuration, one SVE may be assigned as master, the other slave. A virtual name, a virtual IP and a virtual MAC may be hosted by the master SVE. Both SVEs advertise their capabilities and their corresponding master/slave roles. The CCN is responsible for keeping both SVEs informed of the service path segments creation/deletion. As a result, the Ternary Content Addressable Memory (TCAM) entries of both master and slave SVEs may be identical and ready to forward packets along the service path. This ensures that the data path traffic can actually enter any of the SVE nodes and can get forwarded directly to the service or back to the SCL without having to traverse the peer link (vPC) that connects both SVE nodes together.
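The clustering described above can be modeled as the CCN pushing every service-path segment to both SVEs, so master and slave hold identical TCAM state and either can forward without crossing the vPC peer link. This sketch uses invented names (`SVE`, `CCN`, `create_segment`) and a plain dict standing in for TCAM hardware.

```python
class SVE:
    def __init__(self, name, role):
        self.name, self.role = name, role
        self.tcam = {}  # match-key -> action, mimicking TCAM entries

class CCN:
    def __init__(self, sves):
        self.sves = sves

    def create_segment(self, match, action):
        for sve in self.sves:  # keep both SVEs identically programmed
            sve.tcam[match] = action

    def delete_segment(self, match):
        for sve in self.sves:
            sve.tcam.pop(match, None)

master = SVE("SVE-1", "master")
slave = SVE("SVE-2", "slave")
ccn = CCN([master, slave])
ccn.create_segment(match="cls-42/hop-0", action="forward:Service-1")
assert master.tcam == slave.tcam  # either node can forward directly
```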
  • The SCL is configured to communicate with a virtual IP address exposed via a First Hop Redundancy Protocol (FHRP) to the master SVE entity. For example, where there is one Service, one SCL, and two SVEs, both SVEs maintain identical TCAM entries. If the SCL is Layer-2 adjacent to the SVEs, a vPC can be formed between the SCL and the two SVEs. Traffic from the SCL reaches either SVE, depending on flow-based load balancing at the SCL; that SVE forwards the packet to the service, and there is no need for packets to flow between SVE-1 and SVE-2. If the SCL is Layer-3 adjacent to the SVEs, it is possible that a packet follows a route through SVE-2 before it gets to SVE-1. Again, SVE-2 can directly forward the packet if there is a TCAM match.
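Flow-based load balancing at the SCL can be pictured as a stable hash over the flow 5-tuple that selects one of the two SVEs, so every packet of a given flow takes the same path. The hash below is illustrative only; a real switch computes this in the ASIC.

```python
import hashlib

SVES = ["SVE-1", "SVE-2"]

def pick_sve(src_ip, dst_ip, proto, sport, dport):
    """Deterministic flow-to-SVE mapping from the 5-tuple."""
    key = f"{src_ip},{dst_ip},{proto},{sport},{dport}".encode()
    digest = hashlib.sha256(key).digest()
    return SVES[digest[0] % len(SVES)]

# Every packet of the same flow maps to the same SVE:
first = pick_sve("10.0.0.1", "10.0.0.2", "tcp", 12345, 80)
assert all(pick_sve("10.0.0.1", "10.0.0.2", "tcp", 12345, 80) == first
           for _ in range(5))
```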
  • The same applies to reverse traffic from the service back to the SVE: either SVE can handle the traffic and send it back to the SCL. The slave SVE keeps a regular health check against the master SVE. If the master SVE fails, the slave SVE takes over the virtual IP, virtual MAC, and virtual name and becomes the master.
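The health-check takeover can be sketched as a slave that polls the master and, on failure, claims the virtual identity. All names, the probe mechanism, and the example VIP/MAC values are assumptions for illustration, not details from the disclosure.

```python
class RedundantSVE:
    def __init__(self, name, role, peer_alive):
        self.name, self.role = name, role
        self.peer_alive = peer_alive      # health-check probe (callable)
        self.virtual_identity = None      # held only by the master

    def health_check_tick(self):
        """Slave polls the master; on failure it claims the virtual identity."""
        if self.role == "slave" and not self.peer_alive():
            self.role = "master"
            # Take over the virtual IP, virtual MAC, and virtual name.
            self.virtual_identity = {
                "vip": "192.0.2.10",
                "vmac": "00:00:5e:00:01:01",
                "vname": "sve",
            }

slave = RedundantSVE("SVE-2", "slave", peer_alive=lambda: False)  # master is down
slave.health_check_tick()
assert slave.role == "master"
```

Because the SCL and services address only the virtual identity, the takeover is invisible to them.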
  • In some implementations, the scheme can be further enhanced such that only certain TCAM entries are synchronized between the two SVEs. For the TCAM entries that are not synchronized, a user can specify load-balancing weights between the two SVEs; for example, the user may want one SVE to handle more traffic. For the TCAM entries that are synchronized, a user can rely on result-based hashing (RBH) in the ASIC for load balancing.
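For the unsynchronized entries, user-specified weights amount to a weighted choice between the SVEs. The sketch below assumes a 3:1 weighting in favor of SVE-1; the weights and the helper name `choose_sve` are illustrative.

```python
import random

def choose_sve(weights, rng):
    """Pick an SVE according to user-specified load-balancing weights."""
    names, w = zip(*weights.items())
    return rng.choices(names, weights=w, k=1)[0]

rng = random.Random(0)                # seeded for reproducibility
weights = {"SVE-1": 3, "SVE-2": 1}    # the user wants SVE-1 to handle more traffic
picks = [choose_sve(weights, rng) for _ in range(1000)]
share = picks.count("SVE-1") / len(picks)
print(f"SVE-1 share: {share:.2f}")    # close to 0.75
```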
  • In some implementations, one of the SVEs may be configured with more TCAM entries than the other, such that the former SVE carries more traffic than the latter. This may be done for load balancing between the SVEs, for example when SVE-1 is more powerful than SVE-2 (and not the other way around) and the SVE function costs more than the plain packet-forwarding function (e.g., when the SVE is implemented in software instead of in a hardware ASIC).
  • With reference to FIG. 1, SVE redundancy may be provided such that if one SVE fails (e.g. SVE-1), a user does not lose services provided by a failed SVE, but rather is serviced by a surviving SVE (e.g., SVE-2).
  • With reference to FIG. 2, there is illustrated an exemplary operational flow diagram for providing an SVE in the SIA 100 of FIG. 1 where the SVEs provide redundancy. FIGS. 3-7 illustrate data flows within the SIA 100 during the execution of the operational flow of FIG. 2. As noted above, SVE redundancy may be provided such that if one SVE fails (e.g. SVE-1), a user does not lose services provided by a failed SVE, but rather is serviced by a surviving SVE (e.g., SVE-2).
  • At 202, SVE-1 and SVE-2 register with the CCN using a Hot Standby Router Protocol (HSRP) Virtual IP address. HSRP is a Cisco protocol that provides network redundancy for IP networks, ensuring that user traffic immediately and transparently recovers from first-hop failures in network edge devices or access circuits. A virtual IP address (VIP) is an IP address that is not tied to a specific computer or network interface card (NIC); incoming packets are sent to the VIP but redirected to physical network interfaces. VIPs may be used for connection redundancy: for example, a VIP may remain reachable when a computer or NIC fails because an alternative computer or NIC answers its connections. As shown in FIG. 3, during operation 202, one SVE (e.g., SVE-1) registers with the CCN. As such, SVE-1 and SVE-2 share a common Virtual IP, and the CCN sees only one SVE, i.e., SVE-1.
  • At 204, the CCN duplicates the service chaining configuration to SVE-1 and SVE-2 to keep them identically configured. As a result, the TCAM entries in SVE-1 and SVE-2 are identically programmed. Additionally or alternatively, the TCAM entries may be synchronized by the two SVEs communicating the entries to each other.
  • At 206, services begin communicating with the Virtual IP that represents the SVE. As shown in FIG. 4, SVE-1 and SVE-2 host the HSRP virtual IP switch virtual interface (SVI) on the access side of the SIA 100. As such, Service-1 and Service-2 communicate with the Virtual IP, and packets are directed to the active SVE, e.g., SVE-1.
  • At 208, the SCL encapsulates the packet with an SIA header and, based on the Virtual IP, redirects it to SVE-1 for a service path that goes from Service-1 to Service-2. As shown in FIG. 5, an inbound packet received by the SCL is classified and sent to SVE-1. At 210, SVE-1 redirects the packet to Service-1, as the packet is destined for that service. As shown in FIG. 5, SVE-1 forwards the packet to Service-1.
  • At 212, packets received from Service-1 are returned. As shown in FIG. 6, packets may be sent from Service-1 to SVE-2 based on a port channel hash. In some implementations, SVE-2 may be programmed with TCAM entries identical to those of SVE-1. As such, SVE-2 does not need to forward the return packet to SVE-1 over the peer link; rather, SVE-2 can service the packet directly. With reference to FIG. 7, SVE-2 may forward the packet to Service-2 for further processing.
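The port channel hash that steers return traffic to either SVE can be illustrated with a flow-tuple hash. CRC32 stands in here for whatever hash function the hardware actually uses; the function and field choices are assumptions of this sketch:

```python
import zlib

def port_channel_hash(src_ip, dst_ip, members):
    """Pick a port-channel member for a flow by hashing the flow tuple.
    Either SVE can service the packet because their TCAM entries are
    identically programmed."""
    key = f"{src_ip}:{dst_ip}".encode()
    return members[zlib.crc32(key) % len(members)]

members = ["SVE-1", "SVE-2"]
chosen = port_channel_hash("10.1.1.5", "10.2.2.9", members)
assert chosen in members
# The same flow always hashes to the same SVE.
assert chosen == port_channel_hash("10.1.1.5", "10.2.2.9", members)
```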
  • At 214, packets received back from Service-2 are sent to SVE-1 or SVE-2. At 216, SVE-1 or SVE-2 sends the packet to the SCL, or removes the SIA header and routes the payload normally. As shown in FIG. 7, the packets are returned to SVE-2 and then sent to the SCL or otherwise routed.
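The end-of-chain decision at 216 — forward to the next service, or strip the SIA header and route the payload normally — can be sketched as below, reusing the same illustrative dictionary-based header as above (an assumption, not the patent's format):

```python
def complete_chain(packet):
    """After the last service, the SVE either sends the packet on to the
    next service or strips the SIA header and routes the payload."""
    hdr = packet["sia"]
    if hdr["index"] < len(hdr["path"]):
        return ("to_service", hdr["path"][hdr["index"]])
    return ("route", packet["payload"])  # SIA header removed

pkt = {"sia": {"path": ["Service-1", "Service-2"], "index": 2},
       "payload": b"user data"}
assert complete_chain(pkt) == ("route", b"user data")
```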
  • Thus, in accordance with the operation flow of FIG. 2, from the standpoint of the CCN, there is only one SVE (SVE-1) that is identified by the Virtual IP, whereas SVE-2 is a standby. However, from a data path standpoint, both SVE-1 and SVE-2 are active and able to serve packets.
  • In some implementations, SVE-1 may be implemented in software. As such, a larger TCAM table size may be possible. In such an implementation, all TCAM entries may be synchronized, as table size is not a limitation.
  • It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may implement or utilize the processes described in connection with the presently disclosed subject matter, e.g., through the use of an application programming interface (API), reusable controls, or the like. Such programs may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language and it may be combined with hardware implementations.
  • Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. A method for providing redundancy in a service insertion architecture, comprising:
providing a service classifier (SCL) that performs traffic classification and service header insertion, the service header including service ordering information related to one or more service nodes that apply service-specific policies to packets received at the SCL;
providing a first services virtualization endpoint (SVE) and a second services virtualization endpoint (SVE), the first SVE and the second SVE each sharing a same virtual IP address in a control path, the first SVE and the second SVE each connected to the one or more service nodes in a data path;
duplicating service chaining information in the first SVE and the second SVE;
redirecting the packets received at the SCL to the virtual IP address;
forwarding the packets, via the virtual IP address, to one of the first SVE and the second SVE in the data path and then to the one or more service nodes in accordance with the service header; and
returning the packets to the SCL.
2. A method according to claim 1, further comprising accessing the service header with one of the first and second SVEs to utilize service ordering information within the service header and related to one or more service nodes that apply service-specific policies to packets received at the SCL.
3. The method of claim 2, further comprising:
providing a cloud centric network control point (CCN-CP) that maintains the mapping, which is an ordered list of the service nodes and a path connecting the service nodes, and configuring the SCL with a load balancer to manage the redirecting of the packets to the virtual IP address.
4. The method of claim 3, further comprising:
storing the service chaining information in a ternary content addressable memory (TCAM); and
replicating, by the CCN-CP, the TCAM entries between the first SVE and the second SVE.
5. The method of claim 1, wherein the service nodes communicate to the virtual IP address.
6. The method of claim 5, further comprising directing the packets to the first SVE.
7. The method of claim 1, further comprising:
storing the service chaining information in a ternary content addressable memory (TCAM); and
replicating the TCAM entries between the first SVE and the second SVE, wherein the first SVE and the second SVE communicate directly with each other.
8. The method of claim 1, further comprising:
detecting a failure of the first SVE;
redirecting packets received at the SCL to the second SVE at the virtual IP address; and
directing the packets in accordance with the mapping for processing.
9. The method of claim 1, further comprising storing the service chaining information in a ternary content addressable memory (TCAM); and
replicating predetermined ones of the TCAM entries between the first SVE and the second SVE.
10. The method of claim 9, wherein a result-based hash (RBH) is used for load-balancing between the first and second SVEs.
11. The method of claim 1, further comprising:
storing the service chaining information in a ternary content addressable memory (TCAM); and
replicating the TCAM entries between the first SVE and the second SVE.
12. The method of claim 1, further comprising:
storing the service chaining information in a ternary content addressable memory (TCAM); and
configuring the first SVE with more TCAM entries than the second SVE.
13. A service insertion architecture, comprising:
a cloud-centric-network (CCN) control point that maintains an ordered list of service nodes and a path connecting each element in the order;
a service classifier (SCL) that performs traffic classification and service header insertion, the service header including service ordering information related to one or more service nodes that apply service-specific policies to packets received at the SCL;
a first services virtualization endpoint (SVE) and a second services virtualization endpoint (SVE) the first SVE and the second SVE each sharing a same virtual IP address in a control path, the first SVE and the second SVE each in data communication with the service nodes in a data path, the first SVE and the second SVE each comprising Ternary Content Addressable Memory (TCAM), wherein TCAM entries comprise service chaining information of the first and second SVEs,
wherein said TCAM entries are distributed between the first SVE and the second SVE to effect load balancing of packets between the SVEs.
14. The service insertion architecture of claim 13, wherein one SVE registers with the CCN at the virtual IP address, and wherein a packet enters the service insertion architecture at the SCL and is directed to the service nodes via the one SVE at the virtual IP address, and
wherein one of the first SVE and the second SVE forwards the packet in the data path to the service nodes in accordance with a mapping comprising an ordered list of the service nodes and a path connecting the service nodes.
15. The service insertion architecture of claim 13, wherein if one of the SVEs fails, the other SVE services the packet.
16. The service insertion architecture of claim 15, wherein one of the first and second SVEs is designated a master and another of the SVEs is designated a slave.
17. The service insertion architecture of claim 16, wherein only one of the first and second SVEs is active at any one time in the data plane.
18. The service insertion architecture of claim 15, wherein at least one of the first and second SVEs is implemented in software.
19. The service insertion architecture of claim 15, wherein at least one of the first and second SVEs is implemented in an ASIC.
20. A service insertion architecture, comprising:
a cloud centric network control point (CCN-CP) that maintains mapping in both a network node classification context and a service path context;
a service classifier (SCL) that performs traffic classification and service header insertion, the service header including service ordering information related to one or more service nodes that apply service-specific policies to packets received at the SCL;
a first services virtualization endpoint (SVE) and a second services virtualization endpoint (SVE), the first SVE and the second SVE each sharing a same virtual IP address in a control path, such that the CCN-CP communicates with the virtual IP address to direct packets to one of the first SVE and the second SVE, wherein said first and second SVEs are connected to service nodes in a data path, and wherein only one of the first SVE and the second SVE is active at any one time; and
a virtual IP switch interface to the service insertion architecture, connecting the service nodes to the virtual IP address and directing the packets to an active SVE.
US15/347,115 2011-11-16 2016-11-09 Method and apparatus for sve redundancy Abandoned US20170063604A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/347,115 US20170063604A1 (en) 2011-11-16 2016-11-09 Method and apparatus for sve redundancy

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/297,568 US9503366B2 (en) 2011-11-16 2011-11-16 Method and apparatus for SVE redundancy
US15/347,115 US20170063604A1 (en) 2011-11-16 2016-11-09 Method and apparatus for sve redundancy

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/297,568 Continuation US9503366B2 (en) 2011-11-16 2011-11-16 Method and apparatus for SVE redundancy

Publications (1)

Publication Number Publication Date
US20170063604A1 true US20170063604A1 (en) 2017-03-02

Family

ID=48280532

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/297,568 Active 2032-04-11 US9503366B2 (en) 2011-11-16 2011-11-16 Method and apparatus for SVE redundancy
US15/347,115 Abandoned US20170063604A1 (en) 2011-11-16 2016-11-09 Method and apparatus for sve redundancy


Country Status (1)

Country Link
US (2) US9503366B2 (en)

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8743885B2 (en) 2011-05-03 2014-06-03 Cisco Technology, Inc. Mobile service routing in a network environment
US9794379B2 (en) * 2013-04-26 2017-10-17 Cisco Technology, Inc. High-efficiency service chaining with agentless service nodes
CN104243302B (en) 2013-06-20 2018-03-16 华为技术有限公司 Business route message processing method, device and network system
US9565075B2 (en) * 2013-10-07 2017-02-07 Empire Technology Development Llc Distributed user interfaces as a service
CN104734868A (en) * 2013-12-19 2015-06-24 中兴通讯股份有限公司 Service processing method and device among service nodes
US9451032B2 (en) * 2014-04-10 2016-09-20 Palo Alto Research Center Incorporated System and method for simple service discovery in content-centric networks
US9379931B2 (en) 2014-05-16 2016-06-28 Cisco Technology, Inc. System and method for transporting information to services in a network environment
US9479443B2 (en) 2014-05-16 2016-10-25 Cisco Technology, Inc. System and method for transporting information to services in a network environment
WO2016061243A1 (en) * 2014-10-14 2016-04-21 Interdigital Patent Holdings, Inc. Anchoring ip devices in icn networks
US10417025B2 (en) 2014-11-18 2019-09-17 Cisco Technology, Inc. System and method to chain distributed applications in a network environment
US9660909B2 (en) 2014-12-11 2017-05-23 Cisco Technology, Inc. Network service header metadata for load balancing
USRE48131E1 (en) 2014-12-11 2020-07-28 Cisco Technology, Inc. Metadata augmentation in a service function chain
CN105743822B (en) * 2014-12-11 2019-04-19 华为技术有限公司 A kind of method and device handling message
US9853898B1 (en) * 2015-04-29 2017-12-26 Juniper Networks, Inc. Dynamic service chain provisioning
US9762402B2 (en) 2015-05-20 2017-09-12 Cisco Technology, Inc. System and method to facilitate the assignment of service functions for service chains in a network environment
US11044203B2 (en) 2016-01-19 2021-06-22 Cisco Technology, Inc. System and method for hosting mobile packet core and value-added services using a software defined network and service chains
US10187306B2 (en) 2016-03-24 2019-01-22 Cisco Technology, Inc. System and method for improved service chaining
US10931793B2 (en) 2016-04-26 2021-02-23 Cisco Technology, Inc. System and method for automated rendering of service chaining
US10419550B2 (en) 2016-07-06 2019-09-17 Cisco Technology, Inc. Automatic service function validation in a virtual network environment
US10218616B2 (en) 2016-07-21 2019-02-26 Cisco Technology, Inc. Link selection for communication with a service function cluster
US10320664B2 (en) 2016-07-21 2019-06-11 Cisco Technology, Inc. Cloud overlay for operations administration and management
US10225270B2 (en) 2016-08-02 2019-03-05 Cisco Technology, Inc. Steering of cloned traffic in a service function chain
US10218593B2 (en) 2016-08-23 2019-02-26 Cisco Technology, Inc. Identifying sources of packet drops in a service function chain environment
US10361969B2 (en) 2016-08-30 2019-07-23 Cisco Technology, Inc. System and method for managing chained services in a network environment
US10225187B2 (en) 2017-03-22 2019-03-05 Cisco Technology, Inc. System and method for providing a bit indexed service chain
US10257033B2 (en) 2017-04-12 2019-04-09 Cisco Technology, Inc. Virtualized network functions and service chaining in serverless computing infrastructure
US10884807B2 (en) 2017-04-12 2021-01-05 Cisco Technology, Inc. Serverless computing and task scheduling
US10333855B2 (en) 2017-04-19 2019-06-25 Cisco Technology, Inc. Latency reduction in service function paths
US10554689B2 (en) 2017-04-28 2020-02-04 Cisco Technology, Inc. Secure communication session resumption in a service function chain
US10735275B2 (en) 2017-06-16 2020-08-04 Cisco Technology, Inc. Releasing and retaining resources for use in a NFV environment
US10798187B2 (en) 2017-06-19 2020-10-06 Cisco Technology, Inc. Secure service chaining
US10397271B2 (en) 2017-07-11 2019-08-27 Cisco Technology, Inc. Distributed denial of service mitigation for web conferencing
US10673698B2 (en) 2017-07-21 2020-06-02 Cisco Technology, Inc. Service function chain optimization using live testing
US11063856B2 (en) 2017-08-24 2021-07-13 Cisco Technology, Inc. Virtual network function monitoring in a network function virtualization deployment
US10791065B2 (en) 2017-09-19 2020-09-29 Cisco Technology, Inc. Systems and methods for providing container attributes as part of OAM techniques
US11018981B2 (en) 2017-10-13 2021-05-25 Cisco Technology, Inc. System and method for replication container performance and policy validation using real time network traffic
US10541893B2 (en) 2017-10-25 2020-01-21 Cisco Technology, Inc. System and method for obtaining micro-service telemetry data
US10666612B2 (en) 2018-06-06 2020-05-26 Cisco Technology, Inc. Service chains for inter-cloud traffic
US10909067B2 (en) * 2018-08-07 2021-02-02 Futurewei Technologies, Inc. Multi-node zero-copy mechanism for packet data processing
US10986041B2 (en) 2018-08-07 2021-04-20 Futurewei Technologies, Inc. Method and apparatus for virtual network functions and packet forwarding
CN109451084B (en) * 2018-09-14 2020-12-22 华为技术有限公司 Service access method and device
US11146490B2 (en) * 2019-05-07 2021-10-12 Cisco Technology, Inc. Distributed load balancer health management using data center network manager

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020097672A1 (en) * 2001-01-25 2002-07-25 Crescent Networks, Inc. Redundant control architecture for a network device
US20060002343A1 (en) * 2004-07-02 2006-01-05 Cisco Technology, Inc. Network route processor using state-based switchover
US20080177896A1 (en) * 2007-01-19 2008-07-24 Cisco Technology, Inc. Service insertion architecture
US20080285438A1 (en) * 2007-04-20 2008-11-20 Rohini Marathe Methods, systems, and computer program products for providing fault-tolerant service interaction and mediation function in a communications network
US20100165985A1 (en) * 2008-12-29 2010-07-01 Cisco Technology, Inc. Service Selection Mechanism In Service Insertion Architecture Data Plane

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7003692B1 (en) * 2002-05-24 2006-02-21 Cisco Technology, Inc. Dynamic configuration synchronization in support of a “hot” standby stateful switchover
US8806266B1 (en) * 2011-09-28 2014-08-12 Juniper Networks, Inc. High availability using full memory replication between virtual machine instances on a network device


Also Published As

Publication number Publication date
US20130121137A1 (en) 2013-05-16
US9503366B2 (en) 2016-11-22

Similar Documents

Publication Publication Date Title
US20170063604A1 (en) Method and apparatus for sve redundancy
US11411770B2 (en) Virtual port channel bounce in overlay network
US10333836B2 (en) Convergence for EVPN multi-homed networks
US10547544B2 (en) Network fabric overlay
US10225331B1 (en) Network address translation load balancing over multiple internet protocol addresses
US8730793B2 (en) Method and apparatus providing network redundancy and high availability to remote network nodes
US8976652B2 (en) Relay device, method of controlling relay device, and relay system
US10148554B2 (en) System and methods for load placement in data centers
CN107547366B (en) Message forwarding method and device
US8699327B2 (en) Multipath virtual router redundancy
US20170048146A1 (en) Multiple persistant load balancer system
US11012412B2 (en) Method and system for network traffic steering towards a service device
WO2021004385A1 (en) Service unit switching method, system and apparatus
Chen et al. A scalable multi-datacenter layer-2 network architecture
US20140269250A1 (en) Systems and methods for tunnel-free fast rerouting in internet protocol networks
US10700893B1 (en) Multi-homed edge device VxLAN data traffic forwarding system
US9288141B2 (en) Highly scalable modular system with high reliability and low latency
US10447581B2 (en) Failure handling at logical routers according to a non-preemptive mode
US10142254B1 (en) Service chaining based on labels in control and forwarding
CN114342333B (en) Transparent isolation region providing stateful services between physical and logical networks
US20230239273A1 (en) Managing exchanges between edge gateways and hosts in a cloud environment to support a private network connection
US20230239274A1 (en) Managing exchanges between edge gateways in a cloud environment to support a private network connection
US20240015095A1 (en) Designating a primary multicast flow and a backup multicast flow for multicast traffic

Legal Events

Date Code Title Description
AS Assignment

Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FENG, CHAO;SHARMA, SAMAR;CHIDAMBARAM, SRIRAM;AND OTHERS;SIGNING DATES FROM 20111021 TO 20111115;REEL/FRAME:041318/0898

STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING PUBLICATION PROCESS