WO1999051001A1 - Method and device for bandwidth pooling

Info

Publication number
WO1999051001A1
Authority
WO
WIPO (PCT)
Prior art keywords
bandwidth
pooling device
subscriber
server
data
Application number
PCT/US1999/006301
Other languages
French (fr)
Inventor
Tjandra Trisno
Original Assignee
Acucomm, Inc.
Application filed by Acucomm, Inc.
Publication of WO1999051001A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q11/00 Selecting arrangements for multiplex systems
    • H04Q11/04 Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q3/00 Selecting arrangements
    • H04Q3/0016 Arrangements providing connection between exchanges
    • H04Q3/0062 Provisions for network management
    • H04Q3/0066 Bandwidth allocation or management
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5629 Admission control
    • H04L2012/5631 Resource management and allocation
    • H04L2012/5632 Bandwidth allocation
    • H04L2012/5635 Backpressure, e.g. for ABR
    • H04Q2213/00 Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13034 A/D conversion, code compression/expansion
    • H04Q2213/13093 Personal computer, PC
    • H04Q2213/13103 Memory
    • H04Q2213/13106 Microprocessor, CPU
    • H04Q2213/13109 Initializing, personal profile
    • H04Q2213/13141 Hunting for free outlet, circuit or channel
    • H04Q2213/13166 Fault prevention
    • H04Q2213/13174 Data transmission, file transfer
    • H04Q2213/13175 Graphical user interface [GUI], WWW interface, visual indication
    • H04Q2213/13196 Connection circuit/link/trunk/junction, bridge, router, gateway
    • H04Q2213/13199 Modem, modulation
    • H04Q2213/13204 Protocols
    • H04Q2213/13209 ISDN
    • H04Q2213/13271 Forced release
    • H04Q2213/1329 Asynchronous transfer mode, ATM
    • H04Q2213/13292 Time division multiplexing, TDM
    • H04Q2213/13299 Bus
    • H04Q2213/1332 Logic circuits
    • H04Q2213/13322 Integrated circuits
    • H04Q2213/13332 Broadband, CATV, dynamic bandwidth allocation
    • H04Q2213/13375 Electronic mail
    • H04Q2213/13389 LAN, internet

Definitions

  • FIGURE 3C shows a flow process diagram of a bandwidth shaping algorithm used in bandwidth pooling device 100 of the present invention.
  • The bandwidth shaping algorithm may be a multi-pass predictive QoS-based queuing algorithm. Two initial passes are done to distribute bandwidth among the active ports based on their configured rate. The bandwidth usage can be further optimized by performing an additional pass to adjust the bandwidth based on the port data flow history and to redistribute unused bandwidth to busier ports. This additional pass can be done repeatedly to achieve higher bandwidth utilization. The number of passes is a tradeoff between computing power requirements and bandwidth utilization. For best results, at least two passes should be made. A three-pass scheme offers a very good tradeoff, provided there is enough computing power to accommodate it.
  • One of the features of this bandwidth shaping algorithm is its low latency. Due to its predictive nature, bandwidth control decisions can be made at the time a packet is received, allowing bandwidth pooling device 100 to process and potentially send the packet immediately, thus minimizing the packet holding time.
  • The bandwidth shaping algorithm includes the instructions: (1) obtain the QoS for each port that was active during the last period (block 322); (2) create a distribution of the aggregate available bandwidth to all active ports, based on QoS (block 324); (3) reclaim any unused bandwidth during the last period for each port (block 326); (4) calculate the redistribution of reclaimed bandwidth to ports that need it, based on their QoS (block 328); and (5) assign the final calculated bandwidth distribution to the ports, to be used during the next period (block 330). (A code sketch of this pass structure follows the examples below.)
  • EXAMPLE 1 In this first example, there are 10 subscriber lines, most of which are running below capacity:
  • The total available bandwidth is 1.5 Mbps, a T1 line.
  • The remaining bandwidth is thus 860 Kbps.
  • The active ports are Ports 3 and 5-10. Each port is assigned bandwidth units based on an arbitrary quantity, 64 Kbps in this example.
  • Port 3: 256 Kbps VBR
  • Ports 5-10: UBR
  • Port 3 (256 Kbps VBR) is assigned 4 units (256 Kbps / 64 Kbps), and Ports 5-10 are assigned 1 unit each. This gives a total of 10 units which will share the remaining bandwidth. Each unit will therefore receive 86 Kbps (860 Kbps / 10). So the distribution becomes:
  • Ports 7-10 have been allotted 86 Kbps each, but are only using 64 Kbps.
  • The reclaimed bandwidth is equal to 88 Kbps (4 x 22 Kbps).
  • The ports with packets in their waiting queues are Ports 3, 5, and 6.
  • Port 3 (256 Kbps VBR with packet in its waiting queue) will receive 4 units.
  • Ports 5 and 6 (UBR with packets in their waiting queues) will receive 1 unit each. This gives a total of 6 units which will share the reclaimed bandwidth. Each unit will therefore receive an additional 14.7 Kbps (88 Kbps / 6). Therefore, Port 3 receives an additional 58.7 Kbps (4 x 14.7 Kbps), and Ports 5 and 6 receive an additional 14.7 Kbps each.
  • EXAMPLE 2 In this second example, the port configurations are the same as in the first example:
  • The total available bandwidth is again 1.5 Mbps, a T1 line.
  • The remaining bandwidth is thus 348 Kbps.
  • The active ports are Ports 3-10. Each port is assigned bandwidth units based on an arbitrary quantity, 64 Kbps in this example. Ports 3 and 4 (256 Kbps VBR) are thus assigned 4 units each (256 Kbps / 64 Kbps). Ports 5-10 (UBR) are assigned 1 unit each. This gives a total of 14 units which will share the remaining bandwidth.
  • The reclaimed bandwidth is equal to zero.
  • The bandwidth shaping algorithm gives each port a proportional advantage when there is unused bandwidth.
  • The bandwidth shaping algorithm maintains the bandwidth guaranteed to each port.
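As an illustration only, the short sketch below implements a three-pass distribution along the lines of blocks 322-330 and reproduces the arithmetic of Example 1. The function name, the data layout, and the collapsing of the two initial passes into a single proportional split are assumptions made for the sketch; the 64 Kbps unit, the port set, and the resulting figures follow the example above.

```python
UNIT_KBPS = 64  # arbitrary bandwidth unit used in the examples

def shape(remaining_kbps, ports):
    """Sketch of a three-pass distribution in the spirit of blocks 322-330.

    ports: list of dicts with 'id', 'rate_kbps' (configured VBR rate, 0 for UBR),
    'used_kbps' (usage observed during the last period) and 'waiting'
    (True if the port still has packets queued).
    """
    # Passes 1-2 (collapsed here): distribute the aggregate bandwidth to all
    # active ports in proportion to their configured units.
    units = {p['id']: max(1, p['rate_kbps'] // UNIT_KBPS) for p in ports}
    per_unit = remaining_kbps / sum(units.values())
    alloc = {p['id']: units[p['id']] * per_unit for p in ports}

    # Pass 3a: reclaim bandwidth that idle ports did not use during the last period.
    reclaimed = 0.0
    for p in ports:
        unused = alloc[p['id']] - p['used_kbps']
        if not p['waiting'] and unused > 0:
            alloc[p['id']] -= unused
            reclaimed += unused

    # Pass 3b: redistribute the reclaimed bandwidth, by units, to the busy ports.
    busy_units = sum(units[p['id']] for p in ports if p['waiting'])
    if busy_units:
        per_busy_unit = reclaimed / busy_units
        for p in ports:
            if p['waiting']:
                alloc[p['id']] += units[p['id']] * per_busy_unit
    return alloc  # bandwidth assigned to each port for the next period

# Example 1 from the text: 860 Kbps to distribute, Port 3 is 256 Kbps VBR with a
# packet waiting, Ports 5-6 are busy UBR ports, Ports 7-10 are UBR ports that
# only used 64 Kbps of their 86 Kbps allotment.
example_ports = (
    [{'id': 3, 'rate_kbps': 256, 'used_kbps': 344, 'waiting': True}]
    + [{'id': i, 'rate_kbps': 0, 'used_kbps': 86, 'waiting': True} for i in (5, 6)]
    + [{'id': i, 'rate_kbps': 0, 'used_kbps': 64, 'waiting': False} for i in (7, 8, 9, 10)]
)
for port, kbps in sorted(shape(860, example_ports).items()):
    print(port, round(kbps, 1))   # Port 3 -> 402.7, Ports 5-6 -> 100.7, Ports 7-10 -> 64.0
```

Run as written, the sketch gives Port 3 an additional 58.7 Kbps and Ports 5 and 6 an additional 14.7 Kbps each, matching the redistribution described in Example 1.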

Abstract

A method and device (100) for bandwidth pooling are described. This method and device allow a plurality of subscriber data lines (110) to share one or more high bandwidth back-end lines (140). The device and method are capable of taking into consideration conditions at the subscriber data lines, such as the number of active lines and the quality of service associated with each line, and conditions at the high bandwidth back-end line, such as the aggregate available bandwidth, in order to perform data routing, bandwidth shaping, and data concentrating functions. These functions are all integrated into each other to produce a dynamic interaction wherein efficient utilization of the high bandwidth back-end line is achieved.

Description

METHOD AND DEVICE FOR BANDWIDTH POOLING
BACKGROUND OF THE INVENTION
Field of the Invention
This invention relates to a method and device for bandwidth pooling and splitting. More particularly, this invention relates to a method and device that allow multiple subscriber lines to share one or more high bandwidth data lines.
Description of the Related Art
Expanding interest in the Internet and related services, along with the development and growth of more sophisticated services such as on-demand video and Internet telephony, has created an ever-increasing demand for greater bandwidth. The growth in popularity of the Internet and related online services for personal computers and other information appliances has created a huge consumer demand for connection methods. Consumer-oriented solutions include conventional analog modems, ISDN, satellite-based systems, cable modems, and xDSL.
Conventional analog modems perform digital-to-analog and analog-to-digital conversion to allow digital data to be transmitted in analog form over standard telephone lines. Analog modems are inexpensive and widely available, but have limited bandwidth and are subject to losses due to the multiple digital-to-analog and analog-to-digital conversion processes that may take place along any given telephone connection. Typically, analog modems offer speeds of up to 33.6 kbps, with speeds of up to 56 kbps possible if conversions at the receiving end are eliminated. Modem bonding techniques can combine the bandwidths from two or more analog modems, but require specialized hardware and a corresponding number of telephone lines.
ISDN (Integrated Services Digital Network) eliminates digital-to-analog or analog-to-digital conversion, and carries all data, including voice and fax, over a digital line. ISDN offers speeds of up to 64 kbps or 128 kbps. However, ISDN is limited in availability and requires the installation of ISDN lines, which may make it much costlier to implement than an analog modem.
Proprietary satellite dish formats offer higher speeds, but require hardware that can be costly, and still require an analog modem and a standard telephone line for uploads. Cable modems also offer more bandwidth, but require infrastructure upgrades not in place in most areas, and thus suffer from extremely limited availability.
xDSL (Digital Subscriber Line) technology also carries data completely in the digital domain, but has the advantage of using the existing twisted copper pair wiring used for standard telephone lines. xDSL thus does not require the installation of specialized lines, and can be easily implemented in older buildings. However, the local loop distance is limited, thus requiring expensive infrastructure upgrades such as fiber optic cable. Variants of xDSL include RADSL (Rate Adaptive), ADSL (Asymmetric), HDSL (High-bit-rate), VDSL (Very high-bit-rate), and SDSL (Symmetric).
Beyond these connection methods are high bandwidth solutions typically reserved for larger entities, such as T1 and T3 lines. These high bandwidth solutions may offer lower cost per unit bandwidth than the consumer-oriented solutions, but are prohibitively expensive for the typical consumer or small business. What is needed is a bandwidth pooling device which can pool data from multiple subscriber lines onto a high bandwidth back-end line. What is also needed is a bandwidth pooling device which is intelligent and can optimize utilization of the back-end line.
SUMMARY OF THE INVENTION
An object of the present invention is to provide a bandwidth pooling device which pools data from multiple subscriber lines onto one or more high bandwidth back-end lines.
Another object of the present invention is to provide a bandwidth pooling device which integrates a router which can directly route local traffic among the subscriber lines, thus optimizing utilization of the back-end line. Yet another object of the present invention is to provide a bandwidth pooling device which integrates a router and bandwidth shaping capabilities, which allows bandwidth shaping according to the condition of the front-end lines and the available back-end bandwidth, thus maximizing utilization of the back-end line. Still another object of the present invention is to provide a bandwidth pooling device which is easily connected to localized services such as an e-mail server, a proxy server, or a network computer server, which frees up back-end bandwidth and maximizes utilization of the back-end line.
These and other objects of the invention are achieved in a bandwidth pooling device for allowing a plurality of data lines to share at least one high bandwidth line.
The bandwidth pooling device comprises a plurality of front-end ports capable of being connected to the plurality of data lines, at least one back-end port capable of being connected to at least one high bandwidth line, and a microprocessor connected to the front-end ports and the back-end port. The microprocessor is capable of taking into account conditions at the front-end ports and the back-end port to perform a data routing function, a bandwidth shaping function, and a data concentrating function.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 shows a bandwidth pooling device of the present invention in a typical deployment environment.
FIG. 2A shows a simplified subsystem diagram of the bandwidth pooling device of the present invention.
FIG. 2B shows one embodiment of the bandwidth pooling device of the present invention. FIG. 2C shows another embodiment of the bandwidth pooling device of the present invention.
FIG. 3A shows a visual depiction of data flow from the subscriber lines to the back-end line, or upstream data flow.
FIG. 3B shows a visual depiction of data flow from the back-end line to the subscriber lines, or downstream data flow.
FIG. 3C shows a flow process diagram of a bandwidth shaping algorithm used in the bandwidth pooling device of the present invention.
DETAILED DESCRIPTION
In most data access configurations, the greatest cost and expense is associated with leasing the high bandwidth back-end line. Therefore, the bandwidth pooling device of the present invention seeks to maximize utilization of the back-end line by a combination of techniques, including minimizing unnecessary back-end traffic, efficiently allotting back-end bandwidth, and reducing the demand for back-end bandwidth.
The bandwidth pooling device of the present invention maximizes utilization of back-end bandwidth in several ways: (1) integrating routing functions into the device, which allows the device to select when and where routing takes place and directly route local traffic and data, thus minimizing unnecessary back-end traffic; (2) integrating bandwidth shaping functions into the device to provide varying levels of service to the subscribers, thus efficiently allotting back-end bandwidth; (3) providing for easy connectibility to readily localized services such as an e-mail server, a proxy server, or a network computer server, thus reducing demand for back-end bandwidth; and (4) integrating all of the above functions so that a synergistic interaction is created wherein each function can be performed based on an intimate knowledge of the activity and condition of the others.
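As a rough illustration of technique (1), the decision below keeps subscriber-to-subscriber and community-server traffic off the back-end line entirely, so only the remainder reaches the bandwidth shaping stage. The address plan, prefix, and function name are assumptions made for this sketch, not details taken from the patent.

```python
import ipaddress

# Hypothetical address plan: subscriber lines and the community local server share
# one prefix, so any destination inside it can be switched locally by the device.
LOCAL_PREFIX = ipaddress.ip_network("10.1.0.0/16")

def next_hop(dst_ip: str) -> str:
    """Decide where the pooling device forwards a packet."""
    if ipaddress.ip_address(dst_ip) in LOCAL_PREFIX:
        return "local"        # another subscriber line or the community local server
    return "back-end"         # shaped and queued for the high bandwidth back-end line

assert next_hop("10.1.3.7") == "local"          # e.g. mail to a community e-mail server
assert next_hop("198.51.100.20") == "back-end"  # general Internet traffic
```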
This synergistic interaction offers many advantages. First, it permits the bandwidth pooling device to be configured so that the routing takes place at the most advantageous location depending on the overall system configuration. For example, the availability of an e-mail server means routing should take place soon after the front-end, to avoid sending e-mail data to the bandwidth shaping function. Second, it optimizes bandwidth shaping because the bandwidth shaping algorithm can now take into account: (1) the front-end port configuration, such as the number of subscriber lines connected and the quality of service associated with each subscriber line; (2) the routing configuration; and (3) the aggregate bandwidth available at the back-end, depending on the number of back-end lines and amount of traffic in queue.
DSLAMs (Digital Subscriber Line Access Multiplexers) and other concentrators seek to multiplex data from multiple digital subscriber lines onto a high bandwidth line, thus taking advantage of high bandwidth line discounts available from data carrier providers. DSLAMs do not have routers, and thus multiplex all incoming data and send it out the high bandwidth line. This simplifies design, but wastes back-end bandwidth, especially if there is a significant amount of local traffic. The lack of a router also makes it costly and difficult to connect a DSLAM to readily localized services, such as an e-mail server, proxy server, or network computer server. Again, back-end bandwidth is wasted because additional bandwidth is required for these services to be provided at the other end of the high bandwidth line.
DSLAMs also do not have quality of service capabilities, and cannot perform bandwidth shaping. As a result, subscribers may experience bottlenecks or periods of severe congestion during which no service is available at all. Quality of service boxes are available which can be connected downstream of the DSLAM before the high bandwidth line, but these boxes cannot provide feedback to the DSLAM in order to provide optimized bandwidth shaping.
Referring now to the figures, FIGURE 1 shows a bandwidth pooling device 100 of the present invention in a typical deployment environment. Bandwidth pooling device 100 is connected to a plurality of subscriber lines 110 in a multi-tenant residential or office building. Subscriber lines 110 may be xDSL (Digital Subscriber Line) lines, using any variant of a single pair xDSL technology. xDSL technology delivers data to the subscriber on a single unshielded twisted copper pair line, or standard telephone wiring. This eliminates the need for major rewiring of buildings, especially older buildings, thus simplifying deployment. Subscriber lines 110 may use technologies other than xDSL, such as ISDN or LAN connections such as Ethernet. Subscriber lines 110 are connected to subscriber modems 120, which in turn may be connected to a subscriber computer 122, a subscriber router 124, or other subscriber equipment. Subscriber modems 120 are compatible with the front-end ports found on bandwidth pooling device 100. Subscriber modems 120 may be xDSL modems. Subscriber router 124 allows a LAN (Local Area Network) 126 or other subscriber data terminal equipment to be connected to bandwidth pooling device 100. Subscriber router 124 may filter out local traffic so that it does not get forwarded to subscriber modem 120, thus conserving bandwidth by minimizing the data sent to bandwidth pooling device 100.
Bandwidth pooling device 100 is also connected to at least one CSU/DSU (Channel Service Unit/Data Service Unit) 130. CSU/DSU 130 provides framing connectivity to the WAN leased line. CSU/DSU 130 may be connected to a T1/E1, T3/E3, frame relay, ATM, or other high bandwidth line. CSU/DSU 130 may be external and modular, so that bandwidth pooling device 100 can be easily connected and used with a wide variety of WAN interfaces. CSU/DSU 130 is in turn connected to a high bandwidth back-end line
140, such as a WAN leased line connection. Back-end line 140 may be a WAN leased line connection, such as T1/E1, T3/E3, frame relay, ATM, xDSL, and the like. Back-end line 140 is in turn connected to a telephone company network 150, CLEC (Competitive Local Exchange Carrier) network, or other data network. Telephone company network 150 provides connectivity services such as T1, T3, frame relay,
ATM, and the like.
Telephone company network 150 provides the connection between bandwidth pooling device 100 at the deployment site and an ISP (Internet Service Provider) POP (Point of Presence) or headquarters. Telephone company network 150 may be connected to an ISP router 160 or access system. ISP router 160 is the entry point for bandwidth pooling device 100 into the ISP domain, and eventually to the Internet or other WAN via the ISP's back-end connection. The ISP can monitor and configure bandwidth pooling device 100 using SNMP (Simple Network Management Protocol) over back-end line 140. Bandwidth pooling device 100 may have a management port 102.
Management port 102 provides a back-up method of monitoring and configuring bandwidth pooling device 100. Management port 102 can be connected to a management modem 142, which may be a V.34 analog modem that can be connected to telephone company network 150 through a POTS (Plain Old Telephone Service) line 144. Management port 102 may alternatively be connected to a console terminal, such as a laptop computer connected by a field technician. Bandwidth pooling device 100 offers support for more than one ISP, which allows multiple ISPs to use a single bandwidth pooling device 100 to connect their subscribers. This will offer the subscriber multiple ISP choices. In terms of network management, each ISP will be able to see and manage their own resources, but not those of others. The network may be managed by a master ISP, which has full control over the system resources. The master ISP may assign resources to one or more renter ISPs. Each renter ISP will then be allowed to control their own assigned resources but not those of other renter ISPs.
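The master/renter arrangement amounts to partitioning the device's resources with per-ISP visibility. The sketch below is one hypothetical way to model that partition (the class, method names, and port numbers are invented for the example); it is not the patent's mechanism.

```python
class ResourcePool:
    """Master ISP holds all resources and hands slices to renter ISPs."""
    def __init__(self, ports, master="master-isp"):
        self.master = master
        self.assignments = {master: set(ports)}    # the master starts with everything

    def assign(self, renter, ports):
        ports = set(ports)
        if not ports <= self.assignments[self.master]:
            raise ValueError("the master can only assign resources it still holds")
        self.assignments[self.master] -= ports
        self.assignments.setdefault(renter, set()).update(ports)

    def visible_to(self, isp):
        """Each ISP sees and manages only its own assigned resources."""
        return sorted(self.assignments.get(isp, set()))

pool = ResourcePool(ports=range(1, 11))            # ten front-end ports
pool.assign("renter-a", [1, 2, 3])
pool.assign("renter-b", [4, 5])
assert pool.visible_to("renter-a") == [1, 2, 3]    # renter A cannot see renter B's ports
assert pool.visible_to(pool.master) == [6, 7, 8, 9, 10]
```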
Bandwidth pooling device 100 may be connected to a community local network 170. Community local network 170 is shared among the subscriber community, and is deemed local to the community because the connection to it does not have to go through a WAN connection. Community local network 170 is connected directly to bandwidth pooling device 100 using a LAN connection such as 10/100BASE-T Ethernet. Community local network 170 may include a community local server 172. Community local server 172 may include shared server resources, such as a community Web server, Web proxy server, Web caching server, e-mail server, application server, or network computer server. Community local server 172 allows for high speed access to many different kinds of data while minimizing use of back-end line 140, thus conserving bandwidth.
FIGURE 2A shows a simplified subsystem diagram of bandwidth pooling device 100. Bandwidth pooling device 100 includes a front-end subsystem 210.
Front-end subsystem 210 includes a plurality of front-end ports 212, which provide connectivity to subscriber equipment. Front-end subsystem 210 may range from a dumb HDLC (High-level Data Link Control) framer in the case of a smaller capacity system, to a microprocessor-based, highly intelligent and robust, hot-swappable subsystem in a high capacity, highly reliable system.
Bandwidth pooling device 100 also includes a back-end subsystem 230. Back-end subsystem 230 provides data connectivity through a high speed connection to a WAN (Wide Area Network), LAN (Local Area Network), or local server. Back-end subsystem 230 may also be used as a system interconnect for stacking together more than one bandwidth pooling device 100.
Front-end subsystem 210 and back-end subsystem 230 are connected through a front-end bus 215 and a back-end bus 235, respectively, to a microprocessor subsystem 250. Microprocessor subsystem 250 is responsible for management of bandwidth pooling device 100. Microprocessor subsystem 250 performs major protocol and routing processing. Microprocessor subsystem 250 is a protocol engine and management agent responsible for processing the back-end link protocol and performing protocol conversion. Microprocessor subsystem 250 is also responsible for performing any management processing, and may further be responsible for performing any bandwidth shaping processing.
FIGURE 2B shows one embodiment of bandwidth pooling device 100 of the present invention. Front-end subsystem 210 includes a plurality of front-end ports 212 connected to a multiport HDLC controller 214. Front-end ports 212 may accommodate single unshielded twisted pair or telephone wire, using any variant of a single pair xDSL technology. xDSL technology is used to deliver data to the subscriber on a twisted copper pair line, or standard telephone wiring, which allows deployment without major rewiring of buildings, especially older buildings. This allows for deployment in multi-tenant residential or office buildings with standard telephone wiring. However, technologies other than xDSL may also be employed. Front-end ports 212 may include a hybrid and isolation device 216. Hybrid and isolation device 216 provides the analog amplification and filtering needed to support the analog front end. Hybrid and isolation device 216 also provides electrical isolation between front-end ports 212 and the subscriber line, thus preventing damage to front-end ports 212 if anything has been connected incorrectly. Front-end ports 212 may also include a chipset 218. Chipset 218 may be a DDP (Digital Data Pump) and AFE (Analog Front End) chip pair that provide the digital data pump and analog front end functionality of the xDSL subscriber line. These chips are responsible for encoding digital data into an analog signal and decoding it back. DDP and AFE are also responsible for synchronizing with subscriber equipment or modems. HDLC controller 214 handles the HDLC framing and packetization of serial data coming from the subscriber. HDLC controller 214 relieves microprocessor 252 by providing framing functions, packet-oriented interfacing, intelligent buffer management, and DMA (Direct Memory Access) functions.
Front-end bus 215 connects HDLC controller 214 to a microprocessor 252. Front-end bus 215 may range from the processor local bus in the case of a smaller capacity system with a dumb front-end subsystem, to a backplane oriented bus such as VME (Versa Module Eurocard) or PCI (Peripheral Component Interconnect) in a high-capacity system with an intelligent front-end subsystem.
Back-end subsystem 230 includes at least one back-end interface device 232 and at least one back-end port 234. Back-end interface device 232 allows a high speed serial connection to CSU/DSU 130. Back-end interface device 232 may use any one or more of a variety of protocols, including V.35, T1, T3, frame relay, ATM (Asynchronous Transfer Mode), Sonet, or Ethernet. Back-end interface device 232 may be a high speed PCI bus-mastering USART (Universal Synchronous/Asynchronous Receiver/Transmitter), such as a V.35 HDLC for use with
T1/E1 and low speed frame relay, HSSI (High Speed Serial Interface) HDLC for use with T3/E3 and high speed frame relay, or ATM for use with fiber and copper. Back-end interface device 232 may also be a 10/100BASE-T Ethernet interface for use with a local network or server. Back-end port 234 is the physical port or connector. For WAN or Internet connectivity, back-end port 234 can be a leased line type connection, such as T1, T3, frame relay, or ATM. For local network or server connectivity, back-end port 234 can be any suitable LAN port such as 10/100BASE-T Ethernet. Data routing from the back-end line into the appropriate subscriber line may be performed using IP (Internet Protocol) based and PPP (Point-to-Point Protocol) session based routing.
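The routing step named at the end of the preceding paragraph can be pictured as a lookup keyed either by destination IP address or by PPP session identifier. The table shape, addresses, and session identifiers below are assumptions made for the sketch, not details from the patent.

```python
# Bindings the device would learn as each subscriber's connection comes up
# (addresses and session identifiers here are made up for the example).
ip_to_port = {"10.1.3.7": 3, "10.1.5.2": 5}       # IP-based routing
ppp_session_to_port = {0x2A01: 7, 0x2A02: 8}      # PPP session-based routing

def front_end_port(dst_ip=None, ppp_session_id=None):
    """Choose the subscriber line for a packet arriving on the back-end line."""
    if ppp_session_id in ppp_session_to_port:
        return ppp_session_to_port[ppp_session_id]
    if dst_ip in ip_to_port:
        return ip_to_port[dst_ip]
    return None   # no binding: drop, or hand to a default/management path

assert front_end_port(dst_ip="10.1.3.7") == 3
assert front_end_port(ppp_session_id=0x2A02) == 8
assert front_end_port(dst_ip="192.0.2.1") is None
```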
Back-end bus 235 connects back-end interface device 232 to microprocessor 252. Back-end bus 235 may range from the processor local bus in the case of a smaller capacity system, to a high speed microprocessor bus such as a PCI bus in a higher capacity system. Microprocessor subsystem 250 may include an embedded microprocessor 252, which provides all general purpose computing power in bandwidth pooling device 100, including protocol processing, bandwidth shaping, and management agent functions. Microprocessor 252 may be an INTEL™ i960RD microprocessor running at 33 MHz external clock. Microprocessor 252 may be connected through a local bus 255 to at least one memory device 254. Memory device 254 may be a DRAM
(Dynamic Random Access Memory), from 4 MB to 16 MB in size. Memory device 254 may also be a non-volatile memory device such as a ROM (Read-Only Memory) or EEPROM (Electronically Erasable Programmable Read Only Memory). ROM may store up to 2 MB of data, such as system software, user configuration, or an event trace. System software is used to run bandwidth pooling device 100. User configuration information allows bandwidth pooling device 100 to be tailored and configured to the needs of the specific implementation. Event trace information may be used for management and diagnostic purposes.
Microprocessor 252 may also be connected through a management port interface device 256 to a management port 258. Management of microprocessor subsystem 250 is normally performed via SNMP (Simple Network Management
Protocol) over the back-end line. However, management port 258 is an out-of-band management port which ensures access to bandwidth pooling device 100 when back-end line 140 is not available or bandwidth pooling device 100 is not functioning properly. Management port 258 may be an RS232 asynchronous serial port that can be connected to a console terminal or modem for dial-up management access by a remote ISP, NSP (Network Service Provider), or other administrator. This provides a high degree of remote system maintenance availability, because it does not depend on the proper functioning of bandwidth pooling device 100 in order to allow remote configuration. Management port interface device 256 may be a UART (Universal Asynchronous Receiver/Transmitter) or other interface chip which can control management port 258.
FIGURE 2C shows two bandwidth pooling devices 100 stacked together to accommodate a greater number of subscriber lines. A multidevice controller 270 controls the group of bandwidth pooling devices 100. Multidevice controller 270 includes a master microprocessor 272. Master microprocessor 272 may be an
INTEL™ i960RD microprocessor running at 66 MHz. Master microprocessor 272 may be connected through a local bus 255 to memory devices 254, management port interface device 256, management port 258, and a storage device controller 260. Local bus 255 handles only data that is local to master microprocessor 272, thus minimizing the impact of data traffic on it. Storage device controller 260 allows a storage device to be connected to bandwidth pooling device 100 in order to perform functions such as configuration backup, configuration restoration, and code loading. The storage device may be a fixed device, such as a hard drive, or a removable device, such as a 3.5" high-density floppy disk drive. An additional bus 265, such as an I2C bus, may also interconnect multiple bandwidth pooling devices 100 for control information exchange.
Master microprocessor 272 may be connected through a mezzanine bus 275 to a shared memory module 280, a back-end interface device 232, and a back-end port 234. Mezzanine bus 275 may be a PCI bus. Shared memory module 280 is an additional buffer memory off mezzanine bus 275. Network data traffic can be buffered into shared memory module 280, eliminating the need to transfer this data into memory devices 254 and off-loading local bus 255. Shared memory module 280 thus decouples local bus 255 from data traffic in high capacity systems.
Master microprocessor 272 is connected through a system bus 295 to the microprocessors 252 of two or more bandwidth pooling devices 100. Although only two are shown, a greater number of bandwidth pooling devices 100 may be stacked to accommodate even more subscriber lines. By having multiple bandwidth pooling devices 100 working as a single unit, more subscribers may be accommodated.
FIGURE 3A shows a visual depiction of data flow from the subscriber lines to the back-end line, or upstream data flow. Each subscriber line has a QoS (Quality of Service) associated with it. The QoS includes an assigned rate and a level of service, such as CBR (Constant Bit Rate), VBR (Variable Bit Rate), or UBR (Unspecified Bit Rate). CBR is a fixed data rate, for example 64 Kbps, 1x ISDN, or 2x ISDN. VBR has a minimum data rate requirement, but can burst higher when data bandwidth is available. UBR is similar to VBR but without the minimum data rate requirement. Each subscriber line also has a port quota associated with it. The port quota is the amount of data bandwidth a subscriber line can use.
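For illustration only, the per-line QoS parameters described above could be represented as a small configuration table. The sketch below is an assumption: the line numbers, rates, and field names are invented for the example and are not taken from the specification.

```python
# Hypothetical per-line QoS configuration (all names and values are illustrative).
qos_config = {
    # subscriber line: (level of service, assigned rate in Kbps)
    1: ("CBR", 512),   # constant bit rate: fixed 512 Kbps
    3: ("VBR", 256),   # variable bit rate: at least 256 Kbps, may burst higher
    5: ("UBR", 0),     # unspecified bit rate: no minimum, best effort
}

# Port quota: the amount of bandwidth each line may use in the current period (Kbps).
port_quota = {1: 512, 3: 256, 5: 64}
```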
Data received from each subscriber line is deposited in its own FEUpQ (Front-End Upstream Queue) 302. A FERcvISR (Front-End Receive Interrupt Service Routine) 304 picks up data from each FEUpQ 302 and deposits it in either a corresponding FEUpHoldingQ (Front-End Upstream Holding Queue) 306 or a BEUpQ (Back-End Upstream Queue) 310, depending on the QoS associated with the subscriber line and the port quota. An upstream bandwidth control 308 adheres to the QoS configuration in moving data from FEUpHoldingQ 306 to BEUpQ 310. Data in BEUpQ 310 is finally transmitted over the back-end line.

FIGURE 3B shows a visual depiction of data flow from the back-end line to the subscriber lines, or downstream data flow. When data is received through the back-end line, it is placed in a BEDnQ (Back-End Downstream Queue) 312. A BERcvISR (Back-End Receive Interrupt Service Routine) 314 will take data from BEDnQ 312 and route it to a FEDnHoldingQ (Front-End Downstream Holding Queue) 316 or a FEDnQ (Front-End Downstream Queue) 320 associated with the appropriate subscriber line, based on routing criteria such as IP or PPP routing information. A downstream bandwidth control 318 controls the speed at which data is transmitted through the subscriber line by controlling the amount of data transferred from FEDnHoldingQ 316 to FEDnQ 320. It does this by dynamically adjusting the port quota setting periodically. By controlling the flow of data at this point, downstream bandwidth control 318 creates back pressure on the original sender. Data in FEDnQ 320 is finally transmitted over the subscriber line.
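The upstream path of FIGURE 3A can be sketched in simplified, single-threaded form as below. Only the queue names (FEUpQ, FEUpHoldingQ, BEUpQ) and the quota-based decision come from the description; the byte-counted quota and the draining behavior are assumptions. In the device the ISRs run asynchronously per interface, and the downstream path of FIGURE 3B mirrors this structure, with the holding queue providing the back pressure described above.

```python
from collections import deque

class UpstreamPath:
    """Simplified sketch of FIGURE 3A; not the device's actual implementation."""

    def __init__(self, lines):
        self.fe_up_q = {n: deque() for n in lines}          # FEUpQ 302, one per subscriber line
        self.fe_up_holding_q = {n: deque() for n in lines}  # FEUpHoldingQ 306
        self.be_up_q = deque()                               # BEUpQ 310, feeds the back-end line
        self.port_quota = {n: 0 for n in lines}              # bytes each line may still send (assumed)

    def fe_rcv_isr(self, line):
        """FERcvISR 304: move data received on one subscriber line toward the back end."""
        while self.fe_up_q[line]:
            pkt = self.fe_up_q[line].popleft()
            if self.port_quota[line] >= len(pkt):            # within quota: forward immediately
                self.port_quota[line] -= len(pkt)
                self.be_up_q.append(pkt)
            else:                                            # over quota: park in the holding queue
                self.fe_up_holding_q[line].append(pkt)

    def upstream_bandwidth_control(self, line, new_quota):
        """Bandwidth control 308: refresh the quota each period, then drain held packets."""
        self.port_quota[line] = new_quota
        held = self.fe_up_holding_q[line]
        while held and self.port_quota[line] >= len(held[0]):
            pkt = held.popleft()
            self.port_quota[line] -= len(pkt)
            self.be_up_q.append(pkt)
```

Calling upstream_bandwidth_control once per shaping period, with the quota produced by the algorithm of FIGURE 3C, would tie the two figures together.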
FIGURE 3C shows a flow process diagram of a bandwidth shaping algorithm used in bandwidth pooling device 100 of the present invention. The bandwidth shaping algorithm may be a multi-pass predictive QoS-based queuing algorithm. Two initial passes are done to distribute bandwidth among the active ports based on their configured rate. The bandwidth usage can be further optimized by performing an additional pass to adjust the bandwidth based on the port data flow history and to redistribute unused bandwidth to busier ports. This additional pass can be done repeatedly to achieve higher bandwidth utilization. The number of passes is a tradeoff between computing power requirements and bandwidth utilization. For best results, at least two passes should be made. A three pass scheme offers a very good tradeoff, provided there is enough computing power to accommodate it. One of the features of this bandwidth shaping algorithm is its low latency. Due to its predictive nature, bandwidth control decisions can be made at the time a packet is received, allowing bandwidth pooling device 100 to process and potentially send the packet immediately, thus minimizing the packet holding time.
The bandwidth shaping algorithm includes the instructions: (1) obtain the QoS for each port that was active during the last period (block 322); (2) create a distribution of the aggregate available bandwidth to all active ports, based on QoS (block 324); (3) reclaim any unused bandwidth during the last period for each port (block 326); (4) calculate the redistribution of reclaimed bandwidth to ports that need it, based on their QoS (block 328); and (5) assign the final calculated bandwidth distribution to the ports, to be used during the next period (block 330).
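The five steps above can be sketched as a single function. This is a minimal reconstruction inferred from the two worked examples that follow; the Port fields, the rule for selecting which ports share leftover bandwidth, and the 64 Kbps unit size are assumptions, not details stated by the specification.

```python
from dataclasses import dataclass

CBR, VBR, UBR = "CBR", "VBR", "UBR"

@dataclass
class Port:
    service: str    # CBR, VBR, or UBR
    rate: float     # guaranteed rate in Kbps (unused for UBR)
    used: float     # bandwidth actually used during the last period, in Kbps
    queued: bool    # True if the port still has packets in its waiting queue

def shape(ports, total_kbps, unit_kbps=64.0):
    """Return each port's bandwidth for the next period (steps 1-5, blocks 322-330)."""
    # Steps 1-2: give each active CBR/VBR port its guaranteed rate, capped at its demand.
    quota = {}
    for name, p in ports.items():
        if p.service in (CBR, VBR):
            demand = p.rate if p.queued else p.used
            quota[name] = min(p.rate, demand)
        else:
            quota[name] = 0.0

    # Still step 2: split the leftover bandwidth, in units of unit_kbps, among VBR ports
    # that have packets queued and all active UBR ports.
    remaining = total_kbps - sum(quota.values())
    weights = {}
    for name, p in ports.items():
        if p.service == VBR and p.queued:
            weights[name] = p.rate / unit_kbps
        elif p.service == UBR and (p.queued or p.used > 0):
            weights[name] = 1.0
    if weights and remaining > 0:
        per_unit = remaining / sum(weights.values())
        for name, units in weights.items():
            quota[name] += units * per_unit

    # Steps 3-5: reclaim what unqueued ports did not use, then hand it back to the
    # ports that still have packets waiting, weighted the same way.
    reclaimed = 0.0
    for name, p in ports.items():
        if not p.queued and quota[name] > p.used:
            reclaimed += quota[name] - p.used
            quota[name] = p.used
    needy = {n: w for n, w in weights.items() if ports[n].queued}
    if needy and reclaimed > 0:
        per_unit = reclaimed / sum(needy.values())
        for name, units in needy.items():
            quota[name] += units * per_unit
    return quota
```

The same sketch is run against both examples after their final tables below.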
The following examples show bandwidth distribution using the bandwidth shaping algorithm.
EXAMPLE 1

In this first example, there are 10 subscriber lines, most of which are running below capacity:
Port   Activity
1      using 256 Kbps
2      idle
3      needs up to 1 Mbps, has packet in its waiting queue
4      using 128 Kbps
5      active at up to 1 Mbps, has packet in its waiting queue
6      active at up to 1 Mbps, has packet in its waiting queue
7      active at 64 Kbps, no packets in its waiting queue
8      active at 64 Kbps, no packets in its waiting queue
9      active at 64 Kbps, no packets in its waiting queue
10     active at 64 Kbps, no packets in its waiting queue
1. Get the QoS for each port.
Port   QoS
1      CBR, 512 Kbps (fixed)
2      CBR, 256 Kbps (fixed)
3      VBR, 256 Kbps (minimum)
4      VBR, 256 Kbps (minimum)
5      UBR (no minimum)
6      UBR (no minimum)
7      UBR (no minimum)
8      UBR (no minimum)
9      UBR (no minimum)
10     UBR (no minimum)
2. Distribute the aggregate available bandwidth to all active ports, based on their QoS.
First, distribute the guaranteed bandwidth for the active CBR and VBR ports. If any of the CBR or VBR ports are using less than the guaranteed bandwidth, allot only the amount being used:
Port   Distribution
1      256 Kbps
2      0
3      256 Kbps
4      128 Kbps
5      0
6      0
7      0
8      0
9      0
10     0
Second, calculate the remaining bandwidth. Here, the total available bandwidth is 1.5 Mbps, a T1 line. The remaining bandwidth is thus 860 Kbps (1.5 Mbps - 256 Kbps - 256 Kbps - 128 Kbps). Third, distribute this remaining bandwidth to the VBR and UBR ports. Here, these are Ports 3 and 5-10. Each port is assigned bandwidth units based on an arbitrary quantity, 64 Kbps in this example. Port 3 (256 Kbps VBR) is thus assigned 4 units (256 Kbps / 64 Kbps). Ports 5-10 (UBR) are assigned 1 unit each. This gives a total of 10 units which will share the remaining bandwidth. Each unit will therefore receive 86 Kbps (860 Kbps / 10); a short check of this arithmetic follows the table below. So the distribution becomes:
Port   Distribution
1      256 Kbps
2      0 Kbps
3      600 Kbps
4      128 Kbps
5      86 Kbps
6      86 Kbps
7      86 Kbps
8      86 Kbps
9      86 Kbps
10     86 Kbps
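As noted above, the step-2 arithmetic can be checked directly (treating 1.5 Mbps as 1500 Kbps, as the example itself does):

```python
# Step 2 of Example 1, all values in Kbps.
remaining = 1500 - 256 - 256 - 128              # 860 Kbps left after the first pass
units = 4 + 6                                   # port 3 holds 4 units, ports 5-10 one each
per_unit = remaining / units                    # 86.0 Kbps per unit
print(remaining, per_unit, 256 + 4 * per_unit)  # 860 86.0 600.0 (port 3's total)
```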
3. Reclaim unused bandwidth.
Ports 7-10 have been allotted 86 Kbps each, but are only using 64 Kbps. The reclaimed bandwidth is equal to 88 Kbps (4 x 22 Kbps).
4. Distribute reclaimed bandwidth to the VBR and UBR ports that need bandwidth.
Here, these are Ports 3, 5, and 6. Port 3 (256 Kbps VBR with packet in its waiting queue) will receive 4 units. Ports 5 and 6 (UBR with packets in their waiting queues) will receive 1 unit each. This gives a total of 6 units which will share the reclaimed bandwidth. Each unit will therefore receive an additional 14.7 Kbps (88 Kbps / 6). Therefore, Port 3 receives an additional 58.7 Kbps (4 x 14.7 Kbps), and Ports 5 and 6 receive an additional 14.7 Kbps each.
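The reclaim and redistribution arithmetic of steps 3 and 4 works out as follows (values in Kbps):

```python
reclaimed = 4 * (86 - 64)                                     # ports 7-10 each hand back 22 Kbps -> 88
per_unit = reclaimed / (4 + 1 + 1)                            # port 3 holds 4 units, ports 5 and 6 one each
print(reclaimed, round(4 * per_unit, 1), round(per_unit, 1))  # 88 58.7 14.7
```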
5. Assign the final distribution results.
Port   Distribution
1      256 Kbps
2      0 Kbps
3      658.7 Kbps
4      128 Kbps
5      100.7 Kbps
6      100.7 Kbps
7      64 Kbps
8      64 Kbps
9      64 Kbps
10     64 Kbps
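Running Example 1 through the shape sketch given after the step list reproduces this table up to rounding; it reuses the Port class and shape function defined there. The used values for the queued ports (3, 5, and 6) are placeholders, since they do not affect the result while packets are waiting.

```python
example1 = {
    1: Port(CBR, 512, used=256, queued=False),
    2: Port(CBR, 256, used=0, queued=False),
    3: Port(VBR, 256, used=256, queued=True),
    4: Port(VBR, 256, used=128, queued=False),
    5: Port(UBR, 0, used=86, queued=True),
    6: Port(UBR, 0, used=86, queued=True),
    7: Port(UBR, 0, used=64, queued=False),
    8: Port(UBR, 0, used=64, queued=False),
    9: Port(UBR, 0, used=64, queued=False),
    10: Port(UBR, 0, used=64, queued=False),
}
print(shape(example1, total_kbps=1500.0))
# roughly {1: 256, 2: 0, 3: 658.7, 4: 128, 5: 100.7, 6: 100.7, 7: 64, 8: 64, 9: 64, 10: 64}
```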
EXAMPLE 2
In this second example, there are 10 subscriber lines, most of which are running at their capacity:
Port   Activity
1      using 384 Kbps
2      using 256 Kbps
3      needs up to 1 Mbps, has packet in its waiting queue
4      needs up to 512 Kbps, has packet in its waiting queue
5      active at up to 1 Mbps, has packet in its waiting queue
6      active at up to 1 Mbps, has packet in its waiting queue
7      active at 64 Kbps, no packets in its waiting queue
8      active at 64 Kbps, no packets in its waiting queue
9      active at 64 Kbps, no packets in its waiting queue
10     active at 64 Kbps, no packets in its waiting queue
1. Get the QoS for each port.

The port configurations are the same as in the first example:
Port   QoS
1      CBR, 512 Kbps (fixed)
2      CBR, 256 Kbps (fixed)
3      VBR, 256 Kbps (minimum)
4      VBR, 256 Kbps (minimum)
5      UBR (no minimum)
6      UBR (no minimum)
7      UBR (no minimum)
8      UBR (no minimum)
9      UBR (no minimum)
10     UBR (no minimum)
2. Distribute the aggregate available bandwidth to all active ports, based on their QoS.

First, distribute the guaranteed bandwidth for the active CBR and VBR ports. If any of the CBR or VBR ports are using less than the guaranteed bandwidth, allot only the amount being used:
Port   Distribution
1      384 Kbps
2      256 Kbps
3      256 Kbps
4      256 Kbps
5      0
6      0
7      0
8      0
9      0
10     0
Second, calculate the remaining bandwidth. Here, the total available bandwidth is 1.5 Mbps, a T1 line. The remaining bandwidth is thus 348 Kbps (1.5 Mbps - 384 Kbps - 256 Kbps - 256 Kbps - 256 Kbps). Third, distribute this remaining bandwidth to the VBR and UBR ports. Here, these are Ports 3-10. Each port is assigned bandwidth units based on an arbitrary quantity, 64 Kbps in this example. Ports 3 and 4 (256 Kbps VBR) are thus assigned 4 units (256 Kbps / 64 Kbps). Ports 5-10 (UBR) are assigned 1 unit each. This gives a total of 14 units which will share the remaining bandwidth. Each unit will therefore receive approximately 24.9 Kbps (348 Kbps / 14); a short check of this arithmetic follows the table below. So the distribution becomes:
Port   Distribution
1      384 Kbps
2      256 Kbps
3      355.2 Kbps
4      355.2 Kbps
5      24.9 Kbps
6      24.9 Kbps
7      24.9 Kbps
8      24.9 Kbps
9      24.9 Kbps
10     24.9 Kbps
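The per-unit arithmetic referenced before the table can be checked the same way as in Example 1; the tables' 355.2 Kbps for Ports 3 and 4 reflects a slightly coarser per-unit rounding.

```python
# Step 2 of Example 2, all values in Kbps.
remaining = 1500 - 384 - 256 - 256 - 256                 # 348 Kbps left after the first pass
per_unit = remaining / 14                                # 4 units each for ports 3-4, 1 each for 5-10
print(round(per_unit, 1), round(256 + 4 * per_unit, 1))  # 24.9 and 355.4
```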
3. Reclaim unused bandwidth.
The reclaimed bandwidth is equal to zero.
4. Distribute reclaimed bandwidth to the VBR and UBR ports that need bandwidth.
In this case, this step is not necessary.
5. Assign the final distribution results.
Port   Distribution
1      384 Kbps
2      256 Kbps
3      355.2 Kbps
4      355.2 Kbps
5      24.9 Kbps
6      24.9 Kbps
7      24.9 Kbps
8      24.9 Kbps
9      24.9 Kbps
10     24.9 Kbps
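Run through the same shape sketch, Example 2 behaves as the table shows: the guarantees are honored, nothing is reclaimed, and the leftover 348 Kbps is split 14 ways. The port states below are assumptions consistent with the activity list.

```python
example2 = {
    1: Port(CBR, 512, used=384, queued=False),
    2: Port(CBR, 256, used=256, queued=False),
    3: Port(VBR, 256, used=256, queued=True),
    4: Port(VBR, 256, used=256, queued=True),
    **{n: Port(UBR, 0, used=64, queued=(n in (5, 6))) for n in range(5, 11)},
}
print(shape(example2, total_kbps=1500.0))  # guarantees kept; each UBR port gets ~24.9 Kbps
```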
As the two examples above show, the bandwidth shaping algorithm distributes any unused bandwidth among the ports in proportion to their QoS. However, as the bandwidth demand increases, the bandwidth shaping algorithm maintains the bandwidth guaranteed to each port.
The foregoing description of a preferred embodiment of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in this art.
What is claimed is:


CLAIMS

1. A bandwidth pooling device for allowing a plurality of subscriber lines to share at least one shared line, comprising:
a routing device capable of routing data streams between the plurality of subscriber lines and at least one shared line;
a bandwidth shaping device capable of modifying a data rate associated with each subscriber line based on a usage level associated with each subscriber line and a device configuration; and
a control device connected to the routing device, the data concentrating device, and the bandwidth shaping device, the control device capable of controlling the routing device, the data concentrating device, and the bandwidth shaping device.
2. The bandwidth pooling device of claim 1, wherein the device configuration includes the number of subscriber lines.
3. The bandwidth pooling device of claim 1, wherein the device configuration includes a guaranteed data rate and a level of service associated with each subscriber line.

4. The bandwidth pooling device of claim 1, wherein the device configuration includes a routing configuration.

5. The bandwidth pooling device of claim 1, wherein the device configuration includes an available bandwidth at the shared line.

6. The bandwidth pooling device of claim 1, wherein the device configuration includes a local server configuration.

7. The bandwidth pooling device of claim 1, further comprising: at least one local server connected to the control device.
8. The bandwidth pooling device of claim 7, wherein the local server includes a Web server.
9. The bandwidth pooling device of claim 7, wherein the local server includes a Web proxy server.
10. The bandwidth pooling device of claim 7, wherein the local server includes a Web caching server.
11. The bandwidth pooling device of claim 7, wherein the local server includes an e-mail server.
12. The bandwidth pooling device of claim 7, wherein the local server includes an application server.
13. The bandwidth pooling device of claim 7, wherein the local server includes a network computer server.
14. A bandwidth pooling device for allowing a plurality of subscriber lines to share at least one shared line, comprising:
a plurality of front-end ports capable of being connected to the plurality of subscriber lines;
at least one back-end port capable of being connected to at least one shared line; and
a microprocessor connected to the front-end ports and the back-end port, the microprocessor capable of performing a data routing function, a bandwidth shaping function, and a data concentrating function according to a device configuration.
15. The bandwidth pooling device of claim 14, wherein the device configuration includes the number of subscriber lines.
16. The bandwidth pooling device of claim 14, wherein the device configuration includes a quality of service associated with each subscriber line.
17. The bandwidth pooling device of claim 16, wherein the quality of service includes a guaranteed data rate and a level of service.
18. The bandwidth pooling device of claim 14, wherein the device configuration includes a routing configuration.
19. The bandwidth pooling device of claim 14, wherein the device configuration includes an available bandwidth at the shared line.
20. The bandwidth pooling device of claim 14, wherein the device configuration includes a local community network configuration.
21. The bandwidth pooling device of claim 14, further comprising: a management port connected to the microprocessor, the management port capable of setting the device configuration.
22. The bandwidth pooling device of claim 14, further comprising: a community local network connected to the microprocessor.
23. The bandwidth pooling device of claim 22, wherein the community local network includes a Web server.
24. The bandwidth pooling device of claim 22, wherein the community local network includes a Web proxy server.
25. The bandwidth pooling device of claim 22, wherein the community local network includes a Web caching server.
26. The bandwidth pooling device of claim 22, wherein the community local network includes an e-mail server.
27. The bandwidth pooling device of claim 22, wherein the community local network includes an application server.
28. The bandwidth pooling device of claim 22, wherein the community local network includes a network computer server.
29. A method of allowing a plurality of subscriber lines to share at least one shared line, comprising:
receiving a plurality of data streams from the plurality of subscriber lines, the plurality of subscriber lines associated with a plurality of data rates;
modifying the plurality of data rates based on a quality of service and a usage level associated with each subscriber line;
concentrating the plurality of data streams according to the modified data rates to create a concentrated data stream; and
transmitting the concentrated data stream over the shared line.
30. The method of claim 29, wherein modifying includes:
distributing an available bandwidth associated with the shared line among the subscriber lines based upon a quality of service associated with each subscriber line;
reclaiming an unused bandwidth based on a usage level associated with each subscriber line; and
redistributing the unused bandwidth among the subscriber lines based on the usage level associated with each subscriber line.
31. The method of claim 30, wherein the quality of service includes a guaranteed data rate and a level of service.
32. A method of distributing an available bandwidth among a plurality of subscriber lines, comprising:
obtaining a usage activity, a guaranteed bandwidth, and a level of service associated with each subscriber line;
distributing the lesser of the usage activity and the guaranteed bandwidth to each subscriber line;
reclaiming an unused bandwidth by calculating the difference between the available bandwidth and a bandwidth distributed to the subscriber lines; and
redistributing the unused bandwidth among the subscriber lines according to the usage activity and the level of service associated with each subscriber line.
33. The method of claim 32, wherein the usage activity is an amount of bandwidth used during a previous period of time.
34. The method of claim 32, wherein the level of service is a constant data rate.
35. The method of claim 32, wherein the level of service is a variable data rate.
36. The method of claim 32, wherein the level of service is an unspecified data rate.
PCT/US1999/006301 1998-03-31 1999-03-25 Method and device for bandwidth pooling WO1999051001A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US5244298A 1998-03-31 1998-03-31
US09/052,442 1998-03-31

Publications (1)

Publication Number Publication Date
WO1999051001A1 true WO1999051001A1 (en) 1999-10-07

Family

ID=21977636

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/006301 WO1999051001A1 (en) 1998-03-31 1999-03-25 Method and device for bandwidth pooling

Country Status (1)

Country Link
WO (1) WO1999051001A1 (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0405830A2 (en) * 1989-06-30 1991-01-02 AT&T Corp. Fully shared communications network
US5051982A (en) * 1989-07-27 1991-09-24 Data General Corporation Methods and apparatus for implementing switched virtual connections (SVCs) in a digital communications switching system
US5463629A (en) * 1992-07-13 1995-10-31 Ko; Cheng-Hsu Dynamic channel allocation method and system for integrated services digital network
US5574861A (en) * 1993-12-21 1996-11-12 Lorvig; Don Dynamic allocation of B-channels in ISDN
WO1995024802A1 (en) * 1994-03-09 1995-09-14 British Telecommunications Public Limited Company Bandwidth management in a switched telecommunications network
WO1997035410A1 (en) * 1996-03-18 1997-09-25 General Instrument Corporation Dynamic bandwidth allocation for a communication network
EP0881808A2 (en) * 1997-05-30 1998-12-02 Sun Microsystems, Inc. Latency-reducing bandwidth-prioritization for network servers and clients

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001093600A2 (en) * 2000-05-31 2001-12-06 Telefonaktiebolaget L M Ericsson (Publ) Session dispatcher at a wireless multiplexer interface
WO2001093600A3 (en) * 2000-05-31 2002-03-28 Ericsson Telefon Ab L M Session dispatcher at a wireless multiplexer interface
US6807178B1 (en) 2000-05-31 2004-10-19 Telefonaktiebolaget Lm Ericsson (Publ) Session dispatcher at a wireless multiplexer interface
WO2002067513A1 (en) 2001-02-16 2002-08-29 Telefonaktiebolaget L M Ericsson (Publ) Method and arrangement for bandwidth shaping in data communications system
US20130142040A1 (en) * 2011-12-05 2013-06-06 Todd Fryer Pooling available network bandwidth from multiple devices
US8971180B2 (en) * 2011-12-05 2015-03-03 At&T Intellectual Property I, L.P. Pooling available network bandwidth from multiple devices
US9491093B2 (en) 2012-07-31 2016-11-08 At&T Intellectual Property I, L.P. Distributing communication of a data stream among multiple devices
US9444726B2 (en) 2012-07-31 2016-09-13 At&T Intellectual Property I, L.P. Distributing communication of a data stream among multiple devices
US9356980B2 (en) 2012-07-31 2016-05-31 At&T Intellectual Property I, L.P. Distributing communication of a data stream among multiple devices
US9973556B2 (en) 2012-07-31 2018-05-15 At&T Intellectual Property I, L.P. Distributing communication of a data stream among multiple devices
US10142384B2 (en) 2012-07-31 2018-11-27 At&T Intellectual Property I, L.P. Distributing communication of a data stream among multiple devices
US10237315B2 (en) 2012-07-31 2019-03-19 At&T Intellectual Property I, L.P. Distributing communication of a data stream among multiple devices
US10560503B2 (en) 2012-07-31 2020-02-11 At&T Intellectual Property I, L.P. Distributing communication of a data stream among multiple devices
US10693932B2 (en) 2012-07-31 2020-06-23 At&T Intellectual Property I, L.P. Distributing communication of a data stream among multiple devices
US11063994B2 (en) 2012-07-31 2021-07-13 At&T Intellectual Property I, L.P. Distributing communication of a data stream among multiple devices
US11412018B2 (en) 2012-07-31 2022-08-09 At&T Intellectual Property I, L.P. Distributing communication of a data stream among multiple devices

Similar Documents

Publication Publication Date Title
EP1421744B1 (en) Dynamic traffic bandwidth management system and method for a communication network
US6181694B1 (en) Systems and methods for multiple mode voice and data communciations using intelligently bridged TDM and packet buses
US8144729B2 (en) Systems and methods for multiple mode voice and data communications using intelligently bridged TDM and packet buses
US6870834B1 (en) Communication server apparatus providing XDSL services and method
US6477595B1 (en) Scalable DSL access multiplexer with high reliability
US7626981B2 (en) Systems and methods for TDM/packet communications using telephony station cards including voltage generators
US20020095498A1 (en) Network architecture for multi-client units
US7142590B2 (en) Method and system for oversubscribing a DSL modem
EP1851976A1 (en) Communication link bonding apparatus and methods
US7142591B2 (en) Method and system for oversubscribing a pool of modems
MXPA02003528A (en) System and method for providing pots services in dsl environment in event of failures.
WO1999051001A1 (en) Method and device for bandwidth pooling
MXPA02003525A (en) System and method for providing voice and or data services.
KR100399575B1 (en) Optimal Resource Allocation of the Gateway
WO2001031969A1 (en) Ethernet edge switch for cell-based networks
US20020080955A1 (en) Method of optimizing equipment utilization in telecommunication access network
KR100314582B1 (en) Method of managing control messages for extending the number of subscriber boards of digital subscriber line access multiplexor

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA CN JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase