US8484357B2 - Communication in multiprocessor using proxy sockets - Google Patents

Communication in multiprocessor using proxy sockets

Info

Publication number
US8484357B2
Authority
US
United States
Prior art keywords
processor
communication
socket
application
communication resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US13/232,382
Other versions
US20120005350A1
Inventor
George Shin
Richard Brame
Michael Jacobson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP
Priority to US13/232,382
Publication of US20120005350A1
Application granted
Publication of US8484357B2
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP. Assignment of assignors interest (see document for details). Assignors: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2089 Redundant storage control functionality

Definitions

  • the described subject matter relates to electronic computing, and more particularly to systems and methods for implementing proxy sockets to facilitate communication in a multiprocessor computing system.
  • each processor is capable of managing its own network connectivity operations.
  • each physical network interface is bound only to the local communication stack in the multi-processor system.
  • a communication stack that is not bound to a particular network interface does not have direct access to the data link layer supplied by that interface.
  • an application-level service may find that not all network interfaces in a multiprocessor environment are available for establishing a communication endpoint because the local network stack cannot bind to all of the interfaces in the multi-processor system. In other words, not all of the network interfaces are available to all application layer services present in a multi-processor system.
  • a first processor receives a request to provide a communication resource for an application executing on the first processor to communicate with a remote application.
  • the first processor opens a communication resource on a second processor, and implements communication operations between the application executing on the first processor and the remote application using the communication resource on the second processor.
  • FIG. 1 is a schematic illustration of an exemplary implementation of a data storage system that implements RAID storage
  • FIG. 2 is a schematic illustration of an exemplary implementation of a RAID controller in more detail
  • FIG. 3 is a schematic illustration of an exemplary multi-processor communication architecture
  • FIGS. 4A-4B are flowcharts illustrating operations in an exemplary method for creating a socket for communication in a multiprocessor system.
  • FIG. 5 is a flowchart illustrating additional exemplary socket operations
  • FIG. 6 is a schematic illustration of exemplary data structures for managing proxy socket based multi-processor communication
  • FIG. 7 is a flowchart that illustrates a socket I/O sequence for a circumstance in which a main processor acts as a communication server for an external application;
  • FIGS. 8A-8B are a flowchart illustrating an exemplary socket I/O sequence for multiprocessor communication in which the main processor acts as a communication server for a remote application, but communicates using a proxy socket managed by the main processor that has a real socket equivalent managed by the co-processor; and
  • FIGS. 9A-9B are a flowchart illustrating an exemplary socket I/O sequence for multiprocessor communication in which the main processor acts as a communication client for a remote application, and communicates using a proxy socket managed by the co-processor.
  • Described herein are exemplary architectures and techniques for implementing proxy sockets in a multi-processor computing system.
  • the methods described herein may be embodied as logic instructions on a computer-readable medium, firmware, or as dedicated circuitry. When executed on a processor, the logic instructions (or firmware) cause a processor to be programmed as a special-purpose machine that implements the described methods.
  • the processor when configured by the logic instructions (or firmware) to execute the methods recited herein, constitutes structure for performing the described methods.
  • FIG. 1 is a schematic illustration of an exemplary implementation of a data storage system 100 .
  • the data storage system 100 has a disk array with multiple storage disks 130 a - 130 f , a disk array controller module 120 , and a storage management system 110 .
  • the disk array controller module 120 is coupled to multiple storage disks 130 a - 130 f via one or more interface buses, such as a small computer system interface (SCSI) bus.
  • the storage management system 110 is coupled to the disk array controller module 120 via one or more interface buses. It is noted that the storage management system 110 can be embodied as a separate component (as shown), or within the disk array controller module 120 , or within a host computer.
  • data storage system 100 may implement RAID (Redundant Array of Independent Disks) data storage techniques.
  • RAID storage systems are disk array systems in which part of the physical storage capacity is used to store redundant data.
  • RAID systems are typically characterized as one of six architectures, enumerated as RAID 0 through RAID 5.
  • a RAID 0 architecture is a disk array system that is configured without any redundancy. Since this architecture is really not a redundant architecture, RAID 0 is often omitted from a discussion of RAID systems.
  • a RAID 1 architecture involves storage disks configured according to mirror redundancy. Original data is stored on one set of disks and a duplicate copy of the data is kept on separate disks.
  • the RAID 2 through RAID 5 architectures involve parity-type redundant storage.
  • a RAID 5 system distributes data and parity information across a plurality of the disks 130 a - 130 c . Typically, the disks are divided into equally sized address areas referred to as “blocks”. A set of blocks from each disk that have the same unit address ranges are referred to as “stripes”. In RAID 5, each stripe has N blocks of data and one parity block which contains redundant information for the data in the N blocks.
  • the parity block is cycled across different disks from stripe-to-stripe.
  • the parity block for the first stripe might be on the fifth disk; the parity block for the second stripe might be on the fourth disk; the parity block for the third stripe might be on the third disk; and so on.
  • the parity block for succeeding stripes typically rotates around the disk drives in a helical pattern (although other patterns are possible).
  • RAID 2 through RAID 4 architectures differ from RAID 5 in how they compute and place the parity block on the disks. The particular RAID class implemented is not important.
  • the storage management system 110 optionally may be implemented as a RAID management software module that runs on a processing unit of the data storage device, or on the processor unit of a computer.
  • the disk array controller module 120 coordinates data transfer to and from the multiple storage disks 130 a - 130 f .
  • the disk array module 120 has two identical controllers or controller boards: a first disk array controller 122 a and a second disk array controller 122 b .
  • Parallel controllers enhance reliability by providing continuous backup and redundancy in the event that one controller becomes inoperable.
  • Parallel controllers 122 a and 122 b have respective mirrored memories 124 a and 124 b .
  • the mirrored memories 124 a and 124 b may be implemented as battery-backed, non-volatile RAMs (e.g., NVRAMs). Although only dual controllers 122 a and 122 b are shown and discussed generally herein, aspects of this invention can be extended to other multi-controller configurations where more than two controllers are employed.
  • the mirrored memories 124 a and 124 b store several types of information.
  • the mirrored memories 124 a and 124 b maintain duplicate copies of a coherent memory map of the storage space in multiple storage disks 130 a - 130 f . This memory map tracks where data and redundancy information are stored on the disks, and where available free space is located.
  • the view of the mirrored memories is consistent across the hot-plug interface, appearing the same to external processes seeking to read or write data.
  • the mirrored memories 124 a and 124 b also maintain a read cache that holds data being read from the multiple storage disks 130 a - 130 f . Every read request is shared between the controllers.
  • the mirrored memories 124 a and 124 b further maintain two duplicate copies of a write cache. Each write cache temporarily stores data before it is written out to the multiple storage disks 130 a - 130 f.
  • the controllers' mirrored memories 124 a and 124 b are physically coupled via a hot-plug interface 126 .
  • the controllers 122 a and 122 b monitor data transfers between them to ensure that data is accurately transferred and that transaction ordering is preserved (e.g., read/write ordering).
  • FIG. 2 is a schematic illustration of an exemplary implementation of a dual RAID controller in greater detail.
  • the disk array controller also has two I/O modules 240 a and 240 b , an optional display 244 , and two power supplies 242 a and 242 b .
  • the I/O modules 240 a and 240 b facilitate data transfer between respective controllers 210 a and 210 b and one or more host computers.
  • the I/O modules 240 a and 240 b employ fiber channel technology, although other bus technologies may be used.
  • the power supplies 242 a and 242 b provide power to the other components in the respective disk array controllers 210 a , 210 b , the display 244 , and the I/O modules 240 a , 240 b.
  • Each controller 210 a , 210 b has a converter 230 a , 230 b connected to receive signals from the host via respective I/O modules 240 a , 240 b .
  • Each converter 230 a and 230 b converts the signals from one bus format (e.g., Fibre Channel) to another bus format (e.g., peripheral component interconnect (PCI)).
  • a first PCI bus 228 a , 228 b carries the signals to an array controller memory transaction manager 226 a , 226 b , which handles all mirrored memory transaction traffic to and from the NVRAM 222 a , 222 b in the mirrored controller.
  • the array controller memory transaction manager maintains the memory map, computes parity, and facilitates cross-communication with the other controller.
  • the array controller memory transaction manager 226 a , 226 b is preferably implemented as an integrated circuit (IC), such as an application-specific integrated circuit (ASIC).
  • the array controller memory transaction manager 226 a , 226 b is coupled to the NVRAM 222 a , 222 b via a high-speed bus 224 a , 224 b and to other processing and memory components via a second PCI bus 220 a , 220 b .
  • Controllers 210 a , 210 b may include several types of memory connected to the PCI bus 220 a and 220 b .
  • the memory includes a dynamic RAM (DRAM) 214 a , 214 b , flash memory 218 a , 218 b , and cache 216 a , 216 b.
  • the array controller memory transaction managers 226 a and 226 b are coupled to one another via a communication interface 250 .
  • the communication interface 250 supports bi-directional parallel communication between the two array controller memory transaction managers 226 a and 226 b at a data transfer rate commensurate with the NVRAM buses 224 a and 224 b.
  • the array controller memory transaction managers 226 a and 226 b employ a high-level packet protocol to exchange transactions in packets over hot-plug interface 250 .
  • the array controller memory transaction managers 226 a and 226 b perform error correction on the packets to ensure that the data is correctly transferred between the controllers.
  • the array controller memory transaction managers 226 a and 226 b provide a memory image that is coherent across the hot plug interface 250 .
  • the managers 226 a and 226 b also provide an ordering mechanism to support an ordered interface that ensures proper sequencing of memory transactions.
  • each controller 210 a , 210 b includes multiple central processing units (CPUs) 212 a , 213 a , 212 b , 213 b , also referred to as processors.
  • the processors on each controller may be assigned specific functionality to manage. For example, a first set of processing units 212 a , 212 b may manage storage operations for the plurality of disks 130 a - 130 f , while a second set of processing units 213 a , 213 b may manage networking operations with host computers or other devices that request storage services from data storage system 100 .
  • Separating networking operations from storage operations and assigning the networking operations to a separate processor can improve the performance of a storage controller.
  • Computationally-expensive networking operations can be off-loaded to a co-processor, thereby permitting the main processor to dedicate its processor cycles to storage operations.
  • FIG. 3 is a schematic illustration of a multiprocessor communication configuration.
  • the multiprocessor configuration depicted in FIG. 3 may be implemented in a storage controller such as, e.g., the storage controller depicted in FIG. 2 .
  • the multiprocessor configuration comprises a first processor, i.e., the main processor 310 and a second processor, i.e., the co-processor 350 .
  • the main processor 310 manages data I/O operations to and from the plurality of disks 130 a - 130 f , while the co-processor 350 manages network operations.
  • Main processor 310 comprises a services module 312 that provides one or more services to a management network by invoking one or more applications in application module 314 .
  • Application module 314 invokes a communication module 316 to pass service protocol data units (PDUs) down the protocol stack.
  • Communication module 316 includes a sockets I/O demultiplexer module 318 , a socket I/O API module 320 , a local communication protocol stack 322 , a proxy/socket mirror module 324 , and a shared memory driver module 326 .
  • the various modules in communication module 316 may be implemented as software objects that facilitate transmitting and receiving service PDUs with other objects in an attached management network. Operation of these modules will be explained in greater detail below.
  • Co-processor 350 also comprises a services module 352 that provides one or more services via a storage attached network by invoking one or more applications in application module 354 .
  • Application module 354 invokes a communication module 356 to pass service PDUs down the protocol stack.
  • Communication module 356 includes a sockets I/O API 358 , a primary/socket master module 360 , a shared memory driver module 362 , and a local network protocol module 364 .
  • the various modules in communication module 356 may be implemented as software objects that facilitate transmitting and receiving service PDUs with other objects in an attached network. Operation of these modules will be explained in greater detail below.
  • main processor 310 communicates with co-processor 350 logically at the sockets I/O demultiplexer 318 and the sockets I/O API 358 level using conventional sockets-based communications techniques that rely on underlying remote procedure call (RPC) between proxy/socket mirror 324 and primary/socket master 360 .
  • This communication depends on the lower-level communication layer between shared memory drivers 326 , 362 , which is implemented by shared memory function calls (SMFC).
  • Sockets I/O demultiplexer 318 implements operations that permit the main processor to send and receive communication with external devices using a proxy socket it manages (the co-processor plays a part in this managed or proxy socket by offering server-side RPCs for the socket APIs called by the main processor).
  • Connection 380 illustrates a logical link between the sockets I/O demultiplexer 318 and the sockets I/O API 358 on the co-processor, where main processor 310 connects to 358 via the proxy module.
  • Connection 382 illustrates a logical link between the proxy/socket mirror module 324 and the primary/socket master module 360 , where 324 connects to this link as the client-side RPC and 360 connects as the server-side RPC.
  • Connection 384 illustrates a logical link between shared memory drivers 326 and 362 , where this link implements IPC (Inter-Processor Communication) using SMFCs through shared memory module 370 .
  • the data path from 318 through 324 and 326 , which is the proxy/socket mirror path, leads to managed sockets through the use of the proxy module in 310 .
  • the data path from 318 through 320 and 322 , which is the conventional socket I/O API path, leads to unmanaged sockets through the use of the master module in 310 .
  • FIGS. 4A-4B illustrate an exemplary process implemented by the sockets I/O demultiplexer 318 to create a socket.
  • the process illustrated in FIG. 4A is concerned with selecting either a master module or a proxy module for communication by the main processor 310 .
  • FIG. 4A assumes that an application in application module 314 has been invoked and needs to create an endpoint for communication with one or more external devices.
  • the sockets I/O demultiplexer 318 initializes a data table referred to herein as the SdOSockets table 610 , which is depicted in FIG. 6 .
  • the application module 314 passes the conventional socket API based communication request to the socket I/O demultiplexer 318 .
  • the SdOSockets table 610 is a data structure that can include as many entries as are required by applications. Each entry in the SdOSockets table 610 includes a pointer to a separate instance of a data structure referred to herein as a SockFdEx data structure 615 a , 615 b .
  • each SockFdEx data structure 615 a , 615 b includes a SOps pointer 620 a , 620 b , a proxy flag data field 622 a , 622 b and a SockFd field 624 a , 624 b .
  • the SOps pointer points to one of two data structures: the ProxySocketOps data structure 640 or the PriSocketOps data structure 660 .
  • the proxy flag data field holds a data flag that indicates whether the socket represented by the particular instance of the SockFdEx data structure is a proxy socket.
  • the SockFd field holds the socket handle as returned by the underlying Socket I/O API 320 , 358 .
  • the ProxySocketOps data structure 640 includes entry points into application programming interfaces (APIs) for performing various socket operations.
  • the ecs_proxy_socket entry 642 provides an entry into an API that creates a proxy socket
  • the ecs_proxy_bind entry 644 provides an entry into an API that binds a proxy socket
  • the ecs_proxy_listen entry 646 provides an entry into an API that transforms a proxy socket into a listening socket.
  • the PriSocketOps table data structure 660 includes entry points into application programming interfaces (APIs) for performing various socket operations.
  • the ecs_socket entry 662 provides an entry into an API that creates a primary socket
  • the ecs_bind entry 664 provides an entry into an API that binds a primary socket
  • the ecs_listen entry 666 provides an entry into an API that transforms a primary socket into a listening socket.
  • the sockets I/O demultiplexer 318 initializes and activates the master module and the proxy module.
  • master module refers to the library of functions that enable communication using the communication stack on the main processor 310 .
  • proxy module refers to the library of services that permit communication with applications on the main processor 310 via the communication stack on the co-processor 350 .
  • the proxy module is implemented as client-server software (or firmware) that communicates via shared memory inter-process communication (IPC) calls.
  • the client-side software resides in the main processor 310 and the server-side software resides in the co-processor 350 .
  • the sockets I/O demultiplexer 318 determines whether the request includes a proxy flag that is set.
  • the proxy flag is implemented in a protocol parameter, specifically, a shared memory function call (SMFC) flag is passed to the demultiplexer with the socket request.
  • when the master module is selected, this is illustrated by the SockFdEx structure 615 b , in which the pointer SOps 2 is set to point to the PriSocketOps table 660 .
  • the proxy field in the SockFdEx table 615 b is set to a value that indicates “false” (operation 466 ).
  • when the proxy module is selected, the socket is marked as a proxy socket, e.g., by setting a pointer SOps in the SockFdEx structure to point to the ProxySocketOps table.
  • the proxy field in the SockFdEx table 615 a is set to a value that indicates “true” (operation 470 ).
  • Control passes to optional operation 472 , in which the SMFC flag is cleared from the "protocol" actual argument before that argument is reused in the reissued call to the socket( ) API.
  • the sockets I/O demultiplexer 318 sends a socket create request to the selected communication module, i.e., either the proxy module or the master module. If an error is returned (operation 476 ), then at operation 480 an error code is returned to the calling module. By contrast, if no error is returned at operation 476 , then a socket descriptor is returned at operation 478 .
  • FIG. 5 is a flowchart illustrating additional exemplary socket I/O operations that may be implemented by the sockets I/O demultiplexer 318 .
  • the operations of FIG. 5 are exemplary for socket I/O requests other than requests to create a socket.
  • the operations illustrated in FIG. 5 assume that the sockets I/O demultiplexer 318 has received a network I/O request addressed to a socket under its management.
  • the sockets I/O demultiplexer 318 sends a socket I/O request to a selected module, i.e., the proxy module or the master module.
  • operation 510 may be performed by selecting a socket operation that corresponds to the network I/O request in the SocketOps structure that corresponds to the selected module.
  • for example, for a bind request addressed to a primary socket, the sockets I/O demultiplexer 318 would select the ecs_bind operation 664 in the PriSocketOps table 660 .
  • for a bind request addressed to a proxy socket, the sockets I/O demultiplexer 318 would select the ecs_proxy_bind operation 644 in the ProxySocketOps table 640 .
  • the data tables of FIG. 6 provide a mechanism for the sockets I/O demultiplexer 318 to map network I/O requests to socket operations for either the master module or the proxy module.
  • the sockets I/O demultiplexer 318 can match the network I/O request to a corresponding socket operation in either the ProxySocketOps table 640 or the PriSocketOps table 660 .
  • the sockets I/O demultiplexer 318 determines whether the call is still pending. If the call is still pending, then the sockets I/O demultiplexer 318 implements a loop that waits for the call to return. Once the call returns, control may pass to operation 514 , where the sockets I/O demultiplexer 318 sets a return variable to the value returned by the call to the selected module, i.e., either the proxy module or the master module.
  • Control then passes to operation 522 , and the sockets I/O demultiplexer 318 passes the return value to the module that invoked the socket I/O API function call.
  • FIGS. 7-10 are flowcharts that illustrate the use of proxy sockets in multiprocessor communication.
  • FIG. 7 is a flowchart that illustrates a socket I/O sequence for a circumstance in which the main processor 310 acts as a communication server for an external application. In this circumstance the main processor uses its own communication resources (i.e., socket I/O API 320 and stack 322 ) to communicate with external objects.
  • the operations of FIG. 7 will be described with reference to the multiprocessor communication architecture described in FIG. 3 , but the particular configuration is not critical.
  • the application layer 314 generates a socket( ) call and passes the socket( ) call to the sockets I/O demultiplexer 318 .
  • the sockets I/O demultiplexer 318 receives the socket( ) call and passes the call to the selected communication module.
  • the master module is the selected communication resource, so at operation 712 the sockets I/O demultiplexer 318 passes the socket( ) request to the socket I/O layer API 320 , which returns a handle to the newly-instanced primary socket object (operation 714 ).
  • the application layer 314 generates a bind( ) request and passes the bind( ) request to the sockets I/O demultiplexer 318 , which in turn passes the bind( ) request to the socket I/O layer API 320 (operation 718 ).
  • the socket I/O layer API 320 binds the newly-instanced socket handle to a communication resource.
  • the application layer 314 generates a listen( ) call to convert the newly-instanced socket handle into a listening socket, and passes the listen( ) call to the sockets I/O demultiplexer 318 , which in turn passes the listen( ) call to the socket I/O layer API 320 .
  • the socket I/O layer API 320 converts the newly-instanced socket handle into a listening socket and sends a reply to the application layer 314 (operation 726 ).
  • the application layer 314 generates an accept( ) call and passes the accept( ) call to the sockets I/O demultiplexer 318 , which passes the accept( ) call to the socket I/O layer API 320 .
  • the socket I/O layer API 320 configures the newly-instanced socket handle to accept an incoming connection request from a remote application.
  • the accept( ) call may be implemented as a blocking call, such that the process or thread waits for an incoming (or inbound) connection request from a remote application.
  • the socket I/O layer API 320 receives a connect request from a remote application.
  • the socket I/O layer API 320 unblocks the accept wait, and at operation 736 the socket I/O layer API 320 forwards a newly-instanced socket handle to the socket I/O demultiplexer 318 , which in turn forwards the newly-instanced socket handle to the application layer 314 (operation 738 ).
  • the remote application communicates with the application layer 314 using the newly-instanced socket handle.
  • the application layer 314 generates a close( ) call (operation 742 ) to close the socket created in step 736 and passes the close( ) call to the sockets I/O demultiplexer 318 (operation 744 ), which in turn passes the close( ) call to the socket I/O API 320 .
  • the socket I/O API 320 closes the socket instanced in step 736 for send and receive operations and returns a response to the application layer 314 .
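The FIG. 7 sequence above follows the conventional server-side socket pattern. The fragment below is a minimal sketch of that sequence in C, assuming an ordinary BSD-style sockets API stands in for the sockets I/O demultiplexer 318 and socket I/O API 320; the port number, buffer size, and error handling are illustrative only and are not taken from the patent.

```c
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Minimal server-side sequence corresponding to FIG. 7 (master-module path). */
int run_local_server(void)
{
    struct sockaddr_in addr;
    char buf[256];
    int listen_fd, conn_fd;
    ssize_t n;

    listen_fd = socket(AF_INET, SOCK_STREAM, 0);        /* operations 712-714 */
    if (listen_fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5050);                 /* illustrative port  */
    if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||  /* 718 */
        listen(listen_fd, 4) < 0) {                     /* through operation 726 */
        close(listen_fd);
        return -1;
    }

    /* accept( ) blocks until a remote application issues a connect request,
     * then yields a newly-instanced socket handle (operation 736). */
    conn_fd = accept(listen_fd, NULL, NULL);
    if (conn_fd < 0) {
        close(listen_fd);
        return -1;
    }

    n = recv(conn_fd, buf, sizeof(buf), 0);             /* data exchange      */
    if (n > 0)
        send(conn_fd, buf, (size_t)n, 0);

    close(conn_fd);                                     /* operations 742-744 */
    close(listen_fd);
    return 0;
}
```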
  • FIGS. 8A-8B are a flowchart illustrating an exemplary socket I/O sequence for multiprocessor communication in which the main processor 310 acts as a communication server for a remote application, but communicates using a proxy socket managed by the main processor 310 which has a real socket equivalent managed by the co-processor 350 .
  • the operations of FIG. 8 will be described with reference to the multiprocessor communication architecture described in FIG. 3 , but the particular configuration is not critical.
  • an application 314 generates a socket( ) call and passes the socket( ) call to the sockets I/O demultiplexer 318 .
  • the socket( ) call specifies that the socket call is to a proxy socket, e.g., by including an SMFC flag in a "protocol" formal parameter passed with the socket( ) call as described above.
  • the sockets I/O demultiplexer 318 receives the socket( ) call and, at operation 812 , passes the socket( ) request to the co-processor 350 .
  • the sockets I/O demultiplexer 318 passes various socket calls to the co-processor 350 via a shared memory function call (SMFC). Accordingly, the socket( ) call is passed to the proxy/socket mirror module 324 , which passes the socket( ) call to a shared memory driver 326 .
  • the shared memory driver 326 maintains a communication path to shared memory driver 362 in the co-processor via shared memory module 370 .
  • the shared memory driver 362 retrieves the call from the shared memory module 370 and passes it up its communication stack to the primary/socket master module 360 and to the socket I/O layer 358 for processing. Communication from the co-processor to the main processor follows the reverse path.
  • the socket I/O layer API 358 generates a newly instanced socket and at operation 818 the socket I/O layer API 358 returns a handle to the newly-instanced socket for send and receive operations to the socket I/O demultiplexer 318 in the main processor.
  • the socket I/O demultiplexer 318 passes the handle to the application 314 .
  • the application 314 generates a bind( ) call and passes the bind( ) call to the socket I/O demultiplexer 318 .
  • the socket I/O demultiplexer 318 passes the bind( ) call to the co-processor 350 .
  • the socket I/O layer API 358 in co-processor 350 binds the newly-instanced socket handle to a communication resource in the co-processor and returns an acknowledgment to the socket I/O demultiplexer 318 , which passes the acknowledgment back to the application 314 .
  • the application 314 generates a listen( ) call, which is passed to the socket I/O demultiplexer 318 .
  • the socket I/O demultiplexer 318 passes the listen( ) call to the co-processor 350 .
  • the socket I/O layer API 358 executes the listen( ) call and returns an acknowledgment to the socket I/O demultiplexer 318 , which passes the acknowledgment back to the application 314 .
  • the application generates an accept( ) call, which is passed to the socket I/O demultiplexer 318 .
  • the accept( ) call is passed to the co-processor 350 .
  • the socket I/O API 358 executes the accept( ) call, which places the proxy socket implemented at the co-processor into a state that can accept a connect-request from a remote application.
  • an accept( ) call may be implemented as a blocking call.
  • the newly-instanced socket receives a connect request from a remote application.
  • the socket I/O API 358 returns the newly instanced socket handle to the application 314 in main processor 310 .
  • the application 314 in the main processor communicates with the remote application that invoked the connect( ) call over the communication path via the proxy (or managed) socket, which has an equivalent real socket instantiated in co-processor 350 .
  • the application 314 passes a close( ) call to the socket I/O demultiplexer 318 at operation 846 .
  • the socket I/O demultiplexer passes the close( ) call to the co-processor 350 .
  • the socket I/O API 358 executes the socket close call to close the real socket instance implemented in co-processor 350 at operation 840 , and passes an acknowledgment back to the application 314 , which terminates socket operations over the proxy socket at operation 852 .
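Each of the calls above crosses from the main processor to the co-processor as a remote procedure call carried by the shared memory drivers. The sketch below shows, under stated assumptions, what the client-side stub in the proxy/socket mirror module 324 might look like for the bind( ) leg (operation 820): the ProxySocketMsg layout, the ECS_PROXY_BIND opcode value, the smfc_call( ) transport primitive, and the function signature are illustrative inventions, not definitions from the patent.

```c
#include <stdint.h>
#include <string.h>

/* Assumed message layout for one remote socket call (illustrative only). */
typedef struct ProxySocketMsg {
    uint32_t opcode;     /* which socket operation the co-processor should run */
    int32_t  sockfd;     /* handle of the real socket held by the co-processor */
    uint32_t addrlen;    /* length of the marshalled address, if any           */
    uint8_t  addr[32];   /* marshalled sockaddr for bind( )/connect( )         */
    int32_t  result;     /* filled in by the primary/socket master module 360  */
} ProxySocketMsg;

enum { ECS_PROXY_BIND = 2 };   /* illustrative opcode value */

/* Assumed transport primitive supplied by shared memory driver 326: copies the
 * request into shared memory module 370, signals the co-processor, and blocks
 * until the primary/socket master module 360 has written the reply back. */
int smfc_call(ProxySocketMsg *msg);

/* Client-side RPC stub for the proxy bind operation (entry 644 in FIG. 6). */
int ecs_proxy_bind(int proxy_sockfd, const void *addr, size_t addrlen)
{
    ProxySocketMsg msg;

    if (addrlen > sizeof(msg.addr))
        return -1;

    memset(&msg, 0, sizeof(msg));
    msg.opcode  = ECS_PROXY_BIND;
    msg.sockfd  = proxy_sockfd;
    msg.addrlen = (uint32_t)addrlen;
    memcpy(msg.addr, addr, addrlen);

    if (smfc_call(&msg) != 0)       /* RPC over connection 382, carried by 384 */
        return -1;

    return msg.result;              /* acknowledgment handed back toward 318   */
}
```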
  • FIGS. 9A-9B are a flowchart illustrating an exemplary socket I/O sequence for multiprocessor communication in which the main processor 310 acts as a communication client for a remote application, and communicates using a proxy socket managed by the main processor 310 which has an equivalent real socket managed by the co-processor 350 .
  • the operations of FIG. 9 will be described with reference to the multiprocessor communication architecture described in FIG. 3 , but the particular configuration is not critical.
  • Operations 910 - 924 involve opening and binding a proxy socket, and may be implemented as described in operations 810 - 824 . For brevity and clarity, these operations will not be repeated in detail.
  • the application 314 generates a connect( ) call and passes the connect( ) call to the socket I/O demultiplexer 318 .
  • the socket I/O demultiplexer 318 passes the connect request to the co-processor 350 .
  • the socket I/O API 358 executes the connect( ) call and sends an acknowledgment back to the socket I/O demultiplexer 318 .
  • the applications communicate using the newly instanced socket.
  • When the communication session is finished, the application 314 generates a close( ) call and passes the close( ) call to the socket I/O demultiplexer 318 .
  • the socket I/O demultiplexer 318 passes the close( ) call to the co-processor.
  • the socket I/O API 358 executes the close( ) call to close the real socket instanced in operation 914 in the co-processor 350 and returns an acknowledgment back to the application 314 , which terminates socket operations over the proxy socket at operation 940 .
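From the application's point of view, the client-side proxy sequence looks like an ordinary socket session; only the SMFC flag passed in the protocol argument marks the socket as a proxy socket. The sketch below assumes the sockets I/O demultiplexer 318 sits behind a POSIX-like API surface; the ECS_SMFC_FLAG value, the port handling, and the call signatures are illustrative assumptions rather than definitions from the patent.

```c
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define ECS_SMFC_FLAG 0x8000  /* illustrative flag OR'ed into the protocol argument */

/* Client-side proxy sequence corresponding to FIGS. 9A-9B. */
int run_proxy_client(const char *server_ip, unsigned short port)
{
    struct sockaddr_in peer;
    char buf[128] = "hello";
    int fd;

    /* Operations 910-924: the SMFC flag routes the request to the proxy module,
     * so the real socket is instanced on the co-processor 350. */
    fd = socket(AF_INET, SOCK_STREAM, ECS_SMFC_FLAG);
    if (fd < 0)
        return -1;

    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(port);
    inet_pton(AF_INET, server_ip, &peer.sin_addr);

    /* connect( ) is forwarded to the co-processor and acknowledged back. */
    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        close(fd);
        return -1;
    }

    send(fd, buf, strlen(buf), 0);     /* the applications exchange data        */
    recv(fd, buf, sizeof(buf), 0);

    close(fd);                         /* tears down the real socket in the
                                        * co-processor (operation 940)          */
    return 0;
}
```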
  • an application 314 on main processor 310 may communicate with an application 354 on co-processor 350 . This may be implemented using the operations illustrated in FIGS. 8A-8B with minor changes.
  • the application sends a bind( ) request to bind the communication resource to a local address on the co-processor (i.e., a loopback address).
  • when the bind( ) call in operation 820 is executed, it enables a communication path between the application 314 on the main processor and an application 354 on the co-processor.
  • the application 354 on the co-processor 350 creates and binds a socket to a loopback address to enable the communications path.
  • the loopback network interface is assigned the IP address 127.0.0.1 in the network layer of the TCP/IP stack. Remaining communication operations can be implemented as described in FIGS. 8A-8B .
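A minimal sketch of the co-processor-side endpoint is shown below, assuming a BSD-style sockets API and an illustrative port number; it simply binds a listening socket to the loopback address so the endpoint is reachable only from within the co-processor's own stack, which is where the proxy socket used by the main-processor application is instanced.

```c
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Application 354 on the co-processor binds to 127.0.0.1 (INADDR_LOOPBACK)
 * so that only locally instanced sockets, including the real socket backing
 * the main processor's proxy socket, can reach it. */
int open_loopback_listener(unsigned short port)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   /* 127.0.0.1 */
    addr.sin_port        = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 1) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```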
  • the system architecture and techniques described herein enable multiprocessor communication using proxy sockets.
  • Applications on a first processor can communicate with remote applications using communication resources from a second processor.
  • the remote application can be external to both processors, or can execute on the second processor, in which case communication takes place over a private loopback address for extra security instead of over a real socket instanced in the second processor with an existing public address.

Abstract

Systems and methods for implementing communication in a multiprocessor are disclosed. In one exemplary implementation a first processor receives a request to provide a communication resource for an application executing on the first processor to communicate with a remote application. In response to the communication request, the first processor opens a communication resource on a second processor, and implements communication operations between the application executing on the first processor and the remote application using the communication resource on the second processor.

Description

CROSS REFERENCE TO RELATED APPLICATION
This application is a continuation of U.S. application Ser. No. 10/856,263 titled “Communication in Multiprocessor Using Proxy Sockets,” filed May 27, 2004, which is incorporated by reference herein for all purposes.
TECHNICAL FIELD
The described subject matter relates to electronic computing, and more particularly to systems and methods for implementing proxy sockets to facilitate communication in a multiprocessor computing system.
BACKGROUND
In multiprocessor computing systems, each processor is capable of managing its own network connectivity operations. In such a configuration, each physical network interface is bound only to the local communication stack in the multi-processor system. A communication stack that is not bound to a particular network interface does not have direct access to the data link layer supplied by that interface.
Thus, an application-level service may find that not all network interfaces in a multiprocessor environment are available for establishing a communication endpoint because the local network stack cannot bind to all of the interfaces in the multi-processor system. In other words, not all of the network interfaces are available to all application layer services present in a multi-processor system.
One way to address this issue is to pair application-level services with specific local network interfaces, i.e., to dedicate local network interfaces to specific applications or tasks. However, dedicating services only to specific local network interfaces impairs the scalability of the system. Accordingly, additional solutions are desirable.
SUMMARY
Systems and methods described herein address these issues by enabling multiprocessor communication using proxy sockets. In one exemplary implementation a first processor receives a request to provide a communication resource for an application executing on the first processor to communicate with a remote application. In response to the communication request, the first processor opens a communication resource on a second processor, and implements communication operations between the application executing on the first processor and the remote application using the communication resource on the second processor.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic illustration of an exemplary implementation of a data storage system that implements RAID storage;
FIG. 2 is a schematic illustration of an exemplary implementation of a RAID controller in more detail;
FIG. 3 is a schematic illustration of an exemplary multi-processor communication architecture;
FIGS. 4A-4B are flowcharts illustrating operations in an exemplary method for creating a socket for communication in a multiprocessor system.
FIG. 5 is a flowchart illustrating additional exemplary socket operations;
FIG. 6 is a schematic illustration of exemplary data structures for managing proxy socket based multi-processor communication;
FIG. 7 is a flowchart that illustrates a socket I/O sequence for a circumstance in which a main processor acts as a communication server for an external application;
FIGS. 8A-8B are a flowchart illustrating an exemplary socket I/O sequence for multiprocessor communication in which the main processor acts as a communication server for a remote application, but communicates using a proxy socket managed by the main processor that has a real socket equivalent managed by the co-processor; and
FIGS. 9A-9B are a flowchart illustrating an exemplary socket I/O sequence for multiprocessor communication in which the main processor acts as a communication client for a remote application, and communicates using a proxy socket managed by the co-processor.
DETAILED DESCRIPTION
Described herein are exemplary architectures and techniques for implementing proxy sockets in a multi-processor computing system. The methods described herein may be embodied as logic instructions on a computer-readable medium, firmware, or as dedicated circuitry. When executed on a processor, the logic instructions (or firmware) cause a processor to be programmed as a special-purpose machine that implements the described methods. The processor, when configured by the logic instructions (or firmware) to execute the methods recited herein, constitutes structure for performing the described methods.
Exemplary Storage Architecture
FIG. 1 is a schematic illustration of an exemplary implementation of a data storage system 100. The data storage system 100 has a disk array with multiple storage disks 130 a-130 f, a disk array controller module 120, and a storage management system 110. The disk array controller module 120 is coupled to multiple storage disks 130 a-130 f via one or more interface buses, such as a small computer system interface (SCSI) bus. The storage management system 110 is coupled to the disk array controller module 120 via one or more interface buses. It is noted that the storage management system 110 can be embodied as a separate component (as shown), or within the disk array controller module 120, or within a host computer.
In an exemplary implementation, data storage system 100 may implement RAID (Redundant Array of Independent Disks) data storage techniques. RAID storage systems are disk array systems in which part of the physical storage capacity is used to store redundant data. RAID systems are typically characterized as one of six architectures, enumerated as RAID 0 through RAID 5. A RAID 0 architecture is a disk array system that is configured without any redundancy. Since this architecture is really not a redundant architecture, RAID 0 is often omitted from a discussion of RAID systems.
A RAID 1 architecture involves storage disks configured according to mirror redundancy. Original data is stored on one set of disks and a duplicate copy of the data is kept on separate disks. The RAID 2 through RAID 5 architectures involve parity-type redundant storage. Of particular interest, a RAID 5 system distributes data and parity information across a plurality of the disks 130 a-130 c. Typically, the disks are divided into equally sized address areas referred to as “blocks”. A set of blocks from each disk that have the same unit address ranges are referred to as “stripes”. In RAID 5, each stripe has N blocks of data and one parity block which contains redundant information for the data in the N blocks.
In RAID 5, the parity block is cycled across different disks from stripe-to-stripe. For example, in a RAID 5 system having five disks, the parity block for the first stripe might be on the fifth disk; the parity block for the second stripe might be on the fourth disk; the parity block for the third stripe might be on the third disk; and so on. The parity block for succeeding stripes typically rotates around the disk drives in a helical pattern (although other patterns are possible). RAID 2 through RAID 4 architectures differ from RAID 5 in how they compute and place the parity block on the disks. The particular RAID class implemented is not important.
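The helical rotation can be expressed as simple modular arithmetic. The helper below is a hedged illustration of one such placement (parity stepping backwards from the last disk, matching the five-disk example above); as noted, other rotation patterns are equally valid.

```c
/* Illustrative RAID 5 parity placement: with five disks, stripe 0 places its
 * parity block on disk 4 (the fifth disk), stripe 1 on disk 3, stripe 2 on
 * disk 2, and so on, wrapping around in a helical pattern. */
static int parity_disk_for_stripe(int stripe, int num_disks)
{
    return (num_disks - 1) - (stripe % num_disks);
}
/* Example: parity_disk_for_stripe(0, 5) == 4, (1, 5) == 3, (2, 5) == 2. */
```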
In a RAID implementation, the storage management system 110 optionally may be implemented as a RAID management software module that runs on a processing unit of the data storage device, or on the processor unit of a computer. The disk array controller module 120 coordinates data transfer to and from the multiple storage disks 130 a-130 f. In an exemplary implementation, the disk array module 120 has two identical controllers or controller boards: a first disk array controller 122 a and a second disk array controller 122 b. Parallel controllers enhance reliability by providing continuous backup and redundancy in the event that one controller becomes inoperable. Parallel controllers 122 a and 122 b have respective mirrored memories 124 a and 124 b. The mirrored memories 124 a and 124 b may be implemented as battery-backed, non-volatile RAMs (e.g., NVRAMs). Although only dual controllers 122 a and 122 b are shown and discussed generally herein, aspects of this invention can be extended to other multi-controller configurations where more than two controllers are employed.
The mirrored memories 124 a and 124 b store several types of information. The mirrored memories 124 a and 124 b maintain duplicate copies of a coherent memory map of the storage space in multiple storage disks 130 a-130 f. This memory map tracks where data and redundancy information are stored on the disks, and where available free space is located. The view of the mirrored memories is consistent across the hot-plug interface, appearing the same to external processes seeking to read or write data.
The mirrored memories 124 a and 124 b also maintain a read cache that holds data being read from the multiple storage disks 130 a-130 f. Every read request is shared between the controllers. The mirrored memories 124 a and 124 b further maintain two duplicate copies of a write cache. Each write cache temporarily stores data before it is written out to the multiple storage disks 130 a-130 f.
The controllers' mirrored memories 124 a and 124 b are physically coupled via a hot-plug interface 126. Generally, the controllers 122 a and 122 b monitor data transfers between them to ensure that data is accurately transferred and that transaction ordering is preserved (e.g., read/write ordering).
FIG. 2 is a schematic illustration of an exemplary implementation of a dual RAID controller in greater detail. In addition to controller boards 210 a and 210 b, the disk array controller also has two I/O modules 240 a and 240 b, an optional display 244, and two power supplies 242 a and 242 b. The I/O modules 240 a and 240 b facilitate data transfer between respective controllers 210 a and 210 b and one or more host computers. In one implementation, the I/O modules 240 a and 240 b employ fiber channel technology, although other bus technologies may be used. The power supplies 242 a and 242 b provide power to the other components in the respective disk array controllers 210 a, 210 b, the display 244, and the I/O modules 240 a, 240 b.
Each controller 210 a, 210 b has a converter 230 a, 230 b connected to receive signals from the host via respective I/ O modules 240 a, 240 b. Each converter 230 a and 230 b converts the signals from one bus format (e.g., Fibre Channel) to another bus format (e.g., peripheral component interconnect (PCI)). A first PCI bus 228 a, 228 b carries the signals to an array controller memory transaction manager 226 a, 226 b, which handles all mirrored memory transaction traffic to and from the NVRAM 222 a, 222 b in the mirrored controller. The array controller memory transaction manager maintains the memory map, computes parity, and facilitates cross-communication with the other controller. The array controller memory transaction manager 226 a, 226 b is preferably implemented as an integrated circuit (IC), such as an application-specific integrated circuit (ASIC).
The array controller memory transaction manager 226 a, 226 b is coupled to the NVRAM 222 a, 222 b via a high-speed bus 224 a, 224 b and to other processing and memory components via a second PCI bus 220 a, 220 b. Controllers 210 a, 210 b may include several types of memory connected to the PCI bus 220 a and 220 b. The memory includes a dynamic RAM (DRAM) 214 a, 214 b, flash memory 218 a, 218 b, and cache 216 a, 216 b.
The array controller memory transaction managers 226 a and 226 b are coupled to one another via a communication interface 250. The communication interface 250 supports bi-directional parallel communication between the two array controller memory transaction managers 226 a and 226 b at a data transfer rate commensurate with the NVRAM buses 224 a and 224 b.
The array controller memory transaction managers 226 a and 226 b employ a high-level packet protocol to exchange transactions in packets over hot-plug interface 250. The array controller memory transaction managers 226 a and 226 b perform error correction on the packets to ensure that the data is correctly transferred between the controllers.
The array controller memory transaction managers 226 a and 226 b provide a memory image that is coherent across the hot plug interface 250. The managers 226 a and 226 b also provide an ordering mechanism to support an ordered interface that ensures proper sequencing of memory transactions.
In an exemplary implementation each controller 210 a, 210 b includes multiple central processing units (CPUs) 212 a, 213 a, 212 b, 213 b, also referred to as processors. The processors on each controller may be assigned specific functionality to manage. For example, a first set of processing units 212 a, 212 b may manage storage operations for the plurality of disks 130 a-130 f, while a second set of processing units 213 a, 213 b may manage networking operations with host computers or other devices that request storage services from data storage system 100.
Separating networking operations from storage operations and assigning the networking operations to a separate processor can improve the performance of a storage controller. Computationally-expensive networking operations can be off-loaded to a co-processor, thereby permitting the main processor to dedicate its processor cycles to storage operations.
In such a multi-processor architecture, each processor side may implement its own software protocol stack for communication purposes. FIG. 3 is a schematic illustration of a multiprocessor communication configuration. In one exemplary implementation, the multiprocessor configuration depicted in FIG. 3 may be implemented in a storage controller such as, e.g., the storage controller depicted in FIG. 2. The multiprocessor configuration comprises a first processor, i.e., the main processor 310, and a second processor, i.e., the co-processor 350. The main processor 310 manages data I/O operations to and from the plurality of disks 130 a-130 f, while the co-processor 350 manages network operations.
Main processor 310 comprises a services module 312 that provides one or more services to a management network by invoking one or more applications in application module 314. Application module 314 invokes a communication module 316 to pass service protocol data units (PDUs) down the protocol stack. Communication module 316 includes a sockets I/O demultiplexer module 318, a socket I/O API module 320, a local communication protocol stack 322, a proxy/socket mirror module 324, and a shared memory driver module 326. The various modules in communication module 316 may be implemented as software objects that facilitate transmitting and receiving service PDUs with other objects in an attached management network. Operation of these modules will be explained in greater detail below.
Co-processor 350 also comprises a services module 352 that provides one or more services via a storage attached network by invoking one or more applications in application module 354. Application module 354 invokes a communication module 356 to pass service PDUs down the protocol stack. Communication module 356 includes a sockets I/O API 358, a primary/socket master module 360, a shared memory driver module 362, and a local network protocol module 364. The various modules in communication module 356 may be implemented as software objects that facilitate transmitting and receiving service PDUs with other objects in an attached network. Operation of these modules will be explained in greater detail below.
In an exemplary implementation, main processor 310 communicates with co-processor 350 logically at the sockets I/O demultiplexer 318 and the sockets I/O API 358 level using conventional sockets-based communications techniques that rely on underlying remote procedure call (RPC) between proxy/socket mirror 324 and primary/socket master 360. This communication depends on the lower-level communication layer between shared memory drivers 326, 362, which is implemented by shared memory function calls (SMFC). Sockets I/O demultiplexer 318 implements operations that permit the main processor to send and receive communication with external devices using a proxy socket it manages (the co-processor plays a part in this managed or proxy socket by offering server-side RPCs for the socket APIs called by the main processor).
Connection 380 illustrates a logical link between the sockets I/O demultiplexer 318 and the sockets I/O API 358 on the co-processor, where main processor 310 connects to 358 via the proxy module. Connection 382 illustrates a logical link between the proxy/socket mirror module 324 and the primary/socket master module 360, where 324 connects to this link as the client-side RPC and 360 connects as the server-side RPC. Connection 384 illustrates a logical link between shared memory drivers 326 and 362, where this link implements IPC (Inter-Processor Communication) using SMFCs through shared memory module 370. The data path from 318 through 324 and 326, which is the proxy/socket mirror path, leads to managed sockets through the use of the proxy module in 310. The data path from 318 through 320 and 322, which is the conventional socket I/O API path, leads to unmanaged sockets through the use of the master module in 310.
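Connection 384 carries each of those remote procedure calls as a shared memory function call. The fragment below sketches one way an SMFC could be framed over shared memory module 370, using a request/reply slot and a doorbell; the slot layout, the raise_doorbell( ) primitive, and the busy-wait are assumptions made for illustration, since the patent does not define the SMFC wire format.

```c
#include <stdint.h>
#include <string.h>

/* Assumed layout of one SMFC slot in shared memory module 370. */
typedef struct SmfcSlot {
    volatile uint32_t request_ready;  /* set by the caller (main processor 310) */
    volatile uint32_t reply_ready;    /* set by the callee (co-processor 350)   */
    uint8_t           payload[256];   /* marshalled socket call and its reply   */
} SmfcSlot;

/* Assumed primitive that interrupts or signals the co-processor. */
void raise_doorbell(void);

/* Issue one shared memory function call and wait for the reply. A production
 * shared memory driver would sleep or poll with a timeout rather than spin. */
int smfc_call_raw(SmfcSlot *slot, const void *req, size_t req_len,
                  void *reply, size_t reply_len)
{
    if (req_len > sizeof(slot->payload) || reply_len > sizeof(slot->payload))
        return -1;

    memcpy(slot->payload, req, req_len);
    slot->reply_ready   = 0;
    slot->request_ready = 1;
    raise_doorbell();                 /* shared memory driver 326 -> driver 362 */

    while (!slot->reply_ready)        /* wait for primary/socket master 360     */
        ;                             /* busy-wait kept only for brevity        */

    memcpy(reply, slot->payload, reply_len);
    slot->request_ready = 0;
    return 0;
}
```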
Operation of an exemplary sockets I/O demultiplexer 318 will be described with reference to the flowcharts of FIGS. 4A-4B, FIG. 5, and the data tables of FIG. 6.
Exemplary Operations
FIGS. 4A-4B illustrate an exemplary process implemented by the sockets I/O demultiplexer 318 to create a socket. The process illustrated in FIG. 4A is concerned with selecting either a master module or a proxy module for communication by the main processor 310. FIG. 4A assumes that an application in application module 314 has been invoked and needs to create an endpoint for communication with one or more external devices. At operation 410 the sockets I/O demultiplexer 318 initializes a data table referred to herein as the SdOSockets table 610, which is depicted in FIG. 6. The application module 314 passes the conventional socket API based communication request to the socket I/O demultiplexer 318.
Referring briefly to FIG. 6, the SdOSockets table 610 is a data structure that can include as many entries as are required by applications. Each entry in the SdOSockets table 610 includes a pointer to a separate instance of a data structure referred to herein as a SockFdEx data structure 615 a, 615 b. Referring to SockFdEx data structures 615 a, 615 b, the SockFdEx data structure includes a SOps pointer 620 a, 620 b, a proxy flag data field 622 a, 622 b and a SockFd field 624 a, 624 b. The SOps pointer points to one of two data structures: the ProxySocketOps data structure 640 or the PriSocketOps data structure 660. The proxy flag data field holds a data flag that indicates whether the socket represented by the particular instance of the SockFdEx data structure is a proxy socket. The SockFd field holds the socket handle as returned by the underlying Socket I/O API 320, 358.
The ProxySocketOps data structure 640 includes entry points into application programming interfaces (APIs) for performing various socket operations. By way of example, the ecs_proxy_socket entry 642 provides an entry into an API that creates a proxy socket, the ecs_proxy_bind entry 644 provides an entry into an API that binds a proxy socket, the ecs_proxy_listen entry 646 provides an entry into an API that transforms a proxy socket into a listening socket. One skilled in the art will recognize the remaining socket operations in the ProxySocketOps table.
Similarly, the PriSocketOps data structure 660 includes entry points into application programming interfaces (APIs) for performing various socket operations. By way of example, the ecs_socket entry 662 provides an entry into an API that creates a primary socket, the ecs_bind entry 664 provides an entry into an API that binds a primary socket, and the ecs_listen entry 666 provides an entry into an API that transforms a primary socket into a listening socket. One skilled in the art will recognize the remaining socket operations in the PriSocketOps table.
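The two operations tables can be pictured as a single function-pointer layout with two constant instances, so the demultiplexer can dispatch through the SOps pointer without testing which module backs the socket. The prototypes below are assumptions; the socket, bind, and listen entry names come from the description, while the close entries merely follow the same naming pattern and are likewise assumed.

#include <sys/socket.h>    /* struct sockaddr, socklen_t */

/* Assumed prototypes for the named entry points. */
int ecs_socket(int domain, int type, int protocol);
int ecs_bind(int fd, const struct sockaddr *addr, socklen_t len);
int ecs_listen(int fd, int backlog);
int ecs_close(int fd);
int ecs_proxy_socket(int domain, int type, int protocol);
int ecs_proxy_bind(int fd, const struct sockaddr *addr, socklen_t len);
int ecs_proxy_listen(int fd, int backlog);
int ecs_proxy_close(int fd);

struct SocketOps {
    int (*sock_socket)(int domain, int type, int protocol);
    int (*sock_bind)(int fd, const struct sockaddr *addr, socklen_t len);
    int (*sock_listen)(int fd, int backlog);
    int (*sock_close)(int fd);
    /* ... accept, connect, send, recv and the remaining operations */
};

/* PriSocketOps 660: entry points into the master module on the main processor. */
static const struct SocketOps PriSocketOps = {
    .sock_socket = ecs_socket,        /* entry 662 */
    .sock_bind   = ecs_bind,          /* entry 664 */
    .sock_listen = ecs_listen,        /* entry 666 */
    .sock_close  = ecs_close,
};

/* ProxySocketOps 640: entry points into the proxy module, which forwards
 * each call to the co-processor over shared memory. */
static const struct SocketOps ProxySocketOps = {
    .sock_socket = ecs_proxy_socket,  /* entry 642 */
    .sock_bind   = ecs_proxy_bind,    /* entry 644 */
    .sock_listen = ecs_proxy_listen,  /* entry 646 */
    .sock_close  = ecs_proxy_close,
};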
Referring again to FIG. 4A, at operation 412 the sockets I/O demultiplexer 318 initializes and activates the master module and the proxy module. The term master module refers to the library of functions that enable communication using the communication stack on the main processor 310. The term proxy module refers to the library of services that permit applications on the main processor 310 to communicate via the communication stack on the co-processor 350. In an exemplary implementation, the proxy module is implemented as client-server software (or firmware) that communicates via shared memory inter-process communication (IPC) calls. The client-side software resides in the main processor 310 and the server-side software resides in the co-processor 350.
At operation 416 the sockets I/O demultiplexer 318 determines whether the request includes a proxy flag that is set. In an exemplary implementation the proxy flag is implemented in a protocol parameter; specifically, a shared memory function call (SMFC) flag is passed to the demultiplexer with the socket request.
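From the application's perspective, a proxy-socket request can therefore look like an ordinary socket( ) call whose protocol argument carries the extra flag; the flag name and value below are purely illustrative assumptions, not values given in the description.

#include <sys/socket.h>
#include <netinet/in.h>

#define SMFC_PROXY_SOCKET 0x40000000   /* assumed flag bit in "protocol" */

int open_proxy_socket(void)
{
    /* The SMFC flag travels in the protocol parameter; the demultiplexer
     * strips it before reissuing the call to the selected module. */
    return socket(AF_INET, SOCK_STREAM, IPPROTO_TCP | SMFC_PROXY_SOCKET);
}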
If the proxy flag is not set, then control passes to operation 418, and the sockets I/O demultiplexer 318 determines whether it can accept socket I/O to the socket I/O API 320 in the main processor 310. If it can accept socket I/O to the socket I/O API 320 then control passes to operation 426, and the sockets I/O demultiplexer 318 determines whether the socket I/O API 320 is active (or initialized properly and ready). If the socket I/O API 320 is active, then control passes to operation 430 and the sockets I/O demultiplexer 318 selects the master module. By contrast, if the socket I/O API 320 is inactive, then control passes to operation 432 and an error code is returned to the calling routine.
Referring back to operation 418, if the sockets I/O demultiplexer 318 cannot accept socket I/O to the socket I/O API 320 in the main processor 310, then control passes to operation 420, and the sockets I/O demultiplexer 318 determines whether it can accept sockets I/O communication to the proxy module. If so, then control passes to operation 422, and if the socket I/O API 358 is active, then the proxy module is selected for communication at operation 424. By contrast, if either of the tests implemented at operations 420 or 422 fail, then control passes to operation 432, and an error code is returned to the calling routine.
Referring back to operation 416, if the SMFC flag is set, then control passes to operation 438 and the sockets I/O demultiplexer 318 determines whether it can accept socket I/O to the socket I/O API 358 in the co-processor 350. If it can accept socket I/O to the socket I/O API 358 then control passes to operation 446, and the sockets I/O demultiplexer 318 determines whether the socket I/O API 358 is active. If the socket I/O API 358 is active, then control passes to operation 450 and the sockets I/O demultiplexer 318 selects the proxy module. By contrast, if the socket I/O API 358 is inactive, then control passes to operation 452 and an error code is returned to the calling routine.
Referring back to operation 438, if the sockets I/O demultiplexer 318 cannot accept socket I/O to the socket I/O API 358 in the co-processor 350, then control passes to operation 440, and the sockets I/O demultiplexer 318 determines whether it can accept sockets I/O communication to the master module. If so, then control passes to operation 442, and if the socket I/O API 320 is active, then the master module is selected for communication at operation 444. By contrast, if either of the tests implemented at operations 440 or 442 fail, then control passes to operation 452, and an error code is returned to the calling routine.
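Operations 416-452 amount to a two-branch selection with a fallback in each branch. A compact sketch, assuming simple predicate helpers for the state the demultiplexer keeps about each module (these helpers are not named in the description), might read:

/* Assumed predicates standing in for the demultiplexer's module state. */
int can_accept_master_io(void);
int master_api_active(void);
int can_accept_proxy_io(void);
int proxy_api_active(void);

enum module { MASTER_MODULE, PROXY_MODULE, SELECT_ERROR };

enum module select_module(int smfc_flag_set)
{
    if (!smfc_flag_set) {                                  /* operation 416 */
        if (can_accept_master_io())                        /* operation 418 */
            return master_api_active() ? MASTER_MODULE     /* 426, 430 */
                                       : SELECT_ERROR;     /* 432      */
        if (can_accept_proxy_io() && proxy_api_active())   /* 420, 422 */
            return PROXY_MODULE;                           /* 424      */
        return SELECT_ERROR;                               /* 432      */
    }
    /* SMFC flag set: prefer the co-processor, fall back to the master. */
    if (can_accept_proxy_io())                             /* operation 438 */
        return proxy_api_active() ? PROXY_MODULE           /* 446, 450 */
                                  : SELECT_ERROR;          /* 452      */
    if (can_accept_master_io() && master_api_active())     /* 440, 442 */
        return MASTER_MODULE;                              /* 444      */
    return SELECT_ERROR;                                   /* 452      */
}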
After the sockets I/O demultiplexer 318 has selected either the proxy module or the master module for communication, control passes to operation 460 (FIG. 4B) and the sockets I/O demultiplexer 318 allocates a new SockFdEx data structure and points to it from the SdOSockets table 610. If at operation 462 the master module is selected, then control passes to operation 464 and the sockets I/O demultiplexer 318 attaches a PriSocketOps structure 660 to the newly-created SockFdEx structure, e.g., by setting a pointer SOps in the SockFdEx structure to point to the PriSocketOps table. Referring to FIG. 6, this is illustrated by the SockFdEx structure 615b, in which the pointer SOps2 is set to point to the PriSocketOps table 660. In addition, the proxy flag field in the SockFdEx structure 615b is set to a value that indicates "false" (operation 466).
By contrast, if at operation 462 the master module is not selected, then control passes to operation 468 and the sockets I/O demultiplexer 318 attaches a ProxySocketOps structure 640 to the newly-created SockFdEx structure, e.g., by setting a pointer SOps in the SockFdEx structure to point to the ProxySocketOps table. Referring to FIG. 6, this is illustrated by the SockFdEx structure 615a, in which the pointer SOps1 is set to point to the ProxySocketOps table 640. In addition, the proxy flag field in the SockFdEx structure 615a is set to a value that indicates "true" (operation 470).
Control then passes to optional operation 472, in which the SMFC flag is cleared from the "protocol" argument before the argument is reused in the reissued call to the socket( ) API. At operation 474 the sockets I/O demultiplexer 318 sends a socket create request to the selected communication module, i.e., either the proxy module or the master module. If an error is returned (operation 476), then at operation 480 an error code is returned to the calling module. By contrast, if no error is returned at operation 476, then a socket descriptor is returned at operation 478.
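Putting operations 460-478 together with the earlier sketches, socket creation in the demultiplexer can be pictured as below; alloc_sockfdex( ) and sdo_store( ) are assumed helpers for managing the SdOSockets table, and the flag value is the assumption introduced above.

SockFdEx *alloc_sockfdex(void);     /* assumed helper */
void      sdo_store(SockFdEx *sfd); /* assumed helper */

/* Sketch of operations 460-478, building on the earlier sketches. */
int demux_socket(int domain, int type, int protocol)
{
    enum module m = select_module((protocol & SMFC_PROXY_SOCKET) != 0);
    if (m == SELECT_ERROR)
        return -1;                             /* operations 432/452 */

    SockFdEx *sfd = alloc_sockfdex();          /* operation 460 */
    if (m == MASTER_MODULE) {                  /* operation 462 */
        sfd->SOps  = &PriSocketOps;            /* operation 464 */
        sfd->proxy = 0;                        /* operation 466 */
    } else {
        sfd->SOps  = &ProxySocketOps;          /* operation 468 */
        sfd->proxy = 1;                        /* operation 470 */
    }

    protocol &= ~SMFC_PROXY_SOCKET;            /* operation 472 */
    sfd->SockFd = sfd->SOps->sock_socket(domain, type, protocol); /* 474 */
    if (sfd->SockFd < 0)                       /* operation 476 */
        return -1;                             /* operation 480 */

    sdo_store(sfd);                            /* record it in SdOSockets 610 */
    return sfd->SockFd;                        /* operation 478 */
}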
FIG. 5 is a flowchart illustrating additional exemplary socket I/O operations that may be implemented by the sockets I/O demultiplexer 318. The operations of FIG. 5 are exemplary for socket I/O requests other than requests to create a socket. The operations illustrated in FIG. 5 assume that the sockets I/O demultiplexer 318 has received a network I/O request addressed to a socket under its management.
At operation 510 the sockets I/O demultiplexer 318 sends a socket I/O request to a selected module, i.e., the proxy module or the master module. In an exemplary implementation operation 510 may be performed by selecting a socket operation that corresponds to the network I/O request in the SocketOps structure that corresponds to the selected module. By way of example, if the network I/O request involves a bind operation and the master module is the selected module, then the sockets I/O demultiplexer 318 would select the ecs_bind operation 664 in the PriSocketOps table 660. Similarly, if the network I/O request involves a bind operation and the proxy module is the selected module, then the sockets I/O demultiplexer 318 would select the ecs_proxy_bind operation 644 in the ProxySocketOps table 640. Hence, the data tables of FIG. 6 provide a mechanism for the sockets I/O demultiplexer 318 to map network I/O requests to socket operations for either the master module or the proxy module. One skilled in the art will recognize that the sockets I/O demultiplexer 318 can match the network I/O request to a corresponding socket operation in either the ProxySocketOps table 640 or the PriSocketOps table 660.
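Because each SockFdEx already carries the appropriate SOps pointer, operation 510 reduces to a one-line dispatch in the sketch below; sdo_lookup( ) is an assumed helper that finds the SockFdEx tracking a given descriptor.

#include <sys/socket.h>

SockFdEx *sdo_lookup(int fd);    /* assumed helper */

/* Sketch of operation 510 for a bind request: the SOps table attached at
 * creation time maps the call to ecs_bind or ecs_proxy_bind. */
int demux_bind(int fd, const struct sockaddr *addr, socklen_t len)
{
    SockFdEx *sfd = sdo_lookup(fd);
    if (sfd == NULL)
        return -1;
    return sfd->SOps->sock_bind(sfd->SockFd, addr, len);
}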
At optional operation 512 the sockets I/O demultiplexer 318 determines whether the call is still pending. If the call is still pending, then the sockets I/O demultiplexer 318 implements a loop that waits for the call to return. Once the call returns, control may pass to operation 514, where the sockets I/O demultiplexer 318 sets a return variable to the value returned by the call to the selected module, i.e., either the proxy module or the master module.
If, at operation 516, the network I/O call was a request to close a socket, then control passes first to operation 518, where the sockets I/O demultiplexer 318 releases the SockFdEx data structure assigned to the socket, then to operation 520, where the sockets I/O demultiplexer 318 clears the socket descriptor slot from the SdOSockets table 610.
Control then passes to operation 522, and the sockets I/O demultiplexer 318 passes the return value to the module that invoked the socket I/O API function call.
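A close request follows the same dispatch but also tears down the tracking state, roughly as sketched here; free_sockfdex( ) and sdo_clear_slot( ) are assumed helpers for releasing the SockFdEx structure and its SdOSockets slot.

void free_sockfdex(SockFdEx *sfd);   /* assumed helper */
void sdo_clear_slot(int fd);         /* assumed helper */

/* Sketch of operations 510-522 for a close request. */
int demux_close(int fd)
{
    SockFdEx *sfd = sdo_lookup(fd);
    if (sfd == NULL)
        return -1;

    int rv = sfd->SOps->sock_close(sfd->SockFd);   /* operations 510-514 */
    free_sockfdex(sfd);                            /* operation 518      */
    sdo_clear_slot(fd);                            /* operation 520      */
    return rv;                                     /* operation 522      */
}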
FIGS. 7-10 are flowcharts that illustrate the use of proxy sockets in multiprocessor communication. Specifically, FIG. 7 is a flowchart that illustrates a socket I/O sequence for a circumstance in which the main processor 310 acts as a communication server for an external application. In this circumstance the main processor uses its own communication resources (i.e., socket I/O API 320 and stack 322) to communicate with external objects. The operations of FIG. 7 will be described with reference to the multiprocessor communication architecture described in FIG. 3, but the particular configuration is not critical.
At operation 710 the application layer 314 generates a socket( ) call and passes the socket( ) call to the sockets I/O demultiplexer 318. The sockets I/O demultiplexer 318 receives the socket( ) call and passes the call to the selected communication module. In this circumstance the master module is the selected communication resource, so at operation 712 the sockets I/O demultiplexer 318 passes the socket( ) request to the socket I/O layer API 320, which returns a handle to the newly-instanced primary socket object (operation 714).
At operation 716 the application layer 314 generates a bind( ) request and passes the bind( ) request to the sockets I/O demultiplexer 318, which in turn passes the bind( ) request to the socket I/O layer API 320 (operation 718). At operation 720 the socket I/O layer API 320 binds the newly-instanced socket handle to a communication resource.
At operation 722 the application layer 314 generates a listen( ) call to convert the newly-instanced socket handle into a listening socket, and passes the listen( ) call to the sockets I/O demultiplexer 318, which in turn passes the listen( ) call to the socket I/O layer API 320. The socket I/O layer API 320 converts the newly-instanced socket handle into a listening socket and sends a reply to the application layer 314 (operation 726). At operation 728 the application layer 314 generates an accept( ) call and passes the accept( ) call to the sockets I/O demultiplexer 318, which passes the accept( ) call to the socket I/O layer API 320. The socket I/O layer API 320 configures the newly-instanced socket handle to accept an incoming connection request from a remote application. In an exemplary implementation, the accept( ) call may be implemented as a blocking call, such that the process or thread waits for an incoming (or inbound) connection request from a remote application.
At operation 732 the socket I/O layer API 320 receives a connect request from a remote application. At operation 734 the socket I/O layer API 320 unblocks the accept wait, and at operation 736 the socket I/O layer API 320 forwards a newly-instanced socket handle to the socket I/O demultiplexer 318, which in turn forwards the newly-instanced socket handle to the application layer 314 (operation 738).
At operation 740 the remote application communicates with the application layer 314 using the newly-instanced socket handle. When the communication session is finished, the application layer 314 generates a close( ) call (operation 742) to close the socket created in operation 736 and passes the close( ) call to the sockets I/O demultiplexer 318 (operation 744), which in turn passes the close( ) call to the socket I/O API 320. At operation 746 the socket I/O API 320 closes the socket instanced in operation 736 for send and receive operations and returns a response to the application layer 314.
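Seen from the application layer, the FIG. 7 sequence is the conventional blocking server pattern; a minimal sketch, with error handling omitted and the port and buffer size chosen arbitrarily, might look like this (the demultiplexer and master module are transparent to the caller).

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

void serve_one_client(unsigned short port)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);          /* operations 710-714 */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);    /* operations 716-720 */

    listen(lfd, 5);                                      /* operations 722-726 */

    int cfd = accept(lfd, NULL, NULL);                   /* operations 728-738:
                                                            blocks until a remote
                                                            connect request      */
    char buf[512];
    ssize_t n = read(cfd, buf, sizeof buf);              /* operation 740 */
    if (n > 0)
        write(cfd, buf, (size_t)n);

    close(cfd);                                          /* operations 742-746 */
    close(lfd);
}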
FIGS. 8A-8B are a flowchart illustrating an exemplary socket I/O sequence for multiprocessor communication in which the main processor 310 acts as a communication server for a remote application, but communicates using a proxy socket managed by the main processor 310 that has a real-socket equivalent managed by the co-processor 350. The operations of FIG. 8 will be described with reference to the multiprocessor communication architecture described in FIG. 3, but the particular configuration is not critical.
At operation 810 an application 314 generates a socket( ) call and passes the socket( ) call to the sockets I/O demultiplexer 318. In an exemplary implementation the socket( ) call specifies that the socket call is to a proxy socket, e.g., by including a SMFC flag in a “protocol” formal parameter passed with the socket( ) call as described above. The sockets I/O demultiplexer 318 receives the socket( ) call and, at operation 812, passes the socket( ) request to the co-processor 350.
In an exemplary implementation the sockets I/O demultiplexer 318 passes various socket calls to the co-processor 350 via a shared memory function call (SMFC). Accordingly, the socket( ) call is passed to the proxy/socket mirror module 324, which passes the socket( ) call to a shared memory driver 326. The shared memory driver 326 maintains a communication path to shared memory driver 362 in the co-processor via shared memory module 370. The shared memory driver 362 retrieves the call from the shared memory module 370 and passes it up its communication stack to the primary/socket master module 360 and to the socket I/O layer 358 for processing. Communication from the co-processor to the main processor follows the reverse path.
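The client side of one such forwarded call can be pictured roughly as below. Every name here (smfc_request, SMFC_OP_SOCKET, shm_post, shm_wait_reply) is an assumption for illustration; the description does not define the shared memory message format.

#define SMFC_OP_SOCKET 1        /* assumed operation code */

struct smfc_request {
    int op;         /* which socket API is being invoked        */
    int args[4];    /* marshalled scalar arguments              */
    int result;     /* filled in by the co-processor on return  */
};

void shm_post(struct smfc_request *req);        /* assumed: driver 326 -> 370 */
void shm_wait_reply(struct smfc_request *req);  /* assumed: wait for 362/360  */

/* Conceptual sketch of the proxy module's client-side stub for socket(). */
int ecs_proxy_socket(int domain, int type, int protocol)
{
    struct smfc_request req = { .op   = SMFC_OP_SOCKET,
                                .args = { domain, type, protocol } };

    shm_post(&req);         /* shared memory driver 326 places the request */
    shm_wait_reply(&req);   /* driver 362 delivers it to modules 360/358;
                               the reply travels the reverse path          */
    return req.result;      /* handle of the real socket on co-processor   */
}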
At operation 814 the socket I/O layer API 358 generates a newly instanced socket and at operation 818 the socket I/O layer API 358 returns a handle to the newly-instanced socket for send and receive operations to the socket I/O demultiplexer 318 in the main processor. At operation 816 the socket I/O demultiplexer 318 passes the handle to the application 314. At operation 820 the application 314 generates a bind( ) call and passes the bind( ) call to the socket I/O demultiplexer 318. At operation 822 the socket I/O demultiplexer 318 passes the bind( ) call to the co-processor 350. At operation 824 the socket I/O layer API 358 in co-processor 350 binds the newly-instanced socket handle to a communication resource in the co-processor and returns an acknowledgment to the socket I/O demultiplexer 318, which passes the acknowledgment back to the application 314.
At operation 826 the application 314 generates a listen( ) call, which is passed to the socket I/O demultiplexer 318. At operation 828 the socket I/O demultiplexer 318 passes the listen( ) call to the co-processor 350. At operation 830 the socket I/O layer API 358 executes the listen( ) call and returns an acknowledgment to the socket I/O demultiplexer 318, which passes the acknowledgment back to the application 314.
At operation 832 the application generates an accept( ) call, which is passed to the socket I/O demultiplexer 318. At operation 834 the accept( ) call is passed to the co-processor 350. At operation 836 the socket I/O API 358 executes the accept( ) call, which places the proxy socket implemented at the co-processor into a state that can accept a connect-request from a remote application. In an exemplary implementation an accept( ) call may be implemented as a blocking call. At operation 838 the newly-instanced socket receives a connect request from a remote application. At operation 840 the socket I/O API 358 returns the newly instanced socket handle to the application 314 in main processor 310.
At operation 844 the application 314 in the main processor communicates with the remote application that invoked the connect( ) call over the communication path via the proxy (or managed) socket, whose equivalent real socket is instantiated in co-processor 350. When the communication session is finished, the application 314 passes a close( ) call to the socket I/O demultiplexer 318 (operation 846). At operation 848 the socket I/O demultiplexer passes the close( ) call to the co-processor 350. At operation 850 the socket I/O API 358 executes the socket close call to close the real socket instance implemented in co-processor 350 (see operation 840), and passes an acknowledgment back to the application 314, which terminates socket operations over the proxy socket at operation 852.
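From the application's side, the FIG. 8A-8B sequence differs from the FIG. 7 sketch only in the flag that selects the proxy path; the sketch below reuses the assumed SMFC_PROXY_SOCKET flag, and the port value is arbitrary. Once the descriptor exists, accept( ), the data exchange, and close( ) (operations 832-852) are issued exactly as in the conventional case, with each call carried to socket I/O API 358 over the shared memory path.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int open_proxy_listener(unsigned short port)
{
    /* operations 810-818: the SMFC flag routes creation to the co-processor */
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP | SMFC_PROXY_SOCKET);

    struct sockaddr_in a = { 0 };
    a.sin_family      = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    a.sin_port        = htons(port);

    bind(fd, (struct sockaddr *)&a, sizeof a);   /* operations 820-824 */
    listen(fd, 5);                               /* operations 826-830 */
    return fd;                                   /* accept()/close() follow
                                                    operations 832-852     */
}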
FIGS. 9A-9B are a flowchart illustrating an exemplary socket I/O sequence for multiprocessor communication in which the main processor 310 acts as a communication client for a remote application, and communicates using a proxy socket managed by the main processor 310 that has an equivalent real socket managed by the co-processor 350. The operations of FIG. 9 will be described with reference to the multiprocessor communication architecture described in FIG. 3, but the particular configuration is not critical.
Operations 910-924 involve opening and binding a proxy socket, and may be implemented as described in operations 810-824. For brevity and clarity, these operations will not be repeated in detail.
At operation 926 the application 314 generates a connect( ) call and passes the connect( ) call to the socket I/O demultiplexer 318. At operation 928 the socket I/O demultiplexer 318 passes the connect request to the co-processor 350. At operation 930 the socket I/O API 358 executes the connect( ) call and sends an acknowledgment back to the socket I/O demultiplexer 318.
At operation 932 the applications communicate using the newly instanced socket. When the communication session is finished the application 314 generates a close( ) call and passes the close( ) call to the socket I/O demultiplexer 318. At operation 936 the socket I/O demultiplexer 318 passes the close( ) call to the co-processor. At operation 938 the socket I/O API 358 executes the close( ) call to close the real socket instanced in operation 914 in the co-processor 350 and returns an acknowledgment back to the application 314, which terminates socket operations over the proxy socket at operation 940.
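The corresponding client-side sketch is short: the proxy socket is created as in FIG. 8, and the connect( ) call (operations 926-930) is forwarded so that the real connection is established by the co-processor's stack. The SMFC_PROXY_SOCKET flag remains the assumption introduced earlier, and error handling is minimal.

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int proxy_connect(const struct sockaddr_in *peer)
{
    /* operations 910-924: create (and, if needed, bind) the proxy socket */
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP | SMFC_PROXY_SOCKET);

    /* operations 926-930: the connect request executes on co-processor 350 */
    if (connect(fd, (const struct sockaddr *)peer, sizeof *peer) < 0) {
        close(fd);
        return -1;
    }
    return fd;    /* operations 932-940: exchange data, then close(fd) */
}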
In another implementation an application 314 on main processor 310 may communicate with an application 354 on co-processor 350. This may be implemented using the operations illustrated in FIGS. 8A-8B with minor changes. In operation 820 the application sends a bind( ) request to bind the communication resource to a local address on the co-processor (i.e., a loopback address). When the bind( ) call in operation 820 is executed, it enables a communication path between the application 314 on the main processor and an application 354 on the co-processor. The application 354 on the co-processor 350 creates and binds a socket to a loopback address to enable the communication path. In an exemplary implementation the loopback network interface is assigned the IP address 127.0.0.1 in the network layer of the TCP/IP stack. The remaining communication operations can be implemented as described in FIGS. 8A-8B.
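On the co-processor side, the loopback arrangement amounts to the application 354 listening on 127.0.0.1 so that the proxy socket created by application 314 (whose real socket also lives on the co-processor) can reach it without any externally visible address. A minimal sketch, with an arbitrarily chosen port, might be:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int open_loopback_listener(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in a = { 0 };
    a.sin_family = AF_INET;
    a.sin_port   = htons(7000);                      /* assumed port     */
    inet_pton(AF_INET, "127.0.0.1", &a.sin_addr);    /* loopback address */

    bind(fd, (struct sockaddr *)&a, sizeof a);
    listen(fd, 1);
    return fd;
}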
The system architecture and techniques described herein enable multiprocessor communication using proxy sockets. Applications on a first processor can communicate with remote applications using communication resources of a second processor. The remote application can be external to both processors, or can execute on the second processor, in which case communication takes place over a private loopback address for added security rather than over a real socket bound to an existing public address on the second processor.
Although the described arrangements and procedures have been described in language specific to structural features and/or methodological operations, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or operations described. Rather, the specific features and operations are disclosed as preferred forms of implementing the claimed subject matter.

Claims (20)

What is claimed is:
1. A method of computing, comprising:
receiving, at a first processor, a request to provide a communication resource for an application executing on the first processor to communicate with a remote application;
in response to the communication request, opening a communication resource on a second processor; and
implementing communication operations between the application executing on the first processor and the remote application using the communication resource on the second processor, wherein communication operations between the application executing on the first processor and the remote application are enabled by a demultiplexer module operable on the first processor.
2. The method of claim 1, wherein the communication resource comprises a network socket, and wherein opening a communication resource on a second processor comprises binding the network socket to a network connection managed by the second processor.
3. The method of claim 2, wherein opening a communication resource on the second processor comprises passing a socket descriptor from the second processor to the first processor.
4. The method of claim 3, wherein communication operations between the application executing on the first processor and the remote application using the communication resource on the second processor comprise information about the socket descriptor.
5. The method of claim 1, wherein opening a communication resource on a second processor comprises creating a data table that maps communication requests to an application programming interface method that creates a communication resource on the second processor.
6. The method of claim 1, comprising closing the communication resource on the second processor.
7. The method of claim 1, wherein the first processor comprises a communication server for the remote application.
8. A multiprocessor computing system, comprising:
a first processor and a second processor; and
a demultiplexer module operable on the first processor to manage communication operations between an application executing on the first processor and a remote application using communication resources of the second processor.
9. The multiprocessor computing system of claim 8, wherein the demultiplexer module is configured to open a communication resource on the second processor in response to a communication request for an application executing on the first processor to communicate with a remote application.
10. The multiprocessor computing system of claim 9, wherein the demultiplexer module is configured to create a data table that maps communication requests to an application programming interface method that creates a communication resource on the second processor.
11. The multiprocessor computing system of claim 9, wherein the first processor comprises a communication server for the remote application.
12. The multiprocessor computing system of claim 8, wherein the demultiplexer module is configured to manage inter-processor communication between the first processor and the second processor.
13. The multiprocessor computing system of claim 8, wherein the demultiplexer module is configured to select between communication resources on the first processor and communication resources on the second processor.
14. The multiprocessor computing system of claim 8, wherein the communication resources comprise network sockets.
15. A non-transitory machine-readable medium that stores machine-readable instructions executable by a first processor to perform multiprocessor communication, the non-transitory machine-readable medium comprising:
machine-readable instructions that, when executed by the first processor, receive, at a first processor, a request to provide a communication resource for an application executing on the first processor to communicate with a remote application;
machine-readable instructions that, when executed by the first processor, in response to the communication request, open a communication resource on a second processor, wherein the communication resource comprises a network socket, and wherein opening a communication resource on a second processor comprises binding the network socket to a network connection managed by the second processor; and
machine-readable instructions that, when executed by the first processor, implement communication operations between the application executing on the first processor and the remote application using the communication resource on the second processor, wherein communication operations between the application executing on the first processor and the remote application are enabled by a demultiplexer module operable on the first processor.
16. The non-transitory machine-readable medium of claim 15, wherein the machine-readable instructions that, when executed by the first processor, open a communication resource on the second processor comprises machine-readable instructions that, when executed by the first processor, pass a socket descriptor from the second processor to the first processor.
17. The non-transitory machine-readable medium of claim 16, wherein communication operations between the application executing on the first processor and the remote application using the communication resource on the second processor comprise information about the socket descriptor.
18. The non-transitory machine-readable medium of claim 15, wherein opening a communication resource on a second processor comprises creating a data table that maps communication requests to an application programming interface method that creates a communication resource on the second processor.
19. The non-transitory machine-readable medium of claim 15, comprising machine-readable instructions that, when executed by the first processor, close the communication resource on the second processor.
20. The non-transitory machine-readable medium of claim 15, wherein the first processor comprises a communication server for the remote application.
US13/232,382 2004-05-27 2011-09-14 Communication in multiprocessor using proxy sockets Active US8484357B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/232,382 US8484357B2 (en) 2004-05-27 2011-09-14 Communication in multiprocessor using proxy sockets

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/856,263 US8090837B2 (en) 2004-05-27 2004-05-27 Communication in multiprocessor using proxy sockets
US13/232,382 US8484357B2 (en) 2004-05-27 2011-09-14 Communication in multiprocessor using proxy sockets

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/856,263 Continuation US8090837B2 (en) 2004-05-27 2004-05-27 Communication in multiprocessor using proxy sockets

Publications (2)

Publication Number Publication Date
US20120005350A1 US20120005350A1 (en) 2012-01-05
US8484357B2 true US8484357B2 (en) 2013-07-09

Family

ID=35461829

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/856,263 Expired - Fee Related US8090837B2 (en) 2004-05-27 2004-05-27 Communication in multiprocessor using proxy sockets
US13/232,379 Active 2024-12-05 US8650302B2 (en) 2004-05-27 2011-09-14 Communication in multiprocessor using proxy sockets
US13/232,382 Active US8484357B2 (en) 2004-05-27 2011-09-14 Communication in multiprocessor using proxy sockets

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US10/856,263 Expired - Fee Related US8090837B2 (en) 2004-05-27 2004-05-27 Communication in multiprocessor using proxy sockets
US13/232,379 Active 2024-12-05 US8650302B2 (en) 2004-05-27 2011-09-14 Communication in multiprocessor using proxy sockets

Country Status (1)

Country Link
US (3) US8090837B2 (en)

Patent Citations (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5644789A (en) 1995-01-19 1997-07-01 Hewlett-Packard Company System and method for handling I/O requests over an interface bus to a storage disk array
US5867660A (en) 1995-05-11 1999-02-02 Bay Networks, Inc. Method and apparatus for communicating between a network workstation and an internet
US5657390A (en) * 1995-08-25 1997-08-12 Netscape Communications Corporation Secure socket layer application program apparatus and method
US6169992B1 (en) 1995-11-07 2001-01-02 Cadis Inc. Search engine for remote access to database management systems
US5828840A (en) 1996-08-06 1998-10-27 Verifone, Inc. Server for starting client application on client if client is network terminal and initiating client application on server if client is non network terminal
US6163797A (en) 1996-08-06 2000-12-19 Hewlett-Packard Company Application dispatcher for seamless, server application support for network terminals and non-network terminals
US5931917A (en) 1996-09-26 1999-08-03 Verifone, Inc. System, method and article of manufacture for a gateway system architecture with system administration information accessible from a browser
US5978840A (en) * 1996-09-26 1999-11-02 Verifone, Inc. System, method and article of manufacture for a payment gateway system architecture for processing encrypted payment transactions utilizing a multichannel, extensible, flexible architecture
US6189046B1 (en) 1997-03-27 2001-02-13 Hewlett-Packard Company Mechanism and method for merging cached location information in a distributed object environment
US6076113A (en) 1997-04-11 2000-06-13 Hewlett-Packard Company Method and system for evaluating user-perceived network performance
US6061665A (en) 1997-06-06 2000-05-09 Verifone, Inc. System, method and article of manufacture for dynamic negotiation of a network payment framework
US5964891A (en) 1997-08-27 1999-10-12 Hewlett-Packard Company Diagnostic system for a distributed data access networked system
US6141793A (en) 1998-04-01 2000-10-31 Hewlett-Packard Company Apparatus and method for increasing the performance of interpreted programs running on a server
US6212560B1 (en) 1998-05-08 2001-04-03 Compaq Computer Corporation Dynamic proxy server
US7581006B1 (en) * 1998-05-29 2009-08-25 Yahoo! Inc. Web service
US6230240B1 (en) 1998-06-23 2001-05-08 Hewlett-Packard Company Storage management system and auto-RAID transaction manager for coherent memory map across hot plug interface
US6192410B1 (en) 1998-07-06 2001-02-20 Hewlett-Packard Company Methods and structures for robust, reliable file exchange between secured systems
US6098190A (en) 1998-08-04 2000-08-01 Hewlett-Packard Co. Method and apparatus for use of a host address to validate accessed data
US6438597B1 (en) 1998-08-17 2002-08-20 Hewlett-Packard Company Method and system for managing accesses to a data service system that supports persistent connections
US6477139B1 (en) 1998-11-15 2002-11-05 Hewlett-Packard Company Peer controller management in a dual controller fibre channel storage enclosure
US6549956B1 (en) 1998-11-30 2003-04-15 Hewlett Packard Company Mechanism for connecting disparate publication and subscribe domains via the internet
US6754831B2 (en) * 1998-12-01 2004-06-22 Sun Microsystems, Inc. Authenticated firewall tunneling framework
US6640278B1 (en) 1999-03-25 2003-10-28 Dell Products L.P. Method for configuration and management of storage resources in a storage network
US6553422B1 (en) 1999-04-26 2003-04-22 Hewlett-Packard Development Co., L.P. Reverse HTTP connections for device management outside a firewall
US6718388B1 (en) 1999-05-18 2004-04-06 Jp Morgan Chase Bank Secured session sequencing proxy system and method therefor
US6334056B1 (en) 1999-05-28 2001-12-25 Qwest Communications Int'l., Inc. Secure gateway processing for handheld device markup language (HDML)
US6397245B1 (en) 1999-06-14 2002-05-28 Hewlett-Packard Company System and method for evaluating the operation of a computer over a computer network
US6584567B1 (en) 1999-06-30 2003-06-24 International Business Machines Corporation Dynamic connection to multiple origin servers in a transcoding proxy
US6234582B1 (en) 1999-11-03 2001-05-22 Sports World Enterprise Co., Ltd. Support of a golf cartwheel
US20010042131A1 (en) * 2000-04-14 2001-11-15 John Mathon System for handling information and information transfers in a computer network
US7032005B2 (en) * 2000-04-14 2006-04-18 Slam Dunk Networks, Inc. System for handling information and information transfers in a computer network
US20020032780A1 (en) 2000-04-24 2002-03-14 Microsoft Corporation Systems and methods for uniquely and persistently identifying networks
US7380024B2 (en) * 2000-10-03 2008-05-27 Attachmate Corporation System and method for communication with host internal area access
US20020111996A1 (en) 2001-01-26 2002-08-15 David Jones Method, system and apparatus for networking devices
US20020078135A1 (en) 2001-03-15 2002-06-20 Ibm Corporation Method and apparatus for improving the operation of an application layer proxy
US20030165160A1 (en) 2001-04-24 2003-09-04 Minami John Shigeto Gigabit Ethernet adapter
US20020194292A1 (en) 2001-05-31 2002-12-19 King Peter F. Method of establishing a secure tunnel through a proxy server between a user device and a secure server
US7594230B2 (en) * 2001-06-11 2009-09-22 Microsoft Corporation Web server architecture
US7263701B2 (en) 2001-09-04 2007-08-28 Samsung Electronics Co., Ltd. Interprocess communication method and apparatus
US20030055943A1 (en) 2001-09-17 2003-03-20 Hiroki Kanai Storage system and management method of the storage system
US20030149770A1 (en) * 2001-10-05 2003-08-07 Delaire Brian Augustine Storage area network methods and apparatus with file system extension
US20030088683A1 (en) 2001-11-07 2003-05-08 Hitachi, Ltd. Storage management computer
US20030120386A1 (en) 2001-12-20 2003-06-26 Storage Technology Corporation Automated physical disk storage and management
US6600967B2 (en) 2001-12-20 2003-07-29 Storage Technology Corporation Automated physical disk storage and management
US20030154306A1 (en) * 2002-02-11 2003-08-14 Perry Stephen Hastings System and method to proxy inbound connections to privately addressed hosts
US20030182464A1 (en) 2002-02-15 2003-09-25 Hamilton Thomas E. Management of message queues
US7296154B2 (en) 2002-06-24 2007-11-13 Microsoft Corporation Secure media path methods, systems, and architectures
US7406709B2 (en) * 2002-09-09 2008-07-29 Audiocodes, Inc. Apparatus and method for allowing peer-to-peer network traffic across enterprise firewalls
US7802001B1 (en) * 2002-10-18 2010-09-21 Astute Networks, Inc. System and method for flow control within a stateful protocol processing system
US20040157557A1 (en) 2003-02-07 2004-08-12 Lockheed Martin Corporation System for a dynamic ad-hoc wireless network
US7676788B1 (en) * 2003-03-25 2010-03-09 Electric Cloud, Inc. Architecture and method for executing program builds
US20040264383A1 (en) 2003-06-27 2004-12-30 Microsoft Corporation Media foundation topology
US7774375B2 (en) 2003-06-27 2010-08-10 Microsoft Corporation Media foundation topology
US20040267935A1 (en) 2003-06-30 2004-12-30 Kestutis Patiejunas System and method for message-based scalable data transport
US20050165932A1 (en) 2004-01-22 2005-07-28 International Business Machines Corporation Redirecting client connection requests among sockets providing a same service
US20050229184A1 (en) 2004-03-17 2005-10-13 Nec Corporation Inter-processor communication system in parallel processing system by OS for single processors and program thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11137913B2 (en) 2019-10-04 2021-10-05 Hewlett Packard Enterprise Development Lp Generation of a packaged version of an IO request
US11500542B2 (en) 2019-10-04 2022-11-15 Hewlett Packard Enterprise Development Lp Generation of a volume-level of an IO request

Also Published As

Publication number Publication date
US8650302B2 (en) 2014-02-11
US20120005350A1 (en) 2012-01-05
US8090837B2 (en) 2012-01-03
US20120005349A1 (en) 2012-01-05
US20050278460A1 (en) 2005-12-15

Similar Documents

Publication Publication Date Title
US8484357B2 (en) Communication in multiprocessor using proxy sockets
US11340672B2 (en) Persistent reservations for virtual disk using multiple targets
US9760408B2 (en) Distributed I/O operations performed in a continuous computing fabric environment
US7774785B2 (en) Cluster code management
US6044415A (en) System for transferring I/O data between an I/O device and an application program's memory in accordance with a request directly over a virtual connection
US10614096B2 (en) Disaster recovery of mobile data center via location-aware cloud caching
US7814065B2 (en) Affinity-based recovery/failover in a cluster environment
KR100232247B1 (en) Virtual shared disks with application-transparent recovery
US6397293B2 (en) Storage management system and auto-RAID transaction manager for coherent memory map across hot plug interface
US8495254B2 (en) Computer system having virtual storage apparatuses accessible by virtual machines
US7743372B2 (en) Dynamic cluster code updating in logical partitions
US5784617A (en) Resource-capability-based method and system for handling service processor requests
US8677034B2 (en) System for controlling I/O devices in a multi-partition computer system
US20060174087A1 (en) Computer system, computer, storage system, and control terminal
US20040254984A1 (en) System and method for coordinating cluster serviceability updates over distributed consensus within a distributed data system cluster
KR20200017363A (en) MANAGED SWITCHING BETWEEN ONE OR MORE HOSTS AND SOLID STATE DRIVES (SSDs) BASED ON THE NVMe PROTOCOL TO PROVIDE HOST STORAGE SERVICES
US8151070B2 (en) System and method for backup by splitting a copy pair and storing a snapshot
US7774656B2 (en) System and article of manufacture for handling a fabric failure
JP2000155729A (en) Improved cluster management method and device
MX2007002204A (en) Apparatus, system, and method for file system serialization reinitialization.
US7971004B2 (en) System and article of manufacture for dumping data in processing systems to a shared storage
JP2002505471A (en) Method and apparatus for interrupting and continuing remote processing
Cardoza et al. Design of the TruCluster multicomputer system for the Digital UNIX environment
US11513716B2 (en) Write first to winner in a metro cluster
Hai-Ying et al. NVD: the network virtual device for HA-SonD

Legal Events

Date Code Title Description

STCF Information on status: patent grant
  Free format text: PATENTED CASE

CC Certificate of correction

AS Assignment
  Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS
  Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001
  Effective date: 20151027

FPAY Fee payment
  Year of fee payment: 4

MAFP Maintenance fee payment
  Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
  Year of fee payment: 8