US20150026359A1 - Methods and systems for reconfiguration and repartitioning of a parallel distributed stream process - Google Patents
- Publication number
- US20150026359A1 (U.S. application Ser. No. 14/508,666)
- Authority
- US
- United States
- Prior art keywords
- operator
- stream
- operators
- data
- partition function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/004—Error avoidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
Definitions
- the present disclosure relates to data streaming and, more particularly, to methods and systems for reconfiguration and repartitioning of a parallel distributed stream process.
- MapReduce is a programming model and library designed to simplify distributed processing of huge datasets on large clusters of computers and largely relieves the programmer from the burden of handling distributed computing tasks such as data distribution, process coordination, fault tolerance, and scaling.
- System scalability can be achieved in several ways.
- the process may be broken into a number of operators, which can be chained together in a directed acyclic graph (DAG), such that the input, output and/or execution of one or more operators is dependent on one or more other operators.
- individual operators are partitioned into multiple independently operating units and each operator works on a subset of the data. In this approach, the result of all operators is redistributed to the next set of operators in the computational graph.
- a stream processing system with a distributed computational model
- data constantly flows from one or more input sources through a set of operators (e.g., filters, aggregates, and correlations), which are usually identified by a network name in the cluster, to one or more output sinks.
- the individual operators generally produce result sets that are either sent to applications or other nodes for additional processing.
- the output of an operator can branch to multiple downstream operators and can be combined by operators with multiple inputs.
- This form of computation and data transport is called a stream.
- the individual operators in the stream are long lived. Once the operators are instantiated on a node, and after the network connections have been established, the job configuration typically stays fixed except in cases of hardware and/or software failures.
- a method of reconfiguring a stream process in a distributed system includes the initial step of managing a stream process including one or more operators.
- the one or more operators are communicatively associated with one or more stream targets.
- the one or more operators use a partition function to determine the routing of messages to the one or more stream targets.
- the method includes the steps of determining a safe state within the stream process, and configuring a configuration state of the one or more operators during the safe state.
- a method for management of a stream processing system includes the initial step of managing a stream process in a distributed system.
- the stream process includes one or more operators.
- the one or more operators include an operator dataflow and a configuration state of the operator dataflow.
- the method includes the steps of determining a safe state within the stream process, and configuring the configuration state of the operator dataflow during the safe state.
- FIG. 1 is a schematic representation of a stream processing system in accordance with an embodiment of the present disclosure
- FIG. 2 is a schematic representation of an operator in a stream processing application in accordance with an embodiment of the present disclosure
- FIG. 3 schematically illustrates a stream process in accordance with an embodiment of the present disclosure
- FIG. 4 schematically illustrates a stream process in accordance with another embodiment of the present disclosure
- FIG. 5 schematically illustrates a method of adding a new operator into a running stream process in accordance with an embodiment of the present disclosure
- FIG. 6 schematically illustrates a method of removing an operator from a running stream process in accordance with an embodiment of the present disclosure
- FIG. 7 schematically illustrates the progression of re-partitioning of an operator in a stream process which computes over a time window in accordance with an embodiment of the present disclosure
- FIG. 8 is a flowchart illustrating a method of reconfiguring a stream process in a distributed system in accordance with an embodiment of the present disclosure.
- FIG. 9 is a flowchart illustrating a method for management of a stream processing system in accordance with an embodiment of the present disclosure.
- non-transitory computer-readable media include all computer-readable media, with the sole exception being a transitory, propagating signal.
- batch generally refers to an indivisible unit of data.
- a batch may contain no data (an empty batch), a single datum or many data elements.
- Each batch in a stream may be assigned an identifier that uniquely names it within the stream.
- the values of batch identifiers in a stream are monotonically increasing. In alternative embodiments not shown the values of batch identifiers may be monotonically decreasing. In general, the values of batch identifiers may be realized as monotonically increasing or decreasing functions of some algorithm-dependent ordering. It is to be understood that the values of batch identifiers may be ordered in any suitable way.
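The monotonic batch-identifier scheme can be sketched in a few lines of Python; the class name and the choice of consecutive integers are illustrative, since the disclosure only requires that identifiers be ordered:

```python
import itertools

class BatchIdGenerator:
    """Assigns monotonically increasing identifiers to batches in a stream.

    A hypothetical sketch: the disclosure only requires an ordering, not
    consecutive integers, so any monotone function would do."""

    def __init__(self, start=0):
        self._counter = itertools.count(start)

    def next_id(self):
        return next(self._counter)

gen = BatchIdGenerator()
ids = [gen.next_id() for _ in range(4)]
assert ids == sorted(ids)  # monotonically increasing, as required
```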
- reconfiguration generally refers to “online reconfiguration”.
- online reconfiguration generally refers to reconfiguration of the stream process while it is active or running, e.g., the operators are connected over the network and individual operators may have accumulated state.
- a message may span multiple network packets.
- Various embodiments of the present disclosure provide a method and system for online reconfiguration and repartitioning of a parallel distributed stream process.
- Various embodiments of the present disclosure provide a method of reconfiguring one or more connections of a running data flow, and may preserve the semantics and correctness of the stream process and/or may provide the ability to restart individual operators in the face of failures (node failures, network failures, etc.) and/or network partitions.
- Various embodiments of the present disclosure provide methods for dynamically reconfiguring parameters of an operator in a stream process and/or for dynamically reconfiguring the layout and connections of a stream process with minimal or no disruption of the data flow.
- Various embodiments of the present disclosure provide methods for dynamically changing the stream configuration.
- the presently-disclosed methods of online reconfiguration and repartitioning of a parallel distributed stream process may preserve fault tolerance and/or restartability of the stream process.
- the teachings of the present disclosure may also apply to reconfiguration and/or repartitioning of any stream-like process, e.g., to ensure that the system parts that interact with entities under reconfiguration do not fail or exhibit non-deterministic behavior because of reconfiguration.
- restartability may be achieved by buffering computed data in upstream operators until it is safe to discard the data.
- downstream operators may send a commit message about the successful processing and delivery upstream.
- each of the upstream operators can safely discard the buffered data and further commit processing to its own upstream operators.
- buffered data may be discarded without receipt of a commit message.
- when the operator has received all commits, it sends a commit to all upstream subscriptions for that particular stream, and may take its window size into account.
- the presently-disclosed chained buffering-and-commit approach ensures that intermediate data is always available, e.g., in case of a failure of any of the operators, and/or allows each downstream operator to re-create its input data set to resume the stream at the point of failure.
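The chained buffering-and-commit approach described above can be sketched as follows; all names are illustrative and the network transport is elided:

```python
from collections import OrderedDict

class UpstreamBuffer:
    """Sketch of chained buffering-and-commit: an upstream operator retains
    each emitted batch until every downstream subscriber acknowledges it,
    so any downstream operator can replay its input after a failure.
    Names are illustrative, not taken from the disclosure."""

    def __init__(self, num_subscribers):
        self.num_subscribers = num_subscribers
        self.buffered = OrderedDict()    # batch_id -> [data, commits received]

    def emit(self, batch_id, data):
        self.buffered[batch_id] = [data, 0]
        return data                      # here the data would be sent downstream

    def commit(self, batch_id):
        entry = self.buffered[batch_id]
        entry[1] += 1
        if entry[1] == self.num_subscribers:
            del self.buffered[batch_id]  # safe to discard the buffered batch
            return True                  # signal: forward the commit upstream
        return False

buf = UpstreamBuffer(num_subscribers=2)
buf.emit(7, ["payload"])
assert buf.commit(7) is False            # first subscriber acknowledged
assert buf.commit(7) is True             # all acknowledged: discard, commit upstream
assert 7 not in buf.buffered
```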
- an operator in stream-based processes may be connected to one or more stream input sources and/or one or more stream output sinks.
- an operator in stream-based processes may include an operator function performing a transformation on the input data, and/or may include a partition function that determines to which stream targets (e.g., operators and/or data sinks) to route the data.
- the presently-disclosed methods may treat the configuration of a stream as a fundamental element of the stream.
- the stream configuration is multiplexed into the actual data stream, and the partitioning function and/or operator configurations may be changed during the period between two consecutive batches (also referred to herein as the “quiescence period”) whereby the overall system configuration remains consistent, e.g., to preserve correctness, to ensure the reliability of the system, and/or to ensure that the system parts that interact with entities under reconfiguration do not fail because of reconfiguration.
- the stream system configuration and the data stream become an integral, indivisible whole, and a single error recovery mechanism for data and configuration can be used to restore the system in case of a failure.
- all or substantially all standard mechanisms in the stream processing system remain unchanged.
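A minimal sketch of multiplexing the configuration into the data stream and demultiplexing it on the operator side; the tuple tagging scheme is an assumption, not something the disclosure specifies:

```python
def demultiplex(combined_stream):
    """Split a combined stream into data batches and configuration states.

    The disclosure multiplexes configuration into the data stream so one
    recovery mechanism covers both; the ("data"/"config", batch_id, payload)
    tagging used here is an assumed wire format for illustration."""
    data, configs = [], []
    for kind, batch_id, payload in combined_stream:
        (configs if kind == "config" else data).append((batch_id, payload))
    return data, configs

stream = [("data", 1, "d1"), ("config", 1, {"partitions": 3}), ("data", 2, "d2")]
data, configs = demultiplex(stream)
assert data == [(1, "d1"), (2, "d2")]
assert configs == [(1, {"partitions": 3})]
```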
- FIG. 1 shows a schematic representation of a stream processing system (shown generally as 100 ) in accordance with an embodiment of the present disclosure that includes a plurality of computer nodes 110 , 120 and 130 interconnected by a network 140 .
- Each node 110 , 120 and 130 may include one or more central processing units (CPUs), which may be coupled to memory and/or one or more disks or other storage.
- CPUs may include any type of computing device, computational circuit, or any type of processor or processing circuit capable of executing a series of instructions that are stored in a memory.
- CPUs may be adapted to run an operating system platform and application programs.
- Nodes may communicate with other nodes over the network (also referred to herein as the “interconnect”), which may be Ethernet or other computer networking technologies.
- Memory may be random access memory (RAM), or any other form of memory, e.g., flash, static RAM (SRAM), dynamic RAM (DRAM), ROM (read-only memory), MRAM (magnetoresistive random access memory), EPROM or E2PROM.
- FIG. 2 shows a schematic representation of an operator (shown generally as 200 ) for use in a stream processing application according to an embodiment of the present disclosure.
- Operator 200 includes an operator dataflow 210 and the configuration state 204 of the operator dataflow 210 .
- Operator dataflow 210 includes an input connector 201 for receiving messages from other operators or a source external to the stream (not shown in FIG. 2 ), a de-multiplexer 202 , an operator function 203 which performs the data transformation and/or computation, and an output connector 206 to send messages to other operators (not shown in FIG. 2 ).
- the operator dataflow 210 may include a partition function 205 for message routing.
- a stream may include one or more operators which receive data from one or more other operators and/or from a source external to the stream, and which perform a data transformation and/or computation over a “batch window” (e.g., spanning one or more batches) and may send the result of the data transformation and/or computation to one or more operators and/or one or more sinks external to the stream process.
- Each batch window may be assigned a unique identifier. Batch identifiers may be generated by any suitable method.
- each batch window may be assigned a unique identifier which is derived from the smallest batch identifier covered by the window.
- Operators may depend on one or more input streams to perform an operation. For purposes of illustration only, in the embodiments described herein, it is assumed that an operator identifies correlated events in multiple streams by an identical batch ID.
- An operator may perform a data transformation and/or computation over the input batch window after some or all batches from some or all input streams have arrived, and may generate an output result. In some cases, an operator may start a data transformation and/or computation over the input batch window before all batches have arrived. In cases where an output result is generated, the output result is typically forwarded to one or more downstream operators and/or sink(s).
- the operator may assign a new batch ID for the output batch or can re-use the batch window ID (in the following, for ease of explanation, it is assumed that the batch ID remains the same).
- An operator may include an input connector 201 , e.g., to receive data from one or more operators and/or from one or more external sources, a de-multiplexer 202 , e.g., for use to divide a combined data stream into sub-streams, an operator function 203 , e.g., to perform the data transformation and/or computation, the configuration state 204 for the processed batch, a partitioning function 205 , e.g., to determine to which downstream operators the data should be forwarded, and an output connector 206 , e.g., to forward the data to downstream operators' input connectors and/or to one or more external data sinks.
- a stream process which divides the stream into individual batches generally has a well-defined clock cycle.
- all operators reach a well-defined recoverable state at the end of the clock cycle.
- all operators can be reconfigured in a synchronized and consistent way. For systems with differently clocked data streams, reconfigurations may take place on a super cycle or reconfiguration may take place on those operators that have a common clock cycle.
- a stream process may process data coming from two different data sources, one clocked with a frequency of new data arriving every 5 seconds, and the second stream with data arriving every 8 seconds.
- both sub-streams have different clock cycles
- both sub-streams have a super cycle every 40 seconds, which is the least common multiple of 5 seconds and 8 seconds.
- the stream process can be safely reconfigured.
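The super cycle in this example is the least common multiple of the sub-streams' clock periods, the first instant at which all streams align, and can be computed directly:

```python
from math import gcd

def super_cycle(*clock_periods):
    """Least common multiple of the sub-streams' clock periods: the first
    point at which all streams reach a batch boundary together, where the
    whole stream process can be safely reconfigured."""
    lcm = 1
    for period in clock_periods:
        lcm = lcm * period // gcd(lcm, period)
    return lcm

assert super_cycle(5, 8) == 40   # the 5-second / 8-second example above
assert super_cycle(3, 4, 6) == 12
```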
- a stream processing system may introduce a synthetic synchronization cycle for the purpose of reconfiguration.
- FIG. 3 schematically illustrates a stream process (shown generally as 300 ) in accordance with an embodiment of the present disclosure.
- stream process 300 includes two operators, i.e., operator 310 of type A and operator 311 of type B.
- the stream process 300 contains a controller process 315 , a data source 313 and a data sink 314 .
- Stream process 300 includes a controller process 315 which may serve as an additional data source to all operators which are connected to one or more external data sources.
- Controller process 315 may serve as a stream data source providing the configuration stream.
- Data source 313 may be configured to provide data on a regular basis, and may be any source of streaming data or polled data. Streaming or polled data may be provided in form of data batches 309 .
- Data sink 314 may be any entity that consumes the output result, or portion thereof, of the stream process.
- the configuration stream is clocked similarly to the data stream and has associated batch IDs. With each clock cycle the batch IDs change in a monotonically increasing or decreasing fashion.
- the data items in the configuration stream consist of stream-wide configuration parameters covering all operators and connections in the stream.
- multiple configuration streams may be employed and/or the configuration data items may only cover parts of the stream process.
- each operator forwards the configuration stream to all outbound connections (e.g., downstream operator and/or sinks) multiplexed with other data elements.
- Operators 310 and 311 may receive input from multiple data sources, and may receive the same configuration state multiple times. It may be considered an error if the received configuration states differ from one another and/or contain conflicting configuration state.
- multiple operators may subscribe to the configuration stream directly, e.g., instead of receiving the configuration state through the multiplexed in-bound stream.
- FIG. 4 schematically illustrates a stream process (shown generally as 400 ) in accordance with an embodiment of the present disclosure.
- stream process 400 includes one instance of an operator of type A (operator 410 ) and two instances of an operator of type B (operators 411 and 412 ).
- Operator 410 generally includes an inbound connector 401 a , a de-multiplexer 402 a , an operator function 403 a which performs the data transformation and/or computation, and an output connector 406 a .
- the operator 410 includes a configuration module 404 a , and may include a partition function 405 a for message routing.
- Operators 411 and 412 generally include an inbound connector 401 b , a de-multiplexer 402 b , an operator function 403 b which performs the data transformation and/or computation, and an output connector 406 b .
- the operators 411 and 412 include a configuration module 404 b , and may include a partition function 405 b for message routing.
- the configuration state 407 may be multiplexed with one or more data streams into one or more combined streams.
- the demultiplexer 402 a of operator 410 de-multiplexes the data stream delivered by the source 413 and the configuration state 407 delivered by the controller 415 , processes the data stream through the operator 403 a and forwards the configuration stream to an in-operator configuration module 404 a as depicted in FIG. 4 .
- the configuration module 404 a updates configuration parameters for the inbound connector 401 a , the operator function 403 a , the partition function 405 a and/or the outbound connector 406 a .
- the reconfiguration should take place either before or after a batch is processed and the operator is in a safe state.
- each of the listed elements may need to take different actions, e.g., inbound connectors may have to subscribe to different data sources, operators may have to change internal state and/or may have to read a different data set, partition functions may have to update their internal state and tables, and the outbound connectors may have to wait for, or drop, connections to the next operator.
- the stream topology does not change but only the internal state of an operator is modified.
- a name may be introduced for each operator in the system such that it is possible to identify the operator in the configuration state.
- an operator implements a keyword-based filter function and counts the number of elements which pass the filter, wherein the configuration stream contains an identifier for the operator and the list of keywords that should be filtered.
- each batch can be considered an individual sub-computation having a well-defined start point and end point, where the internal state (e.g., the counter) will be reset between two consecutive batches and the list of filter keywords can be updated at this point without having data leak between batch instances.
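The keyword-filter example above can be sketched as follows; the points of interest are the quiescence-period update (the keyword list is swapped only at a batch boundary) and the per-batch counter reset, while the class and method names are assumptions:

```python
class KeywordFilterOperator:
    """Sketch of the keyword-filter operator described above: it counts the
    elements passing the filter, resets that counter between batches, and
    applies a new keyword list only at the batch boundary (the safe state),
    so no state leaks between batch instances. Names are illustrative."""

    def __init__(self, keywords):
        self.keywords = set(keywords)
        self.pending_keywords = None

    def reconfigure(self, keywords):
        # Buffered until the next batch boundary: never mid-batch.
        self.pending_keywords = set(keywords)

    def process_batch(self, batch):
        if self.pending_keywords is not None:   # quiescence period: safe swap
            self.keywords = self.pending_keywords
            self.pending_keywords = None
        count = 0                               # internal state resets per batch
        passed = [e for e in batch if e in self.keywords]
        count = len(passed)
        return passed, count

op = KeywordFilterOperator(["error"])
assert op.process_batch(["error", "info", "error"]) == (["error", "error"], 2)
op.reconfigure(["error", "warn"])               # takes effect at the next batch
assert op.process_batch(["warn", "info"]) == (["warn"], 1)
```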
- Stream processes may be long-running processes, potentially over days, weeks or months.
- the stream volume may fluctuate over time.
- the number of instances of each parallel operator may be adjusted depending on the current workload.
- FIG. 5 schematically illustrates a method of adding a new operator into a running stream process (shown generally as 500 ) in accordance with an embodiment of the present disclosure.
- stream process 500 includes three types of operators, i.e., operator 510 of type A, operator 511 , 512 , and 513 of type B, and operator 514 of type C.
- Stream process 500 includes one active instance of operator type A (operator 510 ), two active instances of type B (operators 511 and 512 ) and one active instance of type C (operator 514 ).
- one or more operators of one or more types may each include an inbound connector ( 501 a , 501 b and 501 c , respectively), a de-multiplexer ( 502 a , 502 b and 502 c , respectively), an operator function ( 503 a , 503 b and 503 c , respectively) which performs the data transformation and/or computation, a partition function ( 505 a , 505 b and 505 c , respectively) for message routing, and/or an output connector ( 506 a , 506 b and 506 c , respectively).
- a controller process 515 may decide and/or may receive a request to increase the parallelism of operator type B from two active instances to three active instances, e.g., by adding another instance of operator type B (operator 513 ).
- a method of performing the integration includes the steps described hereinbelow.
- step 1 the controller process launches a new instance of operator type B (operator 513 shown in FIG. 5 ) in the cluster and waits until the operator 513 has successfully launched and reported availability.
- step 2 the controller updates the configuration and adds operator 513 of type B into the stream.
- the new configuration updates the partition function 505 a of operator 510 from two partitions to three partitions, it updates the outbound connector 506 a of operator 510 to wait for three downstream subscriptions instead of two, and it updates the inbound connector 501 c of operator 514 of type C to subscribe to operators 511 , 512 and 513 .
- the controller then sends the updated configuration state 507 in the configuration stream to operator 510 .
- step 3 the operator 510 of type A receives the new configuration state 507 from the configuration stream and data batch 509 - 1 from source 523 .
- operator 510 After completing the operator function, operator 510 combines the result sets 509 - 2 and 509 - 3 with configuration state 507 into data stream 530 and forwards the result sets to operators 511 and 512 Operator 510 then updates its partition function 505 a and the outbound connector 506 a .
- the operator 510 receives the new configuration state 507 and data batch 509 - 1 and before completing the operator function and forwarding the result set 509 - 2 and 509 - 3 to operators 511 and 512 respectively, operator 510 updates its partition function 505 a and the outbound connector 506 a .
- Operator 510 forwards the new configuration state 507 to operators 511 and 512 as part of the data stream 530 .
- the new configuration of the outbound connector of operator 510 now requires three subscriptions, and operator 510 will not continue forwarding batches until a third connection from operator 513 has been established. Since there is no modification to the configuration of operators 511 and 512 , both operators generate their result sets as normal and each forwards the result ( 509 - 4 and 509 - 5 , respectively) and the new configuration state 507 to operator 514 .
- step 4 the operator 514 of type C applies the new configuration state to its inbound connector 501 c and subscribes to the new operator 513 of type B.
- step 5 operator 513 of type B receives the subscription from operator 514 and operator 513 of type B subscribes to operator 510 of type A.
- operator 510 will start sending the new batch data partitioned according to the new partition function to all three operators of type B (operators 511 , 512 and 513 ).
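A partition function of the kind reconfigured in this walkthrough is commonly a hash of a data key modulo the number of downstream instances; the following sketch shows how growing from two to three partitions is purely a configuration change applied at the batch boundary (the CRC32 hash is an assumption chosen for determinism):

```python
import zlib

def make_partition_fn(num_partitions):
    """Deterministic hash-based partition function: routes a data key to one
    of `num_partitions` downstream operator instances. Growing from two to
    three partitions (the FIG. 5 scenario) is just a new configuration
    installed during the safe state between batches."""
    def partition(key):
        # CRC32 is used instead of Python's built-in hash() so the mapping
        # is stable across processes and restarts (an assumption, not the
        # disclosure's choice of hash).
        return zlib.crc32(str(key).encode()) % num_partitions
    return partition

old_fn = make_partition_fn(2)   # configuration before the change
new_fn = make_partition_fn(3)   # configuration after operator 513 joins
keys = ["a", "b", "c", "d"]
assert all(old_fn(k) in range(2) for k in keys)
assert all(new_fn(k) in range(3) for k in keys)
```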
- FIG. 6 schematically illustrates a method of removing an operator from a running stream process in accordance with an embodiment of the present disclosure.
- FIG. 6 depicts the “inverse” function of FIG. 5 , i.e., the shutdown of an active operator in a stream process, in this embodiment, operator 613 .
- the stream (shown generally as 600 ) includes one or more types of operators, e.g., three operator types A, B, and C.
- stream process 600 includes one instance of operator type A (operator 610 ), three instances of operator type B (operators 611 , 612 and 613 ), and one instance of operator type C (operator 614 ).
- one or more operators of one or more types may each include an inbound connector ( 601 a , 601 b and 601 c , respectively), a de-multiplexer ( 602 a , 602 b and 602 c , respectively), an operator function ( 603 a , 603 b and 603 c , respectively) which performs the data transformation and/or computation, a partition function ( 605 a , 605 b and 605 c , respectively) for message routing, and/or an output connector ( 606 a , 606 b and 606 c , respectively).
- controller process 615 decides and/or may receive a request to decrease the parallelism of operator type B from three operator instantiations to two operator instantiations, e.g., by shutting down operator 613 .
- a method of performing the shut-down of the operator includes the steps described hereinbelow.
- step 1 the controller updates the stream configuration such that it de-activates all data flow through operator 613 and marks operator 613 as inactive.
- the controller updates the partition function 605 a of operator 610 of type A from three partitions to two, it updates the outbound connector 606 a of the operator 610 from three connections to two, and operator C's inbound connector 601 c from three connections to two.
- the controller 615 then sends the new configuration state 607 into the configuration stream to operator 610 .
- step 2 operator 610 receives batch 609 - 1 from the data stream from source 623 and the configuration state 607 from the configuration stream from controller 615 .
- the operator de-multiplexes both streams and generates the result set according to the old operator configuration state.
- Operator 610 then sends the result sets (e.g., 609 - 2 ) and the new configuration state 607 using a combined batch 630 to downstream operators 611 , 612 , and 613 .
- step 3 operator 610 of type A updates its partition function 605 a from three to two partitions. It also marks the outbound connector 606 a that it requires two subscriptions instead of three.
- step 4 all operators of type B (e.g., operators 611 , 612 and 613 shown in FIG. 6 ) forward the generated results and the new configuration state to operator 614 of type C.
- step 5 operator 614 forwards the result to the sink 624 .
- all data of the batch has been successfully delivered and operator 613 can be safely shut down.
- the operator 613 can be safely shut down once the data traversed to the next downstream operator (e.g., operator 614 ).
- step 6 operator 614 acknowledges the successful processing of the batch by sending a commit message to operators 611 , 612 and 613 .
- Operator 614 then reconfigures the inbound connector 601 c and unsubscribes from operator 613 .
- step 7 operators 611 , 612 and 613 acknowledge successful processing of the batch by sending a commit message to operator 610 .
- Operator 613 then unsubscribes from operator 610 .
- step 8 operator 610 acknowledges successful processing of the batch by sending a commit message to the controller 615 .
- This final acknowledgement notifies the controller that operator 613 has been removed from the stream and it is now safe to shut it down.
- step 9 the controller shuts down operator 613 and frees up the resources in the system.
- operator 613 may shut down by itself after step 7 or step 8 .
- the configuration stream is treated as a clocked data stream and on each clock cycle provides either the old or the new configuration state to the first set of stream operators.
- An optimization in accordance with an embodiment of the present disclosure is to transmit only deltas between the old configuration state and the new configuration state once the configuration state changes.
- the configuration stream may be clocked at a different rate than the data streams, reducing the number of messages crossing the network.
- a two-phase commit protocol may be employed where all operators first receive an updated configuration from the configuration manager for a particular batch ID in the future. The operators hold off forwarding any data packets until the second phase of the commit has been completed.
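The two-phase variant can be sketched as follows: in phase one the operator learns a configuration bound to a future batch ID, and it resumes forwarding only after the global commit completes. All names here are illustrative:

```python
class TwoPhaseConfigOperator:
    """Sketch of the two-phase commit variant: phase 1 (prepare) delivers a
    configuration that activates at a future batch ID; the operator holds
    off forwarding data until phase 2 (commit) completes. Names and the
    exact hand-off protocol are assumptions for illustration."""

    def __init__(self, config):
        self.config = config
        self.pending = None          # (activation_batch_id, new_config)
        self.committed = True

    def prepare(self, activation_batch_id, new_config):   # phase 1
        self.pending = (activation_batch_id, new_config)
        self.committed = False

    def commit(self):                                     # phase 2
        self.committed = True

    def forward(self, batch_id, data):
        if not self.committed:
            return None              # hold back packets until commit completes
        if self.pending and batch_id >= self.pending[0]:
            self.config = self.pending[1]                 # activate new config
            self.pending = None
        return (batch_id, data, self.config)

op = TwoPhaseConfigOperator({"partitions": 2})
op.prepare(activation_batch_id=10, new_config={"partitions": 3})
assert op.forward(9, "x") is None                # held during the window
op.commit()
assert op.forward(9, "x")[2] == {"partitions": 2}   # before activation ID
assert op.forward(10, "y")[2] == {"partitions": 3}  # at activation ID
```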
- all operations are executed independently of any batches before or after the current batch, each batch ID is independent, and data transformations and/or computations are side-effect free.
- One operation in a stream processing system in accordance with the present disclosure involves a sliding window over a number of batches.
- each batch may span the time period of one second while an operation may be performed over a time window of one minute, containing 60 seconds' worth of batches, i.e., 60 independent batches.
- the sliding window moves by one batch. The oldest batch is removed from the sliding window and a new batch is added to the sliding window.
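A sliding window of this kind is naturally modeled as a bounded deque; a sketch under the one-minute/one-second example above:

```python
from collections import deque

class SlidingWindow:
    """Sliding window over the last w batches: adding a new batch evicts the
    oldest, matching the behavior described above (e.g., a one-minute window
    over one-second batches with w = 60). The class name is illustrative."""

    def __init__(self, w):
        self.w = w
        self.batches = deque(maxlen=w)  # deque drops the oldest automatically

    def add(self, batch):
        self.batches.append(batch)

    def contents(self):
        return list(self.batches)

win = SlidingWindow(w=3)
for b in [1, 2, 3, 4]:
    win.add(b)
assert win.contents() == [2, 3, 4]      # batch 1 slid out of the window
```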
- FIG. 7 schematically illustrates the progression of re-partitioning of an operator in a stream process which computes over a sliding window of batches in accordance with an embodiment of the present disclosure.
- the time of configuration change is denoted as t and the window size as w.
- the sliding window spans multiple batches including the safe points between batches, and there is no single point in time where the configuration changes from the old to the new setting. Rather, there is an overlap of batches of size w − 1 where the system needs to compute the data set according to the old configuration state and according to the new configuration state in order to generate consistent results. After w − 1 ticks the stream system can disable the old configuration and only use the new configuration.
- the operator's partition function is modified from an old partitioning function to a new partitioning function.
- the operator partitions the data set according to both the old and the new partition function, as specified in the old and new configuration state.
- the operator has accumulated enough batches to span a complete three-element window over the data partitioned according to the new partition function, and disables the old partition function.
- Stream processing systems use partition functions to parallelize and distribute workload between multiple downstream operators.
- the partition function determines which node processes which piece of data and is usually computed by a hash function of a data key chosen from the input set modulo the number of downstream nodes.
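For purposes of illustration only, such a partition function may be sketched as follows; `crc32` stands in for whatever stable hash function the system actually uses (Python's built-in `hash` is salted per process and therefore unsuitable for routing decisions that must agree across nodes).

```python
import zlib

def partition(key, num_downstream):
    """Route a data key to one of num_downstream operators: hash(key) mod n."""
    return zlib.crc32(str(key).encode("utf-8")) % num_downstream
```

Because the hash is stable, every sender routes the same key to the same downstream operator, which is what makes per-key state in downstream operators consistent.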
- For each data element, the sender computes a partition key according to the old and the new partition function. If the result (i.e., the target operator identifier) of the partition function is the same, then the data is sent only once to that target operator. If the two computed partitions are different, then the data is sent to each target operator.
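For purposes of illustration only, the sender-side logic during the switch-over may be sketched as follows, with the two partition functions passed in as callables:

```python
def targets_during_switch(key, old_partition, new_partition):
    """Compute the target under both partition functions; send once if they agree,
    otherwise send to both targets (the double-partitioning period)."""
    old_target = old_partition(key)
    new_target = new_partition(key)
    if old_target == new_target:
        return {old_target}
    return {old_target, new_target}
```

With an old function of key mod 2 and a new function of key mod 3, key 6 maps to target 0 under both functions and is sent once, while key 4 maps to targets 0 and 1 and is sent to both.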
- the inbound data stream is treated as a unified stream; however, before computing/transforming the windowed data by the operator function, the partition function is re-applied as a filter. For all batch IDs before batch t + w − 1, the filter removes those elements which do not match the old partition function. For all batches on and after point t + w − 1, the filter applies the new partition function and removes those items which do not match the new partition function.
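For purposes of illustration only, the receiver-side filter may be sketched as follows, where `switch_batch` corresponds to t + w − 1 and the two membership tests stand in for the old and new partition functions evaluated for this operator:

```python
def window_filter(elements, batch_id, switch_batch, belongs_old, belongs_new):
    """Re-apply the partition function as a filter on the unified inbound stream:
    batches before the switch point keep only elements matching the old partition
    function; batches at or after it keep only elements matching the new one."""
    keep = belongs_old if batch_id < switch_batch else belongs_new
    return [e for e in elements if keep(e)]
```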
- the filter function may be realized as part of the de-multiplexer step inside the operator.
- the stream processing system does not need to double buffer the data for different partition functions during the switch over period.
- the stream processing system handles a single stream between operators, which may substantially reduce the complexity of the implementation.
- the system does not need to special-case the implementation.
- with the above-described optimization (e.g., assuming a random distribution of keys in the partition space), the percentage of overlap in the partition space for two partition functions is high, and thus merging and filtering the results of the partition function may substantially reduce network traffic in a cluster for the period of double partitioning.
- Embodiments of the presently-disclosed method of reconfiguring a stream process in a distributed system and method for management of a stream processing system may be implemented as a computer process, a computing system or as an article of manufacture such as a pre-recorded disk or other similar computer program product or computer-readable media.
- the computer program product may be a non-transitory, computer-readable storage media, readable by a computer system and encoding a computer program of instructions for executing a computer process.
- FIG. 8 is a flowchart illustrating a method (shown generally as 800 in FIG. 8 ) of reconfiguring a stream process in a distributed system in accordance with an embodiment of the present disclosure.
- a stream process (e.g., stream process 600 shown in FIG. 6) is managed in a distributed system.
- the stream process includes one or more operators (e.g., operators 610 , 611 , 612 , 613 and 614 shown in FIG. 6 ).
- the one or more operators may be communicatively associated with one or more stream targets.
- stream targets may include one or more operators, data sources (e.g., source 623 shown in FIG. 6 ) and/or stream output sinks (e.g., sink 624 shown in FIG. 6 ).
- the operator(s) may use a partition function (e.g., partition function 605 a shown in FIG. 6 ) to determine the routing of messages to the one or more stream targets.
- the one or more operators may span a batch window over multiple batches, and the partition function of the operator(s) may be configured with an old partition function and a new partition function.
- the operator partitions a result of an operator function (e.g., operator function 603 a shown in FIG. 6 ) according to the old partition function and the new partition function until all batches partitioned according to the old partition function spanned by the batch window have been transmitted.
- the operator(s) may transmit batches partitioned according to the old partition function and the new partition function based on a time period derived from the batch window size.
- In step 820, a safe state within the stream process is determined.
- a configuration state of the one or more operators is configured during the safe state.
- the configuration state may be provided as a data stream within the stream process.
- the configuration state may additionally, or alternatively, be multiplexed with a data stream within the stream process.
- the configuration state may be stored for use in the recovery of one or more operators, e.g., in case of a failure of any of the operators.
- the operator(s) may be removed from the stream process by configuration of an operator dataflow of the operator(s) of the stream process.
- the operator(s) of the stream process may use an acknowledgement protocol to upstream operators to determine when the operator(s) can be safely shut down.
- the operator(s) of the stream process may communicate with a controller process to shut down the operator(s).
- the presently-disclosed method of reconfiguring a stream process in a distributed system may additionally include the step of transmitting a combined data stream from one or more sending operators to one or more receiving operators partitioned according to the old partition function and the new partition function, wherein the receiving operator(s) may filter the combined data stream according to either the old partition function or the new partition function of the sending operator(s).
- the presently-disclosed method of reconfiguring a stream process in a distributed system may additionally, or alternatively, include the steps of instantiating one or more new operators, and integrating the new operator(s) into the stream process by configuring an operator dataflow of the operator(s) of the stream process.
- FIG. 9 is a flowchart illustrating a method (shown generally as 900 in FIG. 9 ) for management of a stream processing system in accordance with an embodiment of the present disclosure.
- a stream process in a distributed system is managed.
- the stream process includes one or more operators (e.g., operators 510 , 511 , 512 , 513 and 514 shown in FIG. 5 ).
- the one or more operators each include an operator dataflow 210 ( FIG. 2 ) and a configuration state 204 of the operator dataflow 210 .
- In step 920, a safe state within the stream process is determined.
- the configuration state 204 of the operator dataflow 210 is configured during the safe state.
- the operator dataflow 210 includes an input connector 201 for receiving messages.
- the operator dataflow 210 may additionally, or alternatively, include an output connector for sending messages.
- the operator dataflow 210 may additionally, or alternatively, include a de-multiplexer for dividing a combined data stream into sub-streams.
- the operator dataflow 210 may additionally, or alternatively, include an operator function for performing data transformation and/or computation.
- the operator dataflow 210 may additionally, or alternatively, include a partition function for performing message routing.
- an internal state of the one or more operators is configured during the safe state.
- the safe state is defined as a state between the processing of two batches having different identifiers from one another.
- the values of the identifiers of the two batches may be either monotonically increasing or monotonically decreasing.
- the two batches may be two consecutive batches.
Abstract
A method of reconfiguring a stream process in a distributed system includes the initial step of managing a stream process including one or more operators. The one or more operators are communicatively associated with one or more stream targets. The one or more operators use a partition function to determine the routing of messages to the one or more stream targets. The method includes the steps of determining a safe state within the stream process, and configuring a configuration state of the one or more operators during the safe state.
Description
- This patent application is a continuation of U.S. patent application Ser. No. 13/308,064, filed Nov. 30, 2011, which claims priority to, and the benefit of, U.S. Provisional Application Ser. No. 61/418,371 filed on Nov. 30, 2010, entitled “METHOD AND SYSTEM FOR ONLINE RECONFIGURATION AND REPARTITIONING OF A PARALLEL DISTRIBUTED STREAM PROCESS”, and U.S. Provisional Application Ser. No. 61/418,221 filed on Nov. 30, 2010, entitled “FAULT-TOLERANT DISTRIBUTED STREAM PROCESSING THROUGH A REVERSE SUBSCRIBER MODEL”, the disclosures of which are herein incorporated by reference in their entireties.
- 1. Technical Field
- The present disclosure relates to data streaming and, more particularly, to methods and systems for reconfiguration and repartitioning of a parallel distributed stream process.
- 2. Discussion of Related Art
- Distributed computer systems have been widely employed to run jobs on multiple machines or processing nodes, which may be co-located within a single cluster or geographically distributed over wide areas, where some jobs run over long periods of time. In order to scale a job, multiple computers can be connected over a common network to form a cluster. An example of a system and method for large-scale data processing is MapReduce, disclosed in U.S. Pat. No. 7,650,331, entitled “System and method for efficient large-scale data processing”. MapReduce is a programming model and library designed to simplify distributed processing of huge datasets on large clusters of computers and largely relieves the programmer from the burden of handling distributed computing tasks such as data distribution, process coordination, fault tolerance, and scaling.
- System scalability can be achieved in several ways. In one approach, the process may be broken into a number of operators, which can be chained together in a directed acyclic graph (DAG), such that the input, output and/or execution of one or more operators is dependent on one or more other operators. In another approach, individual operators are partitioned into multiple independently operating units and each operator works on a subset of the data. In this approach, the result of all operators is redistributed to the next set of operators in the computational graph.
- During the life time of a job the workload may fluctuate substantially and data may be unevenly distributed between operators. A common approach to address the resource allocation problem is to break the job into a substantially larger number of sub-tasks than available nodes and schedule them in priority order. Long running jobs may be interspersed with shorter jobs thereby minimizing overall execution time. A number of algorithms to achieve efficient job scheduling with different performance and efficiency objectives have been researched and published.
- In a stream processing system with a distributed computational model, data constantly flows from one or more input sources through a set of operators (e.g., filters, aggregates, and correlations), which are usually identified by a network name in the cluster, to one or more output sinks. The individual operators generally produce result sets that are either sent to applications or other nodes for additional processing. In general, the output of an operator can branch to multiple downstream operators and can be combined by operators with multiple inputs. This form of computation and data transport is called a stream. Commonly, due to low-latency communication requirements in a stream process, the individual operators in the stream are long lived. Once the operators are instantiated on a node, and after the network connections have been established, the job configuration typically stays fixed except in cases of hardware and/or software failures.
- In a stream process, fixed job configurations generally lead to sub-optimal resource utilization. Commonly the system designer will provision the system for the peak load situation, and during non-peak times the system will be underutilized. An alternative approach which may avoid overprovisioning is to use a virtual machine environment and compress the virtual worker machines onto fewer physical nodes, thereby freeing up the physical resources. In conventional stream processing methods, it is generally assumed that once a network connection between operators has been established, the flow configuration remains static.
- According to one aspect, a method of reconfiguring a stream process in a distributed system is provided. The method includes the initial step of managing a stream process including one or more operators. The one or more operators are communicatively associated with one or more stream targets. The one or more operators use a partition function to determine the routing of messages to the one or more stream targets. The method includes the steps of determining a safe state within the stream process, and configuring a configuration state of the one or more operators during the safe state.
- According to a further aspect, a method for management of a stream processing system is provided. The method includes the initial step of managing a stream process in a distributed system. The stream process includes one or more operators. The one or more operators include an operator dataflow and a configuration state of the operator dataflow. The method includes the steps of determining a safe state within the stream process, and configuring the configuration state of the operator dataflow during the safe state.
- Objects and features of the presently-disclosed methods and systems for reconfiguration and repartitioning of a parallel distributed stream process will become apparent to those of ordinary skill in the art when descriptions of various embodiments thereof are read with reference to the accompanying drawings, of which:
- FIG. 1 is a schematic representation of a stream processing system in accordance with an embodiment of the present disclosure;
- FIG. 2 is a schematic representation of an operator in a stream processing application in accordance with an embodiment of the present disclosure;
- FIG. 3 schematically illustrates a stream process in accordance with an embodiment of the present disclosure;
- FIG. 4 schematically illustrates a stream process in accordance with another embodiment of the present disclosure;
- FIG. 5 schematically illustrates a method of adding a new operator into a running stream process in accordance with an embodiment of the present disclosure;
- FIG. 6 schematically illustrates a method of removing an operator from a running stream process in accordance with an embodiment of the present disclosure;
- FIG. 7 schematically illustrates the progression of re-partitioning of an operator in a stream process which computes over a time window in accordance with an embodiment of the present disclosure;
- FIG. 8 is a flowchart illustrating a method of reconfiguring a stream process in a distributed system in accordance with an embodiment of the present disclosure; and
- FIG. 9 is a flowchart illustrating a method for management of a stream processing system in accordance with an embodiment of the present disclosure.
- Hereinafter, embodiments of the presently-disclosed methods and systems for reconfiguration and repartitioning of a parallel distributed stream process are described with reference to the accompanying drawings. Like reference numerals may refer to similar or identical elements throughout the description of the figures. This description may use the phrases “in an embodiment,” “in embodiments,” “in some embodiments,” or “in other embodiments,” which may each refer to one or more of the same or different embodiments in accordance with the present disclosure.
- As it is used herein, the term “computer” generally refers to anything that transforms information in a purposeful way. For the purposes of this description, the terms “software” and “code” should be interpreted as being applicable to software, firmware, or a combination of software and firmware. For the purposes of this description, “non-transitory” computer-readable media include all computer-readable media, with the sole exception being a transitory, propagating signal.
- As it is used in this description, “batch” generally refers to a non-dividable unit of data. A batch may contain no data (an empty batch), a single datum or many data elements. Each batch in a stream may be assigned an identifier that uniquely names it within the stream. For purposes of illustration only, in the embodiments described herein, it is assumed that the values of batch identifiers in a stream are monotonically increasing. In alternative embodiments (not shown), the values of batch identifiers may be monotonically decreasing. In general, the values of batch identifiers may be realized as monotonically increasing or decreasing functions of some algorithm-dependent ordering. It is to be understood that the values of batch identifiers may be ordered in any suitable way.
- As it is used in this description, the term “reconfiguration” generally refers to “online reconfiguration”. For the purposes herein, “online reconfiguration” generally refers to reconfiguration of the stream process while it is active or running, e.g., the operators are connected over the network and individual operators may have accumulated state.
- For purposes of illustration only, in the embodiments described herein, it is assumed that the operators in a stream process exchange messages over the network to transport data and/or information, e.g., stream data, metadata, configuration information, sender addresses and/or receiver addresses, etc. A message may span multiple network packets.
- Various embodiments of the present disclosure provide a method and system for online reconfiguration and repartitioning of a parallel distributed stream process. Various embodiments of the present disclosure provide a method of reconfiguring one or more connections of a running data flow, and may preserve the semantics and correctness of the stream process and/or may provide the ability to restart individual operators in the face of failures (node failures, network failures, etc) and/or network partitions.
- Various embodiments of the present disclosure provide methods for dynamically reconfiguring parameters of an operator in a stream process and/or for dynamically reconfiguring the layout and connections of a stream process with minimal or no disruption of the data flow. Various embodiments of the present disclosure provide methods for dynamically changing the stream configuration. The presently-disclosed methods of online reconfiguration and repartitioning of a parallel distributed stream process may preserve fault tolerance and/or restartability of the stream process. Although the following description describes methods of online reconfiguration and repartitioning of a parallel distributed stream process, the teachings of the present disclosure may also apply to reconfiguration and/or repartitioning of any stream-like process, e.g., to ensure that the system parts that interact with entities under reconfiguration do not fail or exhibit non-deterministic behavior because of reconfiguration.
- In accordance with various embodiments of the present disclosure, restartability may be achieved by buffering computed data in upstream operators until it is safe to discard the data. In a stream process organized as a directed acyclic graph (DAG) in accordance with the present disclosure, it is safe to discard buffered data after the result of directly or indirectly dependent computation on data (and/or data transformation, etc.) has been successfully delivered to all data sinks outside of the stream process. After successful delivery of the output result, or portion thereof, downstream operators may send a commit message about the successful processing and delivery upstream. After successful delivery and/or receipt of a commit message, each of the upstream operators can safely discard the buffered data and further commit processing to its own upstream operators. In some embodiments, where partial results may be acceptable and/or desirable, buffered data may be discarded without receipt of a commit message.
- In some embodiments of the present disclosure, when the operator has received all commits, the operator sends a commit to all upstream subscriptions for that particular stream, and may take its window size into account. The presently-disclosed chained buffering-and-commit approach ensures that intermediate data is always available, e.g., in case of a failure of any of the operators, and/or allows each downstream operator to re-create its input data set to resume the stream at the point of failure.
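For purposes of illustration only, the chained buffering-and-commit approach may be sketched as follows; window sizes and the network transport are omitted, and the class name is an illustrative assumption:

```python
class BufferingOperator:
    """Buffers each emitted batch until a downstream commit confirms delivery;
    on commit it discards the buffered batch and propagates the commit upstream."""
    def __init__(self, upstreams=()):
        self.buffered = {}              # batch_id -> output data kept for replay
        self.upstreams = list(upstreams)

    def emit(self, batch_id, data):
        self.buffered[batch_id] = data  # keep a copy until the commit arrives
        return data

    def commit(self, batch_id):
        self.buffered.pop(batch_id, None)   # safe to discard after delivery
        for up in self.upstreams:
            up.commit(batch_id)             # chain the commit upstream

    def replay(self, batch_id):
        """Re-create the input for a restarted downstream operator."""
        return self.buffered.get(batch_id)
```

Until the commit arrives, any downstream operator that fails can call `replay` on its upstream operator to re-create its input data set and resume at the point of failure.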
- In some embodiments of the present disclosure, an operator in stream-based processes may be connected to one or more stream input sources and/or one or more stream output sinks. In some embodiments of the present disclosure, an operator in stream-based processes may include an operator function performing a transformation on the input data, and/or may include a partition function that determines to which stream targets (e.g., operators and/or data sinks) to route the data.
- The presently-disclosed methods may treat the configuration of a stream as a fundamental element of the stream. In some embodiments, the stream configuration is multiplexed into the actual data stream, and the partitioning function and/or operator configurations may be changed during the period between two consecutive batches (also referred to herein as the “quiescence period”) whereby the overall system configuration remains consistent, e.g., to preserve correctness, to ensure the reliability of the system, and/or to ensure that the system parts that interact with entities under reconfiguration do not fail because of reconfiguration.
- In accordance with various embodiments of the present disclosure, the stream system configuration and the data stream become an integral undividable part and a single error recovery mechanism for data and configuration can be used to restore the system in case of a failure. In some embodiments, all or substantially all standard mechanisms in the stream processing system remain unchanged.
- FIG. 1 shows a schematic representation of a stream processing system (shown generally as 100) in accordance with an embodiment of the present disclosure that includes a plurality of computer nodes interconnected by a network 140.
- FIG. 2 shows a schematic representation of an operator (shown generally as 200) for use in a stream processing application according to an embodiment of the present disclosure. Operator 200 includes an operator dataflow 210 and the configuration state 204 of the operator dataflow 210. Operator dataflow 210 includes an input connector 201 for receiving messages from other operators or a source external to the stream (not shown in FIG. 2), a de-multiplexer 202, an operator function 203 which performs the data transformation and/or computation, and an output connector 206 to send messages to other operators (not shown in FIG. 2). In some embodiments, the operator dataflow 210 may include a partition function 205 for message routing.
- A stream may include one or more operators which receive data from one or more other operators and/or from a source external to the stream, and which perform a data transformation and/or computation over a “batch window” (e.g., spanning one or more batches) and may send the result of the data transformation and/or computation to one or more operators and/or one or more sinks external to the stream process. Each batch window may be assigned a unique identifier. Batch identifiers may be generated by any suitable method. In some embodiments, each batch window may be assigned a unique identifier which is derived from the smallest batch identifier covered by the window.
- Operators may depend on one or more input streams to perform an operation. For purposes of illustration only, in the embodiments described herein, it is assumed that an operator identifies correlated events in multiple streams by an identical batch ID. An operator may perform a data transformation and/or computation over the input batch window after some or all batches from some or all input streams have arrived, and may generate an output result. In some cases, an operator may start a data transformation and/or computation over the input batch window before all batches have arrived. In cases where an output result is generated, the output result is typically forwarded to one or more downstream operators and/or sink(s). The operator may assign a new batch ID for the output batch or can re-use the batch window ID (in the following, for ease of explanation, it is assumed that the batch ID remains the same).
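For purposes of illustration only, the correlation of batches across multiple input streams by identical batch ID may be sketched as follows:

```python
class BatchAligner:
    """Collects batches from several input streams and releases a correlated
    set only when every stream has delivered the batch with the same ID."""
    def __init__(self, num_inputs):
        self.num_inputs = num_inputs
        self.pending = {}   # batch_id -> {stream_index: batch}

    def on_batch(self, stream_index, batch_id, batch):
        slot = self.pending.setdefault(batch_id, {})
        slot[stream_index] = batch
        if len(slot) == self.num_inputs:
            # all input streams delivered this batch ID: ready to compute
            return self.pending.pop(batch_id)
        return None
```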
- An operator (e.g., 200 shown in FIG. 2) may include an input connector 201, e.g., to receive data from one or more operators and/or from one or more external sources, a de-multiplexer 202, e.g., for use to divide a combined data stream into sub-streams, an operator function 203, e.g., to perform the data transformation and/or computation, the configuration state 204 for the processed batch, a partitioning function 205, e.g., to determine to which downstream operators the data should be forwarded, and an output connector 206, e.g., to forward the data to downstream operators' input connectors and/or to one or more external data sinks.
- A stream process which divides the stream into individual batches generally has a well-defined clock cycle. In some embodiments, wherein a stream process has a common clock cycle, all operators reach a well-defined recoverable state at the end of the clock cycle. In some embodiments, where the stream process has different clock cycles for data streams between operators, reconfiguration may take place on a super cycle, or reconfiguration may take place on only those operators that have a common clock cycle. At well-defined synchronization points, all operators can be reconfigured in a synchronized and consistent way. In an illustrative, non-limiting example, a stream process may process data coming from two different data sources, one clocked with a frequency of new data arriving every 5 seconds, and the second stream with data arriving every 8 seconds. In this example, while both sub-streams have different clock cycles, both sub-streams have a super cycle every 40 seconds, which is the least common multiple of 5 seconds and 8 seconds. When both sub-streams are in sync, the stream process can be safely reconfigured.
In some cases, when finding a super cycle is impossible or impractical, a stream processing system may introduce a synthetic synchronization cycle for the purpose of reconfiguration.
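For purposes of illustration only, the super cycle of differently clocked sub-streams may be computed as the least common multiple of their clock periods:

```python
from math import gcd

def super_cycle(*cycle_seconds):
    """Least common multiple of the per-stream clock cycles: the interval at
    which all sub-streams are simultaneously at a batch boundary."""
    result = 1
    for c in cycle_seconds:
        result = result * c // gcd(result, c)
    return result
```

For the 5-second and 8-second sources in the example above, this yields the 40-second super cycle at which the stream process can be safely reconfigured.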
- FIG. 3 schematically illustrates a stream process (shown generally as 300) in accordance with an embodiment of the present disclosure. For purposes of illustration only, in the embodiment illustrated in FIG. 3, stream process 300 includes two operators, i.e., operator 310 of type A and operator 311 of type B.
- As shown in FIG. 3, the stream process 300 contains a controller process 315, a data source 313 and a data sink 314. Stream process 300 includes a controller process 315 which may serve as an additional data source to all operators which are connected to one or more external data sources. Controller process 315 may serve as a stream data source providing the configuration stream. Data source 313 may be configured to provide data on a regular basis, and may be any source of streaming data or polled data. Streaming or polled data may be provided in the form of data batches 309. Data sink 314 may be any entity that consumes the output result, or portion thereof, of the stream process.
- The configuration stream is clocked similarly to the data stream and has associated batch IDs. With each clock cycle the batch IDs change in a monotonically increasing or decreasing fashion. The data items in the configuration stream consist of stream-wide configuration parameters covering all operators and connections in the stream. In some embodiments, multiple configuration streams may be employed and/or the configuration data items may only cover parts of the stream process. In some embodiments, as shown in FIG. 3, each operator forwards the configuration stream to all outbound connections (e.g., downstream operators and/or sinks) multiplexed with other data elements.
- FIG. 4 schematically illustrates a stream process (shown generally as 400) in accordance with an embodiment of the present disclosure. For purposes of illustration only, in the embodiment illustrated in FIG. 4, stream process 400 includes one instance of an operator of type A (operator 410) and two instances of an operator of type B (operators 411 and 412). Operator 410 generally includes an inbound connector 401 a, a de-multiplexer 402 a, an operator function 403 a which performs the data transformation and/or computation, and an output connector 406 a. In some embodiments, the operator 410 includes a configuration module 404 a, and may include a partition function 405 a for message routing. Operators 411 and 412 generally include an inbound connector 401 b, a de-multiplexer 402 b, an operator function 403 b which performs the data transformation and/or computation, and an output connector 406 b. In some embodiments, the operators 411 and 412 include a configuration module 404 a, and may include a partition function 405 b for message routing.
- In some embodiments, as shown in FIG. 4, the configuration state 407 may be multiplexed with one or more data streams into one or more combined streams. Following the reception of a batch 409, the demultiplexer 402 a of operator 410 de-multiplexes the data stream delivered by the source 413 and the configuration state 407 delivered by the controller 415, processes the data stream through the operator function 403 a and forwards the configuration stream to an in-operator configuration module 404 a as depicted in FIG. 4. The configuration module 404 a updates configuration parameters for the inbound connector 401 a, the operator function 403 a, the partition function 405 a and/or the outbound connector 406 a. In order to preserve consistency across the cluster, the reconfiguration should take place either before or after a batch is processed and the operator is in a safe state. Depending on the type of update, each of the listed elements may need to take different actions, e.g., inbound connectors may have to subscribe to different data sources, operators may have to change internal state and/or may have to read a different data set, partition functions may have to update their internal state and tables, and the outbound connectors may have to wait for, or drop, connections to the next operator.
- It is safe to reconfigure operators during the "quiescence period," which may be defined as the period between two consecutive batches. In some embodiments, as shown in
FIG. 4, each batch can be considered an individual sub-computation having a well-defined start point and end point, where the internal state (e.g., the counter) is reset between two consecutive batches and the list of filter keywords can be updated at this point without data leaking between batch instances. - Stream processes may be long-running processes, potentially over days, weeks or months. The stream volume may fluctuate over time. In some embodiments, in order to avoid provisioning for the highest possible data volume, the number of instances of each parallel operator may be adjusted depending on the current workload.
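The keyword-filter example above can be sketched as follows (an illustrative sketch only; the class and method names are assumptions). Because the counter is reset at each batch boundary, applying a new keyword list during the quiescence period cannot leak state between batch instances:

```python
class KeywordFilterOperator:
    """Illustrative operator: passes elements containing a keyword, counts matches."""

    def __init__(self, name, keywords):
        self.name = name
        self.keywords = set(keywords)
        self.count = 0  # internal per-batch state

    def process_batch(self, batch):
        # Each batch is an independent sub-computation: reset state first.
        self.count = 0
        passed = [e for e in batch if any(k in e for k in self.keywords)]
        self.count = len(passed)
        return passed

    def reconfigure(self, config):
        # Safe only during the quiescence period between two batches.
        if config.get("operator") == self.name:
            self.keywords = set(config["keywords"])

op = KeywordFilterOperator("filter-1", ["error"])
first = op.process_batch(["error: disk", "ok", "error: net"])   # two matches
op.reconfigure({"operator": "filter-1", "keywords": ["warn"]})  # quiescence period
second = op.process_batch(["warn: cpu", "ok"])                  # one match
```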
- Hereinafter, methods in accordance with the present disclosure are described in which an operator can be added to and removed from the stream without disrupting the data flow. Any suitable mechanisms and algorithms for deciding when to adjust the number of operators may be used.
-
FIG. 5 schematically illustrates a method of adding a new operator into a running stream process (shown generally as 500) in accordance with an embodiment of the present disclosure. For purposes of illustration only, in the embodiment illustrated in FIG. 5, stream process 500 includes three types of operators, i.e., operator 510 of type A, operators 511, 512 and 513 of type B, and operator 514 of type C. Stream process 500 includes one active instance of operator type A (operator 510), two active instances of type B (operators 511 and 512) and one active instance of type C (operator 514). - In some embodiments, as shown in
FIG. 5, one or more operators of one or more types (e.g., operator 510 of type A, operators 511, 512 and 513 of type B, and operator 514 of type C shown in FIG. 5) may each include an inbound connector (501a, 501b and 501c, respectively), a de-multiplexer (502a, 502b and 502c, respectively), an operator function (503a, 503b and 503c, respectively) which performs the data transformation and/or computation, a partition function (505a, 505b and 505c, respectively) for message routing, and/or an output connector (506a, 506b and 506c, respectively). - At some point, a
controller process 515 may decide and/or may receive a request to increase the parallelism of operator type B from two active instances to three active instances, e.g., by adding another instance of operator type B (operator 513). In accordance with an embodiment of the present disclosure, as shown in FIG. 5, a method of performing the integration includes the steps described hereinbelow. - In
step 1, the controller process launches a new instance of operator type B (operator 513 shown in FIG. 5) in the cluster and waits until the operator 513 has successfully launched and reported availability. - In
step 2, the controller updates the configuration and adds operator 513 of type B into the stream. The new configuration updates the partition function 505a of operator 510 from two partitions to three partitions, it updates the outbound connector 506a of operator 510 to wait for three downstream subscriptions instead of two, and it updates the inbound connector 501c of operator 514 of type C to subscribe to operators 511, 512 and 513. The controller 515 then sends the new configuration state 507 in the configuration stream to operator 510. - In
step 3, the operator 510 of type A receives the new configuration state 507 from the configuration stream and data batch 509-1 from source 523. After completing the operator function, operator 510 combines the result sets 509-2 and 509-3 with configuration state 507 into data stream 530 and forwards the result sets to operators 511 and 512. Operator 510 then updates its partition function 505a and the outbound connector 506a. In an alternative embodiment, the operator 510 receives the new configuration state 507 and data batch 509-1, and before completing the operator function and forwarding the result sets 509-2 and 509-3 to operators 511 and 512, operator 510 updates its partition function 505a and the outbound connector 506a. Operator 510 forwards the new configuration state 507 to operators 511 and 512 in data stream 530. The new configuration of the outbound connector of operator 510 now requires three subscriptions, and operator 510 will not continue forwarding batches until a third connection, from operator 513, has been established. Since there is no modification to the configuration of operators 511 and 512, these operators simply forward the new configuration state 507 to operator 514. - In
step 4, the operator 514 of type C applies the new configuration state to its inbound connector 501c and subscribes to the new operator 513 of type B. - In
step 5, operator 513 of type B receives the subscription from operator 514, and operator 513 of type B subscribes to operator 510 of type A. When the connection has been established for all operators, operator 510 will start sending the new batch data, partitioned according to the new partition function, to all three operators of type B (operators 511, 512 and 513). -
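The configuration update of step 2 can be pictured as a single state object tagged for delivery through the configuration stream (a hypothetical sketch; the field names and structure are assumptions, not the format used by the disclosure):

```python
# Hypothetical configuration state for integrating operator 513 (step 2).
new_config = {
    "batch_id": 42,  # configuration is tagged with a batch ID in the stream
    "operator_510": {
        "partition_function": {"partitions": 3},     # two -> three partitions
        "outbound_connector": {"subscriptions": 3},  # wait for a third subscriber
    },
    "operator_514": {
        # operator C's inbound connector subscribes to all three type-B operators
        "inbound_connector": {
            "subscribe_to": ["operator_511", "operator_512", "operator_513"],
        },
    },
}
```

Once tagged with a batch ID, such a state object can ride the same reliable transport as the data batches.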
FIG. 6 schematically illustrates a method of removing an operator from a running stream process in accordance with an embodiment of the present disclosure. FIG. 6 depicts the "inverse" function of FIG. 5, i.e., the shutdown of an active operator in a stream process, in this embodiment, operator 613. The stream (shown generally as 600) includes one or more types of operators, e.g., three operator types A, B, and C. For purposes of illustration only, in the embodiment illustrated in FIG. 6, stream process 600 includes one instance of operator type A (operator 610), three instances of operator type B (operators 611, 612 and 613) and one instance of operator type C (operator 614). - In some embodiments, as shown in
FIG. 6, one or more operators of one or more types (e.g., operator 610 of type A, operators 611, 612 and 613 of type B, and operator 614 of type C shown in FIG. 6) may each include an inbound connector (601a, 601b and 601c, respectively), a de-multiplexer (602a, 602b and 602c, respectively), an operator function (603a, 603b and 603c, respectively) which performs the data transformation and/or computation, a partition function (605a, 605b and 605c, respectively) for message routing, and/or an output connector (606a, 606b and 606c, respectively). - At one point the
controller process 615 decides and/or may receive a request to decrease the parallelism of operator type B from three operator instances to two, e.g., by shutting down operator 613. In accordance with an embodiment of the present disclosure, as shown in FIG. 6, a method of performing the shut-down of the operator includes the steps described hereinbelow. - In
step 1, the controller updates the stream configuration such that it de-activates all data flow through operator 613 and marks operator 613 as inactive. The controller updates the partition function 605a of operator 610 of type A from three partitions to two, it updates the outbound connector 606a of the operator 610 from three connections to two, and it updates operator C's inbound connector 601c from three connections to two. The controller 615 then sends the new configuration state 607 into the configuration stream to operator 610. - In
step 2, operator 610 receives batch 609-1 from the data stream from source 623 and the configuration state 607 from the configuration stream from controller 615. The operator de-multiplexes both streams and generates the result set according to the old operator configuration state. Operator 610 then sends the result sets (e.g., 609-2) and the new configuration state 607 in a combined batch 630 to downstream operators 611, 612 and 613. - In
step 3, operator 610 of type A updates its partition function 605a from three to two partitions. It also marks the outbound connector 606a as requiring two subscriptions instead of three. - In
step 4, all operators of type B (e.g., operators 611, 612 and 613 shown in FIG. 6) forward the generated results and the new configuration state to operator 614 of type C. - In
step 5, operator 614 forwards the result to the sink 624. At this point all data of the batch has been successfully delivered and operator 613 can be safely shut down. In some embodiments, e.g., in a system where recoverability in case of a failure is not required, the operator 613 can be safely shut down once the data has traversed to the next downstream operator (e.g., operator 614). - In
step 6, operator 614 acknowledges the successful processing of the batch by sending a commit message to operators 611, 612 and 613. Operator 614 then reconfigures the inbound connector 601c and unsubscribes from operator 613. - In
step 7, operators 611, 612 and 613 acknowledge the successful processing of the batch by sending a commit message to operator 610. Operator 613 then unsubscribes from operator 610. - In
step 8, operator 610 acknowledges successful processing of the batch by sending a commit message to the controller 615. This final acknowledgement notifies the controller that operator 613 has been removed from the stream and that it is now safe to shut it down. - In
step 9, the controller shuts down operator 613 and frees up the resources in the system. In some embodiments, operator 613 may shut itself down after step 7 or step 8. - Generally, in a parallel stream processing system, all operators use the same configuration data. Once the configuration data is in the stream system and tagged with a batch ID, reliable transport and replay mechanisms ensure consistent delivery to all operators. As described below, various embodiments of the present disclosure address how configuration data is injected at source operators.
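The acknowledgement chain in steps 5 through 9 amounts to waiting for a commit from every expected party before shutdown. A minimal sketch of such a tracker (class and method names are assumptions, not the disclosed implementation):

```python
class CommitTracker:
    """Collects commit acknowledgements for a batch; shutting an operator
    down is safe only once every expected acknowledgement has arrived."""

    def __init__(self, expected_operators):
        self.expected = set(expected_operators)
        self.received = set()

    def ack(self, operator_id):
        self.received.add(operator_id)

    def safe_to_shut_down(self):
        return self.received >= self.expected

# The controller waits for the final acknowledgement (step 8) from the
# upstream-most operator before removing operator 613 (step 9).
tracker = CommitTracker({"operator_610"})
assert not tracker.safe_to_shut_down()
tracker.ack("operator_610")
```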
- In one embodiment, the configuration stream is treated as a clocked data stream which, on each clock cycle, provides either the old or the new configuration state to the first set of stream operators. An optimization, in accordance with an embodiment of the present disclosure, is to transmit only deltas between the old configuration state and the new configuration state once the configuration state changes. In some embodiments, the configuration stream may be clocked at a different rate than the data streams, reducing the number of messages crossing the network.
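The delta optimization can be sketched as computing, transmitting, and re-applying only the changed entries of the configuration state (a hedged illustration; the flat key/value representation is an assumption):

```python
def config_delta(old, new):
    """Return only what changed between two configuration states."""
    return {
        "set": {k: v for k, v in new.items() if old.get(k) != v},  # added/updated
        "unset": [k for k in old if k not in new],                 # removed
    }

def apply_delta(config, delta):
    """Reconstruct the new configuration state from the old state plus a delta."""
    updated = {k: v for k, v in config.items() if k not in delta["unset"]}
    updated.update(delta["set"])
    return updated

old = {"partitions": 2, "subscriptions": 2, "keywords": ["error"]}
new = {"partitions": 3, "subscriptions": 3, "keywords": ["error"]}
delta = config_delta(old, new)  # only 'partitions' and 'subscriptions' travel
```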
- In another embodiment, a two-phase commit protocol may be employed where all operators first receive an updated configuration from the configuration manager for a particular batch ID in the future. The operators hold off forwarding any data packets until the second phase of the commit has been completed.
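A sketch of that two-phase behavior at a single operator (hypothetical names; the actual protocol messages are not specified here): phase one stages the new configuration for a future batch ID, and forwarding is held back until the commit phase completes:

```python
class TwoPhaseOperator:
    def __init__(self, config):
        self.active_config = config
        self.staged = None  # (batch_id, config) received in phase one
        self.held = []      # packets withheld while a commit is pending

    def prepare(self, batch_id, new_config):
        # Phase one: stage the configuration for a particular future batch ID.
        self.staged = (batch_id, new_config)
        return "ack"

    def commit(self):
        # Phase two: activate the staged configuration, release held packets.
        _, self.active_config = self.staged
        self.staged = None
        released, self.held = self.held, []
        return released

    def forward(self, packet):
        if self.staged is not None:
            self.held.append(packet)  # hold off forwarding until commit
            return None
        return packet

op = TwoPhaseOperator({"partitions": 2})
op.prepare(batch_id=43, new_config={"partitions": 3})
op.forward("pkt-1")         # withheld: commit still pending
released = op.commit()      # pkt-1 released, new configuration active
```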
- In some embodiments, all operations are executed independently of any batches before or after the current batch, each batch ID is independent, and data transformations and/or computations are side-effect free. One operation in a stream processing system in accordance with the present disclosure involves a sliding window over a number of batches. In an illustrative, non-limiting example, each batch may span the time period of one second while an operation may be performed over a time window of one minute containing 60 seconds' worth of batches, i.e., 60 independent batches. With each tick, the sliding window moves by one batch: the oldest batch is removed from the sliding window and a new batch is added to it.
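The sliding window over batches can be sketched with a bounded deque (an illustrative sketch; here the windowed operation is simply a count of elements across the window):

```python
from collections import deque

class SlidingWindowOperator:
    """Computes over the last `size` batches; each tick appends one batch and
    automatically evicts the oldest once the window is full."""

    def __init__(self, size):
        self.window = deque(maxlen=size)

    def tick(self, batch):
        self.window.append(batch)
        # Windowed operation over all batches currently spanned by the window.
        return sum(len(b) for b in self.window)

op = SlidingWindowOperator(size=3)
results = [op.tick([0] * n) for n in (1, 2, 3, 4)]
# window contents per tick: {1}, {1,2}, {1,2,3}, then {2,3,4} after eviction
```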
-
FIG. 7 schematically illustrates the progression of re-partitioning of an operator in a stream process which computes over a sliding window of batches in accordance with an embodiment of the present disclosure. In the following description, the time of the configuration change is denoted as t and the window size as w. When reconfiguring a sliding-window operator in a streaming system, the sliding window spans multiple batches, including the safe points between batches, and there is no single point in time where the configuration changes from the old to the new setting. Rather, there is an overlap of batches of size w−1 where the system needs to compute the data set according to both the old configuration state and the new configuration state in order to generate consistent results. After w−1 ticks the stream system can disable the old configuration and use only the new configuration.
FIG. 7, the sliding window of an operator includes three batches (w=3). At time t0 (window 732), the operator's partition function is modified from an old partition function to a new partition function. At times t1 (window 733) and t2 (window 734) the operator partitions the data set according to both the old and the new partition functions, as specified in the old and new configuration states. At time t3 (window 735), the operator has accumulated enough batches to span a complete three-element window over the data partitioned according to the new partition function, and it disables the old partition function. - Stream processing systems use partition functions to parallelize and distribute workload between multiple downstream operators. The partition function determines which node processes which piece of data and is usually computed as a hash function of a data key chosen from the input set, modulo the number of downstream nodes.
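A typical partition function of the kind described above (a generic sketch, not the disclosure's specific function) hashes a data key and takes the result modulo the number of downstream operators; changing the operator count from two to three therefore reroutes a fraction of the keys:

```python
import hashlib

def partition(key, num_downstream):
    """Route a data key to one of `num_downstream` operators.

    md5 is used here only as a stable, illustrative hash across runs."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_downstream

keys = [f"key-{i}" for i in range(100)]
before = {k: partition(k, 2) for k in keys}  # old configuration: two operators
after = {k: partition(k, 3) for k in keys}   # new configuration: three operators
moved = [k for k in keys if before[k] != after[k]]  # keys whose target changed
```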
- When the partition function changes, the distribution of keys changes, and downstream nodes may receive a different data set distribution before and after the partition function reconfiguration. For a windowed stream which depends on keys being distributed deterministically, such a reconfiguration using existing methods would yield incorrect results.
- In the above-described approach, the operator could run both the old and the new partition function and transmit the data once according to each. However, doing so may double the amount of data that needs to be transmitted over the network. In stream processing systems, network bandwidth is a limited resource and oftentimes a bottleneck. An optimization to reduce data duplication, in accordance with the present disclosure, is described below.
- For each data element, the sender computes a partition key according to the old and the new partition function. If the result (i.e., the target operator identifier) of the two partition functions is the same, then the data is sent only once to that target operator. If the two computed partitions are different, then the data is sent to each target operator. In the receiving operator, the inbound data stream is treated as a unified stream; however, before the windowed data is computed/transformed by the operator function, the partition function is re-applied as a filter. For all batch IDs before batch t+w−1, the filter removes those elements which do not match the old partition function. For all batches on and after point t+w−1, the filter applies the new partition function and removes those items which do not match it. The filter function may be realized as part of the de-multiplexer step inside the operator.
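This sender-side de-duplication and receiver-side filtering can be sketched as follows (illustrative function names; the modulo partition functions are simple stand-ins for the actual hash-based functions):

```python
def send_targets(key, old_pf, new_pf):
    """Sender side: compute targets under both partition functions; a set
    collapses them to a single send when both functions agree."""
    return {old_pf(key), new_pf(key)}

def receiver_filter(elements, my_id, batch_id, t, w, old_pf, new_pf):
    """Receiver side: re-apply the partition function active for this batch ID
    as a filter over the unified inbound stream."""
    pf = old_pf if batch_id < t + w - 1 else new_pf
    return [e for e in elements if pf(e) == my_id]

old_pf = lambda key: key % 2  # two downstream operators before reconfiguration
new_pf = lambda key: key % 3  # three downstream operators afterwards

targets = send_targets(6, old_pf, new_pf)  # 6 % 2 == 0 and 6 % 3 == 0: one send
kept = receiver_filter([0, 1, 2, 3, 4, 5], my_id=0, batch_id=5, t=4, w=3,
                       old_pf=old_pf, new_pf=new_pf)  # old function still active
```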
- Using the above-described optimization, the stream processing system does not need to double-buffer the data for different partition functions during the switch-over period. Using the above-described optimization, the stream processing system handles a single stream between operators, which may substantially reduce the complexity of the implementation. Using the above-described optimization, during error recovery in the middle of a switch-over from an old to a new partition function, the system does not need to special-case the implementation. Using the above-described optimization (e.g., assuming a random distribution of keys in the partition space), the percentage of overlap in the partition space for two partition functions is high, and thus merging and filtering the results of the partition function may substantially reduce network traffic in a cluster for the period of double partitioning.
- Hereinafter, a method of reconfiguring a stream process in a distributed system in accordance with the present disclosure is described with reference to
FIG. 8 and a method for management of a stream processing system in accordance with the present disclosure is described with reference to FIG. 9. It is to be understood that the steps of the methods provided herein may be performed in combination and in a different order than presented herein without departing from the scope of the disclosure. Embodiments of the presently-disclosed method of reconfiguring a stream process in a distributed system and method for management of a stream processing system may be implemented as a computer process, a computing system or as an article of manufacture such as a pre-recorded disk or other similar computer program product or computer-readable media. The computer program product may be a non-transitory, computer-readable storage medium, readable by a computer system and encoding a computer program of instructions for executing a computer process.
FIG. 8 is a flowchart illustrating a method (shown generally as 800 in FIG. 8) of reconfiguring a stream process in a distributed system in accordance with an embodiment of the present disclosure. In step 810, a stream process (e.g., stream process 600 shown in FIG. 6) is managed. The stream process includes one or more operators (e.g., the operators shown in FIG. 6). The one or more operators may be communicatively associated with one or more stream targets. In some embodiments, stream targets may include one or more operators, data sources (e.g., source 623 shown in FIG. 6) and/or stream output sinks (e.g., sink 624 shown in FIG. 6). The operator(s) may use a partition function (e.g., partition function 605a shown in FIG. 6) to determine the routing of messages to the one or more stream targets. - In some embodiments, the one or more operators may span a batch window over multiple batches, and the partition function of the operator(s) may be configured with an old partition function and a new partition function. In some embodiments, the operator partitions a result of an operator function (e.g.,
operator function 603a shown in FIG. 6) according to the old partition function and the new partition function until all batches partitioned according to the old partition function spanned by the batch window have been transmitted. In some embodiments, the operator(s) may transmit batches partitioned according to the old partition function and the new partition function based on a time period derived from the batch window size. - In
step 820, a safe state within the stream process is determined. - In
step 830, a configuration state of the one or more operators is configured during the safe state. In some embodiments, the configuration state may be provided as a data stream within the stream process. The configuration state may additionally, or alternatively, be multiplexed with a data stream within the stream process. The configuration state may be stored for use in the recovery of one or more operators, e.g., in case of a failure of any of the operators. - In some embodiments, the operator(s) may be removed from the stream process by configuration of an operator dataflow of the operator(s) of the stream process. The operator(s) of the stream process may use an acknowledgement protocol to upstream operators to determine when the operator(s) can be safely shut down. The operator(s) of the stream process may communicate with a controller process to shut down the operator(s).
- The presently-disclosed method of reconfiguring a stream process in a distributed system may additionally include the step of transmitting a combined data stream from one or more sending operators to one or more receiving operators partitioned according to the old partition function and the new partition function, wherein the receiving operator(s) may filter the combined data stream according to either the old partition function or the new partition function of the sending operator(s). The presently-disclosed method of reconfiguring a stream process in a distributed system may additionally, or alternatively, include the steps of instantiating one or more new operators, and integrating the new operator(s) into the stream process by configuring an operator dataflow of the operator(s) of the stream process.
-
FIG. 9 is a flowchart illustrating a method (shown generally as 900 in FIG. 9) for management of a stream processing system in accordance with an embodiment of the present disclosure. In step 910, a stream process in a distributed system is managed. The stream process includes one or more operators (e.g., the operators shown in FIG. 5). The one or more operators each include an operator dataflow 210 (FIG. 2) and a configuration state 204 of the operator dataflow 210. - In
step 920, a safe state within the stream process is determined. - In
step 930, the configuration state 204 of the operator dataflow 210 is configured during the safe state. In some embodiments, the operator dataflow 210 includes an input connector 201 for receiving messages. The operator dataflow 210 may additionally, or alternatively, include an output connector for sending messages. The operator dataflow 210 may additionally, or alternatively, include a de-multiplexer for dividing a combined data stream into sub-streams. The operator dataflow 210 may additionally, or alternatively, include an operator function for performing data transformation and/or computation. The operator dataflow 210 may additionally, or alternatively, include a partition function for performing message routing. - In
step 930, an internal state of the one or more operators is configured during the safe state. In some embodiments, the safe state is defined as a state between the processing of two batches having different identifiers from one another. The values of the identifiers of the two batches may be either monotonically increasing or monotonically decreasing. In some embodiments, the two batches may be two consecutive batches. - Although embodiments have been described in detail with reference to the accompanying drawings for the purpose of illustration and description, it is to be understood that the inventive processes and systems are not to be construed as limited thereby. It will be apparent to those of ordinary skill in the art that various modifications to the foregoing embodiments may be made without departing from the scope of the disclosure.
Claims (14)
1. (canceled)
2. A method of reconfiguring a stream process in a distributed system, the method comprising the steps of:
managing the stream process including at least one operator, the at least one operator communicatively associated with at least one stream target, the at least one operator using a partition function to determine routing of messages to the at least one stream target,
wherein the at least one operator transmits batches partitioned according to the partition function based on a time derived from a size of a batch window spanned by the at least one operator.
3. The method of claim 2, wherein the method further comprises:
determining a safe state within the stream process; and
configuring a configuration state of the at least one operator during the safe state.
4. The method of claim 3, wherein the configuration state is multiplexed with a data stream within the stream process.
5. The method of claim 3, wherein the configuration state is stored for recovery of at least one operator in case of a failure.
6. The method of claim 3, wherein the safe state is defined as a state between the processing of two batches having different identifiers from one another.
7. The method of claim 6, wherein the values of the identifiers of the two batches are monotonically increasing or monotonically decreasing.
8. The method of claim 2, wherein the at least one operator partitions a result of an operator function according to an old partition function and a new partition function until all batches partitioned according to the old partition function spanned by the batch window have been transmitted.
9. The method of claim 2, further comprising the step of transmitting a combined data stream from a sending operator to at least one receiving operator partitioned according to an old partition function and a new partition function, wherein the at least one receiving operator filters the combined data stream according to the at least one sending operator's old partition function or new partition function.
10. The method of claim 2, further comprising the steps of:
instantiating at least one new operator; and
integrating the at least one new operator into the stream process by configuring an operator dataflow of the at least one operator of the stream process.
11. The method of claim 10, wherein at least one of the at least one new operator is launched in an un-configured state and configured at runtime.
12. The method of claim 2, wherein at least one of the at least one operator is removed from the stream process by configuration of an operator dataflow of the at least one operator of the stream process.
13. The method of claim 2, wherein the at least one operator uses an acknowledgement protocol to one or more upstream operators to determine when the at least one operator can be safely shut down.
14. The method of claim 2, wherein the at least one operator communicates with a controller process to shut down the at least one operator.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/508,666 US20150026359A1 (en) | 2010-11-30 | 2014-10-07 | Methods and systems for reconfiguration and repartitioning of a parallel distributed stream process |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US41837110P | 2010-11-30 | 2010-11-30 | |
US41822110P | 2010-11-30 | 2010-11-30 | |
US13/308,064 US8856374B2 (en) | 2010-11-30 | 2011-11-30 | Methods and systems for reconfiguration and repartitioning of a parallel distributed stream process |
US14/508,666 US20150026359A1 (en) | 2010-11-30 | 2014-10-07 | Methods and systems for reconfiguration and repartitioning of a parallel distributed stream process |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/308,064 Continuation US8856374B2 (en) | 2010-11-30 | 2011-11-30 | Methods and systems for reconfiguration and repartitioning of a parallel distributed stream process |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150026359A1 true US20150026359A1 (en) | 2015-01-22 |
Family
ID=46127386
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/308,037 Expired - Fee Related US9325757B2 (en) | 2010-11-30 | 2011-11-30 | Methods and systems for fault-tolerant distributed stream processing |
US13/308,064 Active 2032-03-21 US8856374B2 (en) | 2010-11-30 | 2011-11-30 | Methods and systems for reconfiguration and repartitioning of a parallel distributed stream process |
US14/508,666 Abandoned US20150026359A1 (en) | 2010-11-30 | 2014-10-07 | Methods and systems for reconfiguration and repartitioning of a parallel distributed stream process |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/308,037 Expired - Fee Related US9325757B2 (en) | 2010-11-30 | 2011-11-30 | Methods and systems for fault-tolerant distributed stream processing |
US13/308,064 Active 2032-03-21 US8856374B2 (en) | 2010-11-30 | 2011-11-30 | Methods and systems for reconfiguration and repartitioning of a parallel distributed stream process |
Country Status (1)
Country | Link |
---|---|
US (3) | US9325757B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150142956A1 (en) * | 2013-11-19 | 2015-05-21 | International Business Machines Corporation | Maintaining virtual machines for cloud-based operators in a streaming application in a ready state |
Families Citing this family (63)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10044548B2 (en) * | 2012-10-15 | 2018-08-07 | Jetflow Technologies | Flowlet-based processing |
US9838242B2 (en) * | 2011-04-13 | 2017-12-05 | Jetflow Technologies | Flowlet-based processing with key/value store checkpointing |
US9158610B2 (en) * | 2011-08-04 | 2015-10-13 | Microsoft Technology Licensing, Llc. | Fault tolerance for tasks using stages to manage dependencies |
US9438656B2 (en) | 2012-01-11 | 2016-09-06 | International Business Machines Corporation | Triggering window conditions by streaming features of an operator graph |
US9430117B2 (en) | 2012-01-11 | 2016-08-30 | International Business Machines Corporation | Triggering window conditions using exception handling |
US9338061B2 (en) * | 2012-04-26 | 2016-05-10 | Hewlett Packard Enterprise Development Lp | Open station as a stream analysis operator container |
WO2014062637A2 (en) * | 2012-10-15 | 2014-04-24 | Hadapt, Inc. | Systems and methods for fault tolerant, adaptive execution of arbitrary queries at low latency |
CN102929738B (en) * | 2012-11-06 | 2015-02-11 | 无锡江南计算技术研究所 | Fault-tolerance method of large-scale heterogeneous parallel computing |
US9081870B2 (en) | 2012-12-05 | 2015-07-14 | Hewlett-Packard Development Company, L.P. | Streaming system performance optimization |
US9298788B1 (en) * | 2013-03-11 | 2016-03-29 | DataTorrent, Inc. | Checkpointing in distributed streaming platform for real-time applications |
US9106391B2 (en) * | 2013-05-28 | 2015-08-11 | International Business Machines Corporation | Elastic auto-parallelization for stream processing applications based on a measured throughput and congestion |
EP2809041A1 (en) * | 2013-05-31 | 2014-12-03 | Alcatel Lucent | Method, apparatus and computer program for optimizing a communication characteristic of an application comprising multiple processing components |
EP2808793A1 (en) * | 2013-05-31 | 2014-12-03 | Alcatel Lucent | A Method, apparatus and computer program for determining a load characteristic of an application comprising multiple processing components |
US9411868B2 (en) * | 2013-08-23 | 2016-08-09 | Morgan Stanley & Co. Llc | Passive real-time order state replication and recovery |
EP3044678A1 (en) * | 2013-09-13 | 2016-07-20 | Hewlett Packard Enterprise Development LP | Failure recovery of a task state in batch-based stream processing |
US9928263B2 (en) * | 2013-10-03 | 2018-03-27 | Google Llc | Persistent shuffle system |
CN104038364B (en) * | 2013-12-31 | 2015-09-30 | 华为技术有限公司 | The fault-tolerance approach of distributed stream treatment system, node and system |
US9535734B2 (en) * | 2014-03-06 | 2017-01-03 | International Business Machines Corporation | Managing stream components based on virtual machine performance adjustments |
US9679033B2 (en) * | 2014-03-21 | 2017-06-13 | International Business Machines Corporation | Run time insertion and removal of buffer operators |
US10360196B2 (en) | 2014-04-15 | 2019-07-23 | Splunk Inc. | Grouping and managing event streams generated from captured network data |
US10523521B2 (en) | 2014-04-15 | 2019-12-31 | Splunk Inc. | Managing ephemeral event streams generated from captured network data |
US10462004B2 (en) | 2014-04-15 | 2019-10-29 | Splunk Inc. | Visualizations of statistics associated with captured network data |
US10366101B2 (en) | 2014-04-15 | 2019-07-30 | Splunk Inc. | Bidirectional linking of ephemeral event streams to creators of the ephemeral event streams |
US11086897B2 (en) | 2014-04-15 | 2021-08-10 | Splunk Inc. | Linking event streams across applications of a data intake and query system |
US10127273B2 (en) | 2014-04-15 | 2018-11-13 | Splunk Inc. | Distributed processing of network data using remote capture agents |
US10693742B2 (en) | 2014-04-15 | 2020-06-23 | Splunk Inc. | Inline visualizations of metrics related to captured network data |
US10700950B2 (en) | 2014-04-15 | 2020-06-30 | Splunk Inc. | Adjusting network data storage based on event stream statistics |
US9762443B2 (en) | 2014-04-15 | 2017-09-12 | Splunk Inc. | Transformation of network data at remote capture agents |
US11281643B2 (en) | 2014-04-15 | 2022-03-22 | Splunk Inc. | Generating event streams including aggregated values from monitored network data |
US10102032B2 (en) | 2014-05-29 | 2018-10-16 | Raytheon Company | Fast transitions for massively parallel computing applications |
US9286001B2 (en) | 2014-06-30 | 2016-03-15 | Microsoft Technology Licensing, LLC | Effective range partition splitting in scalable storage |
US10554709B2 (en) | 2014-07-08 | 2020-02-04 | Microsoft Technology Licensing, Llc | Stream processing utilizing virtual processing agents |
US10037187B2 (en) * | 2014-11-03 | 2018-07-31 | Google Llc | Data flow windowing and triggering |
US10356150B1 (en) | 2014-12-15 | 2019-07-16 | Amazon Technologies, Inc. | Automated repartitioning of streaming data |
US10057082B2 (en) * | 2014-12-22 | 2018-08-21 | Ebay Inc. | Systems and methods for implementing event-flow programs |
WO2016165651A1 (en) * | 2015-04-17 | 2016-10-20 | Yi Tai Fei Liu Information Technology Llc | Flowlet-based processing with key/value store checkpointing |
US9967160B2 (en) * | 2015-05-21 | 2018-05-08 | International Business Machines Corporation | Rerouting data of a streaming application |
CN106293892B (en) * | 2015-06-26 | 2019-03-19 | Alibaba Group Holding Limited | Distributed stream computing system, method and apparatus |
US9785507B2 (en) | 2015-07-30 | 2017-10-10 | International Business Machines Corporation | Restoration of consistent regions within a streaming environment |
US10191768B2 (en) | 2015-09-16 | 2019-01-29 | Salesforce.Com, Inc. | Providing strong ordering in multi-stage streaming processing |
US10198298B2 (en) | 2015-09-16 | 2019-02-05 | Salesforce.Com, Inc. | Handling multiple task sequences in a stream processing framework |
US10146592B2 (en) | 2015-09-18 | 2018-12-04 | Salesforce.Com, Inc. | Managing resource allocation in a stream processing framework |
US9965330B2 (en) * | 2015-09-18 | 2018-05-08 | Salesforce.Com, Inc. | Maintaining throughput of a stream processing framework while increasing processing load |
US9983655B2 (en) * | 2015-12-09 | 2018-05-29 | Advanced Micro Devices, Inc. | Method and apparatus for performing inter-lane power management |
US10437635B2 (en) | 2016-02-10 | 2019-10-08 | Salesforce.Com, Inc. | Throttling events in entity lifecycle management |
US10078559B2 (en) * | 2016-05-27 | 2018-09-18 | Raytheon Company | System and method for input data fault recovery in a massively parallel real time computing system |
GB2552454A (en) * | 2016-06-15 | 2018-01-31 | Nanotronix Inc | High performance computing network |
CN107665155B (en) | 2016-07-28 | 2021-07-09 | 华为技术有限公司 | Method and device for processing data |
US10346272B2 (en) * | 2016-11-01 | 2019-07-09 | At&T Intellectual Property I, L.P. | Failure management for data streaming processing system |
CN107273193A (en) * | 2017-04-28 | 2017-10-20 | Institute of Information Engineering, Chinese Academy of Sciences | DAG-based data processing method and system for multiple computing frameworks |
US10176217B1 (en) * | 2017-07-06 | 2019-01-08 | Palantir Technologies, Inc. | Dynamically performing data processing in a data pipeline system |
US10776247B2 (en) | 2018-05-11 | 2020-09-15 | International Business Machines Corporation | Eliminating runtime errors in a stream processing environment |
US11243810B2 (en) * | 2018-06-06 | 2022-02-08 | The Bank Of New York Mellon | Methods and systems for improving hardware resiliency during serial processing tasks in distributed computer networks |
US10901797B2 (en) * | 2018-11-06 | 2021-01-26 | International Business Machines Corporation | Resource allocation |
US11381393B2 (en) | 2019-09-24 | 2022-07-05 | Akamai Technologies, Inc. | Key rotation for sensitive data tokenization |
US20210329097A1 (en) * | 2020-04-20 | 2021-10-21 | Parsons Corporation | Radio-frequency data stream processing and visualization |
US11640402B2 (en) * | 2020-07-22 | 2023-05-02 | International Business Machines Corporation | Load balancing in streams parallel regions |
US11573876B2 (en) * | 2020-10-30 | 2023-02-07 | Google Llc | Scalable exactly-once data processing using transactional streaming writes |
US11593309B2 (en) | 2020-11-05 | 2023-02-28 | International Business Machines Corporation | Reliable delivery of event notifications from a distributed file system |
US11803448B1 (en) | 2021-06-29 | 2023-10-31 | Amazon Technologies, Inc. | Faster restart of task nodes using periodic checkpointing of data sources |
CN113626116B (en) * | 2021-07-20 | 2023-12-15 | Electronic Science Research Institute of China Electronics Technology Group Corporation | Intelligent learning system and data analysis method |
US20230025059A1 (en) * | 2021-07-22 | 2023-01-26 | Akamai Technologies, Inc. | Systems and methods for failure recovery in at-most-once and exactly-once streaming data processing |
US20230132831A1 (en) * | 2021-10-29 | 2023-05-04 | International Business Machines Corporation | Task failover |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6978305B1 (en) * | 2001-12-19 | 2005-12-20 | Oracle International Corp. | Method and apparatus to facilitate access and propagation of messages in communication queues using a public network |
US20100293532A1 (en) * | 2009-05-13 | 2010-11-18 | Henrique Andrade | Failure recovery for stream processing applications |
US20110093491A1 (en) * | 2009-10-21 | 2011-04-21 | Microsoft Corporation | Partitioned query execution in event processing systems |
US20110247007A1 (en) * | 2010-04-01 | 2011-10-06 | International Business Machines Corporation | Operators with request-response interfaces for data stream processing applications |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6035379A (en) * | 1997-01-09 | 2000-03-07 | Microsoft Corporation | Transaction processing for user data employing both logging and shadow copying |
US5951695A (en) * | 1997-07-25 | 1999-09-14 | Hewlett-Packard Company | Fast database failover |
US7065540B2 (en) * | 1998-11-24 | 2006-06-20 | Oracle International Corporation | Managing checkpoint queues in a multiple node system |
US6865591B1 (en) * | 2000-06-30 | 2005-03-08 | Intel Corporation | Apparatus and method for building distributed fault-tolerant/high-availability computer applications |
US7487206B2 (en) * | 2005-07-15 | 2009-02-03 | International Business Machines Corporation | Method for providing load diffusion in data stream correlations |
US7921328B1 (en) * | 2008-04-18 | 2011-04-05 | Network Appliance, Inc. | Checkpoint consolidation for multiple data streams |
US20100036903A1 (en) * | 2008-08-11 | 2010-02-11 | Microsoft Corporation | Distributed load balancer |
WO2010022100A2 (en) * | 2008-08-18 | 2010-02-25 | F5 Networks, Inc. | Upgrading network traffic management devices while maintaining availability |
JP5439014B2 (en) * | 2009-04-10 | 2014-03-12 | 株式会社日立製作所 | Data processing system, processing method thereof, and computer |
US8707320B2 (en) * | 2010-02-25 | 2014-04-22 | Microsoft Corporation | Dynamic partitioning of data by occasionally doubling data chunk size for data-parallel applications |
- 2011
  - 2011-11-30 US US13/308,037 patent/US9325757B2/en not_active Expired - Fee Related
  - 2011-11-30 US US13/308,064 patent/US8856374B2/en active Active
- 2014
  - 2014-10-07 US US14/508,666 patent/US20150026359A1/en not_active Abandoned
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150142956A1 (en) * | 2013-11-19 | 2015-05-21 | International Business Machines Corporation | Maintaining virtual machines for cloud-based operators in a streaming application in a ready state |
US9626209B2 (en) * | 2013-11-19 | 2017-04-18 | International Business Machines Corporation | Maintaining virtual machines for cloud-based operators in a streaming application in a ready state |
US9891942B2 (en) | 2013-11-19 | 2018-02-13 | International Business Machines Corporation | Maintaining virtual machines for cloud-based operators in a streaming application in a ready state |
US9898324B2 (en) | 2013-11-19 | 2018-02-20 | International Business Machines Corporation | Maintaining virtual machines for cloud-based operators in a streaming application in a ready state |
US9983897B2 (en) | 2013-11-19 | 2018-05-29 | International Business Machines Corporation | Maintaining virtual machines for cloud-based operators in a streaming application in a ready state |
US10042663B2 (en) | 2013-11-19 | 2018-08-07 | International Business Machines Corporation | Maintaining virtual machines for cloud-based operators in a streaming application in a ready state |
Also Published As
Publication number | Publication date |
---|---|
US9325757B2 (en) | 2016-04-26 |
US8856374B2 (en) | 2014-10-07 |
US20120137018A1 (en) | 2012-05-31 |
US20120137164A1 (en) | 2012-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8856374B2 (en) | 2014-10-07 | Methods and systems for reconfiguration and repartitioning of a parallel distributed stream process |
Mai et al. | Chi: A scalable and programmable control plane for distributed stream processing systems | |
US11474874B2 (en) | Systems and methods for auto-scaling a big data system | |
WO2017128507A1 (en) | Decentralized resource scheduling method and system | |
CN106817408B (en) | Distributed server cluster scheduling method and device | |
EP2643771B1 (en) | Real time database system | |
US9152491B2 (en) | Job continuation management apparatus, job continuation management method and job continuation management program | |
WO2013114228A1 (en) | Processing element management in a streaming data system | |
JP2005524147A (en) | Distributed application server and method for implementing distributed functions | |
CN106325984B (en) | Big data task scheduling device | |
US9754032B2 (en) | Distributed multi-system management | |
EP3087483B1 (en) | System and method for supporting asynchronous invocation in a distributed data grid | |
EP3172682B1 (en) | Distributing and processing streams over one or more networks for on-the-fly schema evolution | |
CN112217847A (en) | Micro service platform, implementation method thereof, electronic device and storage medium | |
CN104484228A (en) | Distributed parallel task processing system based on Intelli-DSC (Intelligence-Data Service Center) | |
CN112468310B (en) | Streaming media cluster node management method and device and storage medium | |
CN110488714A (en) | Asynchronous state machine control method and device |
CN107621975B (en) | High-availability timer logic implementation method based on the Java Timer |
WO2019086120A1 (en) | A system and method for high-performance general-purpose parallel computing with fault tolerance and tail tolerance | |
CN114020408A (en) | Task fragment configuration method and device, equipment and storage medium | |
WO2014075425A1 (en) | Data processing method, computational node and system | |
CN113110935A (en) | Distributed batch job processing system | |
WO2022130005A1 (en) | Granular replica healing for distributed databases | |
Qiu et al. | Rta: Real time actionable events detection as a service | |
JP2002149439A (en) | Method for switching server and server device in distributed processing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |