US8108612B2 - Location updates for a distributed data store - Google Patents

Info

Publication number
US8108612B2
Authority
US
United States
Prior art keywords
location information
data
version
data store
version indicators
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US12/466,390
Other versions
US20100293334A1 (en)
Inventor
Lu Xun
Hua-Jun Zeng
Muralidhar Krishnaprasad
Radhakrishnan Srikanth
Ankur Agrawal
Balachandar Pavadaisamy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp
Priority to US12/466,390
Assigned to MICROSOFT CORPORATION. Assignors: XUN, LU; KRISHNAPRASAD, MURALIDHAR; SRIKANTH, RADHAKRISHNAN; ZENG, HUA-JUN
Publication of US20100293334A1
Priority to US13/337,093 (US8484417B2)
Application granted
Publication of US8108612B2
Assigned to MICROSOFT CORPORATION. Assignors: AGRAWAL, ANKUR; PAVADAISAMY, BALACHANDAR; XUN, LU; KRISHNAPRASAD, MURALIDHAR; SRIKANTH, RADHAKRISHNAN; ZENG, HUA-JUN
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. Assignors: MICROSOFT CORPORATION
Status: Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/901: Indexing; Data structures therefor; Storage structures
    • G06F 16/9014: Indexing; Data structures therefor; Storage structures; hash tables
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806: Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0813: Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration

Definitions

  • a cache is a collection of data that duplicates original value(s) stored elsewhere or computed earlier, where the cached data can be read from the cache in lieu of reading the original value(s).
  • a cache is typically implemented where it is more efficient to read the cached data than to read the original value(s) so that use of the cache can increase the overall efficiency of computing systems.
  • a distributed data store is a data store that is distributed across one or more data store nodes.
  • a distributed data store is distributed across one or more physical or virtual computing machines.
  • a distributed partitioned data store is a data store that is partitioned across multiple data store nodes, where a primary location for each partition is on a single node.
  • a node refers to a storage process in a data store system.
  • a node may be on a single machine or spread across multiple physical machines, and a single physical machine may include multiple storage nodes, such as where a single physical machine hosts multiple virtual machine processes.
  • a partition is a logical grouping of data, which may be implemented by associating a partition with a set of one or more keys, where the keys are in turn associated with data stored on a node.
  • a partition can be stored as a primary partition, which includes primary partition data in the data store, and one or more secondary partitions, which each include secondary partition data in the data store.
  • the term “primary” data indicates the data that is currently set up to be accessed in the data store, such as to be read from the data store, as opposed to secondary or replicated data that is currently being stored as a backup.
  • the primary data may also be replicated from other data outside the data store. For example, in a distributed cache the primary data may be replicated from more authoritative data that is stored in long-term mass storage.
  • the term “primary” is similarly used to refer to a primary region or partition, which is a region or partition currently set up to be accessed, as opposed to a replica of the primary region or partition.
  • the term “primary” can also be used to refer to a primary node, which is a node that stores the primary data, such as a primary region.
  • a cache node can be a primary node for one set of cache data and a secondary node for another set of cache data.
  • a distributed partitioned data store system is a system that is configured to implement such distributed partitioned data stores. In such a system, messages passing between nodes may not be reliable. For example, messages may be delayed, re-ordered, or lost.
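  • As a rough illustration of the definitions above, the following Python sketch models a partition with a primary node and zero or more secondary nodes; the class and field names are invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Partition:
    """A logical grouping of data, keyed by a range of region identifiers."""
    partition_id: str             # e.g. "4500-6500"
    primary_node: str             # node currently set up to be accessed
    secondary_nodes: List[str] = field(default_factory=list)  # backup replicas

@dataclass
class StorageNode:
    """A storage process; a single physical machine may host several of these."""
    node_id: str
    data: Dict[str, bytes] = field(default_factory=dict)      # key -> stored value

# A node can be primary for one partition and secondary for another.
p1 = Partition("0-4499", primary_node="node-1", secondary_nodes=["node-2"])
p2 = Partition("4500-6500", primary_node="node-2", secondary_nodes=["node-1"])
```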
  • version indicators within an existing range can be maintained.
  • Each version indicator can be associated with one of multiple data partitions in a distributed data store.
  • a partition reconfiguration in the data store can be associated with a reconfigured partition.
  • a new version indicator that is outside the existing range can be assigned to the reconfigured partition, producing a new range of version indicators that includes the new version indicator.
  • a broadcast message can be sent to multiple nodes, which can include storage nodes and/or client nodes that can communicate with storage nodes to access data in a distributed data store.
  • the broadcast message can include updated location information for data in the distributed data store.
  • a response message can be sent to a requesting node of the multiple nodes in response to receiving from that node a message that requests updated location information for the data in the data store.
  • the response message can include the requested updated location information.
  • a message requesting access to data in a distributed data store can be sent to a node in the data store, and a failure notification can be received from the node.
  • a message requesting updated location information for the data can be sent, and the updated location information can be received.
  • FIG. 1 illustrates an exemplary layering arrangement.
  • FIG. 2 illustrates a further topology model of a layering arrangement that relates to an independent separate tier model implementation.
  • FIG. 3 illustrates a topology model of a layering arrangement that pertains to an embedded application model.
  • FIG. 4 illustrates a distributed cache that includes a runtime deployed on multiple machines.
  • FIG. 5 illustrates a particular methodology of distributing cache.
  • FIG. 6 illustrates a further methodology of implementing a layering arrangement for a distributed cache.
  • FIG. 7 illustrates an exemplary unified cache view.
  • FIG. 8 illustrates an artificial intelligence (AI) component that can be employed to facilitate inferring and/or determining when, where, and/or how to cache data in a distributed environment.
  • FIG. 9 illustrates an exemplary environment for implementing various aspects of the described caching tools and techniques.
  • FIG. 10 is a schematic block diagram of a sample-computing environment that can be employed for caching configurations such as distributing cache.
  • FIG. 11 is a schematic block diagram of a data system that provides location updates for a distributed data store.
  • FIG. 12 is a schematic diagram illustrating maintenance of a version data structure.
  • FIG. 13 is a flowchart illustrating a location update technique.
  • FIG. 14 is a flowchart illustrating another location update technique.
  • Described embodiments are directed to techniques and tools for improved location updates for a data store. Such improvements may result from the use of various techniques and tools separately or in combination.
  • Such techniques and tools may include maintaining version indicators in a version data structure.
  • the version indicators can be within an existing range, which may be a numeric range, an alphabetical range, an alphanumeric range, etc.
  • the range may include gaps between adjacent version indicator values, which may be numeric values, alphabetical values, alphanumeric values, etc.
  • Each version indicator can be associated with a data partition in a distributed data store. When a partition in the data store is reconfigured, a new version indicator, which is outside the existing range, can be assigned to the reconfigured partition.
  • a new range of version indicators can include the new version indicator at the end (e.g., high end or low end) of the new range.
  • the version indicators can be used to determine whether corresponding partitions have been reconfigured, making existing location information for the corresponding partitions out of date. For example, if a client node (i.e., a node that is configured to receive data from the data store) has location information versions corresponding to version indicators in a range of 85 to 125, but an authoritative version data structure (such as a general partition map for the data store) has location information versions corresponding to version indicators in the range of 85 to 150, then it can be determined that the client node's location information is out of date. Accordingly, the location information corresponding to version indicators 126 to 150 in the authoritative data structure can be sent to the client node so that the client node's location information will be up to date.
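  • A minimal sketch of that comparison, assuming the authoritative version data structure maps each partition identifier to its location information and version indicator (the names and shapes here are illustrative only):

```python
from typing import Dict, List, Tuple

# Authoritative version data structure: partition id -> (location info, version indicator).
AuthoritativeMap = Dict[str, Tuple[str, int]]

def updates_for_client(authoritative: AuthoritativeMap,
                       client_max_version: int) -> List[Tuple[int, str, str]]:
    """Return (version, partition id, location) entries the client has not yet seen.

    If the client reports versions up to 125 and the authoritative structure
    holds versions up to 150, only the entries tagged 126..150 are returned.
    """
    return sorted(
        (version, partition_id, location)
        for partition_id, (location, version) in authoritative.items()
        if version > client_max_version
    )

# Example matching the text: the client knows versions 85..125, the authority 85..150.
authority = {f"partition-{v}": (f"node-{v % 4}", v) for v in range(85, 151)}
stale = updates_for_client(authority, client_max_version=125)
assert [v for v, _, _ in stale] == list(range(126, 151))
```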
  • reconfiguration of one or more partitions can be any change that would result in location information associated with the partition being changed, such as moving a partition from one node to another, adding a new partition, or even reconfiguring a primary node that hosts the partition so that existing location information is no longer accurate.
  • Partition location information is information that facilitates accessing the corresponding partition, such as an identification of a primary storage node for the partition and possibly additional information to access the primary storage node or to access the partition within the primary node.
  • location information can include IP addresses, URLs, storage drive and path identifiers, ports in storage nodes, etc.
  • Location information can be sent in a broadcast message to nodes, such as client and/or storage nodes.
  • a broadcast message is a message that is sent to multiple nodes. Broadcast messages may be sent periodically according to a set schedule or when a certain number of reconfigurations have occurred since a previous broadcast message, or according to some other scheme. Accordingly, client and/or storage nodes that have received the broadcast messages can use the location information in the broadcast messages to request access to the data in the data store.
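  • The broadcast policy just described (send periodically, or once a certain number of reconfigurations have accumulated) could be sketched as follows; the interval and threshold values are arbitrary examples, not values from the patent.

```python
import time

class BroadcastPolicy:
    """Send a location-update broadcast periodically or after N reconfigurations."""

    def __init__(self, interval_seconds=30.0, reconfig_threshold=10):
        self.interval_seconds = interval_seconds
        self.reconfig_threshold = reconfig_threshold
        self._last_broadcast = time.monotonic()
        self._reconfigs_since_broadcast = 0

    def record_reconfiguration(self):
        """Called whenever a partition reconfiguration updates location information."""
        self._reconfigs_since_broadcast += 1

    def should_broadcast(self):
        """Broadcast when the interval has elapsed or enough reconfigurations piled up."""
        interval_due = (time.monotonic() - self._last_broadcast) >= self.interval_seconds
        backlog_due = self._reconfigs_since_broadcast >= self.reconfig_threshold
        return interval_due or backlog_due

    def mark_broadcast_sent(self):
        self._last_broadcast = time.monotonic()
        self._reconfigs_since_broadcast = 0
```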
  • a client node can request updated location information if the client node finds that its location information is out of date (such as where it receives a failure message in response to a request to access data in the data store). The client can receive a response with the requested updated location information, and can use that updated information to again request access to the data in the data store.
  • Version indicators can be used in these messages. For example, a client node can indicate in a location information request message what version indicators correspond to versions of location information the client has already received, and the response to the request can include all updated location information versions (as indicated by corresponding version indicators) that the client has not yet received. Version indicators can also be used to send appropriate updates in broadcast messages.
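  • On the client side, the exchange described above might look roughly like the following; the message shapes and method names on the assumed client object are illustrative, not the patent's protocol.

```python
def access_with_refresh(client, key):
    """Try a data access; on failure, pull location updates and retry once.

    `client` is assumed to expose:
      - routing_table: dict mapping partition id -> location info
      - max_version: highest version indicator already received
      - send_access(key) -> ("ok", value) or ("failure", None)
      - request_updates(max_version) -> list of (version, partition_id, location)
    """
    status, value = client.send_access(key)
    if status == "ok":
        return value

    # Location information is apparently out of date: ask only for versions
    # newer than what this client already holds.
    for version, partition_id, location in client.request_updates(client.max_version):
        client.routing_table[partition_id] = location
        client.max_version = max(client.max_version, version)

    status, value = client.send_access(key)   # retry with the refreshed routing table
    if status != "ok":
        raise RuntimeError("access failed even after refreshing location information")
    return value
```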
  • the memory capacity of multiple computers or processes can be aggregated into a single unified cache, which can be scalable (e.g., a dynamic scaling) to a plurality of machines via a layering arrangement.
  • Such a layering arrangement can cache serializable Common Language Runtime (CLR) objects and provide access through a simple cache application programming interface (API).
  • the layering arrangement can include a data manager component, an object manager component and a distributed object manager component, which can be implemented in a modular fashion.
  • the data manager component supplies basic data functions (e.g., hash functions), and the object manager component implements an object facade thereon, including cache objects, while the distributed object manager component provides distribution of the data in the distributed cache.
  • the object manager component can map regions to containers and manage data eviction thresholds and supply policy management for cached data.
  • Such regions can represent cache containers that typically guarantee co-location of the objects placed/inserted in the container (e.g., co-location of objects on the same node).
  • the object manager component can raise notifications (e.g., due to changes made to cached data) for various regions or objects of the distributed cache.
  • the distributed object manager component can dispatch requests to various nodes associated with different regions of the distributed cache.
  • the distributed object manager can interface with partition maps, or routing tables, of the distributed cache for a given request, and can facilitate abstraction of the aggregated cache in the distributed environment, to a single unified cache.
  • the distributed object manager component is positioned on top of the object manager component, which itself is placed on top of the data manager component.
  • tight integration can be provided with ASP.NET to enable caching ASP.NET session data in the cache without having to write it to source databases, for example.
  • These components can provide pluggable features that can readily adapt to a user's preferences (e.g., replacing a data manager component with another type thereof, based on user preferences).
  • the object manager component can be replaced with another object manager component, wherein plugging different models into the layering arrangement is enabled by a callback mechanism that holds locks during the callback throughout the stack.
  • the layering arrangement can provide for a modular arrangement that facilitates operation on different levels and communication substrates (e.g., TCP/IP), and which can be implemented in two topology models, namely as an independent separate tier model or an embedded application model.
  • the caching layer can function as an independent separate tier by itself (which can be positioned between application servers and data servers).
  • the distributed cache can run as a service hosted either by Windows Activation Services (WAS) or a Windows service, and can run separately from the application.
  • the applications can either employ the client stubs provided by the distributed cache to talk thereto, or can communicate through a representational state transfer (REST) API directly into the service.
  • the cache can be embedded within the application itself (e.g., connecting the applications together to form a cluster—such as embedding caches in ASP.net instances to form a cluster of ASP.net machines, wherein upon storing an item in a local cache it can be viewed from other machines.)
  • This embedding can further enable tagging and Language Integrated Query (LINQ) queries on the objects from a functionality perspective. LINQ queries can then be run natively on stored objects, and can be embedded in .Net applications.
  • FIG. 1 illustrates an exemplary layering arrangement that can enable aggregating memory capacity of multiple computers into a single unified cache.
  • Such a layering arrangement ( 100 ) can provide for a scalable system that can be tailored to different types of communication layers such as TCP/IP, and pluggable features can be further enabled for readily adapting to a user's preferences.
  • the distributed cache system implementing the layering arrangement ( 100 ) can dynamically scale itself with growth of applications associated therewith, by addition of additional computers or storage processes as nodes to a cluster of machines and/or storage processes.
  • As illustrated in FIG. 1 , each of the cache nodes ( 131 , 133 ) (1 to n, n being an integer) of the layering arrangement ( 100 ) can include a data manager component ( 110 ), an object manager component ( 112 ) and a distributed object manager component ( 114 ), the set up of which can be implemented in a modular fashion.
  • the distributed object manager component ( 114 ) can be positioned on top of the object manager component ( 112 ), which can be placed on top of the data manager component ( 110 ).
  • the data manager component ( 110 ) can supply basic data functions (e.g., hash functions), and the object manager component ( 112 ) can implement object facade thereon including cache objects, with the distributed object manager component ( 114 ) providing the distribution.
  • the object manager component ( 112 ) and data manager component ( 110 ) can act as local entities, wherein the distributed object manager component ( 114 ) can perform distributions.
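  • One way to picture this layering on a single cache node: the distributed object manager delegates local work to the object manager, which in turn relies on the data manager's primitive structures. The classes below are a structural sketch only, not the actual components.

```python
class DataManager:
    """Bottom layer: primitive local structures (here just a hash table)."""
    def __init__(self):
        self._table = {}
    def put(self, key, value):
        self._table[key] = value
    def get(self, key):
        return self._table.get(key)

class ObjectManager:
    """Middle layer: object facade, named caches and regions on top of the data manager."""
    def __init__(self, data_manager):
        self._dm = data_manager
    def put_object(self, region, key, obj):
        self._dm.put((region, key), obj)
    def get_object(self, region, key):
        return self._dm.get((region, key))

class DistributedObjectManager:
    """Top layer: routes requests to the node owning the region, locally or remotely."""
    def __init__(self, object_manager, routing_table, local_node):
        self._om = object_manager
        self._routing = routing_table       # region -> primary node id
        self._local_node = local_node
    def get(self, region, key):
        if self._routing.get(region) == self._local_node:
            return self._om.get_object(region, key)
        raise NotImplementedError("dispatch to the remote primary node goes here")
```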
  • a clustering substrate ( 107 ) can establish clustering protocols among a plurality of nodes that form a single unified cache. For example, when a node is to join or leave the cluster, requisite operations for adding or leaving the cluster can be managed, wherein a distributed components availability substrate ( 111 ) can employ such information to manage operations (e.g., monitoring health of nodes, managing life cycles of nodes, creating a primary node on another machine). In addition, for each node, each of the components forming the layering arrangement can be pluggable based on user preferences, system features, and the like.
  • the data manager component ( 110 ) can provide primitive high performance data structures such as hash tables, Btrees, and the like. Since the data manager component ( 110 ) can be memory bound and all operations of the distributed cache can be atomic, the data manager component ( 110 ) can typically implement highly concurrent hash tables. The data manager component ( 110 ) and the hash table structures can further facilitate creating the infrastructure for supplying containers and indexes on containers. In addition, the data manager component ( 110 ) can provide simple eviction and expiration on these hash structures.
  • Due to pluggable features supplied by the layering arrangement ( 100 ), users can plug in different types of data managers tailored to users' preferences, such as a transaction data manager or a disk paged data manager, or the like.
  • the object manager component ( 112 ) can provide object abstraction and can implement the concept of named caches and regions by employing data structures provided by the data manager component ( 110 ).
  • the distributed object manager component ( 114 ) can employ the local object manager component ( 112 ) and integrate with the distributed components availability substrate ( 111 ) to provide the abstraction of the distributed cache.
  • the distributed components availability substrate ( 111 ) can provide the transport and data consistency operations to make the system scalable and available.
  • the distributed object manager component ( 114 ) can optionally be implemented as part of a client tier to facilitate dispatching requests (e.g., directly) to the nodes associated with the single unified cache.
  • the distributed object manager component ( 114 ) can further include a dispatch manager component ( 117 ) and a distributed manager component ( 119 ).
  • the dispatch manager component ( 117 ) can further look up the routing table to dispatch the requests to a primary node (e.g., where a primary region is located) as part of a dynamically scalable distributed cache.
  • the dispatch manager component ( 117 ) can also be present in the client so that the client can directly dispatch requests to the primary node.
  • the distributed object manager component ( 114 ) on the receiving node can interact with a partition map to check if the node is indeed designated as the primary node as part of a plurality of nodes associated with the distributed cache, and can call the object manager component ( 112 ) to perform the operation.
  • the distributed object manager component ( 114 ) can also communicate with a replicator to replicate the data to the secondary nodes.
  • the distributed object manager component ( 114 ) can also interact with failover manager systems (not shown) to clone regions to create new secondary or primary nodes during reconfiguration procedures subsequent to possible failures.
  • the object manager component ( 112 ) can further include a notification management component ( 123 ) that can track changes to regions and objects, and can relay notifications to delegates listening to those events. Moreover, applications can also register delegates for notifications on any node which may be different from the primary node on which the object resides.
  • the distributed object manager component ( 114 ) can further manage the propagation of notifications in a distributed fashion including providing high availability for such notifications when the primary node fails. For example, this can be handled by maintaining a local lookup table indexed by delegate id on the node where the application registers the delegate. The primary node that stores the object can maintain the delegate id and the originating node information. When an object changes, the distributed object manager component ( 114 ) of the primary node can notify all the originating nodes, passing along the delegate id.
  • the distributed object manager component ( 114 ) associated with the receiver can employ the lookup table to call the appropriate delegate, thus providing the change information to the application in a distributed fashion.
  • notifications can be asynchronous and can further be backed up using the same secondary nodes.
  • the secondary nodes can attempt to deliver the pending notifications, wherein in the event of primary node failure, notifications can be resent because the primary node may not have synchronized the information regarding the delivered notifications before failure. Since all notifications can carry the region, key and version information, the application can use the version to ignore duplicate notifications. Following are some examples of callback syntax.
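  • As a hypothetical sketch of such callback handling (registering a delegate and suppressing duplicate notifications using the region, key and version carried in each notification), with all names invented for illustration:

```python
class NotificationClient:
    """Illustrative only: register delegates and drop duplicate notifications by version."""

    def __init__(self):
        self._delegates = {}          # delegate id -> callable
        self._next_delegate_id = 0
        self._last_seen_version = {}  # (region, key) -> highest version delivered

    def register_callback(self, callback):
        """Register a delegate; the returned id would travel to the primary node."""
        delegate_id = self._next_delegate_id
        self._next_delegate_id += 1
        self._delegates[delegate_id] = callback
        return delegate_id

    def deliver(self, delegate_id, region, key, version, change):
        """Called when a notification arrives; duplicates (e.g., resends after a
        primary failover) are ignored using the version carried in the message."""
        if version <= self._last_seen_version.get((region, key), -1):
            return                              # duplicate or stale notification
        self._last_seen_version[(region, key)] = version
        self._delegates[delegate_id](region, key, version, change)

# Usage: the delegate id is kept in the originating node's lookup table.
client = NotificationClient()
did = client.register_callback(lambda r, k, v, c: print(f"{r}/{k} changed (v{v}): {c}"))
client.deliver(did, "region1", "item42", version=7, change="updated")
client.deliver(did, "region1", "item42", version=7, change="updated")  # ignored duplicate
```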
  • the availability substrate ( 111 ) can provide scalability and availability to systems that contain a storage component associated with the distributed cache.
  • the availability substrate can include load balancers, fail over managers, replicators and the like.
  • a communication substrate ( 109 ) can provide for failure detection of nodes and reliable message delivery between nodes.
  • the communication substrate ( 109 ) can interact with the availability substrate ( 111 ).
  • the communication substrate ( 109 ) can also provide the communication channels and cluster management.
  • the communication substrate ( 109 ) can provide callbacks whenever a new node joins the cluster or when a node dies or fails to respond to exchanged messages (e.g., heart beat messages).
  • the communication substrate ( 109 ) can provide efficient point-to-point and multicast delivery channels, and can further provide reliable message delivery for implementing replication protocols.
  • the communication substrate ( 109 ) can support notifications by maintaining delegate information in cache items and triggering the notification when items are modified.
  • the communication substrate ( 109 ) can also trigger eviction based on policies defined at the region or named cache level.
  • FIG. 2 and FIG. 3 illustrate two topology models, namely an independent separate tier model, and an embedded application model, respectively.
  • the caching tier ( 220 ) can function as an independent separate tier by itself (which can be positioned between application servers and data servers).
  • the distributed cache system can run as a service hosted either by WAS or a Windows service and can run separately from the application.
  • the applications ( 201 , 203 , 205 ) (1 to m, m being an integer) can either employ the client stubs provided by the distributed cache to communicate with the cache system, or can communicate directly into the service, such as through a representational state transfer (REST) API.
  • the cache system can be embedded within the application itself as illustrated in FIG. 3 .
  • Such can occur by connecting the applications ( 310 , 312 , 314 ) (1 to k, k being an integer) together to form a cluster; for instance as embedding caches in ASP.net instances to form a cluster of ASP.net machines, wherein upon storing an item in a local cache it can be viewed from other machines.
  • the distributed cache runtime DLLs can be compiled with the application and the application can act as the cache host for the distributed cache runtime. All the thread pools and memory can come from the application's container.
  • a load balancer can dynamically redistribute load across the cluster in the event that one or more nodes are inundated. For example, data can be repartitioned to spread it to nodes that have lower load. All such nodes can periodically send their load status as part of the configuration metadata.
  • the load balancer ( 302 ) can also periodically query the configuration to determine which nodes are overloaded and can be balanced. For example, distributing the load may include repartitioning the overloaded partition of data on a primary node and spreading the overloaded partition to one (or more) of its secondary nodes. This may involve only a change in the configuration data (partition map) and no data movement (since the secondary nodes already have the data).
  • the data may be distributed to other non-secondary nodes since the secondary nodes themselves might be loaded and cannot handle the additional load.
  • either the data partitions on the secondary nodes (for which this node is the primary) can be further load balanced; or non-secondary nodes can be chosen to distribute the load, in which case in addition to the changes in the partition map, data can be moved.
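  • A simplified sketch of that load-balancing choice: prefer a secondary node, which requires only a partition map change, and fall back to a non-secondary node, which also requires moving data. The threshold, function name, and expected partition shape (as in the earlier partition sketch) are assumptions.

```python
def plan_rebalance(partition, node_load, overload_threshold=0.8):
    """Decide where to move a hot partition.

    partition: object with .primary_node and .secondary_nodes
    node_load: dict mapping node id -> load fraction (0.0 - 1.0)
    Returns (target_node, data_movement_required).
    """
    # Prefer a secondary node: it already holds a replica, so only the
    # partition map needs to change and no data has to move.
    candidates = [n for n in partition.secondary_nodes
                  if node_load.get(n, 0.0) < overload_threshold]
    if candidates:
        return min(candidates, key=lambda n: node_load.get(n, 0.0)), False

    # Otherwise pick the least loaded non-secondary node; this also requires
    # copying the partition's data to that node.
    others = [n for n in node_load
              if n != partition.primary_node and n not in partition.secondary_nodes]
    target = min(others, key=lambda n: node_load[n])
    return target, True
```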
  • FIG. 4 illustrates a distributed cache system ( 400 ) that includes the runtime deployed on multiple machines ( 410 , 411 , 412 ) (1 to m, m being an integer) that form a cluster.
  • The runtimes are also referred to as cache hosts.
  • Each cache host ( 422 , 423 ) can host one or more named caches.
  • the named caches can be configured in the distributed cache configuration file.
  • the named caches can be spread around all or a subset of the machines ( 410 , 411 , 412 ) in the cluster.
  • one or more regions ( 433 ) can exist within each named cache.
  • Such regions can be implicitly created by the distributed cache or can be explicitly defined by the application.
  • all items in a region ( 433 ) can be guaranteed to be co-located on a cache host, such as by assigning one or more regions ( 433 ) to a single partition ( 436 ), rather than spreading regions across multiple partitions.
  • Such can improve performance for operations that operate on multiple items in the region, such as query and other set operations.
  • the node where a region is located can be deemed as the primary node of that region, wherein typically access to this region will be routed to the primary node for that region. If the named cache is configured to have “backups” for high availability, then one or more other nodes can be chosen to contain a copy of this data.
  • Such nodes are called secondary nodes for that region. All changes made to the primary node can also be reflected on these secondary nodes. Thus if the primary node for a region fails, the secondary node can be used to retrieve the data without having to have logs written to disk.
  • the following is a code example that shows the creation of a named cache and region.
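  • As a stand-in for that code example, the following sketch shows creating a named cache, creating a region within it, and putting and getting an item; the factory, class, and method names are hypothetical and do not represent the product's actual API.

```python
class Region:
    """Items placed in a region are co-located on one cache host."""
    def __init__(self, name):
        self.name = name
        self._items = {}
    def put(self, key, value, tags=()):
        self._items[key] = (value, tuple(tags))
    def get(self, key):
        value, _tags = self._items[key]
        return value

class NamedCache:
    def __init__(self, name):
        self.name = name
        self._regions = {}
    def create_region(self, region_name):
        return self._regions.setdefault(region_name, Region(region_name))

class CacheFactory:
    """Stand-in for the client entry point that a real cache stub would provide."""
    _caches = {}
    @classmethod
    def get_cache(cls, name):
        return cls._caches.setdefault(name, NamedCache(name))

# Create a named cache, a region within it, and add an item with a tag.
catalog = CacheFactory.get_cache("CatalogCache")
toys = catalog.create_region("ToyRegion")
toys.put("toy-101", {"name": "Toy Car"}, tags=("Vehicles",))
print(toys.get("toy-101"))
```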
  • Each cache region ( 433 ) can include one or more cache items ( 440 ).
  • Each cache item can include an identifier such as a key ( 442 ), a value or payload ( 444 ), and one or more tags ( 446 ).
  • Cache regions may also be nested so that a cache region may include one or more other cache regions ( 433 ) and/or one or more cache items ( 440 ).
  • FIG. 5 illustrates a related methodology ( 500 ) of distributing a cache. While the exemplary methods of FIG. 5 and other figures herein are illustrated and described herein as a series of blocks representative of various events and/or acts, the described tools and techniques are not limited by the illustrated ordering of such blocks. For instance, some acts or events may occur in different orders and/or concurrently with other acts or events, apart from the ordering illustrated herein, in accordance with the described tools and techniques. In addition, not all illustrated blocks, events or acts may be required to implement a methodology in accordance with the described tools and techniques. Moreover, the exemplary method and other methods according to the tools and techniques may be implemented in association with one or more of the methods illustrated and described herein, as well as in association with other systems and apparatus not illustrated or described.
  • cache available to the system can be identified ( 510 ), wherein the cache can be scalable to a plurality of machines via a layering arrangement (e.g., dynamic scaling by adding new nodes).
  • the cache can be aggregated ( 520 ) into a single unified cache, as presented to a user thereof.
  • Applications/data can be distributed ( 530 ) throughout the aggregated cache.
  • the aggregated cache can be scaled ( 540 ) depending on the changing characteristics of the applications and/or data.
  • FIG. 6 illustrates a related methodology ( 600 ) of implementing a distributed cache via a layering arrangement.
  • a layering arrangement can be supplied ( 610 ).
  • the layering arrangement can include a data manager component, an object manager component and a distributed object manager component—the set up of which can be implemented in a modular fashion; wherein, the distributed object manager component can be positioned on top of the object manager component, which can be placed on top of the data manager component.
  • the data manager component can be employed ( 620 ) to supply basic data functions (e.g., hash functions).
  • the object manager component can be employed ( 630 ) to implement an object facade thereon, including cache objects, with the distributed object manager component providing the distribution.
  • the object manager component and data manager component can act as local entities, wherein the distributed object manager component can perform ( 640 ) the distributions.
  • FIG. 7 illustrates a cache system ( 700 ) that can provide a unified cache view ( 720 ) of one or more caches ( 730 ) for clients ( 740 ) spread across machines and/or processes.
  • a cache system ( 700 ) including this unified cache view ( 720 ) can provide an explicit, distributed, in-memory application cache for all kinds of data with consistency and query.
  • Such data can reside in different tiers (in different service boundaries) with different semantics.
  • data stored in the backend database can be authoritative and can make it desirable to have a high degree of data consistency and integrity.
  • Most data in the mid-tier, being operated on by the business logic, tends to be a copy of the authoritative data.
  • Such copies are typically suitable for caching.
  • understanding the different types of data and their semantics in different tiers can help define desired degrees of caching.
  • Reference data is a version of the authoritative data. It is either a direct copy (version) of the original data or aggregated and transformed from multiple data sources. Reference data is practically immutable—changing the reference data (or the corresponding authoritative data) creates a new version of the reference data. That is, every reference data version can be different from other reference data versions. Reference data is a candidate for caching; as the reference data typically does not change, it can be shared across multiple applications (users), thereby increasing the scale and performance.
  • a product catalog application aggregating product information across multiple backend applications and data sources can be considered.
  • The most common operation on the catalog data is the read operation (or browse); a typical catalog browse operation iterates over a large amount of product data, filters it, personalizes it, and then presents the selected data to the users.
  • Key based and query based access is a common form of operation.
  • Caching can be beneficial for catalog access. If not cached, operations against such an aggregate catalog may include the operations to be decomposed into operations on the underlying sources, to invoke the underlying operations, to collect responses, and to aggregate the results into cohesive responses. Accessing the large sets of backend data for every catalog operation can be expensive, and can significantly impact the response time and throughput of the application. Caching the backend product data closer to the catalog application can significantly improve the performance and the scalability of the application.
  • aggregated flight schedules are another example of reference data.
  • Reference data can be refreshed periodically, usually at configured intervals, from its sources, or refreshed when the authoritative data sources change. Access to reference data, though shared, is mostly read. Local updates are often performed for tagging (to help organize the data). To support large scale, reference data can be replicated in multiple caches on different machines in a cluster. As mentioned above, reference data can be readily cached, and can provide high scalability.
  • Activity data is generated by the currently executing activity.
  • the activity may be a business transaction.
  • the activity data can originate as part of the business transaction, and at the close of the business transaction, the activity data can be retired to the backend data source as historical (or log) information.
  • the shopping cart data in an online buying application can be considered. There is one shopping cart, which is exclusive, for each online buying session. During the buying session, the shopping cart is cached and updated with products purchased, wherein the shopping cart is visible and accessible only to the buying transaction.
  • the shopping cart is retired (from the cache) to a backend application for further processing. Once the business transaction is processed by the backend application, the shopping cart information is logged for auditing (and historical) purposes.
  • While the buying session is active, the shopping cart is accessed both for read and write; however, it is not shared. This exclusive access nature of the activity data makes it suitable for distributed caching.
  • the shopping carts can be distributed across the cluster of caches. Since the shopping carts are not shared, the set of shopping carts can be partitioned across the distributed cache. By dynamically configuring the distributed cache, the degree of scale can be controlled.
  • Both reference (shared read) and activity (exclusive write) data can be cached. It is to be appreciated that not all application data falls into these two categories. There is data that is shared, concurrently read and written into, and accessed by a large number of transactions. For example, consider an inventory management application, where the inventory of an item has the description of the item and the current quantity. The quantity information is authoritative, volatile, and concurrently accessed by a large number of users for read/write. Such data is known as resource data; the business logic (e.g., the order application logic) runs close to the resource data (e.g., quantity data). The resource data is typically stored in the backend data stores. However, for performance reasons it can be cached in the application tier. While caching the quantity data in memory on a single machine can provide performance improvements, a single cache typically cannot provide availability or scale when the order volume is high. Accordingly, the quantity data can be replicated in multiple caches across the distributed cache system.
  • FIG. 8 illustrates an artificial intelligence (AI) component ( 830 ) that can be employed in a distributed cache ( 800 ) to facilitate inferring and/or determining when, where, and/or how to scale the distributed cache and/or distribute application data therebetween.
  • the term “inference” refers generally to the process of reasoning about or inferring states of the system, environment, and/or user from a set of observations, as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example.
  • the inference can be probabilistic—that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
  • the AI component ( 830 ) can employ any of a variety of suitable AI-based schemes as described supra in connection with facilitating various aspects of the herein described tools and techniques. For example, a process for learning explicitly or implicitly how or what candidates are of interest, can be facilitated via an automatic classification system and process.
  • Classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed.
  • A support vector machine (SVM) is an example of a classifier that can be employed.
  • Other classification approaches that can be employed include Bayesian networks, decision trees, and probabilistic classification models providing different patterns of independence.
  • Classification as used herein also is inclusive of statistical regression that is utilized to develop models of priority.
  • classifiers can be explicitly trained (e.g., via generic training data) as well as implicitly trained (e.g., via observing user behavior, receiving extrinsic information) so that the classifier can be used to automatically determine, according to predetermined criteria, which answer to return to a question.
  • SVMs are configured via a learning or training phase within a classifier constructor and feature selection module.
  • a rule based mechanism can further be employed for interaction of a routing manager and a routing layer associated therewith (e.g., load balancing, memory allocation and the like).
  • The word “exemplary” is used herein to mean serving as an example, instance or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Similarly, examples are provided herein solely for purposes of clarity and understanding and are not meant to limit the subject innovation or a portion thereof in any manner. It is to be appreciated that a myriad of additional or alternate examples could have been presented, but have been omitted for purposes of brevity.
  • computer readable storage media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ).
  • a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN).
  • FIGS. 9 and 10 are intended to provide a brief, general description of a suitable environment in which the various aspects of the disclosed subject matter may be implemented. While the subject matter has been described above in the general context of computer-executable instructions of a computer program that runs on a computer and/or computers, those skilled in the art will recognize that the tools and techniques also may be implemented in combination with other program modules.
  • a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a server and the server can be a component.
  • One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.
  • program modules include routines, programs, components, data structures, and the like, which perform particular tasks and/or implement particular abstract data types.
  • program modules may be located in both local and remote memory storage devices.
  • the computer ( 912 ) can include a processing unit ( 914 ), a system memory ( 916 ), and a system bus ( 918 ).
  • the system bus ( 918 ) can couple system components including, but not limited to, the system memory ( 916 ) to the processing unit ( 914 ).
  • the processing unit ( 914 ) can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit ( 914 ).
  • the system bus ( 918 ) can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, 11-bit bus, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).
  • the system memory ( 916 ) can include volatile memory ( 920 ) and/or nonvolatile memory ( 922 ).
  • the basic input/output system (BIOS) containing the basic routines to transfer information between elements within the computer ( 912 ), such as during startup, can be stored in nonvolatile memory ( 922 ).
  • the nonvolatile memory ( 922 ) can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory.
  • the volatile memory ( 920 ) can include random access memory (RAM), which can act as external cache memory.
  • RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
  • Computer ( 912 ) can also include removable/non-removable, volatile/nonvolatile computer storage media.
  • FIG. 9 illustrates a disk storage ( 924 ), wherein such disk storage ( 924 ) can include, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-60 drive, flash memory card, or memory stick.
  • disk storage ( 924 ) can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM).
  • a removable or non-removable interface is typically used to connect the disk storage ( 924 ) to the system bus ( 918 ).
  • FIG. 9 describes software that acts as an intermediary between users and the basic computer resources described in suitable operating environment ( 910 ).
  • Such software can include an operating system ( 928 ).
  • the operating system ( 928 ) which can be stored on disk storage ( 924 ), can act to control and allocate resources of the computer ( 912 ).
  • System applications ( 930 ) can take advantage of the management of resources by operating system ( 928 ) through program modules ( 932 ) and program data ( 934 ) stored either in system memory ( 916 ) or on disk storage ( 924 ). It is to be appreciated that various components described herein can be implemented with various operating systems or combinations of operating systems.
  • a user can enter commands or information into the computer ( 912 ) through input device(s) ( 936 ).
  • Input devices ( 936 ) include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like.
  • These and other input devices connect to the processing unit ( 914 ) through the system bus ( 918 ) via interface port(s) ( 938 ).
  • Interface port(s) ( 938 ) include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB).
  • Output device(s) ( 940 ) use some of the same types of ports as input device(s) ( 936 ).
  • a USB port may be used to provide input to computer ( 912 ), and to output information from computer ( 912 ) to an output device ( 940 ).
  • Output adapter ( 942 ) is provided to illustrate that there are some output devices ( 940 ) like monitors, speakers, and printers, among other output devices ( 940 ) that utilize such adapters.
  • the output adapters ( 942 ) include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device ( 940 ) and the system bus ( 918 ).
  • Other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) ( 944 ).
  • Computer ( 912 ) can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) ( 944 ).
  • the remote computer(s) ( 944 ) can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to the computer ( 912 ).
  • a memory storage device ( 946 ) is illustrated with remote computer(s) ( 944 ).
  • Remote computer(s) ( 944 ) is logically connected to the computer ( 912 ) through a network interface ( 948 ) and then physically connected via a communication connection ( 950 ).
  • the network interface ( 948 ) encompasses communication networks such as local-area networks (LAN) and wide area networks (WAN).
  • LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet/IEEE 802.3, Token Ring/IEEE 802.5 and the like.
  • WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
  • a communication connection(s) ( 950 ) refers to the hardware/software employed to connect the network interface ( 948 ) to the bus ( 918 ). While the communication connection ( 950 ) is shown for illustrative clarity inside computer ( 912 ), it can also be external to the computer ( 912 ).
  • the hardware/software for connection to the network interface ( 948 ) includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards.
  • FIG. 10 is a schematic block diagram of a sample computing environment ( 1000 ) that can be employed for distributing cache.
  • the environment ( 1000 ) can include one or more client(s) ( 1010 ).
  • the client(s) ( 1010 ) can be hardware and/or software (e.g., threads, processes, computing devices).
  • the environment ( 1000 ) can also include one or more server(s) ( 1030 ).
  • the server(s) ( 1030 ) can also be hardware and/or software (e.g., threads, processes, computing devices).
  • the servers ( 1030 ) can house threads to perform transformations by employing the components described herein, for example.
  • the environment ( 1000 ) can include a communication framework ( 1050 ) that can be employed to facilitate communications between the client(s) ( 1010 ) and the server(s) ( 1030 ).
  • the client(s) ( 1010 ) can be operatively connected to one or more client data store(s) ( 1060 ) that can be employed to store information local to the client(s) ( 1010 ).
  • the server(s) ( 1030 ) can be operatively connected to one or more server data store(s) ( 1040 ) that can be employed to store information local to the servers ( 1030 ).
  • FIG. 11 is a block diagram of a data system ( 1100 ) in conjunction with which one or more of the described embodiments may be implemented.
  • the data system ( 1100 ) can include a data store ( 1110 ), which can be a distributed partitioned data store.
  • the data store ( 1110 ) can host a distributed partitioned cache, such as the distributed partitioned caches discussed above.
  • the data store ( 1110 ) can be some other type of data store, such as a data store containing another type of distributed partitioned database, such as a distributed partitioned SQL database.
  • the data store ( 1110 ) can include a plurality of storage nodes ( 1112 and 1113 ), including storage node 1 ( 1112 ) and storage node N ( 1113 ). Each of the storage nodes ( 1112 and 1113 ) can store data ( 1115 and 1116 ) for the data store ( 1110 ).
  • the data in the data store ( 1110 ) can be separated into partitions, with one or more partitions ( 1117 , 1118 , 1119 ) being stored in each storage node ( 1112 , 1113 ).
  • For example, storage node 1 ( 1112 ) may store a single partition ( 1117 ), while storage node N ( 1113 ) may store multiple partitions ( 1118 and 1119 ).
  • the partitions in the data store ( 1110 ) can be managed by a partition manager ( 1120 ), which can be located on a separate node, or even on a node that also acts as a storage node.
  • the partition manager ( 1120 ) can manage a version data structure ( 1122 ), such as a main partition map for the data store ( 1110 ).
  • the version data structure ( 1122 ) can indicate an authoritative status of partitions in the data store ( 1110 ).
  • the version data structure ( 1122 ) can include a partition identifier ( 1124 ) for each partition in the data store ( 1110 ), which can be a single identifier, an identifier range (such as a range of region identifiers), a list of identifiers, or some other type of identifier.
  • the version data structure ( 1122 ) can also include location information ( 1126 ) for each partition, such as an identification of primary and secondary storage node(s) ( 1112 , 1113 ) where each partition is stored.
  • the location information ( 1126 ) can include other information to allow the corresponding partition to be accessed, such as IP addresses, URLs, storage drive and path identifiers, ports for storage nodes ( 1112 , 1113 ), etc.
  • the version data structure ( 1122 ) can include version indicators ( 1128 ).
  • the version indicators ( 1128 ) can form a range from a minimum version indicator ( 1130 ) to a maximum version indicator ( 1132 ).
  • Many different types of version indicators are possible, such as numbers (integers or non-integers), alphabetical ranges, alphanumeric ranges, etc.
  • FIG. 12 illustrates maintenance of a version data structure ( 1210 ) as a partition is reconfigured so that location information ( 1214 ) is updated, such as where a partition is moved from one node to another, added to a data store, split into two partitions on different nodes, etc.
  • the version data structure ( 1210 ) of FIG. 12 includes partition identifiers ( 1212 ), location information ( 1214 ) and version indicators ( 1216 ) for corresponding partitions in the data store.
  • a version data structure can be a single unified data structure, or a disconnected set of version indicators listed as attributes of separate objects.
  • a partition with a partition identifier of “ 4500 - 6500 ” (which could indicate that regions from region identifier 4500 to region 6500 are included in the partition) can be reconfigured, such as by being moved to a new node.
  • the “ 4500 - 6500 ” partition entry in the data structure ( 1210 ) can be updated with new location information.
  • the version indicator of the “ 4500 - 6500 ” partition entry can be set to a new value that is outside the existing range ( 1218 ) so that the data structure ( 1210 ) then has a new range ( 1220 ) of version indicators ( 1216 ).
  • a data manager can maintain a “MaxVersion” integer variable that is equal to the maximum value of the version indicators ( 1216 ).
  • the MaxVersion variable can be incremented by 1 and the version indicator for the updated partition can be set equal to the new MaxVersion.
  • the version indicators can range from a highest value corresponding to a most recently configured partition to a lowest value corresponding to a least recently configured partition.
  • Other techniques can also be used to produce similar results. For example, a minimum MinVersion variable could be decremented instead of incrementing the maximum MaxVersion variable. Such techniques can result in gaps in the range of version indicators ( 1216 ), although if such gaps were undesirable, then other version indicators could be updated to fill in the gaps.
  • the data system ( 1100 ) can also include one or more client nodes ( 1160 , 1162 ), which can request access to the data ( 1115 , 1116 ) stored in the storage nodes ( 1112 , 1113 ) in the data store ( 1110 ).
  • Many different configurations of the nodes ( 1112 , 1113 , 1160 , 1162 ) are possible.
  • some or all of the storage nodes ( 1112 , 1113 ) could be running on the same physical and/or virtual computer machine(s) as some or all of the client nodes ( 1160 , 1162 ), or all nodes ( 1112 , 1113 , 1160 , 1162 ) could be running on separate physical or virtual machines.
  • the partition manager ( 1120 ) can communicate with the storage nodes ( 1112 and 1113 ) and the client nodes ( 1160 and 1162 ), such as through the layering arrangement discussed above.
  • the partition manager ( 1120 ) can communicate with the storage nodes ( 1112 ) using the clustering substrate ( 107 ) and the communication substrate ( 109 ) discussed above with reference to FIG. 1 , and in some implementations the partition manager ( 1120 ) can be considered to be part of the clustering substrate ( 107 ).
  • each of the nodes ( 1112 , 1113 , 1160 , 1162 ) can communicate with the other nodes.
  • each storage node ( 1112 , 1113 ) can access a corresponding partition map or routing table ( 1140 , 1142 ), which can include location information ( 1144 ) and partition identifiers ( 1146 ) for the partitions in the data store ( 1110 ).
  • Each storage node ( 1112 , 1113 ) can also access version information data structures ( 1148 , 1149 ), which can indicate ranges of version indicators corresponding to updated location information ( 1144 ) and partition identifiers ( 1146 ) that the storage node ( 1112 , 1113 ) has already received.
  • each client node ( 1160 , 1162 ) can also include a routing table ( 1170 , 1172 ), which can include location information ( 1174 ) and partition identifiers ( 1176 ) for the partitions in the data store ( 1110 ).
  • Each client node ( 1160 , 1162 ) can also access version information data structures ( 1178 , 1179 ), which can indicate ranges of version indicators corresponding to updated location information ( 1174 ) and partition identifiers ( 1176 ) that the client node ( 1160 , 1162 ) has already received.
  • the routing tables and version information data structures can take various forms. For example, the routing table and version information data structure for each node can be separate, or the routing tables and version information data structure for each node can be part of a single unified structure.
  • the partition manager ( 1120 ) can receive status updates ( 1182 ) from the storage nodes ( 1112 , 1113 ).
  • the partition manager ( 1120 ) can use the status updates to manage the partitions. For example, the partition manager ( 1120 ) can reconfigure partitions if one or more of the storage nodes ( 1112 , 1113 ) is not operating properly or is overloaded.
  • the partition manager ( 1120 ) can also send out a broadcast message ( 1184 ) to all the nodes ( 1112 , 1113 , 1160 , 1162 ).
  • the broadcast message ( 1184 ) can include location information ( 1126 ) that has been updated since a previous broadcast message ( 1184 ) was sent out, due to corresponding partitions having been reconfigured since the previous broadcast message ( 1184 ).
  • the broadcast message ( 1184 ) can also include partition identifiers ( 1124 ) and version indicators ( 1128 ) corresponding to the updated partition location information ( 1126 ).
  • the broadcast message ( 1184 ) can list ranges of version indicators corresponding to partition location information updates in the message ( 1184 ) with sets of {StartVersion, EndVersion}, where StartVersion is one below the first version indicator in the range, and EndVersion is the last version indicator in the range. Alternatively, such ranges could be indicated in other ways.
  • Such broadcast messages ( 1184 ) can be sent out periodically according to a schedule, when a predetermined number of partitions have been reconfigured, or according to some other scheme.
  • Each broadcast message ( 1184 ) may include location information update versions created since the immediately previous broadcast message ( 1184 ).
  • alternatively, a broadcast message ( 1184 ) may include additional location information updates, such as all versions created since the fifth previous broadcast message.
  • client nodes ( 1160 , 1162 ) can send access request messages to storage nodes ( 1112 , 1113 ), requesting access to the data partitions ( 1117 , 1118 , 1119 ) stored in those storage nodes ( 1112 , 1113 ).
  • client node 1 ( 1160 ) can send an access request message ( 1190 ) to storage node N ( 1113 ).
  • the access request message may be a put or get request.
  • if the access request message ( 1190 ) is for data in partition PA ( 1117 ) on storage node 1 ( 1112 ), but the location information ( 1174 ) in the routing table ( 1170 ) of client node 1 ( 1160 ) is out of date and indicates that partition PA ( 1117 ) is on storage node N ( 1113 ), then storage node N ( 1113 ) can send a failure message ( 1192 ) in response to the access request message ( 1190 ).
  • the routing table ( 1170 ) may be out of date because changes have been made since the last broadcast message ( 1184 ) was sent by the partition manager ( 1120 ).
  • the routing table ( 1170 ) may be out of date because client node 1 ( 1160 ) did not receive one or more of the broadcast messages ( 1184 ) because the message(s) were lost or delayed.
  • client node 1 ( 1160 ) can send a location information request message ( 1194 ) to the partition manager ( 1120 ), requesting updated location information for the partition PA ( 1117 ).
  • the request message ( 1194 ) can request all updated location information ( 1126 ) that client node 1 ( 1160 ) has not already received.
  • the request message ( 1194 ) may include version indicator ranges from the version information data structure ( 1178 ), indicating version indicators corresponding to location information ( 1174 ) that client node 1 has already received.
  • the partition manager ( 1120 ) can send a location information response message ( 1196 ), which can include location information for the partition PA ( 1117 ).
  • the response message ( 1196 ) may include all updated location information that client node 1 ( 1160 ) has not yet received, such as location information corresponding to version indicators ( 1128 ) that were not within the range(s) of version indicators listed in the request message ( 1194 ).
  • Client node 1 ( 1160 ) can then send a second access request message (not shown) to storage node 1 ( 1112 ), requesting access to data in partition PA ( 1117 ).
  • Storage node 1 ( 1112 ) can then perform one or more actions requested in the second access request message, and may send a response message including data and/or a confirmation that requested actions have been performed on data.
  • the request message ( 1194 ) could list the ranges with sets of {StartVersion, EndVersion}, where StartVersion is one below the first version indicator in the range, and EndVersion is the last version indicator in the range.
  • the request message ( 1194 ) may list the following ranges: {33, 122}, {130, 135}, which can be the same ranges that are listed in the version information data structure ( 1178 ) for client node 1 ( 1160 ). This can indicate that client node 1 ( 1160 ) already has location information corresponding to version indicators 34 through 122 and version indicators 131 through 135.
  • the response message ( 1196 ) can include updated location information versions corresponding to version indicators 123 through 125, 128 through 130, and 136 through 138.
  • the response message can also include the following StartVersion and EndVersion sets to indicate the new location information versions that are included in the response message: {122, 125}, {127, 130}, {135, 138}.
  • upon receiving the response message ( 1196 ), client node 1 ( 1160 ) can update its routing table C 1 ( 1170 ), and can update the version information data structure ( 1178 ) by merging the ranges already indicated in the existing data structure ( 1178 ) with the new version ranges received in the response message ( 1196 ). Accordingly, the updated version information data structure ( 1178 ) can indicate the following StartVersion and EndVersion sets: {33, 125}, {127, 138}.
  • when processing version ranges received in a message, the client or storage nodes can perform the following actions (a sketch of this range handling appears after this list): (1) if {StartVersion, EndVersion} is a sub-range of any one of the existing ranges, it can be ignored; (2) if {StartVersion, EndVersion} has no overlap with any existing ranges, it can be inserted into the version indicator range list in the version information data structure ( 1148 , 1149 , 1178 , 1179 ), and the node can go through all partition entries for the range, and can update corresponding entries in its routing table ( 1140 , 1142 , 1170 , 1172 ) if the response message ( 1196 ) includes newer versions of the location information; (3) if {StartVersion, EndVersion} overlaps one or more of the existing ranges, the overlapping ranges can be merged with it into a single range in the version indicator range list, and the node can update corresponding routing table entries for the versions that were not already covered by the existing ranges.
  • version indicators can be maintained ( 1310 ), where the version indicators can be within an existing range of indicators, and each version indicator in the existing range can be associated with a data partition in a distributed data store.
  • a reconfiguration of one or more partitions in the data store can be identified ( 1320 ), such as by a partition manager, and a new version indicator can be assigned ( 1330 ) to the reconfigured partition.
  • the new version indicator can be outside the existing range of version indicators, so that a new range of version indicators includes the new version indicator.
  • the technique of FIG. 13 can also include using the version indicators to identify ( 1340 ) recently reconfigured partitions in the data store, such as partitions that have been reconfigured since a previous broadcast message was sent.
  • Updated location information for the recently reconfigured partitions can be sent ( 1350 ) to a plurality of nodes, such as by sending a broadcast message to storage nodes and/or client nodes in a data system.
  • version indicators can be used to identify out-of-date location information at a node and to send to the node updated location information corresponding to the out-of-date location information. For example, this can include sending a broadcast message that includes updates that have not been previously sent to the node. As another example, this can include receiving ( 1360 ) from a requesting node a request message that requests updated location information for data in the data store. The request message can identify location information already stored at the requesting node, such as by identifying one or more ranges of version indicators corresponding to location information already stored at the requesting node. In response to the request message, a response message can be sent ( 1370 ) to the requesting node. The response message can include available location information corresponding to version indicators outside the range(s) of version indicators identified by the request message.
  • a first access request message can be sent ( 1410 ) to a first node in a distributed data store.
  • the first access request message can request access to data in the data store.
  • a failure notification may be received ( 1420 ) from the first node.
  • a location information request message can be sent ( 1430 ), such as to a partition manager, requesting updated location information for the data requested in the first access request message.
  • updated location information for the data requested in the first access request message can be received ( 1440 ).
  • a version information data structure that represents the one or more ranges of version indicators corresponding to location information that has already been received can be updated ( 1450 ) to include one or more ranges of version indicators corresponding to the updated location information received in response to the request message.
  • the updated location information can be used to send ( 1460 ) a second access request message to a second node in the data store.
  • the second access request message can request access to the data to which access was requested in the first access request message, where the updated location information identified the second node as a location for the data.
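The range handling in the bullets above can be illustrated with a brief sketch. The following C# fragment is a hypothetical illustration only (the VersionRange and VersionRangeMerger names are assumptions for this example, not identifiers from the patent or from any product API); it merges the {StartVersion, EndVersion} sets carried in a response message into the ranges a node already holds, where a set {StartVersion, EndVersion} is treated as covering versions StartVersion + 1 through EndVersion. With the example numbers above, it reproduces the merged result {33, 125}, {127, 138}.
using System;
using System.Collections.Generic;
using System.Linq;
public readonly struct VersionRange
{
    // A set {StartVersion, EndVersion} covers versions StartVersion + 1 through EndVersion.
    public long StartVersion { get; }
    public long EndVersion { get; }
    public VersionRange(long start, long end) { StartVersion = start; EndVersion = end; }
    public override string ToString() => $"{{{StartVersion}, {EndVersion}}}";
}
public static class VersionRangeMerger
{
    // Merge the ranges a node already holds with the ranges received in a response message.
    // Overlapping or contiguous ranges collapse into one; disjoint ranges stay separate.
    public static List<VersionRange> Merge(IEnumerable<VersionRange> known, IEnumerable<VersionRange> received)
    {
        var sorted = known.Concat(received).OrderBy(r => r.StartVersion).ToList();
        var merged = new List<VersionRange>();
        foreach (var range in sorted)
        {
            if (merged.Count > 0 && range.StartVersion <= merged[merged.Count - 1].EndVersion)
            {
                // Overlapping or contiguous with the last merged range: extend it if needed.
                var last = merged[merged.Count - 1];
                merged[merged.Count - 1] = new VersionRange(last.StartVersion, Math.Max(last.EndVersion, range.EndVersion));
            }
            else
            {
                merged.Add(range);
            }
        }
        return merged;
    }
    public static void Main()
    {
        // Ranges already held by client node 1: versions 34-122 and 131-135.
        var known = new[] { new VersionRange(33, 122), new VersionRange(130, 135) };
        // Ranges listed in the response message: versions 123-125, 128-130, and 136-138.
        var received = new[] { new VersionRange(122, 125), new VersionRange(127, 130), new VersionRange(135, 138) };
        Console.WriteLine(string.Join(", ", Merge(known, received)));   // prints: {33, 125}, {127, 138}
    }
}
Rule (1) in the list above falls out of this merging naturally: a sub-range of an existing range leaves the merged list unchanged.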

Abstract

Version indicators within an existing range can be associated with a data partition in a distributed data store. A partition reconfiguration can be associated with one of multiple partitions in the data store, and a new version indicator that is outside the existing range can be assigned to the reconfigured partition. Additionally, a broadcast message can be sent to multiple nodes, which can include storage nodes and/or client nodes that are configured to communicate with storage nodes to access data in a distributed data store. The broadcast message can include updated location information for data in the data store. In addition, a response message can be sent to a requesting node of the multiple nodes in response to receiving from that node a message that requests updated location information for the data. The response message can include the requested updated location information.

Description

BACKGROUND
A continuing problem in computer systems remains handling the growing amount of available information or data. The sheer amount of information being stored on disks or other storage media for databases in some form has been increasing dramatically. While files and disks were measured in thousands of bytes a few decades ago—at that time being millions of bytes (megabytes), followed by billions of bytes (gigabytes)—now databases of a million megabytes (terabytes) and even billions of megabytes are being created and employed in day-to-day activities.
With the costs of memory going down, considerably large caches can be configured on the desktop and server machines. In addition, in a world where hundreds of gigabytes of storage is the norm, the ability to work with most data in large caches can increase productivity and efficiency because the caches can be configured to retrieve data more quickly than the same data can be retrieved from many mass data stores. A cache is a collection of data that duplicates original value(s) stored elsewhere or computed earlier, where the cached data can be read from the cache in lieu of reading the original value(s). A cache is typically implemented where it is more efficient to read the cached data than to read the original value(s) so that use of the cache can increase the overall efficiency of computing systems.
In an effort to scale the size of caches and other data stores in an organized manner, some data stores are configured as distributed partitioned data stores. A distributed data store is a data store that is distributed across one or more data store nodes. Typically, a distributed data store is distributed across one or more physical or virtual computing machines. A distributed partitioned data store is a data store that is partitioned across multiple data store nodes, where a primary location for each partition is on a single node. As used herein, a node refers to a storage process in a data store system. A node may be on a single machine or spread across multiple physical machines, and a single physical machine may include multiple storage nodes, such as where a single physical machine hosts multiple virtual machine processes. Thus, the distributed partitioned data store is spread over multiple storage processes, so that the entire set of primary data to be read from the data store is not stored on a single process, and typically is not stored on a single machine. A partition is a logical grouping of data, which may be implemented by associating a partition with a set of one or more keys, where the keys are in turn associated with data stored on a node. In a data store, a partition can be stored as a primary partition, which includes primary partition data in the data store, and one or more secondary partitions, which each include secondary partition data in the data store. As used herein, the term “primary” data indicates the data that is currently set up to be accessed in the data store, such as to be read from the data store, as opposed to secondary or replicated data that is currently being stored as a backup. The primary data may also be replicated from other data outside the data store. For example, in a distributed cache the primary data may be replicated from more authoritative data that is stored in long-term mass storage. The term “primary” is similarly used to refer to a primary region or partition, which is a region or partition currently set up to be accessed, as opposed to a replica of the primary region or partition. The term “primary” can also be used to refer to a primary node, which is a node that stores the primary data, such as a primary region. Note, however, that a cache node can be a primary node for one set of cache data and a secondary node for another set of cache data. A distributed partitioned data store system is a system that is configured to implement such distributed partitioned data stores. In such a system, messages passing between nodes may not be reliable. For example, messages may be delayed, re-ordered, or lost.
SUMMARY
Whatever the advantages of previous data store tools and techniques, they have neither recognized the distributed data store location update tools and techniques described and claimed herein, nor the advantages produced by such tools and techniques, which may include improved reliability and/or efficiency.
In one embodiment of the tools and techniques, version indicators within an existing range can be maintained. Each version indicator can be associated with one of multiple data partitions in a distributed data store. A partition reconfiguration in the data store can be associated with a reconfigured partition. A new version indicator that is outside the existing range can be assigned to the reconfigured partition, resulting in a new range of version indicators that includes the new version indicator.
In another embodiment of the tools and techniques, a broadcast message can be sent to multiple nodes, which can include storage nodes and/or client nodes that can communicate with storage nodes to access data in a distributed data store. The broadcast message can include updated location information for data in the distributed data store. In addition, a response message can be sent to a requesting node of the multiple nodes in response to receiving from that node a message that requests updated location information for the data in the data store. The response message can include the requested updated location information.
In yet another embodiment of the tools and techniques, a message requesting access to data in a distributed data store can be sent to a node in the data store, and a failure notification can be received from the node. Upon receiving the failure notification, a message requesting updated location information for the data can be sent, and the updated location information can be received.
This Summary is provided to introduce a selection of concepts in a simplified form. The concepts are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Similarly, the invention is not limited to implementations that address the particular techniques, tools, environments, disadvantages, or advantages discussed in the Background, the Detailed Description, or the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an exemplary layering arrangement.
FIG. 2 illustrates a further topology model of a layering arrangement that relates to an independent separate tier model implementation.
FIG. 3 illustrates a topology model of a layering arrangement that pertains to an embedded application model.
FIG. 4 illustrates a distributed cache that includes a runtime deployed on multiple machines.
FIG. 5 illustrates a particular methodology of distributing cache.
FIG. 6 illustrates a further methodology of implementing a layering arrangement for a distributed cache.
FIG. 7 illustrates an exemplary illustration of a unified cache view.
FIG. 8 illustrates an artificial intelligence (AI) component that can be employed to facilitate inferring and/or determining when, where, and/or how to cache data in a distributed environment.
FIG. 9 illustrates an exemplary environment for implementing various aspects of the described caching tools and techniques.
FIG. 10 is a schematic block diagram of a sample-computing environment that can be employed for caching configurations such as distributing cache.
FIG. 11 is a schematic block diagram of a data system that provides location updates for a distributed data store.
FIG. 12 is a schematic diagram illustrating maintenance of a version data structure.
FIG. 13 is a flowchart illustrating a location update technique.
FIG. 14 is a flowchart illustrating another location update technique.
DETAILED DESCRIPTION
Described embodiments are directed to techniques and tools for improved location updates for a data store. Such improvements may result from the use of various techniques and tools separately or in combination.
Such techniques and tools may include maintaining version indicators in a version data structure. The version indicators can be within an existing range, which may be a numeric range, an alphabetical range, an alphanumeric range, etc. Also, the range may include gaps between adjacent version indicator values, which may be numeric values, alphabetical values, alphanumeric values, etc. Each version indicator can be associated with a data partition in a distributed data store. When a partition in the data store is reconfigured, a new version indicator, which is outside the existing range, can be assigned to the reconfigured partition. Thus, a new range of version indicators can include the new version indicator at the end (e.g., high end or low end) of the new range.
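As a concrete illustration of the preceding paragraph, the following C# fragment is a minimal sketch, assuming a hypothetical VersionDataStructure whose entries each carry a partition identifier, location information, and a version indicator; these names and the string-valued location are assumptions made for the example rather than the patent's required form. Each reconfiguration is stamped with a version indicator one above the previous maximum, so the range of indicators grows at its high end.
using System.Collections.Generic;
using System.Linq;
public class PartitionEntry
{
    public string PartitionId { get; set; }   // e.g., a region identifier range such as "4500-6500"
    public string Location { get; set; }      // e.g., the primary storage node for the partition
    public long Version { get; set; }         // version indicator assigned when the entry was last updated
}
public class VersionDataStructure
{
    private readonly Dictionary<string, PartitionEntry> _entries = new Dictionary<string, PartitionEntry>();
    private long _maxVersion;                 // highest version indicator assigned so far
    public long MaxVersion => _maxVersion;
    // Record a reconfiguration: store the new location and assign a version indicator
    // just outside the existing range of indicators.
    public void Reconfigure(string partitionId, string newLocation)
    {
        _maxVersion = _maxVersion + 1;
        _entries[partitionId] = new PartitionEntry
        {
            PartitionId = partitionId,
            Location = newLocation,
            Version = _maxVersion
        };
    }
    // Location information versions created after a given version indicator,
    // e.g., everything created since a previous broadcast message was sent.
    public IEnumerable<PartitionEntry> UpdatesSince(long version) =>
        _entries.Values.Where(e => e.Version > version).OrderBy(e => e.Version);
}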
The version indicators can be used to determine whether corresponding partitions have been reconfigured, making existing location information for the corresponding partitions out of date. For example, if a client node (i.e., a node that is configured to receive data from the data store) has location information versions corresponding to version indicators in a range of 85 to 125, but an authoritative version data structure (such as a general partition map for the data store) has location information versions corresponding to version indicators in the range of 85 to 150, then it can be determined that the client node's location information is out of date. Accordingly, the location information corresponding to version indicators 126 to 150 in the authoritative data structure can be sent to the client node so that the client node's location information will be up to date.
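Continuing the hypothetical sketch above, the comparison described in this paragraph could be expressed as follows, with the node's holdings summarized by the highest version indicator it has received (125 in the example) and the versions it lacks (126 through 150) selected from the authoritative structure. The Console.WriteLine call is a placeholder for packaging the updates into a message.
using System;
public static class LocationUpdateSelector
{
    // Determine whether a node's location information is out of date and, if so,
    // select the location information versions the node has not yet received.
    public static void SendMissingUpdates(VersionDataStructure authoritative, long nodeMaxVersion)
    {
        if (authoritative.MaxVersion > nodeMaxVersion)   // e.g., 150 > 125
        {
            foreach (PartitionEntry update in authoritative.UpdatesSince(nodeMaxVersion))
            {
                Console.WriteLine($"send {update.PartitionId} -> {update.Location} (version {update.Version})");
            }
        }
    }
}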
As used herein, reconfiguration of one or more partitions can be any change that would result in location information associated with the partition being changed, such as moving a partition from one node to another, adding a new partition, or even reconfiguring a primary node that hosts the partition so that existing location information is no longer accurate. Partition location information is information that facilitates accessing the corresponding partition, such as an identification of a primary storage node for the partition and possibly additional information to access the primary storage node or to access the partition within the primary node. For example, location information can include IP addresses, URL's, storage drive and path identifiers, ports in storage nodes, etc.
Location information can be sent in a broadcast message to nodes, such as client and/or storage nodes. A broadcast message is a message that is sent to multiple nodes. Broadcast messages may be sent periodically according to a set schedule or when a certain number of reconfigurations have occurred since a previous broadcast message, or according to some other scheme. Accordingly, client and/or storage nodes that have received the broadcast messages can use the location information in the broadcast messages to request access to the data in the data store.
In addition to these broadcast messages, a client node can request updated location information if the client node finds that its location information is out of date (such as where it receives a failure message in response to a request to access data in the data store). The client can receive a response with the requested updated location information, and can use that updated information to again request access to the data in the data store. Version indicators can be used in these messages. For example, a client node can indicate in a location information request message what version indicators correspond to versions of location information the client has already received, and the response to the request can include all updated location information versions (as indicated by corresponding version indicators) that the client has not yet received. Version indicators can also be used to send appropriate updates in broadcast messages.
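The exchange just described can be sketched as follows. This is an illustrative assumption rather than the patent's required message format: the request carries {StartVersion, EndVersion} pairs for the versions the client already has (each pair covering StartVersion + 1 through EndVersion), and the response is built from every known location information version that falls outside those pairs.
using System.Collections.Generic;
using System.Linq;
public static class LocationRequestHandler
{
    // Build the contents of a location information response message: every authoritative
    // entry whose version indicator is not covered by one of the client's ranges.
    public static List<(string PartitionId, string Location, long Version)> BuildResponse(
        IEnumerable<(string PartitionId, string Location, long Version)> authoritativeEntries,
        IEnumerable<(long StartVersion, long EndVersion)> clientRanges)
    {
        var ranges = clientRanges.ToList();
        return authoritativeEntries
            .Where(e => !ranges.Any(r => e.Version > r.StartVersion && e.Version <= r.EndVersion))
            .OrderBy(e => e.Version)
            .ToList();
    }
}
A broadcast message can use the same kind of selection, for example with a single pair covering every version already sent up through the previous broadcast.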
Accordingly, one or more substantial benefits can be realized from the data store location update tools and techniques described herein. However, the subject matter defined in the appended claims is not necessarily limited to the benefits described herein. A particular implementation of the invention may provide all, some, or none of the benefits described herein. Although operations for the various techniques are described herein in a particular, sequential order for the sake of presentation, it should be understood that this manner of description encompasses rearrangements in the order of operations, unless a particular ordering is required. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Techniques described herein with reference to flowcharts may be used with one or more of the systems described herein and/or with one or more other systems. Moreover, for the sake of simplicity, flowcharts may not show the various ways in which particular techniques can be used in conjunction with other techniques.
I. General Cache Layering Arrangement
The memory capacity of multiple computers or processes can be aggregated into a single unified cache, which can be scalable (e.g., a dynamic scaling) to a plurality of machines via a layering arrangement. Such layering arrangement can cache serializable Common Language Runtime (CLR) objects and provide access through a simple cache application programming interface (API). The layering arrangement can include a data manager component, an object manager component and a distributed object manager component, which can be implemented in a modular fashion. In one aspect, the data manager component supplies basic data functions (e.g., hash functions), and the object manager component implements object facade thereon including cache objects—while the distributed object manager provides distribution of the data in the distributed cache.
As such, the object manager component can map regions to containers and manage data eviction thresholds and supply policy management for cached data. Such regions can represent cache containers that typically guarantee co-locations of the object placed/inserted in the container (e.g., co-locations of objects in same node). Additionally, the object manager component can raise notifications (e.g., due to changes made to cached data) for various regions or objects of the distributed cache. Likewise, the distributed object manager component can dispatch requests to various nodes associated with different regions of the distributed cache.
Moreover, the distributed object manager can interface with partition maps, or routing tables, of the distributed cache for a given request, and can facilitate abstraction of the aggregated cache in the distributed environment, to a single unified cache. In one aspect, the distributed object manager component is positioned on top of the object manager component, which itself is placed on top of the data manager component. Moreover, tight integration can be provided with ASP.NET to enable cache ASP.NET session data in the cache without having to write it to source databases, for example.
These components can provide pluggable features that can readily adapt to a user's preferences (e.g., replacing a data manager component with another type thereof, based on user preferences). Likewise, the object manager component can be replaced with another object manager component, wherein plugging different models into the layering arrangement is enabled by employing a call back mechanism with holding locks during call back throughout the stack.
In a related aspect, the layering arrangement can provide for a modular arrangement that facilitates operation on different levels and communication substrates (e.g., TCP/IP), and which can be implemented in two topology models, namely as an independent separate tier model or an embedded application model. In the independent and separate tier model, the caching layer can function as an independent separate tier by itself (which can be positioned between application servers and data servers). For example, in such a configuration the distributed cache can run as a service hosted either by Windows Activation Services (WAS) or windows service, and can run separate from the application. The applications can either employ the client stubs provided by the distributed cache to talk thereto, or can communicate through a representational state transfer (REST) API directly into the service.
Alternatively, in the embedded application model the cache can be embedded within the application itself (e.g., connecting the applications together to form a cluster—such as embedding caches in ASP.net instances to form a cluster of ASP.net machines, wherein upon storing an item in a local cache it can be viewed from other machines.) This embedding can further enable tagging and Language Integrated Query (LINQ) queries on the objects from a functionality perspective. LINQ queries can then be run natively on stored objects, and can be embedded in .Net applications.
The various aspects of the described tools and techniques will now be described with reference to the annexed drawings, wherein like numerals refer to like or corresponding elements throughout. However, the drawings and detailed description relating thereto are not intended to limit the claimed subject matter to the particular form disclosed. Rather, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the claimed subject matter. For example, data store updating may be implemented in an arrangement other than the disclosed cache layering arrangement.
II. Cache System & Tools
A. Cache Layering
FIG. 1 illustrates an exemplary layering arrangement that can enable aggregating memory capacity of multiple computers into a single unified cache. Such layering arrangement (100) can provide for a scalable system that can be tailored to different types of communication layers such as TCP/IP, and pluggable features can be further enabled for readily adapting to a user's preferences. The distributed cache system implementing the layering arrangement (100) can dynamically scale itself with growth of applications associated therewith, by addition of additional computers or storage processes as nodes to a cluster of machines and/or storage processes. As illustrated in FIG. 1, each of the cache nodes (131, 133) (1 to n, n being an integer) of the layering arrangement (100) can include a data manager component (110), an object manager component (112) and a distributed object manager component (114), the set up of which can be implemented in a modular fashion. The distributed object manager component (114) can be positioned on top of the object manager component (112), which can be placed on top of the data manager component (110). The data manager component (110) can supply basic data functions (e.g., hash functions), and the object manager component (112) can implement object facade thereon including cache objects, with the distributed object manager component (114) providing the distribution. As such, the object manager component (112) and data manager component (110) can act as local entities, wherein the distributed object manager component (114) can perform distributions.
Moreover, a clustering substrate (107) can establish clustering protocols among a plurality of nodes that form a single unified cache. For example, when a node is to join or leave the cluster, requisite operations for adding or leaving the cluster can be managed, wherein a distributed components availability substrate (111) can employ such information to manage operations (e.g., monitoring health of nodes, managing life cycles of nodes, creating a primary node on another machine). In addition, for each node, each of the components forming the layering arrangement can be pluggable based on user preferences, system features, and the like.
As explained earlier, the data manager component (110) (e.g., in memory) can provide primitive high performance data structures such as hash tables, Btrees, and the like. Since the data manager component (110) can be memory bound and all operations of the distributed cache can be atomic, the data manager component (110) can typically implement highly concurrent hash tables. The data manager component (110) and the hash table structures can further facilitate creating the infrastructure for supplying containers and indexes on containers. In addition, the data manager component (110) can provide simple eviction and expiration on these hash structures. Due to pluggable features supplied by the layering arrangement (100), users can plug in different types of data managers tailored to users' preferences, such as a transaction data manager or a disk paged data manager, or the like. Likewise, the object manager component (112) can provide object abstraction and can implement the concept of named caches and regions by employing data structures provided by the data manager component (110).
Similarly, the distributed object manager component (114) can employ the local object manager component (112) and integrate with the distributed components availability substrate (111) to provide the abstraction of the distributed cache. The distributed components availability substrate (111) can provide the transport and data consistency operations to make the system scalable and available. The distributed object manager component (114) can optionally be implemented as part of a client tier to facilitate dispatching requests (e.g., directly) to the nodes associated with the single unified cache.
In one particular aspect, the distributed object manager component (114) can further include a dispatch manager component (117) and a distributed manager component (119). The dispatch manager component (117) can further look up the routing table to dispatch the requests to a primary node (e.g., where a primary region is located) as part of a dynamically scalable distributed cache. Moreover, the dispatch manager component (117) can also be present in the client so that the client can directly dispatch requests to the primary node. For example, the distributed object manager component (114) on the receiving node can interact with a partition map to check if the node is indeed designated as the primary node as part of a plurality of nodes associated with the distributed cache, and can call the object manager component (112) to perform the operation. In the case of write operations, the distributed object manager component (114) can also communicate with a replicator to replicate the data to the secondary nodes. The distributed object manager component (114) can also interact with failover manager systems (not shown) to clone regions to create new secondary or primary nodes during reconfiguration procedures subsequent to possible failures.
The object manager component (112) can further include a notification management component (123) that can track changes to regions and objects, and can relay notifications to delegates listening to those events. Moreover, applications can also register delegates for notifications on any node which may be different from the primary node on which the object resides. The distributed object manager component (114) can further manage the propagation of notifications in a distributed fashion including providing high availability for such notifications when the primary node fails. For example, this can be handled by maintaining a local lookup table indexed by delegate id on the node where the application registers the delegate. The primary node that stores the object can maintain the delegate id and the originating node information. When an object changes, the distributed object manager component (114) of the primary node can notify all the originating nodes, passing along the delegate id.
Similarly, the distributed object manager component (114) associated with the receiver can employ the lookup table to call the appropriate delegate, thus providing the change information to the application in a distributed fashion. For example, notifications can be asynchronous and can further be backed up using the same secondary nodes. Accordingly, in the event of failures, the secondary nodes can attempt to deliver the pending notifications, wherein in the event of primary node failure, notifications can be resent because the primary node may not have synchronized the information regarding the delivered notifications before failure. Since all notifications can carry the region, key and version information, the application can use the version to ignore duplicate notifications. Following are some examples of callback syntax.
Example Region Level Callback
public delegate CacheCallback
elec_cbk = new CacheCallback( myclass.handler );
catalog.addCallback(“ElectronicsRegion”, elec_cbk);
Callback called for any updates to region
Example Object Level Callback
public delegate CacheItemRemovedCallback
elec_cbk = new CacheItemRemovedCallback( );
// Add the callback to the object ; the elec_cbk delegate will be called
// whenever the object changes regardless of where the object is present
catalog.Add(“ElectronicsRegion”, “key”, object, elec_cbk);
The availability substrate (111) can provide scalability and availability to systems that contain a storage component associated with the distributed cache. For example, the availability substrate can include load balancers, fail over managers, replicators and the like. A communication substrate (109) can provide for failure detection of nodes and reliable message delivery between nodes. The communication substrate (109) can interact with the availability substrate (111). Moreover, the communication substrate (109) can also provide the communication channels and cluster management. The communication substrate (109) can provide callbacks whenever a new node joins the cluster or when a node dies or fails to respond to exchanged messages (e.g., heart beat messages). Moreover, the communication substrate (109) can provide efficient point-to-point and multicast delivery channels, and can further provide reliable message delivery for implementing replication protocols. For example, the communication substrate (109) can support notifications by maintaining delegate information in cache items and triggering the notification when items are modified. The communication substrate (109) can also trigger eviction based on policies defined at the region or named cache level.
B. Cache Topology
FIG. 2 and FIG. 3 illustrate two topology models, namely an independent separate tier model, and an embedded application model, respectively. According to one particular aspect, in the independent and separate tier model of FIG. 2, the caching tier (220) can function as an independent separate tier by itself (which can be positioned between application servers and data servers). For example, in such configuration the distributed cache system can run as a service hosted either by WAS or windows service and can run separate from the application. The applications (201, 203, 205) (1 to m, m being an integer) can either employ the client stubs provided by the distributed cache to communicate with the cache system, or can communicate directly into the service, such as through a representational state transfer (REST) API.
Alternatively, in the embedded application model the cache system can be embedded within the application itself as illustrated in FIG. 3. Such can occur by connecting the applications (310, 312, 314) (1 to k, k being an integer) together to form a cluster; for instance as embedding caches in ASP.net instances to form a cluster of ASP.net machines, wherein upon storing an item in a local cache it can be viewed from other machines. For example, the distributed cache runtime dlls can be compiled with the application and the application can act as the cache host for the distributed cache runtime. All the thread pools and memory can come from the application's container.
In a related aspect, a Load Balancer (302) can dynamically redistribute load across the cluster in the event that one or more nodes are inundated. For example, data can be repartitioned to spread it to nodes that have less loads. All such nodes can periodically send their load status as part of the configuration metadata. The load balancer (302) can also periodically query the configuration to determine which nodes are overloaded and can be balanced. For example, distributing the load may include repartitioning the overloaded partition of data on a primary node and spreading the overloaded partition to one (or more) of its secondary nodes. This may involve only a change in the configuration data (partition map) and no data movement (since the secondary nodes already have the data). In other scenarios, the data may be distributed to other non-secondary nodes since the secondary nodes themselves might be loaded and cannot handle the additional load. In such cases, either the data partitions on the secondary nodes (for which this node is the primary) can be further load balanced; or non-secondary nodes can be chosen to distribute the load, in which case in addition to the changes in the partition map, data can be moved.
C. Distributed Cache Structure
FIG. 4 illustrates a distributed cache system (400) that includes the runtime deployed on multiple machines (410, 411, 412) (1 to m, m being an integer) that form a cluster. On each machine (410, 411, 412) there can exist one or more runtimes also referred to as cache hosts. Each cache host (422, 423) can host one or more named caches. The named caches can be configured in the distributed cache configuration file. Moreover, the named caches can be spread around all or a subset of the machines (410, 411, 412) in the cluster. In addition, one or more regions (433) can exist within each named cache. Such regions can be implicitly created by the distributed cache or can be explicitly defined by the application. In general, all items in a region (433) can be guaranteed to be co-located on a cache host, such as by assigning one or more regions (433) to a single partition (436), rather than spreading regions across multiple partitions. Such can improve performance for operations that operate on multiple items in the region, such as query and other set operations. Moreover, the node where a region is located can be deemed as the primary node of that region, wherein typically access to this region will be routed to the primary node for that region. If the named cache is configured to have “backups” for high availability, then one or more other nodes can be chosen to contain a copy of this data. Such nodes are called secondary nodes for that region. All changes made to the primary node can also be reflected on these secondary nodes. Thus if the primary node for a region fails, the secondary node can be used to retrieve the data without having to have logs written to disk.
The following is a code example that shows the creation of a named cache and region.
// CacheFactory class provides methods to return cache objects
// Create instance of cachefactory (reads appconfig)
DataCacheFactory fac = new DataCacheFactory( );
// Get a named cache from the factory
DataCache catalog = fac.GetCache(“catalogcache”);
//-------------------------------------------------------
// Simple Get/Put
catalog.Put(“toy-101”, new Toy(“thomas”, .,.));
// From the same or a different client
Toy toyObj = (Toy)catalog.Get(“toy-101”);
// ------------------------------------------------------
// Region based Get/Put
catalog.CreateRegion(“toyRegion”);
// Both toy and toyparts are put in the same region
catalog.Put(“toy-101”, new Toy( .,.), “toyRegion”);
catalog.Put(“toypart-100”, new ToyParts(...),“toyRegion”);
Toy toyObj = (Toy)catalog.Get(“toy-101”, “toyRegion”);
Each cache region (433) can include one or more cache items (440). Each cache item can include an identifier such as a key (442), a value or payload (444), and one or more tags (446). Cache regions may also be nested so that a cache region may include one or more other cache regions (433) and/or one or more cache items (440).
III. Cache Layering Techniques
FIG. 5 illustrates a related methodology (500) of distributing a cache. While the exemplary methods of FIG. 5 and other figures herein are illustrated and described herein as a series of blocks representative of various events and/or acts, the described tools and techniques are not limited by the illustrated ordering of such blocks. For instance, some acts or events may occur in different orders and/or concurrently with other acts or events, apart from the ordering illustrated herein, in accordance with the described tools and techniques. In addition, not all illustrated blocks, events or acts, may implement a methodology in accordance with the described tools and techniques. Moreover, the exemplary method and other methods according to the tools and techniques may be implemented in association with one or more of the methods illustrated and described herein, as well as in association with other systems and apparatus not illustrated or described.
In the methodology (500), cache available to the system can be identified (510), wherein the cache can be scalable to a plurality of machines via a layering arrangement (e.g., dynamic scaling by adding new nodes). The cache can be aggregated (520) into a single unified cache, as presented to a user thereof. Applications/data can be distributed (530) throughout the aggregated cache. In addition, the aggregated cache can be scaled (540) depending on the changing characteristics of the applications and/or data.
FIG. 6 illustrates a related methodology (600) of implementing a distributed cache via a layering arrangement. Initially a layering arrangement can be supplied (610). The layering arrangement can include a data manager component, an object manager component and a distributed object manager component—the set up of which can be implemented in a modular fashion; wherein, the distributed object manager component can be positioned on top of the object manager component, which can be placed on top of the data manager component.
The data manager component can be employed (620) to supply basic data functions (e.g., hash functions). Likewise, the object manager component can be employed (630) as an object facade thereon including cache objects, with the distributed object manager component providing the distribution. As such, the object manager component and data manager component can act as local entities, wherein the distribution manager can perform (640) the distributions.
IV. Unified Cache System & Data Types
FIG. 7 illustrates a cache system (700) that can provide a unified cache view (720) of one or more caches (730) for clients (740) spread across machines and/or processes. A cache system (700) including this unified cache view (720) can provide an explicit, distributed, in-memory application cache for all kinds of data with consistency and query. Such data can reside in different tiers (in different service boundaries) with different semantics. For example, data stored in the backend database can be authoritative and can make it desirable to have a high degree of data consistency and integrity. Typically, there tends to be a single authoritative source for any data instance. Most data in the mid-tier, being operated on by the business logic, tends to be a copy of the authoritative data. Such copies are typically suitable for caching. As such, understanding the different types of data and their semantics in different tiers can help define desired degrees of caching.
A. Reference Data
Reference data is a version of the authoritative data. It is either a direct copy (version) of the original data or aggregated and transformed from multiple data sources. Reference data is practically immutable—changing the reference data (or the corresponding authoritative data) creates a new version of the reference data. That is, every reference data version can be different from other reference data versions. Reference data is a candidate for caching; as the reference data typically does not change, it can be shared across multiple applications (users), thereby increasing the scale and performance.
For example, a product catalog application aggregating product information across multiple backend applications and data sources can be considered. The most common operation on the catalog data is the read operation (or browse); a typical catalog browse operation iterates over a large amount of product data, filters it, personalizes it, and then presents the selected data to the users. Key-based and query-based access is a common form of operation. Caching can be beneficial for catalog access. If not cached, each operation against such an aggregate catalog may have to be decomposed into operations on the underlying sources, with the underlying operations invoked, the responses collected, and the results aggregated into cohesive responses. Accessing the large sets of backend data for every catalog operation can be expensive, and can significantly impact the response time and throughput of the application. Caching the backend product data closer to the catalog application can significantly improve the performance and the scalability of the application. Similarly, aggregated flight schedules are another example of reference data.
Referenced data can be refreshed periodically, usually at configured intervals, from its sources, or refreshed when the authoritative data sources change. Access to reference data, though shared, is mostly read. Local updates are often performed for tagging (to help organize the data). To support large scale, reference data can be replicated in multiple caches on different machines in a cluster. As mentioned above, reference data can be readily cached, and can provide high scalability.
B. Activity Data
Activity data is generated by the currently executing activity. For example, the activity may be a business transaction. The activity data can originate as part of the business transaction, and at the close of the business transaction, the activity data can be retired to the backend data source as historical (or log) information. For example, the shopping cart data in an online buying application can be considered. There is one shopping cart, which is exclusive, for each online buying session. During the buying session, the shopping cart is cached and updated with products purchased, wherein the shopping cart is visible and accessible only to the buying transaction. Upon checkout, once the payment is applied, the shopping cart is retired (from the cache) to a backend application for further processing. Once the business transaction is processed by the backend application, the shopping cart information is logged for auditing (and historical) purposes.
While the buying session is active, the shopping cart is accessed both for read and write; however, it is not shared. This exclusive access nature of the activity data makes it suitable for distributed caching. To support large scalability of the buying application, the shopping carts can be distributed across the cluster of caches. Since the shopping carts are not shared, the set of shopping carts can be partitioned across the distributed cache. By dynamically configuring the distributed cache, the degree of scale can be controlled.
C. Resource Data
Both reference (shared read) and activity (exclusive write) data can be cached. It is to be appreciated that not all application data falls into these two categories. There is data that is shared, concurrently read and written into, and accessed by a large number of transactions. For example, considering an inventory management application, the inventory of an item has the description of the item and the current quantity. The quantity information is authoritative, volatile, and concurrently accessed by a large number of users for read/write. Such data is known as the resource data; the business logic (e.g., the order application logic) runs close to the resource data (e.g., quantity data). The resource data is typically stored in the backend data stores. However, for performance reasons it can be cached in the application tier. While caching the quantity data in memory on a single machine can provide performance improvements, a single cache typically cannot provide availability or scale when the order volume is high. Accordingly, the quantity data can be replicated in multiple caches across the distributed cache system.
V. Distributed Cache with Artificial Intelligence Component
FIG. 8 illustrates an artificial intelligence (AI) component (830) that can be employed in a distributed cache (800) to facilitate inferring and/or determining when, where, and/or how to scale the distributed cache and/or distribute application data therebetween. For example, such artificial intelligence component (830) can supply additional analysis with the distributed cache manager to improve distribution and/or scaling of the system. As used herein, the term “inference” refers generally to the process of reasoning about or inferring states of the system, environment, and/or user from a set of observations, as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic—that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
The AI component (830) can employ any of a variety of suitable AI-based schemes as described supra in connection with facilitating various aspects of the herein described tools and techniques. For example, a process for learning explicitly or implicitly how or what candidates are of interest can be facilitated via an automatic classification system and process. Classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed. For example, a support vector machine (SVM) classifier can be employed. Other classification approaches that can be employed include Bayesian networks, decision trees, and probabilistic classification models providing different patterns of independence. Classification as used herein also is inclusive of statistical regression that is utilized to develop models of priority.
As will be readily appreciated from the subject specification, classifiers can be explicitly trained (e.g., via generic training data) as well as implicitly trained (e.g., via observing user behavior or receiving extrinsic information) so that the classifier can be used to automatically determine, according to predetermined criteria, which answer to return to a question. For example, SVMs, which are well understood, are configured via a learning or training phase within a classifier constructor and feature selection module. A classifier is a function that maps an input attribute vector, x=(x1, x2, x3, x4, . . . , xn), to a confidence that the input belongs to a class, that is, f(x)=confidence(class). Moreover, a rule-based mechanism can further be employed for interaction of a routing manager and a routing layer associated therewith (e.g., load balancing, memory allocation, and the like).
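As a purely illustrative sketch (not the specific classifier contemplated herein), a confidence mapping of this form can be as simple as a linear score squashed into [0, 1]; the weights and inputs below are arbitrary examples.

```python
import math

def confidence(x, weights, bias=0.0):
    """Map an attribute vector x = (x1, ..., xn) to a confidence in [0, 1] that
    the input belongs to the class: f(x) = confidence(class)."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-score))      # logistic squashing of the linear score

print(confidence((0.2, 1.5, 0.0), weights=(0.8, -0.3, 1.1)))   # a value strictly between 0 and 1
```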
VI. Suitable Computing Environment
The word “exemplary” is used herein to mean serving as an example, instance or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Similarly, examples are provided herein solely for purposes of clarity and understanding and are not meant to limit the subject innovation or a portion thereof in any manner. It is to be appreciated that a myriad of additional or alternate examples could have been presented, but have been omitted for purposes of brevity.
Furthermore, all or portions of the described tools and techniques can be implemented as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware or any combination thereof to control a computer to implement the disclosed tools and techniques. For example, computer readable storage media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ). Additionally a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
In order to provide a context for the various aspects of the disclosed subject matter, FIGS. 9 and 10 as well as the following discussion are intended to provide a brief, general description of a suitable environment in which the various aspects of the disclosed subject matter may be implemented. While the subject matter has been described above in the general context of computer-executable instructions of a computer program that runs on a computer and/or computers, those skilled in the art will recognize that the tools and techniques also may be implemented in combination with other program modules.
As used in this application, the terms “component”, “system”, and “engine” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.
Generally, program modules include routines, programs, components, data structures, and the like, which perform particular tasks and/or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the innovative methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, mini-computing devices, mainframe computers, as well as personal computers, hand-held computing devices (e.g., personal digital assistant (PDA), phone, watch . . . ), microprocessor-based or programmable consumer or industrial electronics, and the like. The illustrated aspects may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. However, some, if not all aspects of the tools and techniques can be practiced on stand-alone computers. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
With reference to FIG. 9, an exemplary environment (910) for implementing various aspects of the described tools and techniques is described that includes a computer (912). The computer (912) can include a processing unit (914), a system memory (916), and a system bus (918). The system bus (918) can couple system components including, but not limited to, the system memory (916) to the processing unit (914). The processing unit (914) can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit (914).
The system bus (918) can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, 11-bit bus, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).
The system memory (916) can include volatile memory (920) and/or nonvolatile memory (922). The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer (912), such as during startup, can be stored in nonvolatile memory (922). By way of illustration, and not limitation, the nonvolatile memory (922) can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. The volatile memory (920) can include random access memory (RAM), which can act as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
Computer (912) can also include removable/non-removable, volatile/nonvolatile computer storage media. FIG. 9 illustrates a disk storage (924), wherein such disk storage (924) can include, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-60 drive, flash memory card, or memory stick. In addition, disk storage (924) can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM). To facilitate connection of the disk storage (924) to the system bus (918), a removable or non-removable interface is typically used, such as interface (926).
It is to be appreciated that FIG. 9 describes software that acts as an intermediary between users and the basic computer resources described in suitable operating environment (910). Such software can include an operating system (928). The operating system (928), which can be stored on disk storage (924), can act to control and allocate resources of the computer (912). System applications (930) can take advantage of the management of resources by operating system (928) through program modules (932) and program data (934) stored either in system memory (916) or on disk storage (924). It is to be appreciated that various components described herein can be implemented with various operating systems or combinations of operating systems.
A user can enter commands or information into the computer (912) through input device(s) (936). Input devices (936) include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit (914) through the system bus (918) via interface port(s) (938). Interface port(s) (938) include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) (940) use some of the same type of ports as input device(s) (936). Thus, for example, a USB port may be used to provide input to computer (912), and to output information from computer (912) to an output device (940). Output adapter (942) is provided to illustrate that there are some output devices (940) like monitors, speakers, and printers, among other output devices (940) that utilize such adapters. The output adapters (942) include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device (940) and the system bus (918). Other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) (944).
Computer (912) can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) (944). The remote computer(s) (944) can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to the computer (912). For purposes of brevity, only a memory storage device (946) is illustrated with remote computer(s) (944). Remote computer(s) (944) is logically connected to the computer (912) through a network interface (948) and then physically connected via a communication connection (950). The network interface (948) encompasses communication networks such as local-area networks (LAN) and wide area networks (WAN). LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet/IEEE 802.3, Token Ring/IEEE 802.5 and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
A communication connection(s) (950) refers to the hardware/software employed to connect the network interface (948) to the bus (918). While the communication connection (950) is shown for illustrative clarity inside computer (912), it can also be external to the computer (912). The hardware/software for connection to the network interface (948) includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards.
FIG. 10 is a schematic block diagram of a sample computing environment (1000) that can be employed for distributing cache. The environment (1000) can include one or more client(s) (1010). The client(s) (1010) can be hardware and/or software (e.g., threads, processes, computing devices). The environment (1000) can also include one or more server(s) (1030). The server(s) (1030) can also be hardware and/or software (e.g., threads, processes, computing devices). The servers (1030) can house threads to perform transformations by employing the components described herein, for example. One possible communication between a client (1010) and a server (1030) may be in the form of a data packet adapted to be transmitted between two or more computer processes. The environment (1000) can include a communication framework (1050) that can be employed to facilitate communications between the client(s) (1010) and the server(s) (1030). The client(s) (1010) can be operatively connected to one or more client data store(s) (1060) that can be employed to store information local to the client(s) (1010). Similarly, the server(s) (1030) can be operatively connected to one or more server data store(s) (1040) that can be employed to store information local to the servers (1030).
VII. Location Information Update Data System and Environment
FIG. 11 is a block diagram of a data system (1100) in conjunction with which one or more of the described embodiments may be implemented. The data system (1100) can include a data store (1110), which can be a distributed partitioned data store. For example, the data store (1110) can host a distributed partitioned cache, such as the distributed partitioned caches discussed above. Alternatively, the data store (1110) can be some other type of data store, such as a data store containing another type of distributed partitioned database, such as a distributed partitioned SQL database.
The data store (1110) can include a plurality of storage nodes (1112 and 1113), including storage node 1 (1112) and storage node N (1113). Each of the storage nodes (1112 and 1113) can store data (1115 and 1116) for the data store (1110). The data in the data store (1110) can be separated into partitions, with one or more partitions (1117, 1118, 1119) being stored in each storage node (1112, 1113). For example, storage node 1 (1112) may store a single partition (1117), while storage node N (1113) may store multiple partitions (1118 and 1119).
The partitions in the data store (1110) can be managed by a partition manager (1120), which can be located on a separate node, or even on a node that also acts as a storage node.
The partition manager (1120) can manage a version data structure (1122), such as a main partition map for the data store (1110). The version data structure (1122) can indicate an authoritative status of partitions in the data store (1110). The version data structure (1122) can include a partition identifier (1124) for each partition in the data store (1110), which can be a single identifier, an identifier range (such as a range of region identifiers), a list of identifiers, or some other type of identifier. The version data structure (1122) can also include location information (1126) for each partition, such as an identification of the primary and secondary storage node(s) (1112, 1113) where each partition is stored. The location information (1126) can include other information to allow the corresponding partition to be accessed, such as IP addresses, URLs, storage drive and path identifiers, ports for storage nodes (1112, 1113), etc. In addition, the version data structure (1122) can include version indicators (1128). For example, the version indicators (1128) can form a range from a minimum version indicator (1130) to a maximum version indicator (1132). Many different types of version indicators are possible, such as numbers (integers or non-integers), alphabetical ranges, alphanumeric ranges, etc.
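One possible in-memory layout for such a version data structure is sketched below; the field names and the use of integer version indicators are illustrative assumptions rather than a prescribed encoding.

```python
from dataclasses import dataclass

@dataclass
class PartitionEntry:
    """One entry of the version data structure (main partition map)."""
    partition_id: str    # e.g. a region-identifier range such as "4500-6500"
    location: str        # primary (and possibly secondary) storage node information
    version: int         # version indicator assigned at the last reconfiguration

def version_range(entries):
    """Return the minimum and maximum version indicators over the whole map."""
    versions = [entry.version for entry in entries]
    return min(versions), max(versions)

entries = [PartitionEntry("0-4499", "node-1", 34), PartitionEntry("4500-6500", "node-3", 122)]
print(version_range(entries))    # (34, 122)
```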
Referring now to FIG. 12, maintenance of a version data structure (1210) will be described, where a partition is reconfigured so that location information (1214) is updated, such as where a partition is moved from one node to another, added to a data store, split into two partitions on different nodes, etc. As with the version data structure of FIG. 11, the version data structure (1210) of FIG. 12 includes partition identifiers (1212), location information (1214) and version indicators (1216) for corresponding partitions in the data store. However, different formats of version data structures could be used. For example, a version data structure can be a single unified data structure, or a disconnected set of version indicators listed as attributes of separate objects. The version indicators (1216) can form an existing range (1218), which can extend from a version indicator that is equal to a value of a minimum version indicator for the existing range (1218) (“=existing minversion”), to a version indicator that is equal to a value of a maximum version for the existing range (1218) (“=existing maxversion”).
A partition with a partition identifier of “4500-6500” (which could indicate that regions from region identifier 4500 to region 6500 are included in the partition) can be reconfigured, such as by being moved to a new node. When that happens, the “4500-6500” partition entry in the data structure (1210) can be updated with new location information. In addition, the version indicator of the “4500-6500” partition entry can be set to a new value that is outside the existing range (1218) so that the data structure (1210) then has a new range (1220) of version indicators (1216). For example, a data manager can maintain a “MaxVersion” integer variable that is equal to the maximum value of the version indicators (1216). When location information (1214) is updated for a partition, then the MaxVersion variable can be incremented by 1 and the version indicator for the updated partition can be set equal to the new MaxVersion. In this way, the version indicators can range from a highest value corresponding to a most recently configured partition to a lowest value corresponding to a least recently configured partition. Other techniques can also be used to produce similar results. For example, a minimum MinVersion variable could be decremented instead of incrementing the maximum MaxVersion variable. Such techniques can result in gaps in the range of version indicators (1216), although if such gaps were undesirable, then other version indicators could be updated to fill in the gaps.
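A minimal sketch of this versioning rule, assuming a MaxVersion counter and integer version indicators as in the example above (the class and method names are hypothetical):

```python
class PartitionMap:
    """Tracks location information and version indicators for each partition."""

    def __init__(self):
        self.entries = {}       # partition_id -> (location, version indicator)
        self.max_version = 0    # highest version indicator assigned so far

    def reconfigure(self, partition_id, new_location):
        """Record new location information and assign a version indicator that
        lies outside the existing range (MaxVersion + 1), so the most recently
        reconfigured partition always carries the highest indicator."""
        self.max_version += 1
        self.entries[partition_id] = (new_location, self.max_version)
        return self.max_version

pmap = PartitionMap()
pmap.reconfigure("0-4499", "node-1")         # assigned version indicator 1
pmap.reconfigure("4500-6500", "node-3")      # the moved partition now carries the highest indicator, 2
```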
Referring back to FIG. 11, the data system (1100) can also include one or more client nodes (1160, 1162), which can request access to the data (1115, 1116) stored in the storage nodes (1112, 1113) in the data store (1110). Many different configurations of the nodes (1112, 1113, 1160, 1162) are possible. For example, some or all of the storage nodes (1112, 1113) could be running on the same physical and/or virtual computer machine(s) as some or all of the client nodes (1160, 1162), or all nodes (1112, 1113, 1160, 1162) could be running on separate physical or virtual machines.
The partition manager (1120) can communicate with the storage nodes (1112 and 1113) and the client nodes (1160 and 1162), such as through the layering arrangement discussed above. For example, the partition manager (1120) can communicate with the storage nodes (1112) using the clustering substrate (107) and the communication substrate (109) discussed above with reference to FIG. 1, and in some implementations the partition manager (1120) can be considered to be part of the clustering substrate (107).
In addition, the nodes (1112, 1113, 1160, 1162) can communicate with each other. To facilitate such inter-nodal communication, each storage node (1112, 1113) can access a corresponding partition map or routing table (1140, 1142), which can include location information (1144) and partition identifiers (1146) for the partitions in the data store (1110). Each storage node (1112, 1113) can also access version information data structures (1148, 1149), which can indicate ranges of version indicators corresponding to updated location information (1144) and partition identifiers (1146) that the storage node (1112, 1113) has already received. Similarly, each client node (1160, 1162) can also include a routing table (1170, 1172), which can include location information (1174) and partition identifiers (1176) for the partitions in the data store (1110). Each client node (1160, 1162) can also access version information data structures (1178, 1179), which can indicate ranges of version indicators corresponding to updated location information (1174) and partition identifiers (1176) that the client node (1160, 1162) has already received. The routing tables and version information data structures can take various forms. For example, the routing table and version information data structure for each node can be separate, or the routing tables and version information data structure for each node can be part of a single unified structure.
Referring still to FIG. 11, some communications between components of the data system (1100) will be described. The partition manager (1120) can receive status updates (1182) from the storage nodes (1112, 1113). The partition manager (1120) can use the status updates to manage the partitions. For example, the partition manager (1120) can reconfigure partitions if one or more of the storage nodes (1112, 1113) is not operating properly or is overloaded. The partition manager (1120) can also send out a broadcast message (1184) to all the nodes (1112, 1113, 1160, 1162). The broadcast message (1184) can include location information (1126) that has been updated since a previous broadcast message (1184) was sent out, due to corresponding partitions having been reconfigured since the previous broadcast message (1184). The broadcast message (1184) can also include partition identifiers (1124) and version indicators (1128) corresponding to the updated partition location information (1126). As an example, the broadcast message (1184) can list ranges of version indicators corresponding to partition location information updates in the message (1184) with sets of {StartVersion, EndVersion}, where StartVersion is one below the range, and EndVersion is the last value in the range. Alternatively, such ranges could be indicated in other ways. Such broadcast messages (1184) can be sent out periodically according to a schedule, when a predetermined number of partitions have been reconfigured, or according to some other scheme.
Each broadcast message (1184) may include only the location information update versions created since the immediately previous broadcast message (1184). Alternatively, a broadcast message (1184) may include additional location information updates, such as all versions created since the fifth previous broadcast message. More generally, where "i" refers to the i-th broadcast, the partition manager (1120) can choose StartVersion_i=EndVersion_(i-1), so that only the updated location information versions created since the immediately previous broadcast (i-1) are included in the present broadcast (i). Alternatively, the partition manager (1120) can choose StartVersion_i=EndVersion_(i-x), so that the updated location information for a reconfigured partition has x chances to be broadcast, which can reduce the impact of broadcast messages being lost.
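A minimal sketch of this broadcast construction, assuming integer version indicators stored in a simple dictionary as in the earlier sketches (all names here are illustrative, not part of the specification):

```python
def build_broadcast(partition_map, end_version_history, x=1):
    """Build the i-th broadcast message: include every location-information
    update whose version indicator lies in (StartVersion_i, EndVersion_i], with
    StartVersion_i = EndVersion_(i-x), so each update is carried by x successive
    broadcasts in case some broadcast messages are lost.

    partition_map: partition_id -> (location, version indicator)
    end_version_history: EndVersion values of the broadcasts already sent
    """
    end_version = max(version for _, version in partition_map.values())
    start_version = end_version_history[-x] if len(end_version_history) >= x else 0
    updates = {pid: (loc, ver) for pid, (loc, ver) in partition_map.items()
               if start_version < ver <= end_version}
    end_version_history.append(end_version)      # remember EndVersion_i for later broadcasts
    return {"range": (start_version, end_version), "updates": updates}

history = []
pm = {"0-4499": ("node-1", 1), "4500-6500": ("node-3", 2)}
print(build_broadcast(pm, history))              # carries both updates; range is (0, 2)
```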
As another example, client nodes (1160, 1162) can send access request messages to storage nodes (1112, 1113), requesting access to the data partitions (1117, 1118, 1119) stored in those storage nodes (1112, 1113). For example, client node 1 (1160) can send an access request message (1190) to storage node N (1113). For example, the access request message may be a put or get request. If the access request message (1190) is for data in partition PA (1117) in storage node 1 (1112), but the location information (1174) in the routing table (1170) of client node 1 (1160) is out of date and indicates the partition PA (1117) is on storage node N (1113), then storage node N (1113) can send a failure message (1192) in response to the access request message (1190). For example, the routing table (1170) may be out of date because changes have been made since the last broadcast message (1184) was sent by the partition manager (1120). As another example, the routing table (1170) may be out of date because client node 1 (1160) did not receive one or more of the broadcast messages (1184) because the message(s) were lost or delayed.
In response to receiving the failure message (1192), client node 1 (1160) can send a location information request message (1194) to the partition manager (1120), requesting updated location information for the partition PA (1117). Indeed, the request message (1194) can request all updated location information (1126) that client node 1 (1160) has not already received. For example, the request message (1194) may include version indicator ranges from the version information data structure (1178), indicating version indicators corresponding to location information (1174) that client node 1 has already received. In response to receiving the request message (1194), the partition manager (1120) can send a location information response message (1196), which can include location information for the partition PA (1117). Additionally, the response message (1196) may include all updated location information that client node 1 (1160) has not yet received, such as location information corresponding to version indicators (1128) that were not within the range(s) of version indicators listed in the request message (1194). Client node 1 (1160) can then send a second access request message (not shown) to storage node 1 (1112), requesting access to data in partition PA (1117). Storage node 1 (1112) can then perform one or more actions requested in the second access request message, and may send a response message including data and/or a confirmation that requested actions have been performed on data.
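On the partition manager (1120) side, assembling such a response message could be sketched as follows; the names are illustrative, and a range {StartVersion, EndVersion} is taken to cover versions StartVersion + 1 through EndVersion, consistent with the worked example that follows.

```python
def build_location_response(partition_map, client_ranges):
    """Return every location entry whose version indicator falls outside the
    ranges the requesting node reports that it already holds."""
    def already_held(version):
        return any(start < version <= end for start, end in client_ranges)
    return {pid: (loc, ver) for pid, (loc, ver) in partition_map.items()
            if not already_held(ver)}

partition_map = {"0-4499": ("node-1", 124), "4500-6500": ("node-3", 137)}
print(build_location_response(partition_map, [(33, 122), (130, 135)]))
# both entries are returned, since versions 124 and 137 lie outside the held ranges
```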
As an example, the request message (1194) could list the ranges with sets of StartVersion, EndVersion, where StartVersion is one below the range, and EndVersion is the last value in the range. For example, the request message (1194) may list the following ranges: {33, 122}, {130, 135}, which can be the same ranges that are listed in the version information data structure (1178) for client node 1 (1160). This can indicate that the client node 1 (1160) already has location information corresponding to version indicators 34 to and including 122 and version indicators 131 to and including 135. If the main version data structure (1122) includes version indicators (1128) from 34 to and including 125 and from 128 to and including 138, then the response message (1196) can include updated location information versions corresponding to version indicators 123 to 125, 128-130, and 136-138. The response message can also include the following StartVersion and EndVersion sets to indicate the new location information versions that are included in the response message: {122, 125}, {127, 130}, {135, 138}. Upon receiving the response message (1196), client node 1 (1160) can update its routing table C1 (1170), and can update the version information data structure (1178) by merging the ranges already indicated in the existing data structure (1178) with the new version ranges received in the response message (1196). Accordingly, the updated version information data structure (1178) can indicate the following StartVersion and EndVersion sets: {33, 125}, {127, 138}.
In merging the StartVersion and EndVersion ranges of received broadcast messages (1184) and response messages (1196) with existing version information data structures (1148, 1149, 1178, 1179), the client or storage nodes (1112, 1113, 1160, and 1162) can perform the following actions: (1) if {StartVersion, EndVersion} is a sub-range of any one of the existing ranges, it can be ignored; (2) if {StartVersion, EndVersion} has no overlap with any existing ranges, it can be inserted into the version indicator range list in the version information data structure (1148, 1149, 1178, 1179), and the node can go through all partition entries for the range and update corresponding entries in its routing table (1140, 1142, 1170, 1172) if the response message (1196) includes newer versions of the location information; (3) if {StartVersion, EndVersion} overlaps any of the existing ranges, the node can try to extend one existing range to include the new range, and the node can go through all partition entries in the response message (1196) and, if there is a newer version of location information for a partition, update the corresponding entry in its routing table (1140, 1142, 1170, 1172); and (4) the node can go through the range list in the version information data structure (1178) to see if there are ranges that can be merged (for example, if EndVersion of range i is greater than or equal to StartVersion of range i+1, range i and range i+1 can be merged).
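The range bookkeeping portion of these actions (inserting a new range and coalescing overlapping or adjacent ranges) can be illustrated with a short sketch that reproduces the worked example above; the function name is hypothetical, and a range (s, e) is taken to cover versions s + 1 through e.

```python
def merge_ranges(existing, new_ranges):
    """Merge {StartVersion, EndVersion} pairs into a node's version information
    data structure; two ranges are coalesced whenever EndVersion of one is
    greater than or equal to StartVersion of the next."""
    ranges = sorted(existing + new_ranges)            # order by StartVersion
    merged = [ranges[0]]
    for start, end in ranges[1:]:
        last_start, last_end = merged[-1]
        if start <= last_end:                         # overlap or adjacency: extend the range
            merged[-1] = (last_start, max(last_end, end))
        else:                                         # disjoint: keep as a separate range
            merged.append((start, end))
    return merged

# Worked example from the text: the client holds {33, 122} and {130, 135}; the
# response message carries {122, 125}, {127, 130} and {135, 138}.
print(merge_ranges([(33, 122), (130, 135)], [(122, 125), (127, 130), (135, 138)]))
# -> [(33, 125), (127, 138)]
```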
VIII. Distributed Data Store Location Update Techniques
The techniques described herein can be performed with the systems described above or with other systems. In addition, parts of the techniques can be performed alone or combined with other techniques in part or in their entirety. Referring to FIG. 13, a location update technique will be described. In the technique, version indicators can be maintained (1310), where the version indicators can be within an existing range of indicators, and each version indicator in the existing range can be associated with a data partition in a distributed data store. A reconfiguration of one or more partitions in the data store can be identified (1320), such as by a partition manager, and a new version indicator can be assigned (1330) to the reconfigured partition. The new version indicator can be outside the existing range of version indicators, so that a new range of version indicators includes the new version indicator.
The technique of FIG. 13 can also include using the version indicators to identify (1340) recently reconfigured partitions in the data store, such as partitions that have been reconfigured since a previous broadcast message was sent. Updated location information for the recently reconfigured partitions can be sent (1350) to a plurality of nodes, such as by sending a broadcast message to storage nodes and/or client nodes in a data system.
In addition, version indicators can be used to identify out-of-date location information at a node and to send to the node updated location information corresponding to the out-of-date location information. For example, this can include sending a broadcast message that includes updates that have not previously been sent to the node. As another example, this can include receiving (1360) from a requesting node a request message that requests updated location information for data in the data store. The request message can identify location information already stored at the requesting node, such as by identifying one or more ranges of version indicators corresponding to location information already stored at the requesting node. In response to the request message, a response message can be sent (1370) to the requesting node. The response message can include available location information corresponding to version indicators outside the range(s) of version indicators identified by the request message.
Referring now to FIG. 14, another location update technique will be described. In the technique, a first access request message can be sent (1410) to a first node in a distributed data store. The first access request message can request access to data in the data store. In response to the first access request message, a failure notification may be received (1420) from the first node. In response, a location information request message can be sent (1430), such as to a partition manager, requesting updated location information for the data requested in the first access request message. In response to the location information request message, updated location information for the data requested in the first access request message can be received (1440). A version information data structure that represents the one or more ranges of version indicators corresponding to location information that has already been received can be updated (1450) to include one or more ranges of version indicators corresponding to the updated location information received in response to the request message. In addition, the updated location information can be used to send (1460) a second access request message to a second node in the data store. The second access request message can request access to the data to which access was requested in the first access request message, where the updated location information identified the second node as a location for the data.
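The client-side flow of FIG. 14 could be sketched as follows; the callable parameters stand in for the messaging machinery described above and are purely hypothetical.

```python
def access_data(partition, routing_table, version_ranges, send_request, fetch_updates):
    """Send an access request to the node named in the local routing table; on a
    failure notification, request newer location information (identifying the
    version-indicator ranges already held), fold the reply into the local
    routing table and version ranges, then retry against the updated location.

    send_request(node, partition) -> (succeeded, payload)
    fetch_updates(version_ranges) -> (new_locations, new_ranges)
    """
    succeeded, payload = send_request(routing_table[partition], partition)
    if not succeeded:
        new_locations, new_ranges = fetch_updates(version_ranges)
        routing_table.update(new_locations)     # refresh out-of-date entries
        version_ranges.extend(new_ranges)       # record the versions now held
        # (a fuller sketch would coalesce overlapping ranges as in section VII)
        succeeded, payload = send_request(routing_table[partition], partition)
    return succeeded, payload
```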
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. One or more computer-readable storage media having computer-executable instructions embodied thereon that, when executed, perform acts comprising:
maintaining a plurality of version indicators in a version data structure, the version indicators being within an existing range, and each version indicator being associated with a data partition in a distributed data store;
identifying a partition reconfiguration in the data store, the reconfiguration being associated with a reconfigured partition of a plurality of partitions in the data store; and
assigning a new version indicator that is outside the existing range to the reconfigured partition, so that a new range of version indicators includes the new version indicator.
2. The one or more computer-readable media of claim 1, wherein the version indicators comprise numbers.
3. The one or more computer-readable media of claim 1, wherein the version indicators comprise integers.
4. The one or more computer-readable media of claim 1, wherein the new version indicator has a value that is higher than a highest value of the existing range.
5. The one or more computer-readable media of claim 1, wherein the data store is a cache.
6. The one or more computer-readable media of claim 1, wherein the acts further comprise:
using the version indicators to identify recently reconfigured data partitions in the distributed data store; and
sending updated location information for the recently reconfigured data partitions to a plurality of nodes.
7. The one or more computer-readable media of claim 1, wherein the acts further comprise using the version indicators to identify out-of-date location information at a node and sending to the node updated location information corresponding to the out-of-date location information.
8. The one or more computer-readable media of claim 1, wherein the acts further comprise:
using the version indicators to identify out-of-date location information at a plurality of client nodes that are configured to communicate with a plurality of storage nodes to access data in the distributed data store; and
sending a broadcast message to the plurality of client nodes, the broadcast message comprising updated location information corresponding to the out-of-date location information.
9. The one or more computer-readable media of claim 1, wherein the acts further comprise:
receiving from a requesting node a request message that requests updated location information for data in the distributed data store, the request message identifying one or more ranges of version indicators corresponding to location information already stored at the requesting node; and
sending a response message to a requesting node in response to the request message, the response message comprising location information corresponding to version indicators outside the one or more ranges of version indicators identified by the request message.
10. The one or more computer-readable media of claim 1, wherein the data store is a cache, and the acts further comprise:
using the version indicators to identify out-of-date location information at a plurality of client nodes that are configured to communicate with a plurality of storage nodes to access data in the distributed data store;
sending a broadcast message to the plurality of client nodes, the broadcast message comprising updated location information corresponding to the out-of-date location information;
receiving from a requesting node of the client nodes a request message that requests updated location information for data in the distributed data store, the request message identifying one or more ranges of version indicators corresponding to location information already stored at the requesting node; and
sending a response message to a requesting node in response to the request message, the response message comprising location information corresponding to version indicators outside the one or more ranges of version indicators identified by the request message.
11. A computer-implemented method comprising:
maintaining a plurality of version indicators in a version data structure, the version indicators being within an existing range, and each version indicator being associated with a data partition in a distributed data store;
identifying a partition reconfiguration in the data store, the reconfiguration being associated with a reconfigured partition of a plurality of partitions in the data store; and
assigning a new version indicator that is outside the existing range to the reconfigured partition, so that a new range of version indicators includes the new version indicator.
12. The method of claim 11, wherein the version indicators comprise numbers.
13. The method of claim 11, wherein the version indicators comprise integers.
14. The method of claim 11, wherein the new version indicator has a value that is higher than a highest value of the existing range.
15. The method of claim 11, wherein the data store is a cache.
16. The method of claim 11, further comprising:
using the version indicators to identify recently reconfigured data partitions in the distributed data store; and
sending updated location information for the recently reconfigured data partitions to a plurality of nodes.
17. The method of claim 11, further comprising using the version indicators to identify out-of-date location information at a node and sending to the node updated location information corresponding to the out-of-date location information.
18. The method of claim 11, further comprising:
using the version indicators to identify out-of-date location information at a plurality of client nodes that are configured to communicate with a plurality of storage nodes to access data in the distributed data store; and
sending a broadcast message to the plurality of client nodes, the broadcast message comprising updated location information corresponding to the out-of-date location information.
19. The method of claim 11, further comprising:
receiving from a requesting node a request message that requests updated location information for data in the distributed data store, the request message identifying one or more ranges of version indicators corresponding to location information already stored at the requesting node; and
sending a response message to a requesting node in response to the request message, the response message comprising location information corresponding to version indicators outside the one or more ranges of version indicators identified by the request message.
20. The method of claim 11, wherein the data store is a cache, and the method further comprises:
using the version indicators to identify out-of-date location information at a plurality of client nodes that are configured to communicate with a plurality of storage nodes to access data in the distributed data store;
sending a broadcast message to the plurality of client nodes, the broadcast message comprising updated location information corresponding to the out-of-date location information;
receiving from a requesting node of the client nodes a request message that requests updated location information for data in the distributed data store, the request message identifying one or more ranges of version indicators corresponding to location information already stored at the requesting node; and
sending a response message to a requesting node in response to the request message, the response message comprising location information corresponding to version indicators outside the one or more ranges of version indicators identified by the request message.
US12/466,390 2009-05-15 2009-05-15 Location updates for a distributed data store Expired - Fee Related US8108612B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/466,390 US8108612B2 (en) 2009-05-15 2009-05-15 Location updates for a distributed data store
US13/337,093 US8484417B2 (en) 2009-05-15 2011-12-24 Location updates for a distributed data store

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/466,390 US8108612B2 (en) 2009-05-15 2009-05-15 Location updates for a distributed data store

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/337,093 Division US8484417B2 (en) 2009-05-15 2011-12-24 Location updates for a distributed data store

Publications (2)

Publication Number Publication Date
US20100293334A1 US20100293334A1 (en) 2010-11-18
US8108612B2 true US8108612B2 (en) 2012-01-31

Family

ID=43069434

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/466,390 Expired - Fee Related US8108612B2 (en) 2009-05-15 2009-05-15 Location updates for a distributed data store
US13/337,093 Active 2029-05-25 US8484417B2 (en) 2009-05-15 2011-12-24 Location updates for a distributed data store

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/337,093 Active 2029-05-25 US8484417B2 (en) 2009-05-15 2011-12-24 Location updates for a distributed data store

Country Status (1)

Country Link
US (2) US8108612B2 (en)

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101562543B (en) * 2009-05-25 2013-07-31 阿里巴巴集团控股有限公司 Cache data processing method and processing system and device thereof
US8244698B2 (en) * 2010-07-23 2012-08-14 Google Inc. Encoding a schema version in table names
US8392368B1 (en) * 2010-08-27 2013-03-05 Disney Enterprises, Inc. System and method for distributing and accessing files in a distributed storage system
US8934925B2 (en) * 2010-10-15 2015-01-13 Microsoft Corporation Mobile messaging message notifications processing
US8793250B1 (en) 2010-12-17 2014-07-29 Amazon Technologies, Inc. Flexible partitioning of data
US8943034B2 (en) * 2011-12-22 2015-01-27 Sap Se Data change management through use of a change control manager
US20130268930A1 (en) * 2012-04-06 2013-10-10 Arm Limited Performance isolation within data processing systems supporting distributed maintenance operations
US8965921B2 (en) * 2012-06-06 2015-02-24 Rackspace Us, Inc. Data management and indexing across a distributed database
US9639539B1 (en) * 2012-09-28 2017-05-02 EMC IP Holding Company LLC Method of file level archiving based on file data relevance
US8769031B1 (en) 2013-04-15 2014-07-01 Upfront Media Group, Inc. System and method for implementing a subscription-based social media platform
US9619155B2 (en) * 2014-02-07 2017-04-11 Coho Data Inc. Methods, systems and devices relating to data storage interfaces for managing data address spaces in data storage devices
US20160050112A1 (en) * 2014-08-13 2016-02-18 PernixData, Inc. Distributed caching systems and methods
US10102086B2 (en) * 2014-12-19 2018-10-16 Futurewei Technologies, Inc. Replicated database distribution for workload balancing after cluster reconfiguration
US9959332B2 (en) * 2015-01-21 2018-05-01 Futurewei Technologies, Inc. System and method for massively parallel processor database
US10541936B1 (en) 2015-04-06 2020-01-21 EMC IP Holding Company LLC Method and system for distributed analysis
US10791063B1 (en) 2015-04-06 2020-09-29 EMC IP Holding Company LLC Scalable edge computing using devices with limited resources
US10776404B2 (en) 2015-04-06 2020-09-15 EMC IP Holding Company LLC Scalable distributed computations utilizing multiple distinct computational frameworks
US10404787B1 (en) 2015-04-06 2019-09-03 EMC IP Holding Company LLC Scalable distributed data streaming computations across multiple data processing clusters
US10425350B1 (en) 2015-04-06 2019-09-24 EMC IP Holding Company LLC Distributed catalog service for data processing platform
US10812341B1 (en) * 2015-04-06 2020-10-20 EMC IP Holding Company LLC Scalable recursive computation across distributed data processing nodes
US10511659B1 (en) 2015-04-06 2019-12-17 EMC IP Holding Company LLC Global benchmarking and statistical analysis at scale
US10515097B2 (en) 2015-04-06 2019-12-24 EMC IP Holding Company LLC Analytics platform for scalable distributed computations
US10706970B1 (en) 2015-04-06 2020-07-07 EMC IP Holding Company LLC Distributed data analytics
US10505863B1 (en) 2015-04-06 2019-12-10 EMC IP Holding Company LLC Multi-framework distributed computation
US10496926B2 (en) 2015-04-06 2019-12-03 EMC IP Holding Company LLC Analytics platform for scalable distributed computations
US10528875B1 (en) 2015-04-06 2020-01-07 EMC IP Holding Company LLC Methods and apparatus implementing data model for disease monitoring, characterization and investigation
US10541938B1 (en) 2015-04-06 2020-01-21 EMC IP Holding Company LLC Integration of distributed data processing platform with one or more distinct supporting platforms
US10277668B1 (en) 2015-04-06 2019-04-30 EMC IP Holding Company LLC Beacon-based distributed data processing platform
US10509684B2 (en) 2015-04-06 2019-12-17 EMC IP Holding Company LLC Blockchain integration for scalable distributed computations
US10860622B1 (en) 2015-04-06 2020-12-08 EMC IP Holding Company LLC Scalable recursive computation for pattern identification across distributed data processing nodes
US10069941B2 (en) 2015-04-28 2018-09-04 Microsoft Technology Licensing, Llc Scalable event-based notifications
US9760591B2 (en) 2015-05-14 2017-09-12 Walleye Software, LLC Dynamic code loading
US10977276B2 (en) * 2015-07-31 2021-04-13 International Business Machines Corporation Balanced partition placement in distributed databases
US9513968B1 (en) * 2015-12-04 2016-12-06 International Business Machines Corporation Dynamic resource allocation based on data transferring to a tiered storage
US10656861B1 (en) 2015-12-29 2020-05-19 EMC IP Holding Company LLC Scalable distributed in-memory computation
FR3050848B1 (en) * 2016-04-29 2019-10-11 Neuralcat SERVER ARCHITECTURE AND DATA REDISTRIBUTION METHOD FOR DISTRIBUTION OF A VERSIONED GRAPH.
US10374968B1 (en) 2016-12-30 2019-08-06 EMC IP Holding Company LLC Data-driven automation mechanism for analytics workload distribution
US10198469B1 (en) 2017-08-24 2019-02-05 Deephaven Data Labs Llc Computer data system data source refreshing using an update propagation graph having a merged join listener
US11144530B2 (en) 2017-12-21 2021-10-12 International Business Machines Corporation Regulating migration and recall actions for high latency media (HLM) on objects or group of objects through metadata locking attributes
CN110601868B (en) * 2018-06-13 2022-06-21 阿里巴巴集团控股有限公司 Distributed system and method for distributing configuration information in real time and electronic equipment
EP3938895A1 (en) 2019-03-15 2022-01-19 INTEL Corporation Graphics processor data access and sharing
US11934342B2 (en) 2019-03-15 2024-03-19 Intel Corporation Assistance for hardware prefetch in cache access
BR112021016138A2 (en) 2019-03-15 2022-01-04 Intel Corp Apparatus, method, general purpose graphics processor and data processing system
EP4024223A1 (en) * 2019-03-15 2022-07-06 Intel Corporation Systems and methods for cache optimization
US11861761B2 (en) 2019-11-15 2024-01-02 Intel Corporation Graphics processing unit processing and caching improvements
US11727306B2 (en) 2020-05-20 2023-08-15 Bank Of America Corporation Distributed artificial intelligence model with deception nodes
US11436534B2 (en) 2020-05-20 2022-09-06 Bank Of America Corporation Distributed artificial intelligence model
CN112258266B (en) * 2020-10-10 2023-09-26 同程网络科技股份有限公司 Distributed order processing method, device, equipment and storage medium
CN115277735B (en) * 2022-07-20 2023-11-28 北京达佳互联信息技术有限公司 Data processing method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7343432B1 (en) * 2003-09-19 2008-03-11 Emc Corporation Message based global distributed locks with automatic expiration for indicating that said locks is expired
JP4767139B2 (en) * 2006-09-15 2011-09-07 富士通株式会社 Storage management program, storage management device, and storage management method
US9361229B2 (en) * 2008-08-25 2016-06-07 International Business Machines Corporation Distributed shared caching for clustered file systems
US8543773B2 (en) * 2008-08-25 2013-09-24 International Business Machines Corporation Distributed shared memory

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4853843A (en) * 1987-12-18 1989-08-01 Tektronix, Inc. System for merging virtual partitions of a distributed database
US5634053A (en) 1995-08-29 1997-05-27 Hughes Aircraft Company Federated information management (FIM) system and method for providing data site filtering and translation for heterogeneous databases
US6219045B1 (en) 1995-11-13 2001-04-17 Worlds, Inc. Scalable virtual world chat client-server system
US5864854A (en) 1996-01-05 1999-01-26 Lsi Logic Corporation System and method for maintaining a shared cache look-up table
US20050273779A1 (en) 1996-06-07 2005-12-08 William Cheng Automatic updating of diverse software products on multiple client computer systems
US6185611B1 (en) 1998-03-20 2001-02-06 Sun Microsystem, Inc. Dynamic lookup service in a distributed system
US6341311B1 (en) 1998-05-29 2002-01-22 Microsoft Corporation Directing data object access requests in a distributed cache
US7099927B2 (en) 2000-09-01 2006-08-29 Ncr Corporation Downloading and uploading data in information networks using proxy server clients
US7103650B1 (en) 2000-09-26 2006-09-05 Microsoft Corporation Client computer configuration based on server computer update
US7188145B2 (en) 2001-01-12 2007-03-06 Epicrealm Licensing Llc Method and system for dynamic distributed data caching
US6901410B2 (en) 2001-09-10 2005-05-31 Marron Pedro Jose LDAP-based distributed cache technology for XML
US6721907B2 (en) 2002-06-12 2004-04-13 Zambeel, Inc. System and method for monitoring the state and operability of components in distributed computing systems
US6973546B2 (en) 2002-09-27 2005-12-06 International Business Machines Corporation Method, system, and program for maintaining data in distributed caches
US20060015561A1 (en) 2004-06-29 2006-01-19 Microsoft Corporation Incremental anti-spam lookup and update service
US20060230305A1 (en) 2005-04-07 2006-10-12 Microsoft Corporation Retry request overload protection
US20070094491A1 (en) * 2005-08-03 2007-04-26 Teo Lawrence C S Systems and methods for dynamically learning network environments to achieve adaptive security
US20070198478A1 (en) 2006-02-15 2007-08-23 Matsushita Electric Industrial Co., Ltd. Distributed meta data management middleware
US20080243899A1 (en) 2007-03-30 2008-10-02 Freescale Semiconductor, Inc. Systems, apparatus and method for performing digital pre-distortion based on lookup table gain values
US20090006529A1 (en) 2007-06-27 2009-01-01 Microsoft Corporation Client side based data synchronization and storage
US20090044055A1 (en) 2007-08-10 2009-02-12 Asustek Computer Inc. Method for servicing hardware of computer system and method and system for guiding to solve errors
US20100280991A1 (en) * 2009-05-01 2010-11-04 International Business Machines Corporation Method and system for versioning data warehouses

Non-Patent Citations (20)

* Cited by examiner, † Cited by third party
Title
"Patterns & Practices", Retrieved at <<http://www.codeplex.com/AppArchGuide/Wiki/View.aspx?title=Chapter%2017%20-%20Rich%20Client%20Applications&referringTitle=Home>> Feb. 9, 2009, pp. 1-9.
"Phishing Protection: Server Spec", Retrieved at <<https://wiki.mozilla.org/Phishing—Protection:—Server—Spec>> Feb. 9, 2009, pp. 1-9.
"Phishing Protection: Server Spec", Retrieved at > Feb. 9, 2009, pp. 1-9.
"Software Update Management", Retrieved at<<http://technet.microsoft.com/en-us/library/cc181545.aspx>> on Mar. 19, 2009, pp. 7.
"Software Update Management", Retrieved at> on Mar. 19, 2009, pp. 7.
"Store Clustered Hash Table Entry (QcstStoreCHTEntry) API", Retrieved at <<http://publib.boulder.ibm.com/infocenter/iseries/v5r4/topic/apis/clchtstoreentry.htm>> Feb. 9, 2009, pp. 1-5.
"Store Clustered Hash Table Entry (QcstStoreCHTEntry) API", Retrieved at > Feb. 9, 2009, pp. 1-5.
"Update Local Data Caches with Sync Services" Retrieved at << http://visualstudiomagazine.com/features/article.aspx?editorialsid=1336>> Feb. 9, 2009, pp. 1-7.
"Update Local Data Caches with Sync Services" Retrieved at > Feb. 9, 2009, pp. 1-7.
Berardi, Nick. "Posts Tagged ‘Distributed Cache’," The Coder Journal, retrieved online at [http://www.coderjournal.com/tags/distributed-cache/] on Jul. 21, 2008.
Berardi, Nick. "Posts Tagged 'Distributed Cache'," The Coder Journal, retrieved online at [http://www.coderjournal.com/tags/distributed-cache/] on Jul. 21, 2008.
Decoro, Christopher; Langston, Harper; Weinberger, Jeremy. "Cash: Distributed Cooperative Buffer Caching," Courant Institute of Mathematical Sciences, New York University. Retrieved online at [http://cs.nyu.edu/~harper/papers/cash.pdf] on Jul. 21, 2008.
Decoro, Christopher; Langston, Harper; Weinberger, Jeremy. "Cash: Distributed Cooperative Buffer Caching," Courant Institute of Mathematical Sciences, New York University. Retrieved online at [http://cs.nyu.edu/˜harper/papers/cash.pdf] on Jul. 21, 2008.
Krishnaprasad, et al. "Distributed Cache Arrangement", Matter No. 324850.02, U.S. Appl. No. 12/363,505, filed Jan. 30, 2009, pp. 25.
Microsoft Project Codename "Velocity," retrieved online at [http://download.microsoft.com/download/a/1/5/a156ef5b-5613-4e4c-8d0a-33c9151bbef5/Microsoft-Project-Velocity-Datasheet-.pdf] on Jul. 21, 2008.
Microsoft Project Codename "Velocity," retrieved online at [http://download.microsoft.com/download/a/1/5/a156ef5b-5613-4e4c-8d0a-33c9151bbef5/Microsoft—Project—Velocity—Datasheet—.pdf] on Jul. 21, 2008.
Mitchell, Scott. "Caching Data at Application Startup", Retrieved at <<http://www.asp.net/LEARN/data-access/tutorial-60-vb.aspx>> on Mar. 19, 2009, pp. 14.
Mitchell Scott, "Caching Data at Application Startup", Retrieved at> on Mar. 19, 2009, pp. 14.
Yang, Jiong; Nittel, Sylvia; Wang, Wei; Muntz, Richard. "DynamO: Dynamic Objects with Persistent Storage," Department of Computer Science, University of California, Los Angeles, California. Retrieved online at [http://www.cs.unc.edu/~weiwang/paper/pos98.ps] on Jul. 21, 2008.
Yang, Jiong; Nittel, Sylvia; Wang, Wei; Muntz, Richard. "DynamO: Dynamic Objects with Persistent Storage," Department of Computer Science, University of California, Los Angeles, California. Retrieved online at [http://www.cs.unc.edu/˜weiwang/paper/pos98.ps] on Jul. 21, 2008.

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140337257A1 (en) * 2013-05-09 2014-11-13 Metavana, Inc. Hybrid human machine learning system and method
US9471883B2 (en) * 2013-05-09 2016-10-18 Moodwire, Inc. Hybrid human machine learning system and method
US20150237131A1 (en) * 2014-02-18 2015-08-20 Fastly Inc. Data purge distribution and coherency
US10530883B2 (en) * 2014-02-18 2020-01-07 Fastly Inc. Data purge distribution and coherency
US11265395B2 (en) * 2014-02-18 2022-03-01 Fastly, Inc. Data purge distribution and coherency

Also Published As

Publication number Publication date
US20120096103A1 (en) 2012-04-19
US8484417B2 (en) 2013-07-09
US20100293334A1 (en) 2010-11-18

Similar Documents

Publication Publication Date Title
US8484417B2 (en) Location updates for a distributed data store
US8176256B2 (en) Cache regions
US9952971B2 (en) Distributed cache arrangement
US20200242129A1 (en) System and method to improve data synchronization and integration of heterogeneous databases distributed across enterprise and cloud using bi-directional transactional bus of asynchronous change data system
US8261020B2 (en) Cache enumeration and indexing
US8108623B2 (en) Poll based cache event notifications in a distributed cache
US9460185B2 (en) Storage device selection for database partition replicas
US9971823B2 (en) Dynamic replica failure detection and healing
Auradkar et al. Data infrastructure at LinkedIn
US9489443B1 (en) Scheduling of splits and moves of database partitions
KR102013005B1 (en) Managing partitions in a scalable environment
US20190104179A1 (en) System and method for topics implementation in a distributed data computing environment
US20110225121A1 (en) System for maintaining a distributed database using constraints
US9438665B1 (en) Scheduling and tracking control plane operations for distributed storage systems
US20210004712A1 (en) Machine Learning Performance and Workload Management
US10158709B1 (en) Identifying data store requests for asynchronous processing
US10102230B1 (en) Rate-limiting secondary index creation for an online table
US10747739B1 (en) Implicit checkpoint for generating a secondary index of a table
US9875270B1 (en) Locking item ranges for creating a secondary index from an online table
US10635650B1 (en) Auto-partitioning secondary index for database tables
US20110225120A1 (en) System for maintaining a distributed database using leases
US10474653B2 (en) Flexible in-memory column store placement
Waseem et al. A taxonomy and survey of data partitioning algorithms for big data distributed systems
Lev-Ari et al. Quick: a queuing system in cloudkit
US11762860B1 (en) Dynamic concurrency level management for database queries

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:XUN, LU;ZENG, HUA-JUN;KRISHNAPRASAD, MURALIDHAR;AND OTHERS;SIGNING DATES FROM 20090506 TO 20090507;REEL/FRAME:023034/0815

ZAAA Notice of allowance and fees due

Free format text: ORIGINAL CODE: NOA

ZAAB Notice of allowance mailed

Free format text: ORIGINAL CODE: MN/=.

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:XUN, LU;ZENG, HUA-JUN;KRISHNAPRASAD, MURALIDHAR;AND OTHERS;SIGNING DATES FROM 20090506 TO 20090507;REEL/FRAME:030095/0100

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034564/0001

Effective date: 20141014

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20240131