US20100121855A1 - Lookup Partitioning Storage System and Method - Google Patents

Lookup Partitioning Storage System and Method

Info

Publication number
US20100121855A1
Authority
US
United States
Prior art keywords
storage
server
partition
resource
lookup
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/689,984
Inventor
Apurva F. Dalia
Craig Allen Harry
Nishant Dani
Shan Jiang
Brad Dean Thompson
Bradley J. Barrows
David R. Shutt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to US12/689,984
Assigned to MICROSOFT CORPORATION reassignment MICROSOFT CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BARROWS, BRADLEY J., DALIA, APURVA F., DANI, NISHANT, HARRY, CRAIG ALLEN, JIANG, SHAN, SHUTT, DAVID, THOMPSON, BRAD DEAN
Publication of US20100121855A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 - Organizing or formatting or addressing of data
    • G06F 3/0644 - Management of space entities, e.g. partitions, extents, pools
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 - Improving I/O performance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • FIG. 4 illustrates one exemplary sequence of communication interactions between a client device 110 and the servers shown in FIG. 1 , i.e. the front end server 130 , the LPS server 300 , and the storage server 150 .
  • the exemplary communication interactions shown in FIG. 4 begin by the client device 110 sending 401 a resource identifier, i.e., an RID, to the front end server 130 .
  • the front end server 130 hashes 405 the RID to determine a “bucket value.”
  • hashing is the conversion of an identifier or key into a hash value, also called a bucket value, that identifies the location of the corresponding data in a data source (e.g., table, database, etc.). Hashing is typically accomplished by passing an identifier through a “hash function” to generate bucket or other hashed values.
  • The hashing of RIDs distributes them into bucket values that are essentially randomly distributed across the range of bucket values generated by the hash function.
  • The front end server 130 determines 410 the LPS partition associated with the bucket value (e.g., in a hash table on the front end server 130).
  • each LPS server includes a primary LPS partition and one or more redundant LPS partitions.
  • Each primary and redundant LPS partition is associated with a particular bucket value.
  • The front end server 130 has an entry for both the primary and any redundant LPS servers indexed by bucket values. That way, if a primary LPS partition is unavailable, the front end server 130 knows where to locate the redundant LPS partition.
  • Determining 410 the primary and redundant LPS partitions from the bucket values allows the front end server 130 to communicate with the LPS servers 300 that house the LPS partition associated with the RID.
  • If the LPS server 300 containing the primary LPS partition is available, the front end server 130 will communicate with that LPS server 300. If, however, the LPS server 300 containing the primary LPS partition is not available, the front end server 130 will communicate with one of the LPS servers 300 that contains a redundant LPS partition associated with the RID (based on some predetermined algorithm if more than one redundant LPS partition is available).
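  • Before continuing with the FIG. 4 walkthrough, the hashing and LPS-partition selection just described can be sketched in code. The following Python fragment is only an illustration of the idea; the hash function, the bucket count, and the names rid_to_bucket, LpsPartitionTable, and is_available are assumptions introduced here, not taken from the patent.

      import hashlib

      NUM_BUCKETS = 1024  # assumed size of the bucket range produced by the hash function

      def rid_to_bucket(rid: str, num_buckets: int = NUM_BUCKETS) -> int:
          """Hash an RID into an essentially randomly distributed bucket value."""
          digest = hashlib.sha1(rid.encode("utf-8")).hexdigest()
          return int(digest, 16) % num_buckets

      class LpsPartitionTable:
          """Front-end-server table mapping bucket values to primary and redundant LPS partitions."""

          def __init__(self, mapping):
              # mapping: bucket value -> list of LPS partition locations,
              # ordered with the primary partition first and redundant partitions after it.
              self._mapping = mapping

          def locate(self, bucket: int, is_available) -> str:
              """Return the first available LPS partition for a bucket, preferring the primary."""
              for partition in self._mapping[bucket]:
                  if is_available(partition):
                      return partition
              raise RuntimeError("no LPS partition available for bucket %d" % bucket)

  • In this sketch, locate() falls back to a redundant LPS partition only when the primary is reported unavailable, mirroring the failover behavior described above.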
  • the front end server 130 communicates 415 the RID to the LPS server 300 housing the (primary or redundant) LPS partition.
  • the LPS server 300 looks up 420 the storage partition for the RID in its lookup store 320 .
  • If the lookup does not locate a storage partition associated with the RID, a new storage resource is created 435.
  • the new storage resource is created in response to a storage resource creating request 430 generated by the LPS server 300 that is forwarded via the front end server 130 as a storage resource creating request 433 for the storage server 150 .
  • the new storage resource creation is based on load balance data collected by the LPS server 300 .
  • the storage server 150 containing the storage partition associated with the newly created storage resource returns 440 partition location information to the LPS server 300 .
  • the LPS server 300 stores 445 the association between the RID and the new storage partition.
  • the LPS server 300 next returns 450 the storage partition location to the front end server 130 .
  • the client device 110 may then request 455 a storage operation from the storage server 150 , via the front end server 130 since the front end server 130 now knows the location of the partition housing the storage resource.
  • the storage server 150 performs 460 the storage operation at the storage partition location indicated by the front end server 130 , after which the storage server 150 returns a storage operation response 465 to the client device 110 via the front end server 130 .
  • FIG. 5 illustrates another exemplary sequence of communication interactions between a client device 110 and the servers shown in FIG. 1 . While similar to FIG. 4 , FIG. 5 does not include the creation of a new resource prior to performing a storage operation. Like FIG. 4 , in addition to a client device 110 , FIG. 5 includes a front end server 130 , an LPS server 300 and a storage server 150 .
  • the exemplary communications interactions shown in FIG. 5 begin with the client device 110 sending 501 an RID to the front end server 130 .
  • the front end server 130 hashes the RID to determine a “bucket value.”
  • the front end server 130 determines 510 the LPS partitions associated with the bucket value.
  • Each LPS server 300 in this exemplary embodiment of the present invention includes a primary LPS partition and one or more redundant LPS partitions. As with FIG. 4, determining 510 the LPS partition from the bucket value allows the front end server 130 to communicate with the LPS server that houses the LPS partition associated with the RID.
  • the front end server 130 communicates 515 the RID to the LPS server 300 housing that LPS partition.
  • the LPS server 300 looks up 520 the storage partition associated with the RID. In the exemplary embodiment illustrated in FIG. 5 , the LPS server 300 returns 525 the location of the storage partition associated with the RID to the front end server 130 .
  • the client device 110 may then request 530 a storage operation from the storage server 150 via the front end server 130 .
  • the storage server 150 performs 535 the storage operation at the storage partition location indicated by the front end server 130 , after which the storage server 150 returns the storage operation response 540 to the client device 110 via the front end server 130 .
  • The communication interactions shown in FIGS. 4 and 5 provide structured and efficient access to storage resources without having to maintain rigid mappings between an RID and a storage location.
  • Each RID-to-partition association stored on the LPS servers can be updated or modified without disturbing any other RID-to-partition association.
  • the system 100 is able to automatically create new storage resources, when they are needed.
  • It will be appreciated that FIGS. 4 and 5 represent only exemplary sets of communication interactions between the devices of the online storage system 100 and that various changes can be made therein without departing from the spirit and scope of the invention.
  • Hashing RIDs may not always be necessary, particularly if RIDs are associated with specific LPS partitions and/or servers (e.g., if there were only a single LPS partition and/or LPS server 300 associated with an RID).
  • In one exemplary embodiment of the invention, communications are formatted using the Simple Object Access Protocol (“SOAP”) with Extensible Markup Language (“XML”) formatted instructions and/or parameters.
  • the storage clearinghouse 200 of the online storage system 100 described herein includes a front end server 130 that is used to manage communications between client devices 110 A, 110 B, 110 C . . . , and one or more of the LPS servers 300 and one or more storage servers 150 .
  • FIG. 6 is a flow diagram illustrating an exemplary LPS server locating routine 600 suitable for implementation by the front end server 130 for locating an LPS server 300 able to conduct storage partition lookups based on an RID.
  • The LPS server locating routine 600 begins at block 601 and proceeds to block 605 where an RID associated with a storage resource is obtained, i.e., received by the front end server 130.
  • the front end server 130 hashes the RID to generate a bucket value as shown by block 610 .
  • hashing a value is accomplished by processing the value using a hash function.
  • the front end server 130 determines the location of the LPS partition associated with the bucket value generated by the hash function. Routine 600 then ends at block 699 .
  • determining the location of the LPS partition associated with the bucket value generated by the hash function includes determining if a primary LPS is available. If a primary LPS partition is unavailable, then a further determination of the location of a redundant LPS partition associated with the bucket value generated by the hash function is needed.
  • Once the LPS partition associated with the bucket value has been located, the front end server 130 can communicate with the LPS server 300 that houses the associated LPS partition.
  • the LPS server 300 is responsible for looking up which storage partition (and therefore which storage server 150 ) houses the storage resource identified by an RID.
  • FIG. 7 illustrates an exemplary storage partition locating routine 700 suitable for implementation by an LPS server 300 for determining the location of a storage partition associated with an RID.
  • the storage partition locating routine 700 begins at block 701 and proceeds to block 705 where an RID for a storage resource is received from a front end server 130 .
  • the LPS server 300 looks up the storage partition for the RID in its lookup store 320 ( FIG. 3 ).
  • the RID received by the LPS server 300 may have the same format as the RID sent to the front end server 130 or may be a transformation of the RID, e.g., a hashed value, or other transformation of the RID received from the front end server 130 .
  • Such transformations may be desirable if the RID received at the front end server 130 is not appropriate for performing an efficient lookup of the location of a storage partition in a lookup store 320 .
  • For example, if the RID is an arbitrary textual name assigned by a user, a hashed value of the arbitrarily assigned name would provide a more efficient “key” for looking up a storage partition in a conventional lookup store 320.
  • Alternatively, the RID may in fact be identical or closely related (e.g., a zero-extended value to bring the RID up to a uniform number of digits) to the RID received by the front end server 130.
  • processing continues in decision block 715 where a determination is made whether the storage partition was located in the lookup store 320 . If so, processing proceeds to block 720 where the location of the storage partition associated with the RID is sent to the front end server 130 .
  • the storage partition locating routine 700 ends at block 799 .
  • If the storage partition was not located in the lookup store 320, processing proceeds to block 725 where a message is sent to the front end server 130 indicating that no storage resource was located.
  • the storage partition locating routine 700 then ends at block 799 .
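  • A minimal sketch of routine 700, assuming a simple in-memory lookup store, is shown below. The class and method names (LpsLookupStore, find_partition, save_mapping, locate_storage_partition) are illustrative assumptions; the patent only specifies that the lookup store 320 maps RIDs to storage partition locations.

      from typing import Optional

      class LpsLookupStore:
          """In-memory stand-in for a lookup store 320 mapping RIDs to partitions."""

          def __init__(self):
              self._rid_to_partition = {}  # RID -> storage partition location

          def find_partition(self, rid: str) -> Optional[str]:
              return self._rid_to_partition.get(rid)

          def save_mapping(self, rid: str, partition: str) -> None:
              self._rid_to_partition[rid] = partition

      def locate_storage_partition(store: LpsLookupStore, rid: str) -> dict:
          """Outline of routine 700: report the partition location or that none was found."""
          partition = store.find_partition(rid)
          if partition is not None:
              return {"status": "found", "partition": partition}   # corresponds to block 720
          return {"status": "no storage resource located"}         # corresponds to block 725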
  • the front end server 130 may then communicate storage operations to the storage server 150 to be performed on the storage resource associated with the RID at the storage partition.
  • the storage server 150 processes storage operation requests from the front end server.
  • FIG. 8 illustrates an exemplary storage operation performing routine 800 on a storage server 150 .
  • the storage operation performing routine 800 is an alternative to the communication interactions shown in FIG. 4 wherein the LPS server determines if a storage resource is available. In FIG. 8 , this determination is made by a storage server.
  • Routine 800 begins at block 801 and proceeds to block 805 where a storage operation request is received along with an RID from a calling server. The storage operation request may be received from either an LPS server 300 (in the case of a resource creation request) or from the front end server 130 (for other storage operation requests).
  • In decision block 810, a determination is made by the storage server whether a storage resource associated with the RID is available by checking the partitions of the storage server 150.
  • If the storage resource is available, processing continues to block 820 where the storage operation is performed on the storage resource associated with the RID.
  • the storage operation can be any one of a number of different storage operations performed on storage resources including but in no way limited to read requests, write requests, create requests, update requests, delete requests, copy requests, insertion requests, backup requests, restore requests, and the like.
  • determining if a partition associated with an RID is available includes determining if a primary storage partition is available. Next, if a primary storage partition is unavailable, then a further determination is made whether any redundant storage partitions associated with the RID are available, and only if none are available is a determination made that the storage partition is not available.
  • In some embodiments, the storage resource is hierarchical in nature. Storage operation requests can be directed to specific levels in the storage hierarchy.
  • One example of a hierarchical storage resource is a university storage resource comprising department records, course records, professor records, students-enrolled-in-course records, and student records. Storage operations might apply to any record level in this example. For example, a new department record might be created. This creation would, in turn, require course records, professor records, and students-enrolled-in-course records to be created and added to the department records in a hierarchical fashion.
  • The above example is merely meant to be illustrative of one form of storage resource. Those of ordinary skill in the art and others will appreciate that many other forms of storage resources, including, but not limited to, flat files, databases, and linked lists may form storage resources suitable for storage in partitions on the storage server 150.
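  • As an aside before returning to routine 800, the university example above could be modeled roughly as nested records. The field names in the following Python sketch are assumptions made for illustration, not part of the patent.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class StudentRecord:
          student_id: str
          name: str

      @dataclass
      class CourseRecord:
          course_id: str
          professors: List[str] = field(default_factory=list)
          enrolled_students: List[StudentRecord] = field(default_factory=list)

      @dataclass
      class DepartmentRecord:
          department_id: str
          courses: List[CourseRecord] = field(default_factory=list)

      # A storage operation can target any level of the hierarchy: creating a new
      # DepartmentRecord implies creating the nested course, professor, and
      # enrollment records under it.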
  • the storage operation performing routine 800 ends at block 899 .
  • If in decision block 810 it is determined that a storage resource associated with the RID is not available, processing proceeds to decision block 830 where a determination is made whether the storage operation request is a request to create a new storage resource. If so, processing proceeds to subroutine block 900 where a new resource is created.
  • An exemplary new storage resource creation subroutine 900 is illustrated in FIG. 9 and described below.
  • After the new resource is created, processing proceeds to block 835 where the location of the new storage resource associated with the RID is sent to the LPS server 300 (where it is saved in a lookup store 320 that associates storage resource locations with RIDs). Processing then ends at block 899. If, however, in decision block 830 it was determined that the request was not a request to create a storage resource, processing proceeds to block 850 where a response indicating that no storage resource is available is sent to the calling server (either the front end server 130 or an LPS server 300). Then processing ends at block 899.
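  • A rough sketch of routine 800 on a storage server follows. The storage_server object is assumed to expose find_partition_for (which checks the primary and then any redundant partitions), create_resource, and apply; these names are illustrative assumptions, not part of the patent.

      def perform_storage_operation(storage_server, rid: str, request: dict) -> dict:
          """Outline of routine 800: perform the operation, create the resource, or report failure."""
          partition = storage_server.find_partition_for(rid)            # decision block 810
          if partition is not None:
              result = storage_server.apply(partition, request)         # block 820
              return {"status": "ok", "result": result}

          if request.get("operation") == "create":                      # decision block 830
              new_partition = storage_server.create_resource(rid)       # subroutine 900
              return {"status": "created", "partition": new_partition}  # reported at block 835
          return {"status": "no storage resource available"}            # block 850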
  • FIG. 9 illustrates an exemplary storage resource creation subroutine 900 suitable for use in FIG. 8 .
  • the exemplary storage resource creation subroutine 900 begins at block 901 and proceeds to block 905 where a storage resource creation request for an RID is received.
  • a new storage resource is created in a storage server 150 .
  • the association of the new storage resource to the RID is stored in either an LPS server or a storage server depending on how the invention is implemented.
  • the storage of the association is used to identify the partition where the new resource is stored when subsequent storage operation requests are received.
  • the storage resource creation subroutine 900 ends at block 999 , returning the location of the new storage resource's storage partition.
  • creating a new storage resource also involves choosing the location of the new partition, i.e., which storage server is to provide the partition that stores the new storage resource.
  • FIG. 10 illustrates an exemplary new storage partition selection routine 1000.
  • the new storage partition selection routine 1000 begins in block 1001 and proceeds to block 1005 where an RID to be associated with a new storage resource is received from a front end server 130 .
  • In some cases, the RID is obtained from an explicit storage resource creation request.
  • Alternatively, an LPS server 300 may automatically initiate the creation of a new storage resource when an RID that is not associated with any storage partition is received.
  • a random number “R” is generated.
  • the random number R is any real number between zero and one.
  • the storage partitions on the storage servers 150 are ranked in ascending order according to a load balancing factor (“LBF”) for each storage partition.
  • LBFs are determined values (or values arbitrarily set to increase or decrease a storage partition's usage) that represent the current load on a storage partition.
  • LBF values may be determined using a number of different factors, including, but not limited to mapping numbers (number of storage resources on a storage partition), mapping accesses (number of accesses to storage resources on a partition), assigned manual weighting values (e.g., arbitrarily set weighting values or weighting values set according to an LBF value desired for a particular storage partition) or some combination thereof.
  • For example, for three storage partitions with measured loads of 20, 30, and 50 (e.g., mapping counts), the inverse-load LBF values would be:
  • LBF1 = (1/20)/(1/20 + 1/30 + 1/50) ≈ 48%
  • LBF2 = (1/30)/(1/20 + 1/30 + 1/50) ≈ 32%
  • LBF3 = (1/50)/(1/20 + 1/30 + 1/50) ≈ 20%
  • this is merely one possible method of calculating LBF values, and those of ordinary skill in the art will appreciate that other methods of calculating LBF values are possible.
  • Next, a storage partition is located where the sum of all lower ranked LBFs is less than or equal to R and the sum of all lower ranked LBFs plus the LBF of the located partition is greater than R.
  • a new resource creation request is sent to the located storage partition.
  • the located storage server containing the storage partition processes the new resource creation request (see FIGS. 8-9 ).
  • the location of the new storage resource's partition is received back at the LPS server 300 . See block 1030 .
  • the new partition location is associated with the previously received RID and the association is saved (in block 1035 ) in the lookup store 320 of the LPS server 300 .
  • the storage partition selection routine 1000 then ends at block 1099 .
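  • The LBF-based selection of routine 1000 amounts to a weighted random choice. The Python sketch below assumes the inverse-load LBF calculation from the example above; the function names and the use of the standard random module are illustrative assumptions, not the patent's implementation.

      import random

      def load_balancing_factors(loads: dict) -> dict:
          """Normalize inverse loads into LBFs that sum to one, as in the LBF1-LBF3 example."""
          inverse = {partition: 1.0 / load for partition, load in loads.items()}
          total = sum(inverse.values())
          return {partition: value / total for partition, value in inverse.items()}

      def select_partition(loads: dict, rand=random.random):
          """Pick the partition whose cumulative LBF interval contains the random number R."""
          lbfs = load_balancing_factors(loads)
          r = rand()                                    # R is a real number in [0, 1)
          cumulative = 0.0
          for partition, lbf in sorted(lbfs.items(), key=lambda item: item[1]):
              if cumulative <= r < cumulative + lbf:
                  return partition
              cumulative += lbf
          return max(lbfs, key=lbfs.get)                # guard against floating-point round-off

      # With loads of 20, 30, and 50, the LBFs are roughly 48%, 32%, and 20%,
      # so the most lightly loaded partition receives new resources about half the time.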
  • The new storage partition selection routine 1000 described above has an inherent load balancing effect because storage partitions are chosen for new storage resources based on the load balancing factors (LBFs) of each storage partition.
  • The new storage partition selection routine 1000 described above should be taken as exemplary, not limiting. Many other new storage partition selection routines may be used without departing from the spirit and scope of the present invention. For example, the ordering of partition LBFs may be reversed, with an equivalent reversal of the conditions the random number R must meet. Still other variations will be apparent to those of ordinary skill in the art.
  • In some embodiments, the location of a storage resource may be moved from one partition to another partition (e.g., to a partition on a server with more available storage space, with a faster connection, with more reliable storage hardware, etc.). Moving a storage resource from one storage partition to another storage partition involves briefly locking the mapping of the RID to the storage resource's storage partition, but does not require locking any other storage resource's mapping (as a hash-based allocation would).
  • After the move, the LPS server 300 associated with the RID of the storage resource updates its lookup store 320 to map the RID of the storage resource to its new storage partition location.
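  • The per-resource locking described above might look like the following sketch; the class and method names are assumptions introduced for illustration, not part of the patent.

      import threading
      from collections import defaultdict

      class PartitionMappingStore:
          """Sketch of moving a storage resource while locking only that resource's mapping."""

          def __init__(self):
              self._mapping = {}                          # RID -> storage partition location
              self._locks = defaultdict(threading.Lock)   # one lock per RID

          def move_resource(self, rid: str, new_partition: str, copy_resource) -> None:
              with self._locks[rid]:                      # brief, per-RID lock only
                  old_partition = self._mapping.get(rid)
                  copy_resource(old_partition, new_partition)  # physically relocate the data
                  self._mapping[rid] = new_partition           # update the lookup store mapping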
  • the lookup store 320 comprises a lookup table containing resource and partition information as shown below in Table 2:
  • The lookup store 320 stores a list of partitions described by “tbl_Partition” entries that are mapped to “tbl_PartitionMapping” entries for storage resources that are associated with one of the partitions listed in the lookup store 320 (note the “PartitionID” field in the tbl_PartitionMapping entry).
  • In one embodiment, a storage resource entry of a lookup store 320 using a tbl_PartitionMapping entry, as listed above, also includes an “ApplicationID” field that designates a type of application for use with a storage resource. It will also be appreciated that including an ApplicationID enables embodiments of the present invention to store multiple types of storage resources for multiple types of applications.
  • Such a multiple application type/resource embodiment of the present invention is substantially similar to a single application type/resource type embodiment of the present invention; however, in addition to an RID used to designate a storage resource, an ApplicationID would also be used. Accordingly, in such an embodiment an RID could be associated with multiple storage resources if each storage resource had a separate ApplicationID.
  • For example, a network-based digital photograph storing system might store digital images as well as image descriptions for each digital image. In such a system, the digital images and the image descriptions would have the same RID; however, each could have a different ApplicationID and may even be stored in a separate partition.
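  • Since Table 2 itself is not reproduced in this excerpt, the following sketch only suggests what tbl_Partition and tbl_PartitionMapping entries might look like; apart from the PartitionID and ApplicationID fields named above, every field is an assumption made for illustration.

      from dataclasses import dataclass

      @dataclass
      class TblPartition:
          partition_id: int        # "PartitionID"
          storage_server: str      # which storage server 150 hosts the partition (assumed field)
          is_redundant: bool       # primary vs. redundant copy (assumed field)

      @dataclass
      class TblPartitionMapping:
          rid: str                 # resource identifier
          application_id: int      # "ApplicationID": the type of application using the resource
          partition_id: int        # refers to a TblPartition entry

      # With ApplicationID included, one RID can map to several storage resources,
      # e.g. a digital image and its description stored in different partitions:
      photo = TblPartitionMapping(rid="vacation-0001", application_id=1, partition_id=7)
      caption = TblPartitionMapping(rid="vacation-0001", application_id=2, partition_id=12)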

Abstract

A method, system, and computer-readable medium for accessing and managing an online storage system is disclosed. Access to a storage resource in a multiple server storage system is provided by sending to a lookup partitioning service server a resource identifier that is associated with a storage resource stored in a particular storage partition of a storage server. The LPS returns the looked-up partition that stores the storage resource associated with the resource identifier. Access to the storage resource at the looked-up storage partition is then enabled.

Description

    RELATED APPLICATIONS
  • This U.S. Non-provisional application for Letters Patent is a divisional of and claims the benefit of priority to U.S. patent application Ser. No. 10/606,626, filed on Jun. 25, 2003, the disclosure of which is incorporated by reference herein.
  • FIELD OF THE INVENTION
  • The present invention relates in general to online storage and in particular to a system and method for providing access to online storage in a configurable and efficient manner.
  • BACKGROUND OF THE INVENTION
  • Networks are well known in the computer field. By definition, a network is a group of computers and associated devices that are connected by communication facilities or links. An internetwork, in turn, is the joining of multiple computer networks, both similar and dissimilar, by means of gateways or routers that facilitate data transfer and conversion from the multiple computer networks. A well known abbreviation for the term internetwork is “internet.” As currently understood, the capitalized term “Internet” refers to the collection of networks and routers that use the Internet Protocol to communicate with one another. The Internet has recently seen increased growth by virtue of its ability to link computers located throughout the world. As will be better appreciated from the following description, embodiments of the present invention could find use in many network environments; however, for purposes of discussion, the Internet is used as an exemplary network environment for implementing embodiments of the present invention.
  • The Internet has quickly become a popular method of disseminating information due in large part to its ability to deliver information quickly and reliably. To retrieve stored resources or other data over the network, a user typically uses communications or network browsing software. A common way of retrieving storage resources is to use such communications or network browsing software to access storage resources at a uniform resource identifier (“URI”) address, such as a uniform resource locator (“URL”) address, that indicates the location of a storage resource on a server connected to the network.
  • Storage systems for computing devices are also well known in the computing field. Software applications and operating systems generally have access to some form of storage. Such storage may include hard drives, solid state memory, removable storage devices, etc. Most conventional computing devices have local storage. However, as the use of networks and network applications increases, so has the use of online storage that is remote from computing devices. One form of conventional online storage system is a file server in which computing devices are able to store and retrieve files. A more sophisticated form of online storage employs multiple file servers, some of which may replicate other servers in order to provide redundancy in the event the main file server becomes inoperable or inaccessible. While single and multiple file servers accessible by computing devices in networks have solved some of the problems of accessing online storage resources, such file servers are not designed to efficiently control and route accesses to particular resources, such as a particular user's address book, for example. As network accessible applications have proliferated, so has the need for storing online resources at separate locations associated with particular network accessible applications and the users of such applications.
  • More specifically, advanced network applications usually access storage resources at a remote server over the Internet. As the Internet (and other networks) has developed, some of the functions that were formerly performed by applications running on client devices are now provided by applications running on network accessible servers. One such example is a Web-based e-mail network application. In a network accessible e-mail application, e-mails and address book information are stored on remote servers. Remote server storage eliminates the need for a user to export or synchronize their e-mail information when the user changes to a new device and/or adds a new device to the user's inventory of devices. Unfortunately, previously developed remote file servers, in particular multiple remote file servers, accessible by network applications have not provided an efficient storage system for such user dependent applications. User dependent applications, such as e-mail applications, access separate online storage resources. In the past, multiple file servers have not provided enough flexibility to grow and adapt while still maintaining efficient access (or routing) to storage resources.
  • Some previously developed on-line file servers have used rigid hash-based allocations to segment where online storage resources should be saved. Rigid segmentation is inflexible and does not provide sufficiently fine “granularity” (level of control) when accessing resources stored in on-line servers. If a particular server is under-utilized, a rigid hash-based load balancing system is not able to efficiently adjust its load assignments to increase the load on the under-utilized server. Still further, moving storage resources with such hash-based allocations requires locking entire hash buckets, which increases the difficulty of moving resources for end-users.
  • Accordingly, there is a need for an improved method of accessing and managing online storage systems that is efficient and sufficiently granular. It is desirable that such a method provide information in an application independent manner.
  • SUMMARY OF THE INVENTION
  • Embodiments of the present invention relate to a method, system, and computer-readable medium for accessing and managing an online storage system. In accordance with one aspect of the present invention, a method for accessing and managing a resource stored in a multiple remote file server system is provided. In accordance with this aspect of the present invention, a resource identifier is sent by a client device to a remotely located lookup partitioning service (“LPS”) server, via another server such as a front end server. The resource identifier is associated with a resource stored in a particular storage partition of a particular storage file server of the multiple remote file server system. The LPS server returns a looked up storage server location, i.e., a location that identifies the particular storage partition in the particular storage file server, to a front end server.
  • In accordance with further aspects of the present invention, the partition housing the identified resource is located on multiple storage file servers, preferably two storage file servers, one functioning as a primary file storage server and the other functioning as a backup storage file server. Preferably, the backup storage file server is only available for access if the primary storage file server becomes unavailable, e.g., crashes. Thus, the backup storage file server is a redundant storage file server.
  • In accordance with another aspect of the present invention, multiple LPS servers are provided and the method includes determining which LPS server will provide the looked up storage server location. One way of determining which LPS server will provide the looked up storage server location includes processing the resource identifier using a hash function to provide a hashed resource identifier, which identifies the LPS server.
  • In accordance with still further aspects of the present invention, the LPS server uses the resource identifier to look up, in a resource lookup store, the storage server location, i.e., the location that identifies the particular partition in the particular storage file server, where the resource associated with the resource identifier is located. The LPS server grants the client device access to the storage resource by providing the storage server location to a front end server accessible by the client device.
  • In accordance with yet another aspect of the present invention, if the LPS server determines that no storage resource partition exists when receiving a resource identifier from a front end server, the LPS server automatically requests the creation of a storage resource at a particular storage server partition in a particular storage file server and associates the resource identifier with the newly created storage partition location in the resource lookup store. This newly created storage partition location is then provided to the front end server.
  • In accordance with still further aspects of the present invention, creating a new storage resource on a storage server includes calculating a load balancing factor for each storage file server in a multiple file server storage system. The load balancing factor is used to determine where a new storage resource should be located. The load balancing factor may be based on a mapping number, a count of mapping accesses, a manual weighting value, or other information.
  • As can be seen from the foregoing summary, embodiments of the present invention provide an improved method for accessing and managing an online storage system and a related computer-readable medium and system.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
  • FIG. 1 is a pictorial diagram of an exemplary system for providing client device access to online resources.
  • FIG. 2 is a pictorial diagram of a portion of an exemplary system for providing access to online resources that illustrates the redundant aspects of one exemplary embodiment of the invention suitable for use in FIG. 1.
  • FIG. 3 is a block diagram of a lookup partitioning service server, suitable for use in FIGS. 1 and 2.
  • FIG. 4 is a diagram illustrating the actions of a client device, front end server, and a storage clearinghouse server when accessing online storage for the exemplary system shown in FIG. 1.
  • FIG. 5 is a diagram illustrating the actions of a client device, front end server, lookup partitioning service server, and storage clearinghouse server when granting access to online storage for the exemplary system shown in FIG. 1.
  • FIG. 6 is a flow diagram illustrating an exemplary lookup partitioning service server locating routine according to embodiments of the present invention.
  • FIG. 7 is a flow diagram illustrating an exemplary storage partition locating routine according to embodiments of the present invention.
  • FIG. 8 is an overview flow diagram illustrating an exemplary storage operation performing routine according to embodiments of the present invention.
  • FIG. 9 is a flow diagram illustrating a storage resource creation subroutine suitable for use in FIG. 8.
  • FIG. 10 is a flow diagram illustrating an exemplary storage partition determining routine for a new storage resource according to embodiments of the present invention.
  • DETAILED DESCRIPTION
  • The detailed description which follows is represented largely in terms of processes and symbolic representations of operations by conventional computing components, including processors, memory storage devices for the processors, connected display devices, and input devices, all of which are well known in the art. These processes and operations may utilize conventional computing components in a heterogeneous distributed computing environment, including remote storage servers, computer servers, and memory storage devices; such processes, devices, and operations also being known to those skilled in the art and others. Each of these conventional distributed computing components is accessible by the processors via a communications network.
  • Embodiments of the present invention relate to providing a flexible and efficient method for accessing and managing online storage resources across remotely located multiple file servers. As will be better understood from the following description, lookup partitioning service servers add efficiency and flexibility to an online storage system employing embodiments of the present invention.
  • As previously explained, the capitalized term “Internet” refers to the collection of networks and routers that use the Internet Protocol to communicate with one another. FIG. 1 is a pictorial diagram of an exemplary online storage system 100 for providing access to online storage resources to client devices 110A, 110B and 110C . . . , via the Internet 105. For ease of illustration, three representative client devices 110A, 110B and 110C are shown pictorially as a personal digital assistant (PDA) 110A, a personal computer 110B and a cellular telephone 110C in FIG. 1, it being recognized that a large number of client devices in a variety of forms would be included in an actual online storage system 100 employing an embodiment of the invention. In general, the client devices 110A, 110B, 110C have computing capabilities and may be any form of device capable of communicating with the server devices of embodiments of the present invention. Thus, while the client devices 110A, 110B and 110C are pictorially shown as a PDA, a personal computer and cellular telephone, this depiction should be taken as illustrative and not limiting.
  • The online storage system 100 functions in a distributed computing environment that includes the plurality of computing devices 110A, 110B, 110C . . . , interconnected by the Internet 105 (or some other suitable network) to a storage clearinghouse 200. The storage clearinghouse 200 includes a front end server 130, lookup partitioning service (“LPS”) server 300, and storage server 150, all interconnected via a suitable network. As will be appreciated by those of ordinary skill in the art, the front end server 130, the LPS server 300, and the storage server 150, may reside on any device accessible by the client devices 110A, 110B, and 110C, shown in FIG. 1. An exemplary LPS server 300 is shown in detail in FIG. 3 and described below.
  • It will also be appreciated that while the front end server 130, the LPS server 300 and the storage server 150 of the storage clearinghouse 200 are illustrated and described as separate devices, they may be formed by more or fewer devices. For example, the LPS server 300 and the storage server 150 may be “virtual” servers residing on the same device. Likewise, the storage server 150 may be formed by several “virtual” servers residing on a single device. For example, a storage server that houses a redundant copy of a partition of another storage server as a redundant partition could be on the same device as the “other” storage server. Additionally, while only a single front end server 130, LPS server 300, and storage server 150 have been shown in FIG. 1, it will be appreciated that several front end servers 130, LPS servers 300, and storage servers 150 can be included in an actual system practicing embodiments of the present invention. One such embodiment that comprises multiple LPS servers and several storage servers is illustrated in FIG. 2 and described below. It will also be appreciated that the LPS servers and the storage servers may be file servers, database servers, or a mixture of file servers and database servers.
  • An exemplary embodiment of the storage clearinghouse 200 is illustrated in more detail in FIG. 2. The exemplary storage clearinghouse 200 illustrated in FIG. 2 includes three LPS servers 300A-C, six storage servers 150A-F and a single front end server 130. The three LPS servers and the six storage servers are in communication with the front end server 130. Further, as shown in FIG. 2, the three LPS servers and the six storage servers are in communication with one another.
  • Each of the LPS servers 300A-C includes a primary lookup partition and two redundant lookup partitions, one for each of the other LPS servers. Storing (mirroring) information in two redundant lookup partitions provides for access to the storage servers even if the LPS servers containing the primary partition and one of the redundant partitions are unavailable. Similarly, each of the storage servers includes a primary storage resource partition and a redundant storage resource partition. Providing redundant storage partitions on the storage servers 150A-F provides for access to storage resources even if a storage resource's primary partition is not available due, for example, to the storage server being offline. Communication between the LPS servers 300 and the storage servers 150 is illustrated in FIGS. 4 and 5 and described below.
  • FIG. 3 illustrates an exemplary LPS server 300 suitable for use in the storage clearinghouse 200 shown in FIGS. 1 and 2. In its most basic form, the LPS server 300 typically includes at least one processing unit 302 and memory 304. Depending on the exact configuration and type of LPS server 300, memory 304 may be volatile (such as RAM), nonvolatile (such as ROM, flash memory, etc.), or some combination of the two. The most basic configuration of an LPS server is illustrated in FIG. 3 surrounded by dashed line 306. The LPS server 300 may also have additional features and/or functionality. For example, the LPS server 300 may also include additional storage (removable and/or nonremovable) including, but not limited to, magnetic or optical discs or tape. Such additional storage is illustrated in FIG. 3 by removable storage 308 and nonremovable storage 310. In general, computer storage media includes volatile and nonvolatile, removable and nonremovable media implemented in any method or technology for storage of computing information (e.g., computer readable instructions, data structures, program modules, other data, etc.). Memory 304, removable storage 308, and nonremovable storage 310 are all examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disc storage or other magnetic storage devices, or any other medium which can be used to store or read desired information and which can be accessed by the LPS server 300. Such computer storage media may be part of the LPS server 300. The memory 304 of an LPS server 300 practicing embodiments of the present invention stores a lookup store 320 that includes associations (mappings) between resource identifiers (“RIDs”) and partitions (both primary and redundant storage partitions, if applicable) on the storage servers 150 where storage resources are stored.
  • The LPS server 300 also contains a communications connection 312 that the LPS server uses to communicate with other devices. The communications connection 312 is used to communicate computer readable instructions, data structures, program modules or other data preferably using a modulated data signal that includes a carrier wave or other transport mechanism modulated by the data to be communicated. By way of example and not limitation, communication connection 312 includes wired connections, both copper and optical, and wireless connections such as acoustic, radio frequency, infrared, etc. LPS server 300 may also have input device(s) 314, such as a keyboard, a mouse, a pen, a voice input device, a touch input device, etc. Output device(s) 316, such as a display, speakers, a printer, etc., may also be included. Since all these devices are well known in the art, they are not described here. Since, in general, the front end server 130 and storage server 150 can be similar to the LPS server 300 described above, except for the lookup stores 320, these servers are not described in detail here.
  • The operation of the online storage system 100 shown in FIGS. 1 and 2 will be best understood by reference to FIG. 4, which illustrates one exemplary sequence of communication interactions between a client device 110 and the servers shown in FIG. 1, i.e., the front end server 130, the LPS server 300, and the storage server 150. The exemplary communication interactions shown in FIG. 4 begin by the client device 110 sending 401 a resource identifier, i.e., an RID, to the front end server 130. The front end server 130 hashes 405 the RID to determine a “bucket value.” Those of ordinary skill in the art and others will appreciate that hashing is the conversion of an identifier or key into a hash value, also called a bucket value, that identifies the location of the corresponding data in a data source (e.g., table, database, etc.). Hashing is typically accomplished by passing an identifier through a “hash function” to generate bucket or other hashed values. In embodiments of the present invention, the hashing of the RID distributes RIDs into bucket values that are essentially randomly distributed across the range of bucket values generated by the hash function. Next, the front end server 130 determines 410 the LPS partition associated with the bucket value (e.g., such as in a hash table on the front end server 130). As shown in FIG. 2 and described above, in one embodiment of the present invention, each LPS server includes a primary LPS partition and one or more redundant LPS partitions. Each primary and redundant LPS partition is associated with a particular bucket value. For example, the front end server 130 has an entry for both the primary and any redundant LPS servers indexed by bucket values. That way, if a primary LPS partition is unavailable, the front end server 130 knows where to locate the redundant LPS partition. Determining 410 the primary and redundant LPS partitions from the bucket values allows the front end server 130 to communicate with the LPS servers 300 that house the LPS partition associated with the RID. Those of ordinary skill in the art and others will appreciate that if the LPS server 300 containing the primary LPS partition associated with the RID is available, the front end server 130 will communicate with that LPS server 300. If, however, the LPS server 300 containing the primary LPS partition is not available, the front end server 130 will communicate with one of the LPS servers 300 that contains a redundant LPS partition associated with the RID (based on some predetermined algorithm if more than one redundant LPS partition is available).
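  • As a purely illustrative sketch (not part of the patented system), the bucket-value lookup just described might look like the following Python fragment; the names BUCKET_COUNT, bucket_table, and lps_partition_for_rid are hypothetical, and any hash function that spreads RIDs essentially evenly across the bucket range could be used.
    import hashlib

    BUCKET_COUNT = 1024  # assumed size of the bucket-value range

    # Hypothetical table mapping each bucket value to the LPS servers housing
    # the primary and redundant LPS partitions for that bucket (cf. FIG. 2).
    LPS_SERVERS = ["lps-300A", "lps-300B", "lps-300C"]
    bucket_table = {
        b: {"primary": LPS_SERVERS[b % 3],
            "redundant": [LPS_SERVERS[(b + 1) % 3], LPS_SERVERS[(b + 2) % 3]]}
        for b in range(BUCKET_COUNT)
    }

    def hash_rid_to_bucket(rid):
        # Hash the RID so bucket values are essentially randomly distributed.
        digest = hashlib.sha1(rid.encode("utf-8")).digest()
        return int.from_bytes(digest[:4], "big") % BUCKET_COUNT

    def lps_partition_for_rid(rid, is_available):
        # Prefer the primary LPS partition; fall back to a redundant copy.
        entry = bucket_table[hash_rid_to_bucket(rid)]
        if is_available(entry["primary"]):
            return entry["primary"]
        for server in entry["redundant"]:  # predetermined fallback order
            if is_available(server):
                return server
        raise RuntimeError("no LPS partition available for this bucket")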
  • After an LPS partition is determined by the front end server 130 in the manner described above, the front end server 130 communicates 415 the RID to the LPS server 300 housing the (primary or redundant) LPS partition. The LPS server 300 then looks up 420 the storage partition for the RID in its lookup store 320.
  • If the LPS server 300 determines 425 that no storage partition is associated with the RID, a new storage resource is created 435. The new storage resource is created in response to a storage resource creation request 430 generated by the LPS server 300 that is forwarded via the front end server 130 as a storage resource creation request 433 for the storage server 150. The new storage resource creation is based on load balance data collected by the LPS server 300.
  • The storage server 150 containing the storage partition associated with the newly created storage resource returns 440 partition location information to the LPS server 300. The LPS server 300 stores 445 the association between the RID and the new storage partition. The creation of storage resources based on load balance data is discussed in greater detail below with regard to a storage resource creation subroutine 900 illustrated in FIG. 9 and a storage partition determining routine 1000 illustrated in FIG. 10.
  • The LPS server 300 next returns 450 the storage partition location to the front end server 130. The client device 110 may then request 455 a storage operation from the storage server 150, via the front end server 130 since the front end server 130 now knows the location of the partition housing the storage resource. The storage server 150 performs 460 the storage operation at the storage partition location indicated by the front end server 130, after which the storage server 150 returns a storage operation response 465 to the client device 110 via the front end server 130.
  • The operation of online storage system 100 shown in FIGS. 1 and 2 will be further understood by reference to FIG. 5, which illustrates another exemplary sequence of communication interactions between a client device 110 and the servers shown in FIG. 1. While similar to FIG. 4, FIG. 5 does not include the creation of a new resource prior to performing a storage operation. Like FIG. 4, in addition to a client device 110, FIG. 5 includes a front end server 130, an LPS server 300 and a storage server 150.
  • The exemplary communications interactions shown in FIG. 5 begin with the client device 110 sending 501 an RID to the front end server 130. Next, the front end server 130 hashes the RID to determine a “bucket value.” Then, the front end server 130 determines 510 the LPS partition associated with the bucket value. Each LPS server 300 in this exemplary embodiment of the present invention includes a primary LPS partition and one or more redundant LPS partitions. As with FIG. 4, determining 510 the LPS partition from the bucket value allows the front end server 130 to communicate with the LPS server that houses the LPS partition associated with the RID.
  • After an LPS partition is determined 510 at the front end server 130, the front end server 130 communicates 515 the RID to the LPS server 300 housing that LPS partition. The LPS server 300 then looks up 520 the storage partition associated with the RID. In the exemplary embodiment illustrated in FIG. 5, the LPS server 300 returns 525 the location of the storage partition associated with the RID to the front end server 130. The client device 110 may then request 530 a storage operation from the storage server 150 via the front end server 130. The storage server 150 performs 535 the storage operation at the storage partition location indicated by the front end server 130, after which the storage server 150 returns the storage operation response 540 to the client device 110 via the front end server 130.
  • Those of ordinary skill in the art and others will appreciate that the communication interactions illustrated in FIGS. 4 and 5 provide structured and efficient access to storage resources without having to maintain rigid mappings between an RID and a storage location. Each RID-to-partition association stored on the LPS servers can be updated or modified without disturbing any other RID-to-partition association. As illustrated in FIG. 4, the system 100 is able to automatically create new storage resources when they are needed. Those of ordinary skill in the art and others will appreciate that FIGS. 4 and 5 represent only exemplary sets of communication interactions between the devices of the online storage system 100 and that various changes can be made therein without departing from the spirit and scope of the invention. For example, the hashing of RIDs to form bucket values may not always be necessary, particularly if RIDs are associated with specific LPS partitions and/or servers (e.g., if there were only a single LPS partition and/or LPS server 300 associated with an RID).
  • The communication interactions illustrated in FIGS. 4 and 5 between various devices of the online storage system 100 may employ any conventional communications form. In one exemplary embodiment of the present invention, communications are formatted using the Simple Object Access Protocol (“SOAP”) with Extensible Markup Language (“XML”) formatted instructions and/or parameters. An exemplary XML formatted instruction for a resource creation request (RAdd) is illustrated by the following code in Table 1:
  • TABLE 1
    POST /rservice/rservice.asmx HTTP/1.1
    Host: contacts.msn.com
    Content-Type: text/xml; charset=utf-8
    Content-Length: length
    SOAPAction: "http://www.msn.com/webservices/Resource/RAdd"

    <?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
     <soap:Header>
      <RApplicationHeader xmlns="http://www.msn.com/webservices/Resource">
       <ApplicationId>000000000000000000010010efd4e487</ApplicationId>
       <IsMigration>0</IsMigration>
      </RApplicationHeader>
      <RAuthHeader xmlns="http://www.msn.com/webservices/Resource">
       <ManagedGroupRequest>0</ManagedGroupRequest>
      </RAuthHeader>
     </soap:Header>
     <soap:Body>
      <RAdd xmlns="http://www.msn.com/webservices/Resource">
       <rInfo>
        <name></name>
        <ownerPuid>7893478923</ownerPuid>
        <ownerEmail>ken@hotmail.com</ownerEmail>
       </rInfo>
      </RAdd>
     </soap:Body>
    </soap:Envelope>
  • Those of ordinary skill in the art and others will appreciate that the resource creation request illustrated above is merely one exemplary form of communication interaction between the devices of the online storage system 100 illustrated in FIGS. 4 and 5 and that many other forms of communication interactions are possible.
  • The storage clearinghouse 200 of the online storage system 100 described herein includes a front end server 130 that is used to manage communications between client devices 110A, 110B, 110C . . . , and one or more of the LPS servers 300 and one or more storage servers 150. FIG. 6 is a flow diagram illustrating an exemplary LPS server locating routine 600 suitable for implementation by the front end server 130 for locating an LPS server 300 able to conduct storage partition lookups based on an RID. The LPS server locating routine 600 begins at block 601 and proceeds to block 605 where an RID associated with a storage resource is obtained, i.e., received by the front end server 130. The front end server 130 hashes the RID to generate a bucket value as shown by block 610. As described above, hashing a value, such as an RID, is accomplished by processing the value using a hash function. Next, in block 615 the front end server 130 determines the location of the LPS partition associated with the bucket value generated by the hash function. Routine 600 then ends at block 699.
  • Those of ordinary skill in the art and others will appreciate that in a storage clearinghouse 200 with LPS servers 300 that have redundant LPS partitions, determining the location of the LPS partition associated with the bucket value generated by the hash function (as in block 615 above) includes determining if a primary LPS partition is available. If a primary LPS partition is unavailable, then a further determination of the location of a redundant LPS partition associated with the bucket value generated by the hash function is needed.
  • As described above with regard to FIGS. 4 and 5, after the front end server 130 has determined which LPS partition is associated with a particular bucket value (and accordingly a particular RID), the front end server 130 can communicate with the LPS server 300 that houses the associated LPS partition. The LPS server 300 is responsible for looking up which storage partition (and therefore which storage server 150) houses the storage resource identified by an RID.
  • FIG. 7 illustrates an exemplary storage partition locating routine 700 suitable for implementation by an LPS server 300 for determining the location of a storage partition associated with an RID. The storage partition locating routine 700 begins at block 701 and proceeds to block 705 where an RID for a storage resource is received from a front end server 130. Next, in block 710 the LPS server 300 looks up the storage partition for the RID in its lookup store 320 (FIG. 3). Those of ordinary skill in the art and others will appreciate that the RID received by the LPS server 300 may have the same format as the RID sent to the front end server 130 or may be a transformation of the RID, e.g., a hashed value, or other transformation of the RID received from the front end server 130. Such transformations may be desirable if the RID received at the front end server 130 is not appropriate for performing an efficient lookup of the location of a storage partition in a lookup store 320. For example, if the RID is an arbitrary textual name assigned by a user, a hashed value of the arbitrarily assigned name would provide a more efficient “key” for looking up a storage partition in a conventional lookup store 320. Alternatively, as those of ordinary skill in the art and others will appreciate, the RID may in fact be identical or closely related (e.g., a zero-extended value that brings the RID up to a uniform number of digits) to the RID received by the front end server 130.
  • Next, processing continues in decision block 715 where a determination is made whether the storage partition was located in the lookup store 320. If so, processing proceeds to block 720 where the location of the storage partition associated with the RID is sent to the front end server 130. Those of ordinary skill in the art and others will appreciate that in a storage clearinghouse 200 with storage servers 150 that have redundant storage partitions, locating and sending a location of the storage partition associated with the RID, as in blocks 710 and 720, also includes locating and sending any locations of redundant storage partitions associated with the RID. Then, the storage partition locating routine 700 ends at block 799. If, however, in decision block 715 a determination was made that the storage partition was not located in block 710, processing proceeds to block 725 where a message is sent to the front end server 130 indicating that no storage resource was located. The storage partition locating routine 700 then ends at block 799.
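  • A minimal sketch of this lookup, assuming the lookup store is a simple in-memory mapping (the names lookup_store and locate_partition are hypothetical and not taken from the patent), follows:
    # RID -> primary partition location plus any redundant partition locations.
    lookup_store = {
        "rid-0001": {"primary": "150A/P1", "redundant": ["150B/R1"]},
    }

    def locate_partition(rid):
        # Blocks 715/720: return primary and redundant locations if mapped;
        # block 725: return None so the caller reports "no storage resource".
        return lookup_store.get(rid)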
  • If a storage partition on a storage server 150 was identified to the front end server 130 as housing the storage resource associated with an RID, the front end server 130 may then communicate storage operations to the storage server 150 to be performed on the storage resource associated with the RID at the storage partition. The storage server 150 processes storage operation requests from the front end server.
  • FIG. 8 illustrates an exemplary storage operation performing routine 800 on a storage server 150. The storage operation performing routine 800 is an alternative to the communication interactions shown in FIG. 4 wherein the LPS server determines if a storage resource is available. In FIG. 8, this determination is made by a storage server. Routine 800 begins at block 801 and proceeds to block 805 where a storage operation request is received along with an RID from a calling server. The storage operation request may be received from either an LPS server 300 (in the case of a resource creation request) or from the front end server 130 (for other storage operation requests). In decision block 810, a determination is made by the storage server whether a storage resource associated with the RID is available by checking the partitions of the storage server 150. If a storage resource associated with the RID is available, processing continues to block 820 where the storage operation is performed on the storage resource associated with the RID. Those of ordinary skill in the art and others will appreciate that the storage operation can be any one of a number of different storage operations performed on storage resources including, but in no way limited to, read requests, write requests, create requests, update requests, delete requests, copy requests, insertion requests, backup requests, restore requests, and the like.
  • Those of ordinary skill in the art and others will appreciate that in a storage clearinghouse 200 with storage servers 150 that have redundant storage partitions, determining if a partition associated with an RID is available, as in decision block 810, includes determining if a primary storage partition is available. If a primary storage partition is unavailable, then a further determination is made whether any redundant storage partitions associated with the RID are available, and only if none are available is a determination made that the storage partition is not available.
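  • The decision logic of blocks 810, 820, 830 and 850 might be sketched as follows; this is illustrative only, and the Partition class and function names are hypothetical rather than taken from the patent.
    from dataclasses import dataclass, field

    @dataclass
    class Partition:
        name: str
        available: bool = True
        resources: dict = field(default_factory=dict)  # RID -> stored value

    def handle_storage_request(rid, operation, value, partitions, is_create):
        # partitions is the primary partition followed by any redundant copies.
        for p in partitions:  # block 810: find an available partition
            if p.available and rid in p.resources:
                if operation == "read":  # block 820: perform the operation
                    return p.resources[rid]
                p.resources[rid] = value
                return "ok"
        if is_create:
            # Block 830 -> subroutine 900: the creation request arrives already
            # addressed to the partition chosen by the selection routine (FIG. 10).
            partitions[0].resources[rid] = value
            return "created in " + partitions[0].name
        return "no storage resource available"  # block 850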
  • In one exemplary embodiment, the storage resource is hierarchical in nature. Storage operation requests can be directed to specific levels in the storage hierarchy. One example of a hierarchical storage resource is a university storage resource comprising department records, course records, professor records, students enrolled in course records and student records. Storage operations might apply to any record level in this example. For example, a new department record might be created. This creation would, in turn, require course records, professor records, and students enrolled in course records to be created and added to the department records in a hierarchical fashion. The above example is merely meant to be illustrative of one form of storage resource. Those of ordinary skill in the art and others will appreciate that many other forms of storage resources, including, but not limited to, flat files, databases, and linked lists may form storage resources suitable for storage in partitions on the storage server 150.
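  • Purely as an illustration (not taken from the patent), the university example can be pictured as nested records, with a storage operation addressed to a particular level of the hierarchy:
    # Hypothetical nested-record view of a hierarchical storage resource.
    university = {
        "departments": {
            "Computer Science": {
                "courses": {
                    "CS101": {
                        "professor": "Prof. Smith",
                        "students_enrolled": ["student-17", "student-42"],
                    },
                },
            },
        },
    }

    # Example: creating a new department record also creates the empty
    # course-level structure beneath it, in a hierarchical fashion.
    university["departments"]["History"] = {"courses": {}}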
  • After a storage operation is performed on the storage resource in block 820, the response (if any) to the storage operation is sent back, in block 825, to the front end server 130. Then, the storage operation performing routine 800 ends at block 899.
  • If in decision block 810 it is determined that a storage resource associated with the RID is not available, processing proceeds to decision block 830 where a determination is made whether the storage operation request is a request to create a new storage resource. If so, processing proceeds to subroutine block 900 where a new resource is created. An exemplary new storage resource creation subroutine 900 is illustrated in FIG. 9 and described below.
  • After the new storage resource creation subroutine 900 returns, processing proceeds to block 835 where the location of the new storage resource associated with the RID is sent to the LPS server 300 (where it is saved in a lookup store 320 that associates storage resource locations with the RIDs). Processing then ends at block 899. If, however, in decision block 830 it was determined that the request was not a request to create a storage resource, processing proceeds to block 850 where a response indicating that no storage resource is available is sent to the calling server (either the front end server 130 or an LPS server 300). Then processing ends at block 899.
  • FIG. 9 illustrates an exemplary storage resource creation subroutine 900 suitable for use in FIG. 8. The exemplary storage resource creation subroutine 900 begins at block 901 and proceeds to block 905 where a storage resource creation request for an RID is received. Next, in block 910 a new storage resource is created in a storage server 150. In block 915, the association of the new storage resource to the RID is stored in either an LPS server or a storage server depending on how the invention is implemented. The storage of the association is used to identify the partition where the new resource is stored when subsequent storage operation requests are received. Then, the storage resource creation subroutine 900 ends at block 999, returning the location of the new storage resource's storage partition.
  • In one exemplary embodiment of the present invention, creating a new storage resource also involves choosing the location of the new partition, i.e., which storage server is to provide the partition that stores the new storage resource. FIG. 10 illustrates an exemplary new storage partition selection routine 1000. The new storage partition selection routine 1000 begins in block 1001 and proceeds to block 1005 where an RID to be associated with a new storage resource is received from a front end server 130. In one exemplary embodiment, the RID is obtained from an explicit storage resource creation request. Alternatively, an LPS server 300 may automatically initiate the creation of a new storage resource when an RID that is not associated with any storage partition is received.
  • Next, in block 1010 a random number “R” is generated. In one exemplary embodiment of the present invention, the random number R is any real number between zero and one. In block 1015 the storage partitions on the storage servers 150 are ranked in ascending order according to a load balancing factor (“LBF”) for each storage partition. LBFs are determined values (or values arbitrarily set to increase or decrease a storage partition's usage) that represent a current load on a storage partition. LBF values may be determined using a number of different factors, including, but not limited to, mapping numbers (number of storage resources on a storage partition), mapping accesses (number of accesses to storage resources on a partition), assigned manual weighting values (e.g., arbitrarily set weighting values or weighting values set according to an LBF value desired for a particular storage partition), or some combination thereof.
  • One exemplary embodiment of the present invention calculates LBF values for storage partitions as follows: given “n” partitions (P1, P2, . . . Pn) with mapping counts C1, C2, . . . Cn, the LBF for any storage partition “m” can be calculated as LBFm = (1/Cm)/(1/C1 + 1/C2 + . . . + 1/Cn). For example, given three partitions with proportionate mapping counts of C1=20%, C2=30% and C3=50%, then LBF1 = (1/20)/(1/20 + 1/30 + 1/50) ≈ 48%, LBF2 = (1/30)/(1/20 + 1/30 + 1/50) ≈ 32% and LBF3 = (1/50)/(1/20 + 1/30 + 1/50) ≈ 20%. Of course, this is merely one possible method of calculating LBF values, and those of ordinary skill in the art will appreciate that other methods of calculating LBF values are possible.
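  • As a quick check of the arithmetic above, the following Python sketch (illustrative only; the function name compute_lbfs is hypothetical) computes the same LBF values from the proportionate mapping counts:
    def compute_lbfs(mapping_counts):
        # LBFm = (1/Cm) / (1/C1 + 1/C2 + ... + 1/Cn)
        inverses = [1.0 / c for c in mapping_counts]
        total = sum(inverses)
        return [inv / total for inv in inverses]

    # Worked example from the text: mapping counts of 20%, 30% and 50%.
    print(compute_lbfs([0.20, 0.30, 0.50]))
    # -> [0.4838..., 0.3225..., 0.1935...], i.e. roughly 48%, 32% and 20%.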
  • In block 1020, a storage partition is located where the sum of all lower ranked LBFs is less than or equal to R, and the sum of all lower ranked LBFs plus the LBF of the located partition is greater than R. Next, in block 1025, a new resource creation request is sent to the located storage partition. The storage server containing the located storage partition processes the new resource creation request (see FIGS. 8-9). The location of the new storage resource's partition is received back at the LPS server 300. See block 1030. The new partition location is associated with the previously received RID and the association is saved (in block 1035) in the lookup store 320 of the LPS server 300. The storage partition selection routine 1000 then ends at block 1099.
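  • A minimal sketch of the selection step of blocks 1010 and 1020 follows; it is illustrative only, assumes the partitions are already ranked in ascending LBF order, and uses the hypothetical name select_partition.
    import random

    def select_partition(partitions, lbfs):
        # partitions and lbfs are ordered by ascending LBF (block 1015).
        r = random.random()  # block 1010: random number R in [0, 1)
        cumulative = 0.0     # sum of all lower ranked LBFs
        for partition, lbf in zip(partitions, lbfs):
            # Block 1020: cumulative <= R and cumulative + lbf > R.
            if cumulative <= r < cumulative + lbf:
                return partition
            cumulative += lbf
        return partitions[-1]  # guard against floating point rounding

    # Usage: lightly loaded partitions (larger LBFs) are selected more often.
    chosen = select_partition(["P3", "P2", "P1"], [0.1935, 0.3226, 0.4839])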
  • As will be appreciated by those skilled in the art, the new storage partition selection routine 1000 described above has an inherent load balancing effect because storage partitions are chosen for newly created storage resources based on the load balance factors (LBFs) of each storage partition. Those of ordinary skill in the art and others will also appreciate that the new storage partition selection routine 1000 described above should be taken as exemplary, not limiting. Many other new storage partition selection routines may be used without departing from the spirit and scope of the present invention. For example, the ordering of partition LBFs may be reversed, with an equivalent reversal of the conditions the random number R must meet. Still other variations will be apparent to those of ordinary skill in the art.
  • In another exemplary embodiment of the present invention, the location of a storage resource may be moved from one partition to another partition (e.g., to a partition on a server with more available storage space, with a faster connection, with more reliable storage hardware, etc.). Moving a storage resource from one storage partition to another storage partition involves briefly locking the mapping of the RID to the storage resource's storage partition, but does not require locking any other storage resource's mapping (as a hash-based allocation would). When a storage resource is moved to a new partition, the LPS server 300 associated with the RID of the storage resource updates its lookup store 320 to map the RID of the storage resource to its new storage partition location.
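  • Illustratively (and not as part of the patent), the move might be sketched as updating a single entry in the lookup store under a brief, per-mapping lock; the names below are hypothetical.
    import threading

    lookup_store = {"rid-0001": "150A/P1"}  # RID -> storage partition location
    mapping_locks = {}                      # one hypothetical lock per RID mapping

    def move_resource(rid, new_partition_location):
        # Only the mapping being moved is locked; lookups for every other RID
        # proceed undisturbed (unlike a hash-based allocation).
        lock = mapping_locks.setdefault(rid, threading.Lock())
        with lock:
            lookup_store[rid] = new_partition_location

    move_resource("rid-0001", "150D/P2")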
  • In one exemplary embodiment of the present invention, the lookup store 320 comprises a lookup table containing resource and partition information as shown below in Table 2:
  • TABLE 2
    Name               Type              Length
    tbl_Partition:
    PartitionID        smallint          2
    PartitionName      nvarchar          64
    LoadBalanceFactor  float             8
    ProvisionTo        bit               1
    MappingCount       int               4
    LastModifiedDate   datetime          8
    tbl_PartitionMapping:
    ApplicationID      smallint          2
    RID                uniqueidentifier  16
    PartitionID        smallint          2
    Status             tinyint           1
    Hashbucket         smallint          2
    LastModifiedDate   datetime          8
  • The lookup store 320 stores a list of partitions described by “tbl_Partition” entries that are mapped to “tbl_PartitionMapping” entries for storage resources that are associated with one of the partitions listed in the lookup store 320 (note the “PartitionID” field in the tbl_PartitionMapping entry). Those of ordinary skill in the art will also appreciate that a storage resource entry of a lookup store 320 using a tbl_PartitionMapping entry, as listed above, also includes an “ApplicationID” field that designates a type of application for use with a storage resource. It will also be appreciated that including an ApplicationID enables embodiments of the present invention to store multiple types of storage resources for multiple types of applications. Such a multiple application type/resource embodiment of the present invention is substantially similar to a single application type/resource type embodiment of the present invention; however, in addition to an RID used to designate a storage resource, an ApplicationID would also be used. Accordingly, in such an embodiment an RID could be associated with multiple storage resources if each storage resource had a separate ApplicationID. For example, a network-based digital photograph storing system might store digital images as well as image descriptions for each digital image. In such a system the digital images and the image descriptions would have the same RID; however, each could have different ApplicationIDs and may even be stored in a separate partition. It will also be apparent to those skilled in the art that an embodiment combining the RID and ApplicationID is also possible; however, such an embodiment is substantially similar to the single application embodiment of the present invention. The above-described embodiment should be taken as illustrative and not limiting.
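  • As an illustrative sketch only, a multiple-application lookup store can be pictured as a mapping keyed by an (ApplicationID, RID) pair, so the same RID can resolve to different partitions for different application types; the ApplicationID values and partition names below are hypothetical.
    # (ApplicationID, RID) -> storage partition location
    partition_mapping = {
        (1001, "photo-rid-42"): "150B/P3",  # e.g., the digital image itself
        (1002, "photo-rid-42"): "150E/P1",  # e.g., the image description
    }

    def locate(application_id, rid):
        return partition_mapping.get((application_id, rid))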
  • While the presently preferred embodiment of the invention has been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (20)

1. A computer implemented method comprising:
obtaining from a front end server, a resource identifier to be associated with a storage resource in a multiple storage server system;
utilizing the resource identifier to determine in a lookup store of a lookup partitioning service server that no storage resource exists;
calculating a load balancing factor for each of a plurality of storage partitions of the multiple storage server system;
determining, using the load balancing factors, at least one storage partition of a storage server in which a new storage resource should be created;
creating the new storage resource in the at least one storage partition;
associating the resource identifier with the at least one storage partition in the lookup store; and
granting access to the new storage resource by providing a location of the at least one storage partition of the storage server to the front end server.
2. A computer implemented method as recited in claim 1, wherein the load balancing factor is based on a mapping number.
3. A computer implemented method as recited in claim 1, wherein the load balancing factor is based on a count of mapping accesses.
4. A computer implemented method as recited in claim 1, wherein the load balancing factor is based on a manual weighting value.
5. A computer implemented method as recited in claim 1, wherein the new storage resource is directed to a level in a storage hierarchy.
6. A computer implemented method as recited in claim 1, wherein each storage server in the multiple storage server system comprises a primary storage resource partition and at least one redundant storage resource partition.
7. A computer implemented method as recited in claim 1, further comprising associating via a bucket value generated by a hash function a primary lookup partitioning service partition and its corresponding redundant lookup partitioning service partition.
8. One or more computer-readable storage media storing computer executable instructions that, when executed by a processor, configure a lookup partitioning service server to perform acts comprising:
receiving a resource identifier to be associated with a storage resource in a multiple storage server system;
utilizing the resource identifier to determine in a lookup store of the lookup partitioning service server that no storage resource exists;
calculating a load balancing factor for each of a plurality of storage partitions of the multiple storage server system;
determining, using the load balancing factors, at least one storage partition in which a new storage resource should be created;
creating the new storage resource in the at least one storage partition;
associating the resource identifier with the at least one storage partition in the lookup store; and
granting access to the new storage resource by providing a location of the at least one storage partition.
9. One or more computer-readable storage media as recited in claim 8, wherein the load balancing factor is based on a mapping number.
10. One or more computer-readable storage media as recited in claim 8, wherein the load balancing factor is based on a count of mapping accesses.
11. One or more computer-readable storage media as recited in claim 8, wherein the load balancing factor is based on a manual weighting value.
12. One or more computer-readable storage media as recited in claim 8, wherein the new storage resource is directed to a level in a storage hierarchy.
13. A system comprising:
a front-end server;
a lookup partitioning service server; and
a storage server;
the lookup partitioning service server comprising a primary lookup partitioning service partition and at least one redundant lookup partitioning service partition, and the lookup partitioning service server being configured to determine a location of a storage partition on the storage server based at least on a resource identifier;
the storage server comprising a primary storage resource partition and at least one redundant storage resource partition; and
the front-end server being configured to:
receive the resource identifier from a client;
communicate with the lookup partitioning service server to locate a storage resource; and
request a storage operation on behalf of the client.
14. The system of claim 13, wherein the lookup partitioning service server is further configured to:
utilize the resource identifier to determine that no storage resource exists;
calculate a load balancing factor for each of a plurality of storage partitions;
determine, using the load balancing factors, at least one storage partition in which a new storage resource should be created; and
create the new storage resource in the at least one storage partition.
15. The system of claim 13, wherein the load balancing factor is based on a mapping number.
16. The system of claim 13, wherein the load balancing factor is based on a count of mapping accesses.
17. The system of claim 13, wherein the load balancing factor is based on a manual weighting value.
18. The system of claim 14, wherein the new storage resource is directed to a level in a storage hierarchy.
19. The system of claim 13, wherein each primary lookup partitioning service partition and its corresponding redundant lookup partitioning service partition are associated with a bucket value generated by a hash function.
20. The system of claim 13, wherein the corresponding redundant lookup partitioning service partition is utilized when the primary lookup partitioning service partition is unavailable.
US12/689,984 2003-06-25 2010-01-19 Lookup Partitioning Storage System and Method Abandoned US20100121855A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/689,984 US20100121855A1 (en) 2003-06-25 2010-01-19 Lookup Partitioning Storage System and Method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/606,626 US7676551B1 (en) 2003-06-25 2003-06-25 Lookup partitioning storage system and method
US12/689,984 US20100121855A1 (en) 2003-06-25 2010-01-19 Lookup Partitioning Storage System and Method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/606,626 Division US7676551B1 (en) 2003-06-25 2003-06-25 Lookup partitioning storage system and method

Publications (1)

Publication Number Publication Date
US20100121855A1 true US20100121855A1 (en) 2010-05-13

Family

ID=41785095

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/606,626 Active 2028-02-26 US7676551B1 (en) 2003-06-25 2003-06-25 Lookup partitioning storage system and method
US12/689,984 Abandoned US20100121855A1 (en) 2003-06-25 2010-01-19 Lookup Partitioning Storage System and Method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/606,626 Active 2028-02-26 US7676551B1 (en) 2003-06-25 2003-06-25 Lookup partitioning storage system and method

Country Status (1)

Country Link
US (2) US7676551B1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8762552B2 (en) * 2005-04-13 2014-06-24 Brocade Communications Systems, Inc. Fine granularity access control for a storage area network
US8392482B1 (en) 2008-03-31 2013-03-05 Amazon Technologies, Inc. Versioning of database partition maps
US8386540B1 (en) * 2008-03-31 2013-02-26 Amazon Technologies, Inc. Scalable relational database service
US8452731B2 (en) * 2008-09-25 2013-05-28 Quest Software, Inc. Remote backup and restore
US20110289046A1 (en) * 2009-10-01 2011-11-24 Leach R Wey Systems and Methods for Archiving Business Objects
US9003086B1 (en) 2012-10-27 2015-04-07 Twitter, Inc. Dynamic distribution of replicated data
US20140172506A1 (en) * 2012-12-17 2014-06-19 Microsoft Corporation Customer segmentation
US9553822B2 (en) * 2013-11-12 2017-01-24 Microsoft Technology Licensing, Llc Constructing virtual motherboards and virtual storage devices
US9477699B2 (en) 2013-12-30 2016-10-25 Sybase, Inc. Static row identifier space partitioning for concurrent data insertion in delta memory store

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7136903B1 (en) * 1996-11-22 2006-11-14 Mangosoft Intellectual Property, Inc. Internet-based shared file service with native PC client access and semantics and distributed access control
US6351775B1 (en) * 1997-05-30 2002-02-26 International Business Machines Corporation Loading balancing across servers in a computer network
US6286001B1 (en) * 1999-02-24 2001-09-04 Doodlebug Online, Inc. System and method for authorizing access to data on content servers in a distributed network
US6523130B1 (en) * 1999-03-11 2003-02-18 Microsoft Corporation Storage system having error detection and recovery
US20010037379A1 (en) * 2000-03-31 2001-11-01 Noam Livnat System and method for secure storage of information and grant of controlled access to same
US6842784B1 (en) * 2000-06-27 2005-01-11 Emc Corporation Use of global logical volume identifiers to access logical volumes stored among a plurality of storage elements in a computer storage system
US6857012B2 (en) * 2000-10-26 2005-02-15 Intel Corporation Method and apparatus for initializing a new node in a network
US6985956B2 (en) * 2000-11-02 2006-01-10 Sun Microsystems, Inc. Switching system
US20020156887A1 (en) * 2001-04-18 2002-10-24 Hitachi, Ltd. Storage network switch
US20030014503A1 (en) * 2001-07-12 2003-01-16 Arnaud Legout Method and apparatus for providing access of a client to a content provider server under control of a resource locator server
US7587426B2 (en) * 2002-01-23 2009-09-08 Hitachi, Ltd. System and method for virtualizing a distributed network storage as a single-view file system
US7406473B1 (en) * 2002-01-30 2008-07-29 Red Hat, Inc. Distributed file system using disk servers, lock servers and file servers
US7444414B2 (en) * 2002-07-10 2008-10-28 Hewlett-Packard Development Company, L.P. Secure resource access in a distributed environment

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9098525B1 (en) * 2012-06-14 2015-08-04 Emc Corporation Concurrent access to data on shared storage through multiple access points
US10366218B2 (en) 2013-03-22 2019-07-30 Nok Nok Labs, Inc. System and method for collecting and utilizing client data for risk assessment during authentication
US11929997B2 (en) 2013-03-22 2024-03-12 Nok Nok Labs, Inc. Advanced authentication techniques and applications
US10282533B2 (en) 2013-03-22 2019-05-07 Nok Nok Labs, Inc. System and method for eye tracking during authentication
US10762181B2 (en) 2013-03-22 2020-09-01 Nok Nok Labs, Inc. System and method for user confirmation of online transactions
US10776464B2 (en) 2013-03-22 2020-09-15 Nok Nok Labs, Inc. System and method for adaptive application of authentication policies
US10706132B2 (en) * 2013-03-22 2020-07-07 Nok Nok Labs, Inc. System and method for adaptive user authentication
US10798087B2 (en) 2013-10-29 2020-10-06 Nok Nok Labs, Inc. Apparatus and method for implementing composite authenticators
US10326761B2 (en) 2014-05-02 2019-06-18 Nok Nok Labs, Inc. Web-based user authentication techniques and applications
CN106055435A (en) * 2015-04-17 2016-10-26 Hgst荷兰公司 Verification of storage media upon deployment
US9934871B2 (en) * 2015-04-17 2018-04-03 Western Digital Technologies, Inc. Verification of storage media upon deployment
US20180226137A1 (en) * 2015-04-17 2018-08-09 Western Digital Technologies, Inc. Verification of storage media upon deployment
US10607714B2 (en) * 2015-04-17 2020-03-31 Western Digital Technologies, Inc. Verification of storage media upon deployment
US20160307646A1 (en) * 2015-04-17 2016-10-20 HGST Netherlands B.V. Verification of storage media upon deployment
US10637853B2 (en) 2016-08-05 2020-04-28 Nok Nok Labs, Inc. Authentication techniques including speech and/or lip movement analysis
US10769635B2 (en) 2016-08-05 2020-09-08 Nok Nok Labs, Inc. Authentication techniques including speech and/or lip movement analysis
CN106445683A (en) * 2016-09-12 2017-02-22 北京中电普华信息技术有限公司 Method and device for distributing server resource
US10237070B2 (en) 2016-12-31 2019-03-19 Nok Nok Labs, Inc. System and method for sharing keys across authenticators
EP3631732A4 (en) * 2017-05-31 2021-01-06 Intuit Inc. System for managing transactional data
US11868995B2 (en) 2017-11-27 2024-01-09 Nok Nok Labs, Inc. Extending a secure key storage for transaction confirmation and cryptocurrency
US11831409B2 (en) 2018-01-12 2023-11-28 Nok Nok Labs, Inc. System and method for binding verifiable claims
US10732869B2 (en) * 2018-09-20 2020-08-04 Western Digital Technologies, Inc. Customizing configuration of storage device(s) for operational environment
US20200097200A1 (en) * 2018-09-20 2020-03-26 Western Digital Technologies, Inc. Customizing configuration of storage device(s) for operational environment
US11792024B2 (en) 2019-03-29 2023-10-17 Nok Nok Labs, Inc. System and method for efficient challenge-response authentication

Also Published As

Publication number Publication date
US7676551B1 (en) 2010-03-09

Similar Documents

Publication Publication Date Title
US20100121855A1 (en) Lookup Partitioning Storage System and Method
EP0329779B1 (en) Session control in network for digital data processing system which supports multiple transfer protocols
JP5090450B2 (en) Method, program, and computer-readable medium for updating replicated data stored in a plurality of nodes organized in a hierarchy and linked via a network
US8190741B2 (en) Customizing a namespace in a decentralized storage environment
JP3434276B2 (en) Method and apparatus for expressing and applying network topology data
US8151003B2 (en) System and method for routing data by a server
US20050102297A1 (en) Directory system
US20070299804A1 (en) Method and system for federated resource discovery service in distributed systems
US8856068B2 (en) Replicating modifications of a directory
US7882130B2 (en) Method and apparatus for requestor sensitive role membership lookup
US20110106822A1 (en) Virtual List View Support in a Distributed Directory
EP1589691B1 (en) Method, system and apparatus for managing computer identity
KR20070011413A (en) Methods, systems and programs for maintaining a namespace of filesets accessible to clients over a network
US20040078457A1 (en) System and method for managing network-device configurations
MX2007015188A (en) Extensible and automatically replicating server farm configuration management infrastructure.
US6560591B1 (en) System, method, and apparatus for managing multiple data providers
Deugo Mobile agent messaging models
US20100070366A1 (en) System and method for providing naming service in a distributed processing system
US20020099768A1 (en) High performance client-server communication system
JP3994059B2 (en) Clustered computer system
US7188150B2 (en) System and method for sharing, searching, and retrieving web-based educational resources
CN107918527A (en) Memory allocation method and device and file memory method and device
US20010039549A1 (en) Object-oriented interface to LDAP directory
US20070266369A1 (en) Methods, systems and computer program products for retrieval of management information related to a computer network using an object-oriented model
JP3598522B2 (en) Distributed database management device

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DALIA, APURVA F.;HARRY, CRAIG ALLEN;DANI, NISHANT;AND OTHERS;REEL/FRAME:023812/0874

Effective date: 20030625

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0001

Effective date: 20141014