US20120059799A1 - Managing objects for a data file - Google Patents

Managing objects for a data file

Info

Publication number
US20120059799A1
Authority
US
United States
Prior art keywords
file
data
servers
metadata
objects
Prior art date
Legal status
Abandoned
Application number
US13/295,485
Inventor
Fernando Oliveira
Stephen Fridella
Rossen Dimitrov
Patrick Eaton
Current Assignee
EMC Corp
Original Assignee
EMC Corp
Application filed by EMC Corp
Priority to US13/295,485
Assigned to EMC CORPORATION. Assignors: OLIVEIRA, FERNANDO; DIMITROV, ROSSEN; EATON, PATRICK; FRIDELLA, STEPHEN
Publication of US20120059799A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10: File systems; File servers
    • G06F 16/13: File access structures, e.g. distributed indices

Definitions

  • This application relates to the field of storing data, and more particularly to the field of data storage services in a scalable high capacity system.
  • Rich digital media and unstructured business content have unique characteristics and storage requirements that are different from those of structured data types (e.g., database records), for which many of today's storage systems were specially designed.
  • Many conventional storage systems are highly optimized to deliver high performance I/O for small chunks of data.
  • These systems were designed to support gigabyte- and terabyte-sized information stores.
  • Rich digital media and unstructured business content, in contrast, have greater capacity requirements (petabyte rather than gigabyte/terabyte-sized systems), less predictable growth and access patterns, large file sizes, billions and billions of objects, high throughput requirements, single-writer/multiple-reader access patterns, and a need for multi-platform accessibility.
  • Conventional storage systems have met these needs in part by using specialized hardware platforms to achieve required levels of performance and reliability.
  • Unfortunately, the use of specialized hardware results in higher customer prices and may not support volume economics as the capacity demands grow large—a differentiating characteristic of rich digital media and unstructured business content.
  • tiered storage attempts to reduce the capital and operational costs associated with keeping all information on a single high-cost storage tier.
  • tiered storage comes with a complex set of decisions surrounding technology, data durability, functionality and even storage vendor.
  • Tiered storage solutions may introduce unrelated platforms, technologies, and software titles having non-zero operational costs and management requirements that become strained as the quantity of data increases.
  • Tiered storage may cause data replica incoherence, which results in multiple, disjoint copies of information existing across the tiers of storage.
  • Storage management software handling data backup and recovery may make multiple copies of information sets on each storage tier (e.g., snapshots, backup sets, etc.).
  • Information Life-cycle Management (ILM) software dealing with information migration from one tier to another may create additional and often overlapping copies of the data.
  • Replication software may make an extra copy of the information set within a particular tier in order to increase performance to accessing applications.
  • Each of these functions typically runs autonomously from one another. The software may be unable to realize and/or take advantage of the multiple replicas of the same information set.
  • A two-petabyte information store may be composed of eight thousand 250-gigabyte disk drives. Disk failures should be handled in a different manner in a system of this scale so that the system continues to operate relatively smoothly whenever one or only a few of the disk drives fail.
  • managing file objects in a data storage system includes providing a plurality of metadata servers, each metadata server having information for only a subset of file objects in the storage system, each subset for each of the metadata servers being less than a total number of file objects for the system and includes providing at least one metadata location server, where an entity accessing a specific file object of the system determines which of the metadata servers contains data for the specific file object by first accessing the at least one metadata location server.
  • Information for the subset of file objects may include a table having file identifiers and metadata location information.
  • the metadata location information may point to data storage for a metadata object for a file.
  • the table may also include lease information that indicates entities that have read permission for the file object and entities that have write permission for the file object.
  • File objects may be provided with unique object identifiers.
  • the at least one metadata location server may determine an appropriate metadata server to use for a file object according to an object identification value for the file object.
  • At least some of the metadata servers may maintain a table that correlates a subset of the object identification values with file objects.
  • At least some of the metadata servers may be provided with a unique subset of object identification values for assigning to new file objects.
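  • The location lookup described above lends itself to a short illustration. The following Python sketch (all names hypothetical; the patent does not prescribe an implementation) models a metadata location server holding a table that correlates object-identifier ranges with metadata servers, so that a client resolves the responsible metadata server before requesting file metadata:

    import bisect

    class MetadataLocationServer:
        def __init__(self):
            # Sorted (start, end, server) tuples; each metadata server owns
            # a unique range of object IDs, which it may also draw from when
            # assigning identifiers to new file objects.
            self._ranges = []

        def add_range(self, start, end, server):
            self._ranges.append((start, end, server))
            self._ranges.sort()

        def find_server(self, object_id):
            # Binary-search the table for the range containing object_id.
            starts = [r[0] for r in self._ranges]
            i = bisect.bisect_right(starts, object_id) - 1
            if i >= 0:
                start, end, server = self._ranges[i]
                if start <= object_id <= end:
                    return server
            raise KeyError("no metadata server for object %d" % object_id)

    mls = MetadataLocationServer()
    mls.add_range(0, 0xFFFF, "mdserver-a")
    mls.add_range(0x10000, 0x1FFFF, "mdserver-b")
    print(mls.find_server(0x12345))  # -> mdserver-b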
  • computer software provided in a computer-readable medium, manages file objects in a data storage system.
  • The software includes executable code, provided on a plurality of metadata servers, that maintains information for only a subset of file objects in the storage system, each subset for each of the metadata servers being less than a total number of file objects for the system, and includes executable code, provided on at least one metadata location server, that provides an indication of which of the metadata servers contains data for a specific file object in response to a request therefor provided to the at least one metadata location server.
  • Information for the subset of file objects may include a table having file identifiers and metadata location information.
  • the metadata location information may point to data storage for a metadata object for a file.
  • The table may also include lease information that indicates entities that have read permission for the file object and entities that have write permission for the file object.
  • File objects may be provided with unique object identifiers.
  • the at least one metadata location server may include executable code that determines an appropriate metadata server to use for a file object according to an object identification value for the file object.
  • At least some of the metadata servers may maintain a table that correlates a subset of the object identification values with file objects.
  • At least some of the metadata servers may be provided with a unique subset of object identification values for assigning to new file objects.
  • a data storage system includes a plurality of clients that access file objects of the storage system and a plurality of interconnected servers coupled to the clients, the servers including a plurality of metadata servers, each metadata server having information for only a subset of file objects in the storage system, each subset for each of the metadata servers being less than a total number of file objects for the system, the servers also including at least one metadata location server, where a client accessing a specific file object of the system determines which of the metadata servers contains data for the specific file object by first accessing the at least one metadata location server.
  • Information for the subset of file objects may include a table having file identifiers and metadata location information.
  • the metadata location information may point to data storage for a metadata object for a file.
  • File objects may be provided with unique object identifiers.
  • The at least one metadata location server may determine an appropriate metadata server to use for a file object according to an object identification value for the file object.
  • At least some of the metadata servers may maintain a table that correlates a subset of the object identification values with file objects.
  • At least some of the metadata servers may be provided with a unique subset of object identification values for assigning to new file objects.
  • managing file objects in a data storage system includes providing a plurality of a first type of server, each having information for only a subset of file objects in the storage system and includes providing at least one of a second type of server that provides information indicating which of the servers of the first type contains data for a specific file object by first accessing the at least one server of the second type.
  • a data storage system includes a plurality of clients that access file objects of the storage system and includes a plurality of interconnected servers coupled to the clients, the servers including a plurality of metadata servers, each metadata server having information for only a subset of file objects in the storage system, each subset for each of the metadata servers being less than a total number of file objects for the system, the servers also including at least one metadata location server, where a client accessing a specific file of the system determines which of the metadata servers contains data for the specific file object by first accessing the at least one metadata location server.
  • Managing a plurality of files includes providing at least one server having a table with a plurality of file identifiers, each of the file identifiers including a pointer to a metadata file object containing a layout storage object that indicates storage parameters for corresponding files, and includes accessing a particular one of the files by obtaining information from the at least one server to access the layout storage object corresponding to the particular file to determine synchronous and asynchronous mirrors for the file.
  • Each layout storage object may be provided as a tree, and leaves of the tree may map logical addresses of a corresponding file to data storage locations. At least one internal node of the tree may indicate that at least one leaf node thereof is a mirror of another leaf node thereof.
  • Where a mirror is asynchronous, data may be written to the first leaf node and an update message may be provided to the at least one server.
  • Managing a plurality of files may also include the at least one server placing information on a queue corresponding to the message and servicing the queue to write the data to the second leaf node.
  • Where a mirror is synchronous, data may be written to the first leaf node and to the second leaf node.
  • In response to accessing data being unsuccessful, a mirror thereof may be accessed.
  • the layout storage object may include pointers to data file objects that map a logical address space for a corresponding file to data storage space corresponding to physical data storage.
  • Each data file object may have a state indicating that the data file object is one of: current, an out-of-date mirror of another data object, immutable, and invalid.
  • In response to an attempt to write to a data file object that is immutable, new storage space may be allocated and the data may be written to the new storage space.
  • In response to an attempt to read a data file object that is invalid, zeros may be returned.
  • computer software provided in a computer-readable medium, manages a plurality of files.
  • The software includes executable code provided in at least one server that manages a table with a plurality of file identifiers, each of the file identifiers including a pointer to a metadata file object containing a layout storage object that indicates storage parameters for corresponding files, and includes executable code that accesses a particular one of the files by obtaining information from the at least one server to access the layout storage object corresponding to the particular file to determine synchronous and asynchronous mirrors for the file.
  • Each layout storage object may be provided as a tree and leaves of the tree may map logical addresses of a corresponding file to data storage locations.
  • At least one internal node of the tree may indicate that at least one leaf node thereof is a mirror of another leaf node thereof.
  • Where a mirror is asynchronous, executable code may cause the data to be written to the first leaf node and an update message to be provided to the at least one server.
  • Executable code at the at least one server may place information on a queue corresponding to the message and may service the queue to write the data to the second leaf node.
  • Where a mirror is synchronous, executable code may cause the data to be written to the first leaf node and to the second leaf node.
  • the computer software may also include executable code that, in response to accessing data being unsuccessful, accesses a mirror thereof.
  • the layout storage object may include pointers to data file objects that map a logical address space for a corresponding file to data storage space corresponding to physical data storage. Each data file object may have a state indicating that the data file object is one of: current, an out-of-date mirror of another data object, immutable, and invalid.
  • the computer software may also include executable code that allocates new storage space and writes data to the new storage space in response to an attempt to write to a data file object that is immutable.
  • the computer software may also include executable code that causes zeros to be returned in response to an attempt to read a data file object that is invalid.
  • managing a plurality of files includes providing at least one server having a plurality of file identifiers, each of the file identifiers including a pointer to a metadata file object containing a layout storage object that indicates storage parameters for corresponding files and includes accessing a particular one of the files by obtaining information from the at least one server to access the layout storage object corresponding to the particular file to determine how data for the file is stored and retrieved.
  • A data storage system includes a plurality of servers, where at least one server has a table with a plurality of file identifiers, each of the file identifiers including a pointer to a metadata file object containing a layout storage object that indicates storage parameters for corresponding files, and includes a plurality of clients that access a particular one of the files by obtaining information from the at least one server to access the layout storage object corresponding to the particular file to determine synchronous and asynchronous mirrors for the file.
  • a data storage system includes a plurality of servers, where at least one server has a plurality of file identifiers, each of the file identifiers including a pointer to a metadata file object containing a layout storage object that indicates storage parameters for corresponding files and includes a plurality of clients that access a particular one of the files by obtaining information from the at least one server to access the layout storage object corresponding to the particular file to determine how data for the file is stored and retrieved.
  • Managing files includes receiving a plurality of policies for a new file, where different policies may be applied according to predetermined conditions, receiving a request to create a new file, selecting an appropriate policy for the new file according to the predetermined conditions, determining if sufficient resources are available according to the appropriate policy, and obtaining file objects to create the file in response to there being sufficient resources according to the appropriate policy.
  • Policies may be received via a user management interface.
  • the predetermined conditions may include an identity of a client creating a file.
  • the appropriate policy may include information regarding mirrors for the file.
  • Determining if sufficient resources are available may include consulting a resource manager. The resource manager may receive information about system resources and maintain a table indicative thereof.
  • the system resources may include data storage areas.
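  • As a concrete illustration of the policy flow described above, the following Python sketch (hypothetical names and policy fields; the patent does not prescribe an implementation) registers policies with predicates over the identity of the creating client, selects the first applicable policy, and consults a resource manager before obtaining file objects:

    class ResourceManager:
        def __init__(self, free_bytes):
            self.free_bytes = free_bytes  # stand-in for the resource table

        def has_capacity(self, needed):
            return self.free_bytes >= needed

    def select_policy(policies, client_id):
        # 'policies' is a list of (condition, policy) pairs; a condition is
        # a predicate over the identity of the client creating the file.
        for condition, policy in policies:
            if condition(client_id):
                return policy
        raise LookupError("no applicable policy")

    def create_file(name, client_id, policies, rm):
        policy = select_policy(policies, client_id)
        needed = policy["size_hint"] * (1 + policy["mirrors"])
        if not rm.has_capacity(needed):
            raise RuntimeError("insufficient resources for policy")
        # Obtaining the metadata and data file objects would happen here.
        return {"name": name, "policy": policy}

    policies = [
        (lambda c: c.startswith("finance"), {"mirrors": 2, "size_hint": 1 << 20}),
        (lambda c: True, {"mirrors": 1, "size_hint": 1 << 20}),
    ]
    rm = ResourceManager(free_bytes=1 << 30)
    print(create_file("report.dat", "finance-01", policies, rm))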
  • computer software provided in a computer-readable medium, manages files.
  • The software includes executable code that receives a plurality of policies for a new file, wherein different policies may be applied according to predetermined conditions, executable code that receives a request to create a new file, executable code that selects an appropriate policy for the new file according to the predetermined conditions, executable code that determines if sufficient resources are available according to the appropriate policy, and executable code that obtains file objects to create the file in response to there being sufficient resources according to the appropriate policy.
  • Policies may be received via a user management interface.
  • the predetermined conditions may include an identity of a client creating a file.
  • the appropriate policy may include information regarding mirrors for the file.
  • Executable code that determines if sufficient resources are available may include executable code that consults a resource manager.
  • the resource manager may include executable code that receives information about system resources and maintains a table indicative thereof.
  • the system resources may include data storage areas.
  • A data storage system includes a plurality of clients that access files and includes a plurality of interconnected servers, coupled to the clients, a subset of the servers receiving a plurality of policies for a new file, where different policies may be applied according to predetermined conditions, a subset of the servers receiving a request to create a new file, a subset of the servers selecting an appropriate policy for the new file according to the predetermined conditions, a subset of the servers determining if sufficient resources are available according to the appropriate policy, and a subset of the servers obtaining file objects to create the file in response to there being sufficient resources according to the appropriate policy.
  • Policies may be received via a user management interface.
  • the predetermined conditions may include an identity of a client creating a file.
  • the appropriate policy may include information regarding mirrors for the file.
  • the servers include a subset of servers that are consulted to determine if sufficient resources are available.
  • The subset of servers that are consulted to determine if sufficient resources are available may receive information about system resources and maintain a table indicative thereof.
  • the system resources may include data storage areas.
  • Managing file objects includes receiving a plurality of policies for file objects, where different policies may be applied according to predetermined conditions, receiving a request to obtain new file objects, selecting an appropriate policy for the new file objects according to the predetermined conditions, determining if sufficient resources are available according to the appropriate policy, and obtaining file objects in response to there being sufficient resources according to the appropriate policy.
  • A data storage system includes a plurality of clients that access file objects and includes a plurality of interconnected servers, coupled to the clients, a subset of the servers receiving a plurality of policies for new file objects, where different policies may be applied according to predetermined conditions, a subset of the servers receiving a request to obtain new file objects, a subset of the servers selecting an appropriate policy for the new file objects according to the predetermined conditions, a subset of the servers determining if sufficient resources are available according to the appropriate policy, and a subset of the servers obtaining the file objects in response to there being sufficient resources according to the appropriate policy.
  • tracking storage resources includes providing a table containing storage resources along with capabilities and statuses thereof, updating the table in response to a change of status of a storage resource, updating the table in response to a change in capabilities of a storage resource and, in response to an inquiry for a storage resource having a particular capability, searching the table for a storage resource having the particular capability.
  • Tracking storage resources may also include adding an element to the table in response to a new resource being added to the system.
  • the capabilities may include RAID striping, data deduplication, and green operation.
  • the status may be one of: on-line, off-line, and full.
  • the storage resources may be disk drives.
  • the disk drives may be managed by data storage servers that present an OSD interface for the disk drives.
  • the table may be maintained by a resource manager server that receives information about storage resources from other servers.
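  • The resource table described above can be illustrated with a short Python sketch (the structure is hypothetical; the capability and status values follow the ones named in the text):

    resources = {}  # name -> {"capabilities": set, "status": str}

    def add_resource(name, capabilities, status="on-line"):
        resources[name] = {"capabilities": set(capabilities), "status": status}

    def update_status(name, status):
        # Status is one of: "on-line", "off-line", "full".
        resources[name]["status"] = status

    def find_with_capability(capability):
        # Return the first on-line resource having the capability.
        for name, info in resources.items():
            if capability in info["capabilities"] and info["status"] == "on-line":
                return name
        return None

    add_resource("disk-01", {"raid-striping"})
    add_resource("disk-02", {"data-deduplication", "green-operation"})
    update_status("disk-01", "full")
    print(find_with_capability("green-operation"))  # -> disk-02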
  • computer software provided in a computer-readable storage medium, tracks storage resources.
  • the software includes a table that contains storage resources along with capabilities and statuses thereof, executable code that updates the table in response to a change of status of a storage resource, executable code that updates the table in response to a change in capabilities of a storage resource, and executable code that searches the table for a storage resource having a particular capability in response to an inquiry for a storage resource having the particular capability.
  • the software may also include executable code that adds an element to the table in response to a new resource being added to the system.
  • the capabilities may include RAID striping, data deduplication, and green operation.
  • the status may be one of: on-line, off-line, and full.
  • the storage resources may be disk drives.
  • the disk drives may be managed by data storage servers that present an OSD interface for the disk drives.
  • the table may be maintained by a resource manager server that receives information about storage resources from other servers.
  • a resource manager that manages storage resources for a storage system includes a processing device and a computer-readable memory coupled to the processing device, the computer-readable memory having a table provided in a data structure and containing storage resources along with capabilities and statuses thereof, the computer-readable memory also having executable code that updates the table in response to a change of status of a storage resource, executable code that updates the table in response to a change in capabilities of a storage resource, and executable code that searches the table for a storage resource having a particular capability in response to an inquiry for a storage resource having the particular capability.
  • the computer-readable memory may also contain executable code that adds an element to the table in response to a new resource being added to the system.
  • the capabilities may include RAID striping, data deduplication, and green operation.
  • the status may be one of: on-line, off-line, and full.
  • the storage resources may be disk drives.
  • the disk drives may be managed by data storage servers that present an OSD interface for the disk drives.
  • A data storage system includes a plurality of clients and a plurality of servers coupled to the clients, where a subset of the servers manage storage resources using a table containing storage resources along with capabilities and statuses thereof, where the subset updates the table in response to a change of status of a storage resource, updates the table in response to a change in capabilities of a storage resource, and searches the table for a storage resource having a particular capability in response to an inquiry for a storage resource having the particular capability.
  • the subset of servers may add an element to the table in response to a new resource being added to the system.
  • the storage resources may be disk drives.
  • providing information to a resource manager of a data storage system includes providing information to the resource manager in response to a change in capabilities of a storage resource, providing information to the resource manager in response to a change in status of a storage resource, and providing information to the resource manager in response to adding a new storage resource.
  • the storage resources may be disk drives.
  • Accessing data file objects includes providing a file system interface on at least one client, where the file system interface allows an application running on the at least one client to make file system calls to access the data file objects, and providing a Web Services interface on the at least one client, where the Web Services interface allows an application to access file objects using the Web at the same time that file objects are being accessed through the file system interface.
  • the Web Services interface may be a SOAP interface and/or a REST interface.
  • Accessing data file objects may also include providing a direct file object interface on the at least one client, where the direct file object interface allows an application to directly access file objects at the same time that file objects are being accessed using the Web Services interface and using the file system interface.
  • Accessing data file objects may also include providing file name services that translate file names into file object identifiers, where the file name services are accessed by the file system interface and/or the Web Services interface.
  • Providing the file system interface may include providing a virtual file system in a kernel address space of the at least one client.
  • Accessing data file objects may also include providing a layout manager that manages file objects. The layout manager may be provided in the kernel address space of the at least one client or, alternatively, in user address space of the at least one client.
  • the virtual file system may access the layout manager using a bridge between kernel memory address space and user memory address space.
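  • A minimal Python sketch of the dual access paths described above, assuming that both interfaces resolve names through a shared file name service to the same underlying file objects (the names and structure are illustrative only):

    object_store = {}  # object_id -> bytes (stand-in for the servers)
    name_service = {}  # file name -> object identifier

    def fs_write(path, data):
        # File-system-style entry point (what a VFS might call down to).
        oid = name_service.setdefault(path, len(name_service) + 1)
        object_store[oid] = data

    def rest_get(path):
        # Web-Services-style entry point (e.g., backing a REST GET).
        oid = name_service.get(path)
        return object_store.get(oid)

    fs_write("/docs/a.txt", b"hello")
    print(rest_get("/docs/a.txt"))  # both interfaces see the same object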
  • a client that accesses a plurality of file objects includes a file system interface that allows an application running on the at least one client to make file system calls to access the data file objects, a Web Services interface that allows an application to access file objects using the Web at the same time that file objects are being accessed through the file system interface, and a communication interface, coupled to the file system interface and to the Web Services interface, that exchanges file object data between the client and a plurality of servers containing the file objects.
  • the client may also include a direct file object interface that allows an application to directly access file objects at the same time that file objects are being accessed using the Web Services interface and using the file system interface.
  • the client may also include a file name service that translates file names into file object identifiers, where the file name service is accessed by the file system interface and/or the Web Services interface.
  • the file system interface may include a virtual file system in a kernel address space of the at least one client.
  • the client may also include a layout manager that manages file objects.
  • the layout manager may be provided in the kernel address space of the at least one client.
  • the layout manager may be provided in user address space of the at least one client.
  • the virtual file system may access the layout manager using a bridge between kernel memory address space and user memory address space.
  • computer software provided in a computer-readable storage medium, includes executable code that allows an application running on at least one client to make file system calls to access the data file objects and includes executable code that allows an application to access file objects using the Web at the same time that file objects are being accessed through the file system interface.
  • the computer software may also include executable code that allows an application to directly access file objects at the same time that file objects are being accessed using the Web and using file system calls.
  • the computer software may also include executable code that translates file names into file object identifiers.
  • a system simultaneously allows accessing a plurality of file objects using a file system interface and a Web Services interface.
  • the system includes a plurality of servers that contain the file objects and at least one client, coupled to the servers.
  • the client includes a file system interface that allows an application running on the at least one client to make file system calls to access the data file objects, a Web Services interface that allows an application to access file objects using the Web at the same time that file objects are being accessed through the file system interface, and a communication interface, coupled to the file system interface and the Web Services interface, that exchanges file object data between the client and a plurality of servers containing the file objects.
  • the file system interface may include a virtual file system in a kernel address space of the client.
  • managing objects for a data file includes obtaining a metadata object for the file, determining if the metadata object indicates that a portion of the data file includes synchronous mirrors, and, in response to a write of new data to the data file, writing the new data to each synchronous mirror.
  • Data objects of a data file may be in one of four states: current, stale, empty, and immutable. In response to a read operation where the corresponding data object is in a stale state, data may be read from a corresponding synchronous mirror that is in a current state.
  • FIG. 1 is a diagram illustrating servers and clients according to an embodiment of the system described herein.
  • FIGS. 2A and 2B are diagrams illustrating a client coupled to servers and to other network(s) according to an embodiment of the system described herein.
  • FIG. 3 is a diagram illustrating a client having server operations software, client software, and a plurality of interfaces therebetween according to an embodiment of the system described herein.
  • FIG. 4 is a diagram illustrating a file having a metadata file object and a plurality of data file objects according to an embodiment of the system described herein.
  • FIG. 5 is a diagram illustrating a metadata file object for a file according to an embodiment of the system described herein.
  • FIG. 6 is a diagram illustrating an example of a layout storage object tree for a file according to an embodiment of the system described herein.
  • FIG. 7 is a diagram illustrating an example of a layout storage object tree with multiple maps for a file according to an embodiment of the system described herein.
  • FIG. 8 is a diagram illustrating another example of a layout storage object tree with multiple maps and replication nodes for a file according to an embodiment of the system described herein.
  • FIG. 9 is a flowchart illustrating a client obtaining a lease for and operating on a file according to an embodiment of the system described herein.
  • FIG. 10 is a flowchart illustrating a client reading data from a file according to an embodiment of the system described herein.
  • FIG. 11 is a flowchart illustrating a client writing data to a file according to an embodiment of the system described herein.
  • FIG. 12 is a flowchart illustrating steps performed by a client in connection with finding an alternative copy of data according to an embodiment of the system described herein.
  • FIG. 13 is a flowchart illustrating a client writing to synchronous mirrors for data according to an embodiment of the system described herein.
  • FIG. 14 is a flow chart illustrating a client converting file names to object identifiers according to an embodiment of the system described herein.
  • FIG. 15 is a diagram illustrating a client having an application in user memory address space and a having a VFS, file name services, kernel I/O drivers, layout manager, and a communication interface in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 16 is a flow chart illustrating operation of a VFS at a client according to an embodiment of the system described herein.
  • FIG. 17 is a diagram illustrating a client having an application, file name services, user level I/O drivers, and a layout manager in user memory address space and having a communication interface in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 18 is a diagram illustrating a client having an application, a file presentation layer, user level I/O drivers, and a layout manager in user memory address space and having a VFS and communication interface and a kernel memory address space to user memory address space bridge in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 19 is a diagram illustrating a client having an application in user memory address space and having file name services, kernel I/O drivers, a layout manager, and a communication interface in kernel address space according to an embodiment of the system described herein.
  • FIG. 20 is a diagram illustrating a client having an application, file name services, user level I/O drivers, and a layout manager in user memory address space and having a communication interface in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 21 is a diagram illustrating a client having an application, file name services, user level I/O drivers, and a layout manager in user memory address space and having a communication interface and a kernel memory address space to user memory address space bridge in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 22 is a diagram illustrating a client having an application in user memory address space and having a Web Services module, kernel I/O drivers, a layout manager, and a communication interface in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 23 is a diagram illustrating a client having an application, a Web Services layer, user level I/O drivers, and a layout manager in user memory address space and having a communication interface in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 24 is a diagram illustrating a client having an application, a Web Services layer, user level I/O drivers, and a layout manager in user memory address space and having a communication interface and a kernel memory address space to user memory address space bridge in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 25 is a diagram illustrating a client having a plurality of applications, a Web Services layer, file name services, user level I/O drivers, and a layout manager in user memory address space and having a VFS, a communication interface and a kernel memory address space to user memory address space bridge in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 26 is a diagram illustrating different types of servers and a user management interface according to an embodiment of the system described herein.
  • FIG. 27 is a flow chart illustrating steps performed in connection with using security manager servers to obtain credentials for using policy manager servers according to an embodiment of the system described herein.
  • FIG. 28 is a diagram illustrating a resource manager table according to an embodiment of the system described herein.
  • FIG. 29 is a flow chart illustrating steps performed in connection with processing resource information to update a resource table according to an embodiment of the system described herein.
  • FIG. 30 is a flow chart illustrating steps performed in connection with finding a resource with a desired capability according to an embodiment of the system described herein.
  • FIG. 31 is a flow chart illustrating steps performed in connection with a metadata server servicing a lease request according to an embodiment of the system described herein.
  • FIG. 32 is a flow chart illustrating steps performed in connection with monitoring and recalling expired leases according to an embodiment of the system described herein.
  • FIG. 33 is a flow chart illustrating steps performed in connection with lease recall processing according to an embodiment of the system described herein.
  • FIG. 34 is a table illustrating data maintained for files handled by a metadata server according to an embodiment of the system described herein.
  • FIG. 35 illustrates a linked list for file leases that are handled by a metadata server according to an embodiment of the system described herein.
  • FIG. 36 is a table illustrating data maintained by a metadata server to correlate object identifiers and data storage locations according to an embodiment of the system described herein.
  • FIG. 37 illustrates a table at a metadata location server that correlates object identifier ranges and metadata servers according to an embodiment of the system described herein.
  • FIG. 38 is a flow chart illustrating processing performed by a metadata location server in connection with returning a particular metadata server according to an embodiment of the system described herein.
  • FIG. 39 is a flow chart illustrating processing performed by a metadata server in connection with deleting a file according to an embodiment of the system described herein.
  • FIG. 40 is a flow chart illustrating processing performed by a metadata server in connection with creating a file according to an embodiment of the system described herein.
  • FIG. 41 is a flow chart illustrating processing performed by a metadata server in connection with responding to a failed data write operation according to an embodiment of the system described herein.
  • FIG. 42 is a flow chart illustrating processing performed by a metadata server in connection with responding to a stale mirror update message according to an embodiment of the system described herein.
  • FIG. 43 is a flow chart illustrating processing performed by a metadata server in connection with adding an asynchronous copy job to a queue according to an embodiment of the system described herein.
  • FIG. 44 is a flow chart illustrating processing performed by a metadata server in connection with servicing an asynchronous copy job according to an embodiment of the system described herein.
  • Referring to FIG. 1, a diagram illustrates servers 102 coupled to a plurality of clients 104 - 106.
  • Each of the clients 104 - 106 represents one or more processing devices that receives file services from the servers 102 .
  • Each of the clients 104 - 106 may or may not be independent of other ones of the clients 104 - 106 .
  • One or more of the clients 104 - 106 may be a multiprocessing/multiuser system and possibly have multiple independent users.
  • the clients 104 - 106 are meant to represent any number of clients.
  • the file services provided by the servers 102 may include data storage and retrieval as well as related operations, such as data mirroring, cloning, etc.
  • the servers 102 may be implemented using a plurality of services (and/or interconnected file servers including SAN components) that are provided by interconnected processing and/or storage devices.
  • each of the clients 104 - 106 may be coupled to the servers 102 using the Web, possibly in conjunction with local TCP/IP connections.
  • Referring to FIG. 2A, the client 104 is shown as being coupled to the servers 102 and to one or more other network(s).
  • the other network(s) may include a local area network (LAN).
  • the client 104 may be a gateway between the servers 102 and a LAN to which one or more other devices (not shown) may also be coupled.
  • the client 104 may act as a local file server to the one or more other devices coupled to the LAN by providing data from the servers 102 to the one or more other devices.
  • reference to a particular one of the clients 104 - 106 may be understood to include reference to any or all of the clients 104 - 106 coupled to the servers 102 unless otherwise indicated.
  • Referring to FIG. 2B, a diagram shows the client 104 being coupled to the servers 102 and one or more other network(s) (e.g., a LAN) in a configuration that is different from that shown in FIG. 2A.
  • a router 108 is coupled between the servers 102 and the client 104 .
  • the router 108 may be any conventional router that may be accessed by the client 104 .
  • the client 104 uses only a single connection point to both the servers 102 and to the other network(s).
  • the client 104 may act as local file server and gateway between the servers 102 and one or more other devices (not shown) coupled to the other network(s).
  • Any other appropriate connection configurations may be used by any of the clients 104 - 106 coupled to the servers 102 and/or to other network(s).
  • Referring to FIG. 3, the client 104 is shown in more detail having server operations software 122, client software 124, and an interface layer 125 that includes a plurality of interfaces 126 - 128 between the server operations software 122 and the client software 124.
  • the server operations software 122 facilitates the exchange of information/data between the client 104 and the servers 102 to provide the functionality described herein.
  • the server operations software 122 is described in more detail elsewhere herein.
  • the client software 124 represents any software that may be run on the client 104 , including application software, operating system software, Web server software, etc., that is not part of the server operations software 122 or the interface layer 125 . As described in more detail elsewhere herein, it is possible to have the client software 124 interact with the servers 102 through different ones of the interfaces 126 - 128 at the same time.
  • the file services described herein may be implemented by the servers 102 using a set of file objects where a file that is accessed by the client software includes a metadata file object which points to one or more data file objects that contain the data for the file. Accessing the file would involve first accessing the metadata file object to locate the corresponding data file objects for the file. Doing this is described in more detail elsewhere herein. Note, however, that any appropriate file object mechanism may be used for the system described herein.
  • Referring to FIG. 4, a file 130 is shown as including a metadata file object 132 and a plurality of data file objects 134 - 136.
  • the metadata file object 132 contains information that points to each of the data file objects 134 - 136 .
  • Accessing the file includes first accessing the metadata file object 132 and then using information therein to locate the appropriate one or more of the corresponding data file object 134 - 136 .
  • Referring to FIG. 5, the metadata file object 132 is shown in more detail as including an object attributes section 142 and a Layout Storage Object (LSO) tree section 144.
  • the object attributes section contains conventional file-type attributes such as owner id, group id, access control list, last modification time, last access time, last change time, creation time, file size, and link count. Many of the attributes are self-explanatory.
  • The last modification time corresponds to the last time that the data for the data objects 134 - 136 was modified, while the last change time corresponds to when the object metadata was last changed.
  • the link count indicates the number of other objects that reference a particular file (e.g., aliases that point to the same file). In an embodiment herein, a file and its related objects are deleted when the link count is decremented to zero.
  • the LSO tree section 144 includes a data structure that includes one or more maps for mapping the logical space of the file to particular data file objects.
  • the LSO tree section 144 may also indicate any mirrors for the data and whether the mirrors are synchronous or asynchronous. LSO trees and mirrors are described in more detail elsewhere herein.
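  • As an illustration of the structure just described, a Python sketch of a metadata file object with an object attributes section and an LSO tree section (a hypothetical representation, not one prescribed by the patent):

    from dataclasses import dataclass, field
    import time

    @dataclass
    class ObjectAttributes:
        owner_id: int
        group_id: int
        file_size: int = 0
        link_count: int = 1  # the file is deleted when this reaches zero
        creation_time: float = field(default_factory=time.time)
        last_modification_time: float = 0.0  # last change to the data
        last_change_time: float = 0.0        # last change to the metadata

    @dataclass
    class MetadataFileObject:
        attributes: ObjectAttributes
        lso_tree: object = None  # root of the layout storage object tree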
  • Referring to FIG. 6, a simple LSO tree 160 is shown as including an LSO root node 162 and a single map 164.
  • the LSO root node 162 is used to identify the LSO tree 160 and includes links to one or more map(s) used in connection with the file corresponding to the LSO tree 160 .
  • The map 164 maps logical locations within the file to actual data storage locations. A process that accesses logical storage space of a file represented by the LSO tree 160 first uses the LSO root node 162 to find the map 164 and then uses the map 164 to translate logical addresses within the file to actual data storage locations.
  • Referring to FIG. 7, an LSO tree 170 is shown as including an LSO root node 172 and a plurality of maps 174 - 176.
  • Each of the maps 174 - 176 may represent a different range of logical offsets within the file corresponding to the LSO tree 170 .
  • the map 174 may correspond to a first range of logical offsets in the file.
  • the map 174 may map logical locations in the first range to a first actual storage device.
  • the map 175 may correspond to a second range of logical offsets in the file, different than the first range, which may be mapped to a different actual storage device or may be mapped to the same actual storage device as the map 174 .
  • the map 176 may correspond to a third range of logical offsets in the file, different than the first range and the second range, which may be mapped to a different actual storage device or may be mapped to the same actual storage device as the map 174 and/or the map 175 .
  • Referring to FIG. 8, an LSO tree 180 is shown as including an LSO root node 181 and a pair of replication nodes 182 a , 182 b , which indicate that the underlying data is to be mirrored (replicated) and which indicate whether the mirror is synchronous or asynchronous. Synchronous and asynchronous mirrors are discussed in more detail elsewhere herein.
  • the node 182 a has a plurality of children maps 183 - 185 associated therewith while the node 182 b has a plurality of children maps 186 - 188 associated therewith.
  • the replication nodes 182 a , 182 b indicate that the data corresponding to the maps 183 - 185 is a mirror of data corresponding to the maps 186 - 188 .
  • the nodes 182 a , 182 b may be implemented using a single node 189 to indicate replication.
  • a process accessing a file having the LSO tree 180 would traverse the tree 180 and determine that data is mirrored. As discussed in more detail elsewhere herein, depending upon the type of mirroring, the process accessing the LSO tree 180 would either write the data to the children of both of the nodes 182 a , 182 b or would provide a message to another process/server (e.g., the servers 102 ) that would perform the asynchronous mirroring. Mirroring is discussed in more detail elsewhere herein.
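  • The traversal described in connection with FIGS. 6-8 might be sketched as follows in Python (hypothetical node classes; a replication node marks its children as mirrors of one another):

    class MapNode:
        def __init__(self, start, end, device):
            self.start, self.end, self.device = start, end, device

        def translate(self, offset):
            # Logical offset -> (device, physical offset) if in range.
            if self.start <= offset < self.end:
                return (self.device, offset - self.start)
            return None

    class ReplicationNode:
        def __init__(self, children, synchronous=True):
            self.children = children  # each child is a full mirror
            self.synchronous = synchronous

    def resolve(node, offset):
        # Collect every storage location that holds this offset
        # (one location per mirror under a replication node).
        if isinstance(node, MapNode):
            loc = node.translate(offset)
            return [loc] if loc else []
        if isinstance(node, ReplicationNode):
            node = node.children
        return [l for child in node for l in resolve(child, offset)]

    root = ReplicationNode(
        children=[
            [MapNode(0, 4096, "dev-a"), MapNode(4096, 8192, "dev-b")],
            [MapNode(0, 8192, "dev-c")],
        ],
        synchronous=True,
    )
    print(resolve(root, 5000))  # -> [('dev-b', 904), ('dev-c', 5000)]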
  • file objects are accessed by one of the clients 104 - 106 by first requesting, and obtaining, a lease from the servers 102 .
  • the lease corresponds to the file objects for the particular file being accessed and to the type of access.
  • A lease may be for reading, writing, and/or some other operation (e.g., changing file attributes).
  • the servers 102 may issue only one write lease at a time to any of the clients 104 - 106 but may issue multiple read leases simultaneously and may issue read lease(s) at the same time as issuing a write lease.
  • A first client may obtain a lease for writing to a first logical range of a file while a second client may, independently, obtain a lease for writing to a second and separate logical range of the same file.
  • the two write leases for different logical ranges may overlap in time without violating the general rule that the system never issues overlapping write leases for the same data.
  • the lease provided to the clients 104 - 106 from the servers 102 includes security information (security token) that allows the client appropriate access to the data.
  • the security token may expire after a certain amount of time.
  • a client accesses data by providing an appropriate security token for the data as well as client users/ownership information.
  • a user wishing to access data would first obtain a lease and then would provide the access request to the servers 102 along with the security token and information identifying the owner (client) accessing the data.
  • the servers 102 would then determine whether the access requested by the client was permissible. After the lease expires (the security token expires), the user requests the lease again.
  • Data security may be implemented using conventional data security mechanisms.
  • a client may then cache the corresponding metadata, including the LSO tree, into local storage of the client. The client may then use and manipulate the local cached version of the metadata and may use the metadata to obtain access to the data. As described in more detail elsewhere herein, a client does not directly modify metadata stored by the servers 102 but, instead, sends update messages to the servers 102 to signal that metadata for a file may need to be modified by the servers 102 .
  • Referring to FIG. 9, a flowchart 200 illustrates steps performed by a client in connection with requesting a lease for a file (objects associated with a file) for performing operations thereon.
  • Processing begins at a first step 202 where the client requests the lease for the file.
  • a client requesting a lease includes specifying the type of access (e.g., read, write, etc.).
  • Following the step 202 is a test step 204 where it is determined if the request has been granted. If not, then control transfers from the test step 204 to a step 206 where processing is performed in connection with the lease not being granted to the client.
  • The particular processing performed at the step 206 may include, for example, providing an error message to the client process requesting access to the file corresponding to the lease and/or waiting for an amount of time and then retrying the request. Note that it is possible that a lease for a particular file is not available at one time but is subsequently available at another time because, for example, the lease is released by another client in between the first request and the second request. In any event, any appropriate processing may be performed at the step 206. Following the step 206, processing is complete.
  • If it is determined at the test step 204 that the lease has been granted, then control transfers to a step 208 where the client performs operations on the file. Operations performed at the step 208 include reading data and/or writing data. Different types of processing that may be performed at the step 208 are described in more detail elsewhere herein.
  • Following the step 208 is a test step 212 where it is determined if the operations performed at the step 208 require an update.
  • a client may obtain a lease and perform operations that do not affect the file or the underlying file objects.
  • a client may acquire a lease for reading a file and the operation performed at the step 208 may include the client reading the file. In such a case, no update may be necessary since the file and corresponding file objects (metadata, data objects, etc.) have not changed.
  • If the client obtains a lease for writing data to the file and the operation performed at the step 208 includes writing data to the file, then the underlying file objects will have been changed and an update message needs to be sent to the servers 102. If it is determined at the test step 212 that an update is necessary, then control passes from the test step 212 to a step 214 where an update message is sent by the client to the servers 102.
  • Following the step 214 (or the step 212, if no update is needed) is a test step 216 where it is determined if the client is finished with the file. The client may perform a small number of operations on the file, after which the client would be finished with the file at the step 216.
  • the client may be performing a series of operations and may not yet have completed all of the operations.
  • a lease may be provided by the servers 102 to the client with a particular expiration time and/or the associated security token may expire.
  • The servers 102 may recall leases provided to clients under certain circumstances. In either case, the lease may no longer be valid. Accordingly, if it is determined at the test step 218 that the lease has expired (and/or has been recalled by the servers 102), then control passes from the test step 218 back to the step 202 to request the lease again. Otherwise, if the lease has not expired, then control passes from the test step 218 back to the step 208 to perform another iteration.
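  • The loop of the flowchart 200 might be sketched as follows (Python, with server behavior stubbed; all names are illustrative, and a real client would obtain security tokens and handle denied requests as described above):

    import itertools
    import time

    class Lease:
        def __init__(self, token, ttl):
            self.security_token = token
            self.expires_at = time.monotonic() + ttl

        def expired(self):
            return time.monotonic() >= self.expires_at

    class StubServers:
        _tokens = itertools.count(1)

        def request_lease(self, file_id, access, ttl=0.5):
            # Real servers would reject conflicting write leases here.
            return Lease(next(self._tokens), ttl)

        def send_update(self, file_id, token):
            # Clients never modify server-side metadata directly; they
            # signal the servers, which update the metadata themselves.
            print("update for %s under token %d" % (file_id, token))

    def run(servers, file_id, operations):
        lease = servers.request_lease(file_id, access="write")
        for op in operations:
            if lease.expired():  # expired or recalled: request it again
                lease = servers.request_lease(file_id, access="write")
            if op(lease.security_token):  # True if file objects changed
                servers.send_update(file_id, lease.security_token)

    run(StubServers(), "file-1", [lambda t: True, lambda t: False])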
  • data file objects may be indicated as having one of four possible states: current, stale, immutable, or empty.
  • the current state indicates that the data object is up to date and current.
  • the stale state indicates that the data is not valid but, instead, requires updating, perhaps by some other process. In some instances, the stale state may be used only in connection with mirror copies of data (explained in more detail elsewhere herein). Data may be stale because it is a mirror of other data that was recently written but not yet copied.
  • the immutable state indicates that the corresponding data is write protected, perhaps in connection with a previous clone (snapshot) operation.
  • the empty state indicates that no actual storage space has yet been allocated for the data.
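  • The four states can be captured in a small Python enum, with the consequences described herein noted alongside (the representation is illustrative only):

    from enum import Enum

    class DataObjectState(Enum):
        CURRENT = "current"      # up to date; read and write normally
        STALE = "stale"          # out-of-date mirror; read a current mirror
        IMMUTABLE = "immutable"  # write-protected (e.g., after a snapshot);
                                 # a write first allocates new storage space
        EMPTY = "empty"          # no storage allocated yet; reads return zeros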
  • Referring to FIG. 10, a flow chart 240 illustrates steps performed by a client in connection with performing read operations after obtaining a read lease for a file. Processing begins at a first test step 242 where it is determined if the data object being read is in the current state. If not, then control transfers from the test step 242 to a step 244 where it is determined if the data object being read is in the immutable state. If it is determined at the step 244 that the data object being read is in the immutable state or if it is determined at the test step 242 that the data object being read is in the current state, then control transfers to a step 246 where the read operation is performed.
  • a client reads file data by providing the appropriate data file object identifier to the servers 102 as well as providing appropriate security credentials. Accordingly, the read operation performed at the step 246 includes the client sending an appropriate request to the servers 102 and waiting for a result therefrom.
  • Following the step 246 is a test step 248 where it is determined if the servers 102 have returned a result indicating that the data file object is unavailable. In some cases, a data file object that is otherwise current or immutable may nevertheless become unavailable. For example, the physical storage space that holds the data file object may become temporarily disconnected and/or temporarily busy doing some other operation. If it is determined at the test step 248 that the data file object is available, then control transfers from the test step 248 to a test step 252 where it is determined if the read operation was successful. If so, then control transfers from the test step 252 to a step 254 where the result of the read operation is returned to the process at the client that caused the read operation to be performed. The result may include the data that was read and a status indicator. Following the step 254, processing is complete.
  • if it is determined at the test step 252 that the read operation was not successful, then control transfers from the test step 252 to a step 256 where error processing is performed.
  • the particular error processing performed at the step 256 is implementation dependent and may include, for example, reporting the error to a calling process and/or possibly retrying the read operation a specified number of times. Following the step 256 , processing is complete.
  • if it is determined at the test step 244 that the data object being read is not in the immutable state, then control transfers from the test step 244 to a test step 258 where it is determined if the data object is in the stale state. If not, then, by virtue of the test steps 242 , 244 , 258 and process of elimination, the data object is in the empty state. In an embodiment herein, reading a data object in the empty state causes zeros to be returned to the calling process. Accordingly, if it is determined at the test step 258 that the data object is not in the stale state, then control transfers from the test step 258 to a step 262 where zeros are returned in response to the read operation. Following the step 262 , processing is complete.
  • if it is determined at the test step 258 that the data object is in the stale state, then control transfers to a test step 264 where it is determined if an alternative version of the data file object is available for reading.
  • the test performed at the step 264 is described in more detail elsewhere herein.
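  • The read-path state checks described above (flow chart 240) can be summarized in a short sketch. The read_from_servers and find_alternative callables and the dict-based object representation are hypothetical stand-ins, not the system's actual API:

```python
def read_object(obj, read_from_servers, find_alternative, length):
    """Dispatch a read per flow chart 240; obj has 'state' and 'oid' keys."""
    if obj["state"] in ("current", "immutable"):
        ok, data = read_from_servers(obj["oid"])  # request to the servers
        if ok:
            return data
        raise IOError("read failed")              # error processing (step 256)
    if obj["state"] == "empty":
        return b"\x00" * length                   # empty objects read as zeros
    alt = find_alternative(obj)                   # stale: look for a mirror copy
    if alt is None:
        raise IOError("stale object and no alternative available")
    return read_object(alt, read_from_servers, find_alternative, length)
```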
  • a flow chart 280 illustrates steps performed by a client in connection with performing write operations after obtaining a write lease for a file. Processing begins at a first test step 282 where it is determined if the data file object to which the write is being performed is in the immutable state. If so, then control transfers from the step 282 to a step 284 where new actual storage space is allocated for the data file object to avoid overwriting the immutable data. Allocating new storage space for a data object may include providing an appropriate request to the servers 102 . Following the step 284 , control transfers back to the step 282 to begin the processing for the write operation again.
  • if it is determined at the step 288 that the data file object to which the write is being performed is not in the empty state, then control transfers from the test step 288 to a step 292 where the write operation is performed. Note that the step 292 is reached if the data file object to which the write operation is being performed is not in the immutable state, not in the stale state, and not in the empty state (and thus is in the current state).
  • a client writes file data by providing the appropriate data file object location identifier to the servers 102 as well as providing appropriate security credentials. Accordingly, the write operation performed at the step 292 includes the client sending an appropriate request to the servers 102 and waiting for a result therefrom.
  • a test step 294 where it is determined if the write operation performed at the step 292 was successful. If so, then control transfers from the test step 294 to a test step 296 where it is determined if there are synchronous mirrors of the data file object to which the write is being performed.
  • the test performed at the step 296 may include, for example, determining if a parent node of the data file object in the file LSO tree indicates replication. If not, then control transfers from the test step 296 to a step 298 where an update (message) is sent to the servers 102 indicating that the write had been performed. Following the step 298 , processing is complete.
  • if it is determined at the test step 296 that there are synchronous mirrors of the data file object, then control transfers from the test step 296 to a step 302 where the data that was written is copied to the synchronous mirrors. The processing performed at the step 302 is discussed in more detail elsewhere herein. Following the step 302 , processing is complete.
  • the client may inform the servers 102 (at the step 298 ) of the difficulty in writing to the data object so that the servers 102 can take appropriate action, if necessary.
  • if it is determined at the test step 294 that the write operation was not successful, then control transfers to a test step 306 where it is determined if an alternative version of the data is available.
  • there may be multiple versions of the same data file object that exist at the same time due to mirroring, and any of these may serve as an alternative version of the data. Accordingly, if the data file object to which the write operation is being performed is stale or otherwise cannot be written to, it may be possible to write to a mirror copy of the data instead.
  • the test performed at the step 306 is like the test performed at the step 264 and is described in more detail elsewhere herein. If it is determined at the test step 306 that an alternative version of the data corresponding to the data file object is available, then control transfers from the test step 306 to a step 308 where the alternative version is selected for writing. Following the step 308 , control transfers back to the test step 282 for another iteration with the alternative data file object.
  • if no alternative version of the data is available, then control transfers from the test step 306 to a step 312 where error processing is performed. The particular error processing performed at the step 312 is implementation dependent and may include, for example, reporting the error to a calling process and/or possibly retrying the write operation a specified number of times before reporting the error. Following the step 312 , processing is complete.
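  • The write-path checks (flow chart 280) might be sketched as follows. This is a simplified reading: the empty-state branch is assumed to allocate storage before writing, the synchronous-mirror update (step 302) and the update message to the servers (step 298) are omitted, and the servers object is hypothetical:

```python
def write_object(obj, data, servers, find_alternative):
    """Dispatch a write per flow chart 280 (simplified)."""
    while True:
        if obj["state"] in ("immutable", "empty"):
            servers.allocate(obj)        # new space; immutable data survives
            obj["state"] = "current"
            continue                     # loop back and re-check (step 282)
        if obj["state"] == "stale":
            alt = find_alternative(obj)  # try writing a mirror copy instead
            if alt is None:
                raise IOError("stale object and no alternative available")
            obj = alt
            continue                     # another iteration with the mirror
        if not servers.write(obj["oid"], data):  # current: perform the write
            raise IOError("write failed")        # error processing (step 312)
        return obj
```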
  • a flow chart 320 illustrates in more detail steps performed in connection with the alternative available test step 264 of FIG. 10 and/or the alternative available test step 306 of FIG. 11 .
  • Processing begins at a first test step 322 where it is determined if the file has any mirror data file objects at all. In some instances, a file may not use mirrors, in which case there would be no alternative copy available. Accordingly, if it is determined at the test step 322 that the file does not have any mirror data file objects, then control transfers from the test step 322 to a step 324 where a value is returned indicating that no alternative copies are available. Following the step 324 , processing is complete.
  • a pointer may be used to iterate through mirror data file objects to find a useable data file object.
  • a test step 328 it is determined if the pointer is past the end of the list of mirror data file objects (has iterated through all of the mirror data file objects). If so, then control passes from the test step 328 to the step 324 , discussed above, to return a value that indicates that no alternatives are available.
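  • A sketch of the scan performed by the flow chart 320, assuming (since the remaining steps are described elsewhere herein) that a usable alternative is a mirror in the current or immutable state:

```python
def find_alternative(mirrors):
    """Iterate the mirror data file objects looking for a usable copy."""
    for mirror in mirrors:                      # pointer through the list
        if mirror["state"] in ("current", "immutable"):
            return mirror                       # usable alternative found
    return None                                 # past the end: none available
```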
  • a flow chart 350 illustrates in more detail operations performed in connection with the step 302 of the flow chart 280 of FIG. 11 where data that has been written is copied to a number of synchronous mirrors (mirror data file objects).
  • Processing begins at a first step 352 where a pointer that is used to iterate through the mirror data file objects is set to point to the first one of the mirror data file objects.
  • a test step 354 it is determined if the pointer used for iterating through the mirror data file objects points past the end (i.e., if all of the mirror data file objects have been processed). If so, then processing is complete.
  • if a mirror data file object is neither in the stale state nor in the current state, then the mirror data file object is either in the empty state or in the immutable state. In either case, it may be necessary to allocate new space for a data file object to which the data is to be written. Accordingly, if it is determined at the test step 358 that the data file object is not in the stale state, then control passes from the test step 358 to a step 366 where new space is allocated for the mirror data file object. Following the step 366 is a step 368 where the data that is being copied across synchronous mirror data file objects is written to the mirror data file object pointed to by the pointer used to iterate through the mirror data file objects.
  • step 368 may also be reached from the test step 356 if it is determined that the mirror data file object is current. Following the step 368 is a step 372 where the pointer used to iterate through the mirror data file objects is made to point to the next one. Note that the step 372 is also reached following the step 364 . Following the step 372 , control transfers back to the test step 354 for another iteration.
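  • The mirror-update pass of the flow chart 350 might look like the following sketch; the handling of stale mirrors (step 364) is described elsewhere herein and is left as a no-op, and the servers object is hypothetical:

```python
def update_sync_mirrors(mirrors, data, servers):
    """Copy freshly written data to each synchronous mirror data file object."""
    for mirror in mirrors:                  # pointer through mirror objects
        if mirror["state"] == "stale":
            continue                        # step 364: handled elsewhere
        if mirror["state"] in ("empty", "immutable"):
            servers.allocate(mirror)        # step 366: allocate new space
            mirror["state"] = "current"
        servers.write(mirror["oid"], data)  # step 368: write the data
```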
  • the system described herein may access file objects using object identifiers.
  • each file object that is stored among the servers 102 may be assigned a unique object identifier that identifies each file object and distinguishes each file object from other file objects in the system.
  • many applications use a file naming structure and/or a hierarchical directory to access files and data therein.
  • a file name "C:\ABC\DEF\GHI.doc" indicates a file called "GHI.doc" stored in a sub-directory "DEF" that is stored in another directory "ABC" located on a root volume "C".
  • a nested directory structure may be provided by implementing directories as special files that are stored in other directories.
  • the sub-directory “DEF” may be implemented as a file stored in the directory “ABC”.
  • each directory may include a table that correlates file names and sub-directory names with file object identifiers.
  • the system may examine one directory at a time and traverse sub-directories until a target file is reached.
  • a flow chart 380 illustrates steps performed in connection with providing a file name translation service (file name service) that translates a conventional hierarchical file name into a file object identifier.
  • the file name service may receive a conventional hierarchical file name as an input and may return an object identifier (or, in some cases, an error).
  • Processing begins at a first step 382 where the file name service receives a file name, such as a conventional hierarchical file name.
  • a test step 384 where it is determined if the syntax of the file name is OK. Checking the syntax of a hierarchical file name is known and includes, for example, checking that only appropriate characters have been used. If it is determined at the test step 384 that the syntax is not OK, then control transfers from the test step 384 to a step 386 where an error indicator (error message) is returned to the calling process. Following the step 386 , processing is complete.
  • all file name paths begin at a single common root directory used for all file objects stored in the servers 102 .
  • there may be multiple root directories where specification of a particular root directory may be provided by any appropriate means, such as using a volume identifier, specifically selecting a particular root directory, etc.
  • a test step 392 where it is determined if the target file (or sub-directory that is part of the file name path) is in the directory that has been read. If not, then control passes from the test step 392 to the step 386 , discussed above, where an error is returned. In some embodiments, the file-not-found error that results from the test at the step 392 may be different from the syntax error that results from the test at the step 384 .
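  • The traversal described above amounts to resolving one path component at a time against per-directory name tables. A minimal sketch, assuming a hypothetical read_directory(oid) call that returns a directory's name-to-object-identifier table and a known root object identifier (syntax checking is omitted):

```python
ROOT_OID = 0  # assumed identifier of the single common root directory

def resolve_path(path, read_directory):
    """Translate a hierarchical file name into a file object identifier."""
    oid = ROOT_OID
    for component in path.strip("/").split("/"):  # one directory at a time
        table = read_directory(oid)               # {name: object identifier}
        if component not in table:
            raise FileNotFoundError(component)    # file-not-found (step 392)
        oid = table[component]                    # descend into sub-directory
    return oid
```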
  • a diagram shows the client 104 as including user address memory space and kernel address memory space.
  • user address memory space is memory space that is generally used by user applications and related processes
  • kernel address memory space is memory space that is generally accessible only by system processes, such as an operating system kernel and related processes.
  • it is possible for the client 104 to have multiple different interfaces to access file objects at the servers 102 .
  • the client 104 is shown as including an application in the user memory address space and a virtual file system (VFS), file name services, kernel I/O drivers, a layout manager, and a communication interface in the kernel memory address space.
  • the VFS is an abstraction layer on top of a more concrete file system. The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way.
  • the VFS allows the application running on the client 104 to access file objects on the servers 102 without the application needing to understand the details of the underlying file system.
  • the VFS may be implemented in a conventional fashion by translating file system calls by the application into file object manipulations and vice versa. For example, the VFS may translate file system calls such as open, read, write, close, etc. into file object calls such as create object, delete object, etc.
  • the VFS may use the file name services, described elsewhere herein, to translate file names into object identifiers.
  • the kernel I/O drivers provide an interface to low-level object level I/O operations.
  • the kernel I/O drivers may be modeled after, and be similar to, Linux I/O drivers.
  • the layout manager may perform some of the processing on LSO trees corresponding to files, as discussed in more detail elsewhere herein.
  • the communication interface provides communication between the client 104 and the servers 102 .
  • the communication interface may be implemented using any appropriate communication mechanism. For example, if the client 104 communicates with the servers 102 via an Internet connection, then the communication interface may use TCP/IP to facilitate communication between the servers 102 and the client 104 .
  • the application of FIG. 15 may correspond to the client software 124 of FIG. 3 .
  • the VFS of FIG. 15 may correspond to one of the interfaces 126 - 128 of FIG. 3 .
  • the file name services, kernel I/O drivers, layout manager, and communication interface of FIG. 15 may correspond to the server operations software 122 of FIG. 3 . Similar correlation between components of FIG. 3 and other figures may also be found.
  • a flow chart 410 illustrates steps performed by a VFS to provide file services in connection with an application running on the client 104 .
  • Processing begins at a first step 412 where a file system operation requested by an application may be translated into one or more object operations. For example, a file operation to open a file for reading may be converted to object operations that include obtaining an object lease for reading as discussed elsewhere herein.
  • following the step 412 is a step 414 where the VFS translates the file name into an object identifier using the file name services discussed above in connection with FIG. 14 . Operations that follow may be performed using the object identifier obtained at the step 414 .
  • a test step 416 where it is determined if the requested operation requires the LSO tree.
  • operations such as read, write, etc. use LSO trees corresponding to file objects. However, some possible file operations may not require accessing a corresponding LSO tree. If it is determined at the test step 416 that the LSO tree is needed, then control transfers from the test step 416 to a step 418 where the VFS accesses the LSO manager to perform the necessary operations. For example, for a read operation, the LSO manager may perform processing like that illustrated in the flow chart 240 of FIG. 10 .
  • a step 422 where the operations are passed to low level kernel I/O drivers (e.g., via one or more appropriate API's).
  • the kernel I/O drivers use the communication module to communicate between the client 104 and the servers 102 in connection with performing the requested operation(s).
  • the data and/or information may be passed back up through the communication interface, kernel I/O drivers, etc. to the VFS and ultimately to the application.
  • the client 104 is shown as having an application, file name services, user level I/O drivers, and a layout manager all provided in user memory address space.
  • the functionality of the VFS that was shown in FIG. 15 and described above may be performed instead by library routines linked to the application, and thus are part of the application. These routines would provide functionality like that discussed above in connection with FIG. 16 . Accordingly, it is the application that uses the file name services and makes calls to the user level I/O drivers (like the kernel I/O drivers) and to the layout manager.
  • the communication interface is still maintained in the kernel memory address space.
  • in configurations where file system components are provided in kernel memory address space, modifications are provided by modifying system processes (the operating system), which is disadvantageous for a number of reasons.
  • the configuration of FIG. 17 is advantageous since it allows modification of the system in the application/user memory address space so that the operating system of the client 104 does not need to be modified.
  • the configuration of FIG. 17 does not use a VFS, and thus does not obtain the advantageous separation of the application from the file system that is provided by the VFS in FIG. 15 .
  • the client 104 is shown as having an application in user memory address space that accesses file objects through a VFS in kernel memory address space like that illustrated in FIG. 15 .
  • the file name services, I/O drivers, and the layout manager all reside in the user memory address space like the system illustrated in FIG. 17 .
  • the VFS communicates with components in the user memory address space through a bridge between kernel memory address space and user memory address space, such as a FUSE (or similar) interface.
  • the bridge allows file system components to be provided in user memory space instead of kernel address memory space while still preserving the VFS in the kernel address memory space.
  • FIG. 18 provides the advantages of using a VFS, as illustrated in the configuration of FIG. 15 , along with the advantages of having file system components in the user address memory space, as illustrated in the configuration of FIG. 17 .
  • Accessing file objects directly may include invoking routines that create objects, read objects, modify objects, delete objects, etc.
  • the application would need to know how to interpret and/or manipulate the object data, which may not always be desirable. For example, an application that accesses file objects through the VFS may not need to take into account (or even know about) the structure of an LSO tree while an application that accesses objects directly may need to use the LSO tree.
  • removing the file services layer may provide an opportunity for optimizations not otherwise available.
  • since the servers 102 exchange object information/operations with the clients 104 - 106 , the servers 102 may not need to distinguish, or be able to distinguish, between applications on the clients 104 - 106 using a file system interface (file services like the VFS) and those that are not.
  • the client 104 is shown as including an application in the user memory address space and kernel I/O drivers, a layout manager, and file name services in the kernel memory address space.
  • the configuration illustrated in FIG. 19 is like that illustrated in FIG. 15 , except that the VFS is not used.
  • the application could directly access the file name services, the kernel I/O drivers, and the layout manager.
  • the communication interface in the kernel memory address space communicates with the servers 102 just as in other configurations.
  • the direct access illustrated in FIG. 19 allows applications to manipulate file objects (via, for example, appropriate API's) while access via the VFS (or similar) allows applications to access file objects indirectly through file system calls to the VFS.
  • the client 104 is shown as having an application, user level I/O drivers, a layout manager, and file name services all provided in user memory address space.
  • the configuration shown in FIG. 20 is like that shown in FIG. 17 .
  • the configuration of FIG. 17 includes file service libraries that are linked into, and thus part of, the application.
  • the application is not linked into libraries with extensive file services.
  • the application in the configuration of FIG. 20 uses minimal file services and, instead, uses and operates upon file objects directly using the user level I/O drivers, the layout manager and, if a file name translation is needed, the file name services.
  • the client 104 is shown as having an application in user memory address space and a bridge in the kernel memory address space.
  • File name services, user level I/O drivers, and a layout manager are provided in user memory address space.
  • the application does not make direct calls to the file system components in the user memory address space. Instead, the application calls the file system components indirectly through the bridge.
  • the configuration of FIG. 21 advantageously locates file system components in the user memory address space and, at the same time, provides a kernel memory address space layer between the application and the file system components.
  • the client 104 is shown as having an application in user memory address space and a Web Services module in kernel memory address space.
  • the application may be a Web server application or any application that handles communication with the Web.
  • the application allows communication with the client 104 , which acts as a Web server to other computing devices (not shown) that access the client 104 through a Web connection.
  • the configuration illustrated in FIG. 22 provides Web Services in a manner similar to the file services and/or file object access provided by other configurations.
  • the Web Services receives requests/data via a Web data protocol, such as HTML, and provides responses/data also in a Web data protocol, which may be the same or different from the protocol used for requests/data.
  • Operations handled by the Web Services may include object-level operations such as create object, delete object, read object, modify object, modify object metadata, etc. It is also possible to provide more file system level operations, via the Web Services, that open files, read data from files, etc. by including at least some of the functionality of the file services, described elsewhere herein, with the Web Services.
  • the Web Services may present to the other computing devices a conventional well-known Web Services protocol, such as REST or SOAP, or may provide any other appropriate protocol.
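  • As an illustration only (the patent does not specify a wire format), a REST-style interface might map requests onto the object-level operations listed above:

```python
# Hypothetical request-to-operation table for the Web Services module.
OBJECT_OPERATIONS = {
    ("PUT", "object"): "create object",
    ("GET", "object"): "read object",
    ("POST", "object"): "modify object",
    ("DELETE", "object"): "delete object",
    ("POST", "metadata"): "modify object metadata",
}

def dispatch(method, target):
    """Map an incoming request to an object-level operation name."""
    op = OBJECT_OPERATIONS.get((method, target))
    if op is None:
        raise ValueError(f"unsupported request: {method} {target}")
    return op
```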
  • the client 104 is shown as having an application, Web Services, user level I/O drivers, and a layout manager in user memory address space.
  • the application may include a Web connection that allows communication with the client 104 , which acts as a Web server to other computing devices (not shown) that access the client 104 through the Web connection.
  • the configuration of FIG. 23 is like that of FIG. 17 and FIG. 20 .
  • the advantage of the configuration shown in FIG. 23 over the configuration shown in FIG. 22 is that, generally, changes to the configuration shown in FIG. 23 do not require reconfiguring kernel memory address space processes.
  • the client 104 is shown as having an application, Web Services, user level I/O drivers, and a layout manager in user memory address space.
  • the application may include a Web connection that allows communication with the client 104 , which acts as a Web server to other computing devices (not shown) that access the client 104 through the Web connection.
  • a bridge is provided in the kernel memory address space.
  • the configuration of FIG. 24 has similar advantages to the configuration shown in FIG. 23 , but also has the advantages provided by the bridge, discussed elsewhere herein.
  • the client 104 is shown as having a plurality of applications in user memory address space, each of which may use a different interface to access file objects of the servers 102 .
  • Each of the applications shown in FIG. 25 is meant to represent one or more applications.
  • APP1 may represent one or more applications that access file objects at the servers 102 using a Web Services interface.
  • the APP1 application may include a Web connection that allows communication with the client 104 , which acts as a Web server to other computing devices (not shown) that access the client 104 through the Web connection.
  • APP2 may represent one or more applications that access file objects at the servers 102 using the VFS, and APP3 may represent one or more applications that directly operate on file objects at the servers 102 .
  • the different interfaces may operate at the client 104 at the same time.
  • the servers 102 provide the file objects to the clients 104 - 106 provided that: 1) the requesting client has appropriate authorization for whatever operation is requested for the file objects; and 2) there is no conflict with any previous request. For example, in systems where only one client is allowed to write to an object at any one time, the servers 102 would not allow one of the clients 104 - 106 to modify a particular object while another one of the clients 104 - 106 is also modifying the object.
  • the servers 102 are shown in more detail as including one or more policy manager servers 402 , one or more security manager servers 403 , one or more audit servers 404 , one or more metadata servers 405 , one or more resource manager servers 406 , one or more data storage servers 407 , and one or more metadata location servers 408 .
  • Each of the servers 402 - 408 may be implemented as one or more unitary processing devices capable of providing the functionality described herein.
  • reference to servers should be understood as a reference to one or more servers.
  • the servers 402 - 408 may be interconnected using any appropriate data communication mechanism, such as TCP/IP, and may be coupled to the clients 104 - 106 (not shown in FIG. 26 ) using any appropriate data communication mechanism, such as TCP/IP.
  • the servers 102 may include a user management interface 412 that facilitates system management.
  • the user management interface 412 exchanges data with the policy management servers 402 , the security management servers 403 , and the audit servers 404 to affect how the servers 102 interact with the clients 104 - 106 and corresponding users.
  • the data may be provided through the user management interface 412 in any one of a number of ways, including conventional interactive computer screen input and data file input (e.g., a text file having user management commands).
  • the data may include information that correlates classes of users and storage parameters such as Quality of Service (QOS), RAID protection level, number and geographic location(s) of mirrors, etc.
  • an administrator may specify through the user management interface 412 that users of a particular class (users belonging to a particular group) store data file objects on storage devices having a particular RAID level protection.
  • the servers 102 also include physical storage 414 coupled to the data storage servers 407 .
  • although the physical storage 414 is shown as a single item in FIG. 26 , there may be any number of separate physical storage units that may be geographically dispersed. In addition, there may be different types of physical storage units having different capabilities. Accordingly, the physical storage 414 generically represents one or more instances of physical data storage for the system that is managed by the data storage servers 407 , as explained in more detail below.
  • the security manager servers 403 may restrict/inhibit the ability of certain administrators (users) to modify and/or create policies for classes of users.
  • a flow chart 430 illustrates steps performed by the user management interface 412 to obtain and use security credentials for accessing the policy manager servers 402 .
  • Processing begins at a first step 432 where the user management interface 412 sends a request to the security manager servers 403 to obtain a token (or other appropriate security credentials) for the operation to be performed by the user management interface 412 .
  • a test step 434 where it is determined if the token has been granted (provided).
  • the security manager servers 403 may not issue a security token at all if, for example, the administrator (user) does not have sufficient rights to perform the desired function.
  • the operations performed at the step 436 may include providing a message to the administrator (user) through the user management interface 412 indicating that the administrator does not have sufficient rights to perform the desired operation. Following the step 436 , processing is complete.
  • the user management interface 412 provides the security token, and user id information, to the policy manager servers 402 .
  • information indicating the desired operation/modification may also be provided at the step 438 .
  • a test step 442 it is determined if the policy manager servers 402 have allowed the requested operation. Note that, in some instances, the policy manager servers 402 may not allow a particular operation even though the security manager servers 403 have provided a security token. For example, if the user id and the user indicated by the security token do not match and/or if the requested operation and the operation indicated by the security token do not match.
  • the processing performed at the step 436 may include providing a message to an administrator (user) indicating that the operation cannot be performed because of insufficient security rights.
  • the message provided when the step 436 is reached from the step 442 may be different than the message provided when the step 436 is reached from the step 434 .
  • if it is determined at the test step 442 that the requested operation is allowed, then control transfers to a step 444 where the operation is performed. Performing the operation at the step 444 may include modifying policy data, as described in more detail elsewhere herein. Following the step 444 , processing is complete.
  • an administrator (user) accessing the policy manager servers 402 would first provide identification information to the security manager servers 403 that would return a security token (perhaps having an expiration time). The administrator presents the token and identification information to the policy manager servers 402 , which would decide to grant or deny access based on the token and the identification information.
  • the security mechanism illustrated by the flow chart 430 of FIG. 27 may be extended to be used in connection with accessing any of the servers 402 - 408 and/or other data.
  • one of the clients 104 - 106 could obtain/modify file objects by first requesting a security token from the security manager servers 403 prior to performing an operation that includes operations with file objects. Accordingly, for the discussion herein, it can be assumed that access to file objects, servers, etc. includes appropriate security procedures like those illustrated in FIG. 27 .
  • the policy manager servers 402 handle placement and protection of file objects.
  • An administrator may input, through the user management interface 412 , different policy templates that may be assigned to different ones of the clients 104 - 106 , different users, different classes of users, or any other appropriate group.
  • a policy template may indicate that, for a particular group of users, whenever a new file is created, a mirror will be created that is geographically farther from the initial data set by at least a certain distance. In such a case, when a first user of the group creates an initial data set in New York, a mirror may be automatically created in Los Angeles while, when a second user creates an initial data set in Los Angeles, a mirror may be created in New York.
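  • A toy check of such a geographic-distance policy term, using great-circle distance between (latitude, longitude) pairs; the template format and the 3000 km threshold are assumptions for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius approx. 6371 km

def satisfies_mirror_policy(primary, mirror, min_km=3000):
    """True if the mirror is geographically far enough from the primary."""
    return distance_km(primary, mirror) >= min_km

# A New York primary with a Los Angeles mirror satisfies a 3000 km term:
assert satisfies_mirror_policy((40.7, -74.0), (34.1, -118.2))
```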
  • the audit servers 404 may be used to provide system auditing capability.
  • a user may communicate to the audit servers 404 through the user management interface 412 .
  • the user may indicate the type of information to be audited (tracked).
  • the resource manager servers 406 keep track of available system resources. In some instances, the resource manager servers 406 may interact with the policy manager servers 402 in connection with establishing policy templates and/or assigning policy templates. In some cases, a user may attempt to construct a policy template that is impossible to fulfill if assigned to a group. For example, if all of the physical data storage is in a single geographic location, then it would not be appropriate to have a policy template indicating that new files should include a mirror that is geographically distant from the initial data set.
  • the resource manager servers 406 receive information from other components of the system in order to be able to keep track of which resources are available. Whenever a resource is added to the system, the resource or another component reports that information to the resource manager servers 406 . For example, if new physical storage is added to the system, the new physical storage itself, or a corresponding one of the data storage servers 407 , sends a message to the resource manager servers 406 . Similarly, if a resource becomes full (e.g., a physical disk is full) or is removed from the system (planned removal or unplanned resource failure), information is provided to the resource manager servers 406 .
  • system resources may correspond to portions of the physical storage 414 and/or the data storage servers 407 that manage the physical storage 414 .
  • a resource table 460 is shown as including a plurality of entries 462 - 464 , each of which corresponds to a particular storage resource. Although only three entries are shown, the table 460 may contain any number of entries.
  • the table 460 may be implemented using any appropriate technique, including an array, linked list, etc.
  • Each of the entries 462 - 464 includes a resource field identifying a particular resource corresponding to the entry.
  • each of the entries 462 - 464 may correspond to a particular one of the data storage servers 407 and/or a portion thereof.
  • Each of the entries 462 - 464 includes a status field corresponding to the status of the corresponding resource.
  • the status field may indicate that a resource is on-line (available) or off-line (unavailable).
  • the status field may also indicate the percentage of used space of a resource, and perhaps indicate any performance degradation.
  • Each of the entries 462 - 464 may also include a capabilities field that indicates the capabilities of the corresponding resource.
  • the capabilities field may indicate particular capabilities of a corresponding storage area. Particular capabilities may include the resource being green (low energy use through, for example, spinning disks down when not in use), capable of data deduplication (maintaining only a single copy of data that is otherwise duplicated), capable of various RAID configurations, etc.
  • the capabilities field may indicate any appropriate data storage capabilities.
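  • One possible shape for a resource table entry, mirroring the resource, status, and capabilities fields described above (field names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class ResourceEntry:
    """One of the entries 462-464 of the resource table 460."""
    resource_id: str                   # identifies the storage resource
    online: bool = True                # status: on-line or off-line
    percent_used: float = 0.0          # status may include used space
    capabilities: set = field(default_factory=set)  # e.g. {"green", "dedup"}
```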
  • a flow chart 480 indicates operation of the resource manager servers 406 in connection with maintaining information about system resources. Processing begins at a first step 482 where the resource manager servers 406 are initialized with information about resources. The initialization processing performed at the step 482 may take any form, including loading a fixed table of initially available resources, having the resource manager servers 406 poll system resources, etc.
  • a test step 484 where the resource manager servers 406 wait for new information to be provided.
  • the resource manager servers 406 wait to receive information from other system components. In other embodiments, it may be possible to have the resource manager servers 406 periodically poll system components to see if anything has changed. If it is determined at the test step 484 that no new information is available, control loops back on the test step 484 to continue polling.
  • if it is determined at the step 486 that the received resource information does not relate to a new resource (and thus relates to a change of an existing resource), then control transfers from the step 486 to a step 492 where the existing entry is located in the resource table. Following the step 492 is a test step 494 where it is determined if the capability is being changed for the modified resource.
  • the capability of a resource may change under many different circumstances. For example, a resource may degrade and lose capabilities, a resource may be modified/enhanced and gain capabilities, a local manager of a resource may decide to make certain capabilities available/unavailable, etc.
  • the resource manager servers 406 may represent a plurality of separate computing devices that may be dispersed throughout the system. Furthermore, each of the separate computing devices may maintain its own copy of the resource table. The separate computing devices that are used to implement the resource manager servers 406 may or may not share resource information and may or may not receive the same resource status messages. In instances where information sharing and/or receipt of status messages is not perfect, then each of the computing devices may have a somewhat different version of the resource table and it is possible for no one version of the resource table to reflect a completely accurate picture of the exact state of all of the resources of the system.
  • the physical storage 414 may be provided using relatively inexpensive off-the-shelf mass produced storage hardware.
  • at least some of the physical storage 414 may be implemented using serial ATA disk drives, which are available from a number of manufacturers such as Seagate and Western Digital.
  • the physical storage may be geographically dispersed.
  • each portion of the physical storage may be managed/controlled by at least one of the data storage servers 407 , which may be implemented using conventional computing devices local to the corresponding portion of the physical storage 414 .
  • the data storage servers 407 may present an OSD Standard interface to the system.
  • the servers 102 and/or the clients 104 - 106 may access physical storage 414 through the data storage servers 407 using OSD calls and may receive information/data according to the OSD protocol.
  • the data storage servers 407 may handle managing/posting the capabilities and status of different portions of the physical storage 414 .
  • for example, when the status of a portion of the physical storage 414 changes, the particular one of the data storage servers 407 managing that portion may send a message to the resource manager servers 406 indicating the new status.
  • a flow chart 510 illustrates steps performed by the resource manager servers 406 in connection with servicing an inquiry for a resource with particular capabilities (i.e., finding a resource with particular capabilities). Processing begins at a first step 512 where a pointer, used to iterate through each entry of the resource table, is set to point to the first entry. Following the step 512 is a test step 514 where it is determined if the pointer points past the end of the table (i.e., all entries have been examined). If so, then control passes from the test step 514 to a step 516 where a result indicating no match for the requested capabilities is returned by the resource manager servers 406 . Following the step 516 , processing is complete.
  • if it is determined at the test step 514 that the pointer used to iterate through the entries does not point past the end of the table, then control transfers from the test step 514 to a test step 518 where it is determined if the entry currently indicated by the pointer is a match for the requested capability. Note that the test at the step 518 may include checking the status of a resource to ensure that the resource is on-line and not full or otherwise unusable. If it is determined at the step 518 that the resource indicated by the pointer has the requested capability, then control transfers from the test step 518 to a step 522 where the resource manager servers 406 return an indicator indicating the matching resource. Following the step 522 , processing is complete.
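  • The scan of the flow chart 510 reduces to a linear search with a status and capability check. A sketch, reusing the hypothetical ResourceEntry above:

```python
def find_resource(table, wanted):
    """Return the first usable entry whose capabilities cover `wanted`,
    or None if the pointer runs past the end of the table (step 516)."""
    for entry in table:                              # iterate the entries
        usable = entry.online and entry.percent_used < 100.0
        if usable and wanted <= entry.capabilities:  # subset test (step 518)
            return entry                             # matching resource
    return None
```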
  • the LSO trees that are part of the metadata objects for files are created, maintained, and manipulated by the metadata servers 405 .
  • the metadata servers 405 handle updates from the clients 104 - 106 in connection with manipulation of file objects (e.g., at the step 214 of the flow chart 200 of FIG. 9 , at the step 298 of the flow chart 280 of FIG. 11 , etc.).
  • the metadata servers 405 may also handle any actions, besides modifying the LSO tree, that may need to be performed in connection with the updates.
  • the metadata servers 405 also handle leases obtained for file objects.
  • a flow chart 550 illustrates steps performed by the metadata servers 405 in connection with servicing a request for a lease for a file.
  • a lease may be requested by one of the clients 104 - 106 .
  • one of the metadata servers 405 may request a lease in connection with file maintenance (e.g., repairing mirrors), as described in more detail elsewhere herein.
  • leases are maintained by the metadata servers 405 in a table that indicates the entity that has acquired the lease, the type of lease (e.g., read or write), and possibly an expiration time.
  • Processing begins at a first test step 552 where it is determined if the requested lease is available.
  • the test at the step 552 determines if the requestor has appropriate security credentials, if the corresponding data file exists, etc. Also, as discussed in more detail elsewhere herein, leases may be purposely made unavailable in certain circumstances. If it is determined at the test step 552 that the lease is not available, then control transfers from the test step 552 to a step 554 where a failure indicator is returned to the requestor.
  • the failure indicator may include a reason for the failure (e.g., improper security credentials, file does not exist, etc.). Following the step 554 , processing is complete.
  • multiple users (e.g., clients, servers) may read from the same file simultaneously, while only one user at a time may write to the same file. Accordingly, following the test step 552 is a test step 556 where it is determined if the user is requesting a read lease or a write lease. If a read lease is being requested, then control transfers from the test step 556 to a step 558 where the lease is provided.
  • leases may also be provided with a predetermined expiration time after which the leases are no longer valid.
  • leases may be provided along with security credentials (generated, perhaps, by the security manager servers 403 ) that only allow for the requested operation (e.g., read only, read and write, etc.).
  • security credentials may also expire at or around the same time that the lease expires in order to enforce lease expirations. Following the step 558 , processing is complete.
  • if it is determined at the test step 556 that the user is requesting a write lease, then control transfers from the test step 556 to a test step 562 where it is determined if another user has already obtained a write lease for the same file. As discussed elsewhere herein, only one write lease at a time is granted for a file. If it is determined at the test step 562 that another write lease has already been granted, then control transfers from the test step 562 to a step 564 where a failure indicator is returned. Just as with the step 554 , the failure indicator returned at the step 564 may include information identifying the nature of the failure. Following the step 564 , processing is complete. If it is determined at the test step 562 that another write lease has not been granted, then control transfers from the test step 562 to a step 566 where the metadata servers 405 return the lease, possibly along with an expiration. Following the step 566 , processing is complete.
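  • A sketch of the lease-granting rule (many concurrent readers, at most one writer); security checks, file-existence checks, and expirations are omitted, and the table layout is an assumption:

```python
def request_lease(leases, file_id, entity, kind):
    """Grant a lease per flow chart 550 (simplified); kind is 'read' or 'write'."""
    held = leases.setdefault(file_id, [])      # outstanding leases for the file
    if kind == "write" and any(l["kind"] == "write" for l in held):
        return None                            # failure: write lease taken
    lease = {"entity": entity, "kind": kind}   # could also carry an expiration
    held.append(lease)
    return lease
```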
  • a particular one of the metadata servers 405 may be responsible for a particular file and corresponding file objects.
  • the responsible one of the metadata servers 405 issues leases for the file and corresponding file objects and handles lease expiration processing.
  • the lease information may be stored in appropriate data structures (e.g., table(s), linked list(s), etc.) by the responsible one of the metadata servers 405 .
  • a flow chart 580 illustrates steps performed by a responsible one of the metadata servers 405 in connection with monitoring lease expiration. Processing begins at a first step 582 where a pointer that iterates through all of the leases (file and lease combinations) for which the server is responsible is made to point to the first one. Following the step 582 is a test step 584 where it is determined if the lease has expired. The processing performed at the step 584 may use any appropriate technique, including comparing lease expiration times with the current time. If it is determined at the step 584 that a lease has expired, then control transfers from the step 584 to a step 586 where the lease is recalled. Recalling the lease at the step 586 is discussed in more detail elsewhere herein.
  • following the step 586 (or the test step 584 , if the lease has not expired) is a step 588 where the pointer that iterates through the files and leases for which the server is responsible is incremented.
  • a test step 592 where it is determined if the pointer points past the end (i.e., all files and corresponding leases have been processed). If so, then control transfers from the step 592 back to the step 582 , discussed above, to reset the pointer to point to the first one and begin another pass to check for expired leases. If it is determined at the step 592 that the pointer does not point past the end, then control transfers from the test step 592 back to the step 584 , discussed above, for another iteration.
  • the system may provide close-to-open consistency where data consistency is provided after an entity has released write access. Said differently, consistency is provided for a file when no entity has an active write lease for the file. Conversely, while any entity has an active write lease, the state of the data may not be guaranteed for any entity reading the data.
  • leases may be recalled after expiration or may be recalled for other reasons. Recalling the leases may improve the consistency of the data being accessed by other entities.
  • a flow chart 610 illustrates steps performed in connection with a particular one of the metadata servers 405 recalling a lease. Processing begins at a first step 612 where a message is sent to the entity holding the lease (e.g., one of the clients 104 - 106 ) to alert the entity that the lease is being recalled. As discussed elsewhere herein, any appropriate entity may hold a lease, including clients 104 - 106 , other servers, or even one of the metadata servers 405 .
  • the step 612 may include ensuring receipt of the message by the entity (e.g., waiting for an acknowledgement) while in other embodiments the message is simply sent and it is assumed that the message is received by any operational recipient. Of course, in systems that wait for an acknowledgement, there may be a timeout and/or a limit on the number of attempts.
  • following the step 612 is a step 614 where the appropriate tables are adjusted to reflect that the lease that has been recalled is no longer outstanding. Tables and other data structures used with leases are discussed in more detail elsewhere herein.
  • following the step 614 is a test step 616 where it is determined if the lease that was recalled was a write lease (a lease that allows writing data). As discussed elsewhere herein, the system may provide close-to-open consistency so that, when a write lease is released, data reads are made consistent. This may be facilitated by recalling all read leases when a write lease is recalled. Entities for whom a read lease is recalled may flush their internal buffers prior to reacquiring the read lease after the recall.
  • an entity for which a write lease is recalled may also flush buffers by writing unsaved data to the physical storage 414 (through the data storage servers 407 ) in response to receiving a recall notification.
  • an entity receiving a recall message for a write lease may be provided with a certain amount of time in which to write any unsaved data to physical storage.
  • the security credentials provided along with a write lease may be set to expire a predetermined amount of time after the write lease expires.
  • if it is determined at the step 616 that the lease that is being recalled is not a write lease, then processing is complete. Otherwise, control transfers from the test step 616 to a step 618 where a pointer, used to iterate through all of the outstanding leases for the file for which the write lease is being recalled, is made to point to the first outstanding lease. Following the step 618 is a test step 622 where it is determined if the pointer points past the end (i.e., all outstanding leases have been recalled). If so, then processing is complete. Otherwise, control transfers from the test step 622 to a step 624 where the lease is recalled. The lease may be recalled by calling the processing illustrated by the flow chart 610 , and thus may be recursive. Following the step 624 is a step 626 where the pointer used to iterate through entities having outstanding leases for a file is incremented. Following the step 626 , control transfers back to the step 622 for another iteration.
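  • The recall logic, including the recursive recall of all other outstanding leases when a write lease is recalled, might be sketched as follows; notify stands in for the recall message of the step 612:

```python
def recall_lease(leases, file_id, lease, notify):
    """Recall a lease per flow chart 610 (simplified)."""
    notify(lease["entity"], file_id)      # step 612: alert the lease holder
    held = leases.get(file_id, [])
    if lease in held:
        held.remove(lease)                # step 614: adjust the tables
    if lease["kind"] == "write":          # step 616: was it a write lease?
        for other in list(held):          # iterate the outstanding leases
            recall_lease(leases, file_id, other, notify)  # recursive recall
```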
  • a table 630 is shown as including a plurality of entries used to keep track of files for which a particular one of the metadata servers 405 is responsible.
  • other appropriate data structures may be used instead of the table 630 , such as a linked list, a doubly linked list, etc.
  • Each primary one of the metadata servers 405 , and any corresponding secondary one(s) of the metadata servers 405 may contain data corresponding to specific file sets for which the particular one(s) of the metadata servers 405 are responsible.
  • a first primary one of the metadata servers 405 may contain a version of the table 630 for a first set of files for which the first primary one of the metadata servers is responsible while a second primary one of the metadata servers 405 may contain a completely different version of the table 630 for a second (different) set of files for which the second primary one of the metadata servers 405 is responsible.
  • Each entry of the table 630 includes a file identifier field (FID) that uniquely identifies the file corresponding to an entry.
  • the FID field may be the object id of the metadata object for the file (for example, the metadata object 132 in the diagram 130 of FIG. 4 ).
  • the metadata object for a file may be used to locate all of the other data objects for the file.
  • the MD LOC field may describe the data storage location of the metadata object for the file.
  • the MD LOC field may contain an identifier for the one of the data storage servers 407 that stores the metadata object for the file.
  • the MD LOC field may also contain a unique identifier (perhaps initially assigned by the one of the data storage servers 407 ) that may be used to retrieve and store data at the one of the data storage servers 407 .
  • the LEASE LIST field may contain information about all entities that have active leases outstanding for the corresponding file.
  • the LEASE LIST field may contain a pointer to a linked list of elements that correspond to entities having outstanding leases.
  • any other appropriate data structure (e.g., an array) may be used.
  • a diagram 640 illustrates a linked list that may be used to keep track of entities having an outstanding active lease for a file.
  • Each element of the list contains a NEXT field that points to the next element in the list.
  • the element at the end of the list contains a null indicator.
  • the list may be traversed by starting with the element pointed to by the LEASE LIST pointer and subsequently pointing to the elements pointed to by the NEXT field (e.g., in connection with the processing illustrated in the flow chart 580 of FIG. 32 ).
  • conventional linked list operations may be used to add and remove elements.
  • Each element of the list also contains a TYPE field that indicates the type of lease (e.g., read or write) and includes an EXP field that indicates when the lease corresponding to the element expires.
  • Each element also contains an ENT field that indicates the entity (e.g., one of the clients 104 - 106 , another server, etc.) that holds the corresponding lease.
  • Manipulation of the linked list is fairly straight-forward.
  • when a lease is granted, fields of an element are populated with the type, expiration, and entity corresponding to the lease and the element is then added to the list.
  • when a lease is recalled or otherwise released, the corresponding element is removed from the list.
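  • A sketch of one list element with the NEXT, TYPE, EXP, and ENT fields described above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LeaseElement:
    """One element of the lease list of FIG. 35."""
    ent: str                               # ENT: entity holding the lease
    type: str                              # TYPE: "read" or "write"
    exp: float                             # EXP: when the lease expires
    next: Optional["LeaseElement"] = None  # NEXT: following element, or None

def add_lease(head, ent, lease_type, exp):
    """Prepend a new element; returns the new head of the list."""
    return LeaseElement(ent, lease_type, exp, next=head)
```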
  • other data structures may be used instead of a linked list.
  • a table 650 is shown as containing a plurality of entries that correlate object identifiers (OID) with location information (LOC).
  • object identifiers are numerical values.
  • Each of the metadata servers 405 is provided with a unique range of object identifiers to use/manage.
  • in some embodiments, object identifiers may be reused while in other embodiments object identifiers are not reused.
  • each of the metadata servers 405 needs to be provided with a sufficient range of useable object identifiers.
  • the LOC field is like the MD LOC field for the table 630 .
  • the LOC field describes the data storage location of the corresponding object.
  • the LOC field may contain an identifier for the one of the data storage servers 407 containing (handling) the object as well as a unique identifier (perhaps assigned by the one of the data storage servers 407 ) that may be used to retrieve and store data for the object.
  • an entity can pass the object identifier to the one of the metadata servers 405 and receive in return the corresponding LOC information to allow the entity to access the appropriate one of data storage servers 407 directly.
  • having a number of metadata servers 405 allows for distributed servicing of file operations (and thus significant scalability) as well as providing failover/redundancy capability. In some instances, objects may be reassigned from one of the metadata servers 405 to another. However, since each of the metadata servers 405 contains information for only a subset of files (and corresponding file objects), it may be necessary to provide a mechanism for locating an appropriate one of the metadata servers 405 in connection with performing operations.
  • the metadata location servers 408 provide location services for an entity seeking the appropriate one of the metadata servers 405 for operations on a particular file.
  • each of the metadata location servers 408 may receive a call having an object identifier and can return a specific one of the metadata servers 405 that handles the particular object.
  • the metadata location servers 408 may assist in connection with the creation of new objects by indicating to a calling entity (e.g., one of the clients 104 - 106 ) a specific one of the metadata servers 405 to be used for the new object.
  • the metadata location servers 408 may operate like Domain Name Servers on the Web, and each of the clients 104 - 106 (and other entities) may be provided with a primary and a secondary one of the metadata location servers 408 to consult.
  • a table 670 is shown as containing entries for use by one of the metadata location servers 408 .
  • Each entry includes an OID RANGE field, which indicates a range of object identifiers, and an MDS ID field, which identifies a particular one of the metadata servers 405 , or possibly a group of the metadata servers 405 , with one being primary and the remainder being secondary.
  • An entity may provide a particular object identifier to the metadata location server 408 , which may then consult the table 670 and return the corresponding value from the MDS ID field.
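  • Since the OID RANGE rows partition the object identifier space, the lookup can be a binary search over range starts. A sketch, with table 670 modeled as a sorted list of (start, end, mds_id) rows:

```python
import bisect

def mds_for_oid(ranges, oid):
    """Return the MDS ID whose OID RANGE contains the given object identifier."""
    starts = [r[0] for r in ranges]
    i = bisect.bisect_right(starts, oid) - 1   # last row starting <= oid
    if i >= 0 and ranges[i][0] <= oid <= ranges[i][1]:
        return ranges[i][2]                    # the MDS ID field
    raise KeyError(f"no metadata server for oid {oid}")

# mds_for_oid([(0, 999, "mds-a"), (1000, 1999, "mds-b")], 1500) -> "mds-b"
```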
  • the metadata location servers 408 may assign a particular one of the metadata servers 405 in connection with creation of a new object.
  • the assignment may be based on any appropriate metric, including random assignment, assignment based on geographic proximity, load balancing, and/or a policy input by a user through the user management interface 412 , discussed above.
  • a policy may indicate, for example, that new objects created by a particular client are provided on a particular metadata server.
  • a flow chart 680 illustrates processing by one of the metadata location servers 408 to assign a particular one of the metadata servers in connection with creation of a new file object. Processing begins at a first step 682 where the policy may be consulted. As discussed elsewhere herein, it may be possible to input policies through the user management interface 412 . The policies may dictate (directly or indirectly) which of the metadata servers 405 are to be used for which of the clients 104 - 106 . Note that other policies are possible. For example, the policies may indicate which of the metadata servers 405 are to be used at different times of the day (independent of the clients 104 - 106 ) or based on the load, the user, etc.
  • the policy information may be stored at the user management interface 412 and accessed in connection with the step 682 , or may be stored at the one of the metadata location servers 408 after having been previously passed thereto.
  • Following the step 682 is a step 684 where identification information for a specific one of the metadata servers 405 is returned to the calling entity. Following the step 684, processing is complete.
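A minimal sketch of the assignment performed by the flow chart 680, assuming the policy is represented as a mapping from client identity to a preferred metadata server and that load information is available; the names here are illustrative rather than taken from the patent.

```python
def assign_metadata_server(client_id, policy, servers, load):
    # Step 682: consult the policy (input, e.g., through the user
    # management interface 412); a policy may direct a client's new
    # objects to a particular metadata server.
    preferred = policy.get(client_id)
    if preferred in servers:
        return preferred  # step 684: return its identification to caller
    # Absent a policy match, any appropriate metric may be used; here,
    # simple load balancing across the metadata servers 405.
    return min(servers, key=lambda s: load.get(s, 0))
```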
  • a flow chart 700 illustrates steps performed by one of the metadata servers 405 in connection with deleting a file for which the one of the metadata servers 405 is responsible.
  • Processing begins at a first step 702 where all leases for the file are recalled (e.g., by iterating through the lease list 640 and providing recall processing like that illustrated by the flow chart 610 of FIG. 33 ).
  • a step 704 where the leases are made unavailable for other processes (e.g., by setting an appropriate flag that prevents granting further leases for the file).
  • Following the step 704 is a step 706 where the metadata for the file is obtained (e.g., by retrieving the metadata object for the file).
  • the metadata object for a file contains information about the location of all the file objects used in connection with the file.
  • a pointer, used to iterate through the objects used in connection with the file, is made to point to the first object.
  • the pointer and subsequent iterative processing uses information obtained at the step 706 to determine the objects for the file.
  • a test step 712 where it is determined if the pointer, used to iterate through the objects, points past the end (i.e., all of the file objects have been processed). If so, then control transfers from the test step 712 to a step 714 where the table entry corresponding to the file (i.e., the entry in the table 630 of FIG. 34 ) is deleted (e.g., set to null).
  • Following the step 714, processing is complete.
  • If it is determined at the step 712 that there are more file objects to process, then control transfers from the step 712 to a step 716 where the LOC information is obtained for the object.
  • the LOC information is like the information stored in the table 650 of FIG. 36 , discussed above. In some instances, the LOC information will be local to the one of the metadata servers 405 performing the processing. In other instances, it may be necessary to call one of the metadata location servers 408 to get the location information for the object.
  • Following the step 716 is a step 718 where a message is sent to the appropriate one of the data storage servers 407 (i.e., the one handling the object) to cause the object to be deleted.
  • the one of the data storage servers 407 would simply decrement a counter for the object indicating the number of users thereof.
  • When the counter is decremented to zero, the data storage server may delete the data corresponding to the object.
  • the object(s) associated with a file may be deleted asynchronously.
  • a step 722 where the pointer used to iterate through the file objects is incremented. Following the step 722, control transfers back to the test step 712 for another iteration.
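Putting the steps of the flow chart 700 together, file deletion reduces to the sketch below. The server object and its helpers (recall_all_leases, read_object, lookup_loc) are hypothetical names introduced for illustration.

```python
def delete_file(mds, file_id):
    entry = mds.table[file_id]                # entry in the table 630
    mds.recall_all_leases(entry)              # step 702: recall all leases
    entry.leases_available = False            # step 704: no further leases
    metadata = mds.read_object(entry.md_loc)  # step 706: metadata object
    for obj_id in metadata.data_objects:      # steps 708/712/722: iterate
        loc = mds.lookup_loc(obj_id)          # step 716: LOC info, possibly
                                              # via a metadata location server
        loc.storage_server.delete(obj_id)     # step 718: the server decrements
                                              # the object's user counter and
                                              # may free the data at zero,
                                              # possibly asynchronously
    mds.table[file_id] = None                 # step 714: entry set to null
```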
  • a flow chart 740 illustrates steps performed by one of the metadata servers 405 in connection with creating a new file.
  • The entity creating the file may be, e.g., one of the clients 104-106.
  • Processing begins at a first step 742 where the policy manager servers 402 are consulted to obtain policy information for new files (e.g., new files for client X have a mirror geographically located at least a certain distance from the primary data set).
  • a step 744 where the resource manager servers 406 are consulted to determine the available resources to meet the dictates of the policy obtained at the step 742 .
  • Following the step 744 is a test step 746 where it is determined whether it is possible to meet the dictates of the policy given the available resources. For example, it may not be possible to satisfy the policy of having geographically separated mirrors if all of the remaining physical storage in a system is in one geographic location.
  • test step 746 If it is determined at the test step 746 that it is not possible to fulfill a policy, then control transfers from the test step 746 to a step 748 where alternative processing is performed. Any appropriate processing may be performed at the step 748 , including returning an error indicator to the calling entity, creating the file with the next best available resources, etc. Following the step 748 , processing is complete.
  • If it is determined at the step 746 that it is possible to fulfill the policy with available resources, then control transfers from the test step 746 to a step 752 where the metadata object for the file is created. Creating the metadata object may include populating the data fields of the metadata object and obtaining storage from an appropriate one of the data storage servers 407. In an embodiment herein, the data storage servers 407 may be a pool and, absent any other specific requirements, may provide storage space at any appropriate portion of the physical storage 414 upon request.
  • the metadata objects created at the step 752 will be like those described herein. See, for example, FIGS. 4-8 and the corresponding description. Following the step 752 is a step 754 where a table entry is created in the table 630 for the new file. Following the step 754 is a step 756 where the object id of the metadata object for the file is returned to the calling entity. Following the step 756 , processing is complete.
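The creation sequence of the flow chart 740 can likewise be sketched, assuming simple policy manager and resource manager interfaces; policy_for_new_file, available_for, and allocate_metadata_object are all hypothetical names.

```python
def create_file(mds, policy_mgr, resource_mgr, pool, client_id):
    policy = policy_mgr.policy_for_new_file(client_id)  # step 742
    resources = resource_mgr.available_for(policy)      # step 744
    if not resources:                                   # test step 746
        # Step 748: alternative processing, e.g., returning an error
        # or creating the file with the next best available resources.
        raise RuntimeError("policy cannot be fulfilled")
    md_obj = pool.allocate_metadata_object(policy, resources)  # step 752
    mds.table[md_obj.file_id] = md_obj                  # step 754
    return md_obj.oid                                   # step 756: object id
                                                        # returned to caller
```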
  • a message is sent to the servers 102 by the client or other entity.
  • a message may also be sent to the servers 102 in connection with finding a stale mirror in connection with a synchronous mirror copy (see the step 358 of the flow chart 350 of FIG. 13 ), and/or writing to data having an asynchronous mirror.
  • a flow chart 760 illustrates steps performed by one of the metadata servers 405 in connection with handling a message that a write operation was unsuccessful. Processing begins at a first step 762 where all of the leases for the file are recalled. Following the step 762 is a step 764 where leases for the file are made unavailable. Following the step 764 is a step 766 where new storage space is allocated to replace the old storage space to which the write operation was unsuccessful. Following the step 766 is a step 768 where the appropriate information in the metadata object for the file is adjusted. Following the step 768 is a step 772 where a data copy operation is begun to repopulate the new data storage space with, for example, data from one of the mirrors.
  • While the data copy operation is being performed, the data may be indicated as being stale at the step 772. Of course, if there are no mirrors or other data that can be used to repopulate the new storage space, then the processing at the step 772 would not be performed. Following the step 772 is a step 774 where the leases for the file are made available. Following the step 774, processing is complete.
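In outline, the unsuccessful-write handling of the flow chart 760 might look as follows; the helper names (allocate_storage, update_metadata, find_current_mirror, start_copy) are assumptions.

```python
def handle_failed_write(mds, file_id, bad_extent):
    entry = mds.table[file_id]
    mds.recall_all_leases(entry)                          # step 762
    entry.leases_available = False                        # step 764
    new_extent = mds.allocate_storage(bad_extent.size)    # step 766
    mds.update_metadata(file_id, bad_extent, new_extent)  # step 768
    mirror = mds.find_current_mirror(file_id, bad_extent)
    if mirror is not None:          # step 772: repopulate from a mirror;
        new_extent.stale = True     # the data is stale during the copy
        mds.start_copy(src=mirror, dst=new_extent)
    entry.leases_available = True                         # step 774
```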
  • a flow chart 780 illustrates processing performed in connection with one of the metadata servers 405 receiving an indication that a synchronous mirror was stale. Processing begins at a first test step 782 where it is determined if the mirror is currently in the process of being populated with data (perhaps in connection with a previous bad write operation). If so, then the data population operation is allowed to continue and processing is complete. Otherwise, control transfers from the test step 782 to a step 784 where processing like that illustrated in the flow chart 760 of FIG. 41 , discussed above, is performed. Following the step 784, processing is complete.
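Under the same assumptions, the stale-mirror path of the flow chart 780 is a thin wrapper over the previous sketch:

```python
def handle_stale_mirror(mds, file_id, mirror_extent):
    if mirror_extent.being_populated:  # test step 782: a repopulation is
        return                         # already underway; let it finish
    # Step 784: otherwise handle the stale mirror like a failed write.
    handle_failed_write(mds, file_id, mirror_extent)
```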
  • a flow chart 790 illustrates steps performed in connection with the servers 102 receiving a message that data corresponding to asynchronous copies has been written, thus necessitating update of the asynchronous copies.
  • Processing begins at a first step 792 where information is added to a queue (e.g., a job queue provided at the affected one of the metadata servers 405 ) indicating that asynchronous data needs to be copied.
  • a process at each of the metadata servers 405 services the corresponding queue. Following the step 792 , processing is complete.
  • a flow chart 800 illustrates processing performed by a process at each of the metadata servers 405 that services the corresponding queue that is populated by the processing illustrated by the flow chart 790 of FIG. 43 .
  • Processing begins at a first test step 802 where it is determined if the queue is empty. If so, then control transfers back to the test step 802 to continue to poll the queue. If the queue is not empty, then control transfers from the test step 802 to a step 804 where the next entry in the queue is removed.
  • queue elements may be processed on a first in first out basis. However, it is also possible to selectively remove elements from the queue in any order. For example, elements associated with files that receive higher priority may be removed before other elements (e.g., in connection with a flush operation for the file). As another example, elements may be removed according to size (of needed write operation) or according to any other appropriate criteria.
  • Following the step 804 is a step 806 where the write lease for the affected file is recalled.
  • Following the step 806 is a step 808 where the write lease is obtained.
  • Following the step 808 is a step 812 where the write operation is performed to write the asynchronous data to the mirror.
  • Following the step 812 is a step 814 where the write lease is released.
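The producer/consumer behavior of the flow charts 790 and 800 maps naturally onto a queue. This sketch uses Python's standard queue module; the lease and mirror-write helpers are assumptions.

```python
import queue

def note_async_write(jobs: queue.Queue, file_id, data_range):
    # Flow chart 790, step 792: record that asynchronous mirror data
    # needs to be copied; a background process services the queue.
    jobs.put((file_id, data_range))

def service_queue(mds, jobs: queue.Queue):
    # Flow chart 800: remove the next entry (steps 802/804). A blocking
    # get() gives first-in-first-out order; priority- or size-based
    # removal is equally possible, as noted above.
    while True:
        file_id, data_range = jobs.get()
        mds.recall_write_lease(file_id)           # step 806
        lease = mds.obtain_write_lease(file_id)   # step 808
        mds.write_to_mirror(file_id, data_range)  # step 812
        lease.release()                           # step 814
```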
  • the system described herein may be used with any server, or any group of servers, capable of providing file objects to clients.
  • the particular form of the file objects may vary without departing from the spirit and scope of the invention. In some instances, the order of steps in the flow charts may be modified, where appropriate.
  • the system described herein may be implemented using a computer program product/software provided in a computer-readable storage medium.

Abstract

Managing file objects in a data storage system includes providing a plurality of metadata servers, each metadata server having information for only a subset of file objects in the storage system, each subset for each of the metadata servers being less than a total number of file objects for the system and includes providing at least one metadata location server, where an entity accessing a specific file object of the system determines which of the metadata servers contains data for the specific file object by first accessing the at least one metadata location server. Information for the subset of file objects may include a table having file identifiers and metadata location information. The metadata location information may point to data storage for a metadata object for a file.

Description

    BACKGROUND OF THE INVENTION
  • 1. Technical Field
  • This application relates to the field of storing data, and more particularly to the field of data storage services in a scalable high capacity system.
  • 2. Description of Related Art
  • It has been estimated that the amount of digital information created, captured, and replicated in 2006 was 161 exabytes or 161 billion gigabytes, which is about three million times the information in all the books ever written. It is predicted that between 2006 and 2010, the information added annually to the digital universe will increase more than six fold from 161 exabytes to 988 exabytes. The type of information responsible for this massive growth is rich digital media and unstructured business content. There is also an ongoing conversion from analog to digital formats—film to digital image capture, analog to digital voice, and analog to digital TV.
  • The rich digital media and unstructured business content have unique characteristics and storage requirements that are different than structured data types (e.g. database records), for which many of today's storage systems were specially designed. Many conventional storage systems are highly optimized to deliver high performance I/O for small chunks of data. Furthermore, these systems were designed to support gigabyte and terabyte sized information stores.
  • In contrast, rich digital media and unstructured business content have greater capacity requirements (petabyte versus gigabyte/terabyte sized systems), less predictable growth and access patterns, large file sizes, billions and billions of objects, high throughput requirements, single writer, multiple reader access patterns, and a need for multi-platform accessibility. Conventional storage systems have met these needs in part by using specialized hardware platforms to achieve required levels of performance and reliability. Unfortunately, the use of specialized hardware results in higher customer prices and may not support volume economics as the capacity demands grow large—a differentiating characteristic of rich digital media and unstructured business content.
  • Some of the cost issues have been addressed with tiered storage, which attempts to reduce the capital and operational costs associated with keeping all information on a single high-cost storage tier. However, tiered storage comes with a complex set of decisions surrounding technology, data durability, functionality and even storage vendor. Tiered storage solutions may introduce unrelated platforms, technologies, and software titles having non-zero operational costs and management requirements that become strained as the quantity of data increases.
  • In addition, tiered storage may cause a data replica incoherence which results in multiple, disjoint copies of information existing across the tiers of storage. For example, storage management software handling data backup and recovery may make multiple copies of information sets on each storage tier (e.g. snapshots, backup sets, etc). Information Life-cycle Management (ILM) software dealing with information migration from one tier to another may create additional and often overlapping copies of the data. Replication software may make an extra copy of the information set within a particular tier in order to increase performance to accessing applications. Each of these functions typically runs autonomously from one another. The software may be unable to realize and/or take advantage of the multiple replicas of the same information set.
  • In addition, for large scale unstructured information stores, it may be difficult to maintain a system and manage the environment as components fail. For example, a two petabyte information store may be comprised of eight thousand 250-gigabyte disk drives. Disk failures should be handled in a different manner in a system of this scale so that the system continues to operate relatively smoothly whenever one or only a few of the disk drives fail.
  • Thus, it would be desirable to provide a storage system that addresses difficulties associated with high-cost specialized hardware, storage tiering, and failure management.
  • SUMMARY OF THE INVENTION
  • According to the system described herein, managing file objects in a data storage system includes providing a plurality of metadata servers, each metadata server having information for only a subset of file objects in the storage system, each subset for each of the metadata servers being less than a total number of file objects for the system and includes providing at least one metadata location server, where an entity accessing a specific file object of the system determines which of the metadata servers contains data for the specific file object by first accessing the at least one metadata location server. Information for the subset of file objects may include a table having file identifiers and metadata location information. The metadata location information may point to data storage for a metadata object for a file. The table may also include lease information that indicates entities that have read permission for the file object and entities that have write permission for the file object. File objects may be provided with unique object identifiers. The at least one metadata location server may determine an appropriate metadata server to use for a file object according to an object identification value for the file object. At least some of the metadata servers may maintain a table that correlates a subset of the object identification values with file objects. At least some of the metadata servers may be provided with a unique subset of object identification values for assigning to new file objects.
  • According further to the system described herein, computer software, provided in a computer-readable medium, manages file objects in a data storage system. The software includes executable code, provided on a plurality of metadata servers, that maintains information for only a subset of file objects in the storage system, each subset for each of the metadata servers being less than a total number of file objects for the system and includes executable code, provided on at least one metadata location server, that provides an indication of which of the metadata servers contains data for a specific file object in response to a request therefor provided to the at least one metadata location server. Information for the subset of file objects may include a table having file identifiers and metadata location information. The metadata location information may point to data storage for a metadata object for a file. The table may also include lease information that indicates entities that have read permission for the file object and entities that have write permission for the file object. File objects may be provided with unique object identifiers. The at least one metadata location server may include executable code that determines an appropriate metadata server to use for a file object according to an object identification value for the file object. At least some of the metadata servers may maintain a table that correlates a subset of the object identification values with file objects. At least some of the metadata servers may be provided with a unique subset of object identification values for assigning to new file objects.
  • According further to the system described herein, a data storage system includes a plurality of clients that access file objects of the storage system and a plurality of interconnected servers coupled to the clients, the servers including a plurality of metadata servers, each metadata server having information for only a subset of file objects in the storage system, each subset for each of the metadata servers being less than a total number of file objects for the system, the servers also including at least one metadata location server, where a client accessing a specific file object of the system determines which of the metadata servers contains data for the specific file object by first accessing the at least one metadata location server. Information for the subset of file objects may include a table having file identifiers and metadata location information. The metadata location information may point to data storage for a metadata object for a file. File objects may be provided with unique object identifiers. The at least one metadata location server may determine an appropriate metadata server to use for a file object according to an object identification value for the file object. At least some of the metadata servers may maintain a table that correlates a subset of the object identification values with file objects. At least some of the metadata servers may be provided with a unique subset of object identification values for assigning to new file objects.
  • According further to the system described herein, managing file objects in a data storage system includes providing a plurality of a first type of server, each having information for only a subset of file objects in the storage system and includes providing at least one of a second type of server that provides information indicating which of the servers of the first type contains data for a specific file object by first accessing the at least one server of the second type.
  • According further to the system described herein, a data storage system includes a plurality of clients that access file objects of the storage system and includes a plurality of interconnected servers coupled to the clients, the servers including a plurality of metadata servers, each metadata server having information for only a subset of file objects in the storage system, each subset for each of the metadata servers being less than a total number of file objects for the system, the servers also including at least one metadata location server, where a client accessing a specific file of the system determines which of the metadata servers contains data for the specific file object by first accessing the at least one metadata location server.
  • According further to the system described herein, managing a plurality of files includes providing at least one server having a table with plurality of file identifiers, each of the file identifiers including a pointer to a metadata file object containing a layout storage object that indicates storage parameters for corresponding files and includes accessing a particular one of the files by obtaining information from the at least one server to access the layout storage object corresponding to the particular file to determine synchronous and asynchronous mirrors for the file. Each layout storage object may be provided as a tree and leaves of the tree may map logical addresses of a corresponding file to data storage locations. At least one internal node of the tree may indicate that at least one leaf node thereof is a mirror of an other leaf node thereof. In response to a data write and an internal node indicating that a first leaf node thereof is an asynchronous mirror of a second leaf node thereof, data may be written to the first leaf node and an update message may be provided to the at least one server. Managing a plurality of files may also include the at least one server placing information on a queue corresponding to the message and servicing the queue to write the data to the second leaf node. In response to a data write and an internal node indicating that a first leaf node thereof is a synchronous mirror of a second leaf node thereof, data may be written to the first leaf node and to the second leaf node. In response to accessing data being unsuccessful, a mirror thereof may be accessed. The layout storage object may include pointers to data file objects that map a logical address space for a corresponding file to data storage space corresponding to physical data storage. Each data file object may have a state indicating that the data file object is one of: current, an out-of-date mirror of another data object, immutable, and invalid. In response to an attempt to write to a data file object that is immutable, new storage space may be allocated and data may be written to the new storage space. In response to an attempt to read a data file object that is invalid, zeros may be returned.
  • According further to the system described herein, computer software, provided in a computer-readable medium, manages a plurality of files. The software includes executable code provided in at least one server that manages a table with plurality of file identifiers, each of the file identifiers including a pointer to a metadata file object containing a layout storage object that indicates storage parameters for corresponding files and includes executable code that accesses a particular one of the files by obtaining information from the at least one server to access the layout storage object corresponding to the particular file to determine synchronous and asynchronous mirrors for the file. Each layout storage object may be provided as a tree and leaves of the tree may map logical addresses of a corresponding file to data storage locations. At least one internal node of the tree may indicate that at least one leaf node thereof is a mirror of an other leaf node thereof. In response to a data write and an internal node indicating that a first leaf node thereof is an asynchronous mirror of a second leaf node thereof, executable code may cause the data to be written to the first leaf node and an update message to be provided to the at least one server. Executable code at the at least one server may place information on a queue corresponding to the message and may service the queue to write the data to the second leaf node. In response to a data write and an internal node indicating that a first leaf node thereof is a synchronous mirror of a second leaf node thereof, executable code may cause the data to be written to the first leaf node and to the second leaf node. The computer software may also include executable code that, in response to accessing data being unsuccessful, accesses a mirror thereof. The layout storage object may include pointers to data file objects that map a logical address space for a corresponding file to data storage space corresponding to physical data storage. Each data file object may have a state indicating that the data file object is one of: current, an out-of-date mirror of another data object, immutable, and invalid. The computer software may also include executable code that allocates new storage space and writes data to the new storage space in response to an attempt to write to a data file object that is immutable. The computer software may also include executable code that causes zeros to be returned in response to an attempt to read a data file object that is invalid.
  • According further to the system described herein, managing a plurality of files includes providing at least one server having a plurality of file identifiers, each of the file identifiers including a pointer to a metadata file object containing a layout storage object that indicates storage parameters for corresponding files and includes accessing a particular one of the files by obtaining information from the at least one server to access the layout storage object corresponding to the particular file to determine how data for the file is stored and retrieved.
  • According further to the system described herein, a data storage system includes a plurality of servers, where at least one server has a table with plurality of file identifiers, each of the file identifiers including a pointer to a metadata file object containing a layout storage object that indicates storage parameters for corresponding files and includes a plurality of clients that access a particular one of the files by obtaining information from the at least one server to access the layout storage object corresponding to the particular file to determine synchronous and asynchronous mirrors for the file.
  • According further to the system described herein, a data storage system includes a plurality of servers, where at least one server has a plurality of file identifiers, each of the file identifiers including a pointer to a metadata file object containing a layout storage object that indicates storage parameters for corresponding files and includes a plurality of clients that access a particular one of the files by obtaining information from the at least one server to access the layout storage object corresponding to the particular file to determine how data for the file is stored and retrieved.
  • According further to the system described herein, managing files includes receiving a plurality of policies for a new file, where different policies may be applied according to predetermined conditions, receiving a request to create a new file, selecting an appropriate policy for the new file according to the predetermined conditions, determining if sufficient resources are available according to the appropriate policy, and obtaining file objects to create the file in response to their being sufficient resources according to the appropriate policy. Policies may be received via a user management interface. The predetermined conditions may include an identity of a client creating a file. The appropriate policy may include information regarding mirrors for the file. The appropriate policy may indicate a minimum geographic distance between the mirrors. Determining if sufficient resources are available may include consulting a resource manager. The resource manager may receive information about system resources and maintains a table indicative thereof. The system resources may include data storage areas.
  • According further to the system described herein, computer software, provided in a computer-readable medium, manages files. The software includes executable code that receives a plurality of policies for a new file, wherein different policies may be applied according to predetermined conditions, executable code that receives a request to create a new file, executable code that selects an appropriate policy for the new file according to the predetermined conditions, executable code that determines if sufficient resources are available according to the appropriate policy and executable code that obtains file objects to create the file in response to their being sufficient resources according to the appropriate policy. Policies may be received via a user management interface. The predetermined conditions may include an identity of a client creating a file. The appropriate policy may include information regarding mirrors for the file. The appropriate policy may indicate a minimum geographic distance between the mirrors. Executable code that determines if sufficient resources are available may include executable code that consults a resource manager. The resource manager may include executable code that receives information about system resources and maintains a table indicative thereof. The system resources may include data storage areas.
  • According further to the system described herein, a data storage system includes a plurality of clients that access files and includes a plurality of interconnected servers, coupled to the clients, a subset of the servers receiving a plurality of policies for a new file, where different policies may be applied according to predetermined conditions, a subset of the servers receiving a request to create a new file, a subset of the servers selecting an appropriate policy for the new file according to the predetermined conditions, a subset of the servers determining if sufficient resources are available according to the appropriate policy, and a subset of the servers obtaining file objects to create the file in response to their being sufficient resources according to the appropriate policy. Policies may be received via a user management interface. The predetermined conditions may include an identity of a client creating a file. The appropriate policy may include information regarding mirrors for the file. The servers include a subset of servers that are consulted to determine if sufficient resources are available. The subset of servers that are consulted to determine if sufficient resources are available may receive information about system resources and maintains a table indicative thereof. The system resources may include data storage areas.
  • According further to the system described herein, managing file objects includes receiving a plurality of policies for file objects, where different policies may be applied according to predetermined conditions, receiving a request to obtain new file objects, selecting an appropriate policy for the new file objects according to the predetermined conditions, determining if sufficient resources are available according to the appropriate policy, and obtaining file objects in response to their being sufficient resources according to the appropriate policy.
  • According further to the system described herein, a data storage system includes a plurality of clients that access file objects and includes a plurality of interconnected servers, coupled to the clients, a subset of the servers receiving a plurality of policies for new file objects, where different policies may be applied according to predetermined conditions, a subset of the servers receiving a request to obtain new file objects, a subset of the servers selecting an appropriate policy for the new file objects according to the predetermined conditions, a subset of the servers determining if sufficient resources are available according to the appropriate policy, and a subset of the servers obtaining the file objects in response to their being sufficient resources according to the appropriate policy.
  • According further to the system described herein, tracking storage resources includes providing a table containing storage resources along with capabilities and statuses thereof, updating the table in response to a change of status of a storage resource, updating the table in response to a change in capabilities of a storage resource and, in response to an inquiry for a storage resource having a particular capability, searching the table for a storage resource having the particular capability. Tracking storage resources may also include adding an element to the table in response to a new resource being added to the system. The capabilities may include RAID striping, data deduplication, and green operation. The status may be one of: on-line, off-line, and full. The storage resources may be disk drives. The disk drives may be managed by data storage servers that present an OSD interface for the disk drives. The table may be maintained by a resource manager server that receives information about storage resources from other servers.
  • According further to the system described herein, computer software, provided in a computer-readable storage medium, tracks storage resources. The software includes a table that contains storage resources along with capabilities and statuses thereof, executable code that updates the table in response to a change of status of a storage resource, executable code that updates the table in response to a change in capabilities of a storage resource, and executable code that searches the table for a storage resource having a particular capability in response to an inquiry for a storage resource having the particular capability. The software may also include executable code that adds an element to the table in response to a new resource being added to the system. The capabilities may include RAID striping, data deduplication, and green operation. The status may be one of: on-line, off-line, and full. The storage resources may be disk drives. The disk drives may be managed by data storage servers that present an OSD interface for the disk drives. The table may be maintained by a resource manager server that receives information about storage resources from other servers.
  • According further to the system described herein, a resource manager that manages storage resources for a storage system includes a processing device and a computer-readable memory coupled to the processing device, the computer-readable memory having a table provided in a data structure and containing storage resources along with capabilities and statuses thereof, the computer-readable memory also having executable code that updates the table in response to a change of status of a storage resource, executable code that updates the table in response to a change in capabilities of a storage resource, and executable code that searches the table for a storage resource having a particular capability in response to an inquiry for a storage resource having the particular capability. The computer-readable memory may also contain executable code that adds an element to the table in response to a new resource being added to the system. The capabilities may include RAID striping, data deduplication, and green operation. The status may be one of: on-line, off-line, and full. The storage resources may be disk drives. The disk drives may be managed by data storage servers that present an OSD interface for the disk drives.
  • According further to the system described herein, a data storage system includes a plurality of clients and a plurality of servers coupled to the clients, where a subset of the servers manage storage resources using a table containing storage resources along with capabilities and statuses thereof, where the subset updates the table in response to a change of status of a storage resource, updates the table in response to a change in capabilities of a storage resource, and searches the table for a storage resource having the particular capability in response to an inquiry for a storage resource having a particular capability. The subset of servers may add an element to the table in response to a new resource being added to the system. The storage resources may be disk drives.
  • According further to the system described herein, providing information to a resource manager of a data storage system includes providing information to the resource manager in response to a change in capabilities of a storage resource, providing information to the resource manager in response to a change in status of a storage resource, and providing information to the resource manager in response to adding a new storage resource. The storage resources may be disk drives.
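As an illustration of the resource tracking described above, the table of storage resources, their capabilities, and their statuses might be sketched as follows; the capability and status vocabularies come from the text, while the class and method names are assumptions.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Resource:
    name: str
    capabilities: Set[str]   # e.g., {"raid-striping", "dedup", "green"}
    status: str = "on-line"  # one of: on-line, off-line, full

class ResourceManager:
    def __init__(self):
        self.table: List[Resource] = []

    def add(self, r: Resource):
        # A new resource added to the system gets a new table element.
        self.table.append(r)

    def set_status(self, name: str, status: str):
        # Status (or capability) changes update the existing entry.
        for r in self.table:
            if r.name == name:
                r.status = status

    def find(self, capability: str) -> List[Resource]:
        # Search the table for resources having a particular capability.
        return [r for r in self.table
                if capability in r.capabilities and r.status == "on-line"]
```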
  • According further to the system described herein, accessing data file objects includes providing a file system interface on at least one client, where the file system interface allows an application running on the at least one client to make file system calls to access the data file objects and providing a Web Services interface on the at least one client, where the Web interface allows an application to access file objects using the Web at the same time that file objects are being accessed through the file system interface. The Web Services interface may be a SOAP interface and/or a REST interface. Accessing data file objects may also include providing a direct file object interface on the at least one client, where the direct file object interface allows an application to directly access file objects at the same time that file objects are being accessed using the Web Services interface and using the file system interface. Accessing data file objects may also include providing file name services that translate file names into file object identifiers, where the file name services are accessed by the file system interface and/or the Web Services interface. Providing the file system interface may include providing a virtual file system in a kernel address space of the at least one client. Accessing data file objects may also include providing a layout manager that manages file objects. The layout manager may be provided in the kernel address space of the at least one client. The layout manager is provided in user address space of the at least one client. The virtual file system may access the layout manager using a bridge between kernel memory address space and user memory address space.
  • According further to the system described herein, a client that accesses a plurality of file objects includes a file system interface that allows an application running on the at least one client to make file system calls to access the data file objects, a Web Services interface that allows an application to access file objects using the Web at the same time that file objects are being accessed through the file system interface, and a communication interface, coupled to the file system interface and to the Web Services interface, that exchanges file object data between the client and a plurality of servers containing the file objects. The client may also include a direct file object interface that allows an application to directly access file objects at the same time that file objects are being accessed using the Web Services interface and using the file system interface. The client may also include a file name service that translates file names into file object identifiers, where the file name service is accessed by the file system interface and/or the Web Services interface. The file system interface may include a virtual file system in a kernel address space of the at least one client. The client may also include a layout manager that manages file objects. The layout manager may be provided in the kernel address space of the at least one client. The layout manager may be provided in user address space of the at least one client. The virtual file system may access the layout manager using a bridge between kernel memory address space and user memory address space.
  • According further to the system described herein, computer software, provided in a computer-readable storage medium, includes executable code that allows an application running on at least one client to make file system calls to access the data file objects and includes executable code that allows an application to access file objects using the Web at the same time that file objects are being accessed through the file system interface. The computer software may also include executable code that allows an application to directly access file objects at the same time that file objects are being accessed using the Web and using file system calls. The computer software may also include executable code that translates file names into file object identifiers.
  • According further to the system described herein, a system simultaneously allows accessing a plurality of file objects using a file system interface and a Web Services interface. The system includes a plurality of servers that contain the file objects and at least one client, coupled to the servers. The client includes a file system interface that allows an application running on the at least one client to make file system calls to access the data file objects, a Web Services interface that allows an application to access file objects using the Web at the same time that file objects are being accessed through the file system interface, and a communication interface, coupled to the file system interface and the Web Services interface, that exchanges file object data between the client and a plurality of servers containing the file objects. The file system interface may include a virtual file system in a kernel address space of the client.
  • According further to the system described herein, managing objects for a data file includes obtaining a metadata object for the file, determining if the metadata object indicates that a portion of the data file includes synchronous mirrors, and, in response to a write of new data to the data file, writing the new data to each synchronous mirror. Data objects of a data file may be in one of four states: current, stale, empty, and immutable. In response to a read operation and the corresponding data object being in a stale state, data may be read for a corresponding synchronous mirror that is in a current state.
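A minimal sketch of these state and mirroring rules, assuming data objects expose a state attribute and read/write methods (all names here are illustrative):

```python
# The four data-object states named above.
CURRENT, STALE, EMPTY, IMMUTABLE = "current", "stale", "empty", "immutable"

def write_sync(mirrors, offset, data):
    # New data written to the file is written to each synchronous mirror.
    for m in mirrors:
        m.write(offset, data)

def read(obj, mirrors, offset, length):
    # If the addressed object is stale, read from a corresponding
    # synchronous mirror that is current (assumes one exists).
    if obj.state == STALE:
        obj = next(m for m in mirrors if m.state == CURRENT)
    return obj.read(offset, length)
```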
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a diagram illustrating servers and clients according to an embodiment of the system described herein.
  • FIGS. 2A and 2B are diagrams illustrating a client coupled to servers and to other network(s) according to an embodiment of the system described herein.
  • FIG. 3 is a diagram illustrating a client having server operations software, client software, and a plurality of interfaces therebetween according to an embodiment of the system described herein.
  • FIG. 4 is a diagram illustrating a file having a metadata file object and a plurality of data file objects according to an embodiment of the system described herein.
  • FIG. 5 is a diagram illustrating a metadata file object for a file according to an embodiment of the system described herein.
  • FIG. 6 is a diagram illustrating an example of a layout storage object tree for a file according to an embodiment of the system described herein.
  • FIG. 7 is a diagram illustrating an example of a layout storage object tree with multiple maps for a file according to an embodiment of the system described herein.
  • FIG. 8 is a diagram illustrating another example of a layout storage object tree with multiple maps and replication nodes for a file according to an embodiment of the system described herein.
  • FIG. 9 is a flowchart illustrating a client obtaining a lease for and operating on a file according to an embodiment of the system described herein.
  • FIG. 10 is a flowchart illustrating a client reading data from a file according to an embodiment of the system described herein.
  • FIG. 11 is a flowchart illustrating a client writing data to a file according to an embodiment of the system described herein.
  • FIG. 12 is a flowchart illustrating steps performed by a client in connection with finding an alternative copy of data according to an embodiment of the system described herein.
  • FIG. 13 is a flowchart illustrating a client writing to synchronous mirrors for data according to an embodiment of the system described herein.
  • FIG. 14 is a flow chart illustrating a client converting file names to object identifiers according to an embodiment of the system described herein.
  • FIG. 15 is a diagram illustrating a client having an application in user memory address space and having a VFS, file name services, kernel I/O drivers, layout manager, and a communication interface in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 16 is a flow chart illustrating operation of a VFS at a client according to an embodiment of the system described herein.
  • FIG. 17 is a diagram illustrating a client having an application, file name services, user level I/O drivers, and a layout manager in user memory address space and having a communication interface in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 18 is a diagram illustrating a client having an application, a file presentation layer, user level I/O drivers, and a layout manager in user memory address space and having a VFS and communication interface and a kernel memory address space to user memory address space bridge in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 19 is a diagram illustrating a client having an application in user memory address space and having file name services, kernel I/O drivers, a layout manager, and a communication interface in kernel address space according to an embodiment of the system described herein.
  • FIG. 20 is a diagram illustrating a client having an application, file name services, user level I/O drivers, and a layout manager in user memory address space and having a communication interface in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 21 is a diagram illustrating a client having an application, file name services, user level I/O drivers, and a layout manager in user memory address space and having a communication interface and a kernel memory address space to user memory address space bridge in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 22 is a diagram illustrating a client having an application in user memory address space and having a Web Services module, kernel I/O drivers, a layout manager, and a communication interface in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 23 is a diagram illustrating a client having an application, a Web Services layer, user level I/O drivers, and a layout manager in user memory address space and having a communication interface in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 24 is a diagram illustrating a client having an application, a Web Services layer, user level I/O drivers, and a layout manager in user memory address space and having a communication interface and a kernel memory address space to user memory address space bridge in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 25 is a diagram illustrating a client having a plurality of applications, a Web Services layer, file name services, user level I/O drivers, and a layout manager in user memory address space and having a VFS, a communication interface and a kernel memory address space to user memory address space bridge in kernel memory address space according to an embodiment of the system described herein.
  • FIG. 26 is a diagram illustrating different types of servers and a user management interface according to an embodiment of the system described herein.
  • FIG. 27 is a flow chart illustrating steps performed in connection with using security manager servers to obtain credentials for using policy manager servers according to an embodiment of the system described herein.
  • FIG. 28 is a diagram illustrating a resource manager table according to an embodiment of the system described herein.
  • FIG. 29 is a flow chart illustrating steps performed in connection with processing resource information to update a resource table according to an embodiment of the system described herein.
  • FIG. 30 is a flow chart illustrating steps performed in connection with finding a resource with a desired capability according to an embodiment of the system described herein.
  • FIG. 31 is a flow chart illustrating steps performed in connection with a metadata server servicing a lease request according to an embodiment of the system described herein.
  • FIG. 32 is a flow chart illustrating steps performed in connection with monitoring and recalling expired leases according to an embodiment of the system described herein.
  • FIG. 33 is a flow chart illustrating steps performed in connection with lease recall processing according to an embodiment of the system described herein.
  • FIG. 34 is a table illustrating data maintained for files handled by a metadata server according to an embodiment of the system described herein.
  • FIG. 35 illustrates a linked list for file leases that are handled by a metadata server according to an embodiment of the system described herein.
  • FIG. 36 is a table illustrating data maintained by a metadata server to correlate object identifiers and data storage locations according to an embodiment of the system described herein.
  • FIG. 37 illustrates a table at a metadata location server that correlates object identifier ranges and metadata servers according to an embodiment of the system described herein.
  • FIG. 38 is a flow chart illustrating processing performed by a metadata location server in connection with returning a particular metadata server according to an embodiment of the system described herein.
  • FIG. 39 is a flow chart illustrating processing performed by a metadata server in connection with deleting a file according to an embodiment of the system described herein.
  • FIG. 40 is a flow chart illustrating processing performed by a metadata server in connection with creating a file according to an embodiment of the system described herein.
  • FIG. 41 is a flow chart illustrating processing performed by a metadata server in connection with responding to a failed data write operation according to an embodiment of the system described herein.
  • FIG. 42 is a flow chart illustrating processing performed by a metadata server in connection with responding to a stale mirror update message according to an embodiment of the system described herein.
  • FIG. 43 is a flow chart illustrating processing performed by a metadata server in connection with adding an asynchronous copy job to a queue according to an embodiment of the system described herein.
  • FIG. 44 is a flow chart illustrating processing performed by a metadata server in connection with servicing an asynchronous copy job according to an embodiment of the system described herein.
  • DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS
  • Referring to FIG. 1, a diagram illustrates servers 102 coupled to a plurality of clients 104-106. Each of the clients 104-106 represents one or more processing devices that receive file services from the servers 102. Each of the clients 104-106 may or may not be independent of other ones of the clients 104-106. One or more of the clients 104-106 may be a multiprocessing/multiuser system and possibly have multiple independent users. The clients 104-106 are meant to represent any number of clients.
  • The file services provided by the servers 102 may include data storage and retrieval as well as related operations, such as data mirroring, cloning, etc. The servers 102 may be implemented using a plurality of services (and/or interconnected file servers including SAN components) that are provided by interconnected processing and/or storage devices. In an embodiment herein, each of the clients 104-106 may be coupled to the servers 102 using the Web, possibly in conjunction with local TCP/IP connections. However, it is possible for one or more of the clients 104-106 to be coupled to the servers 102 using any other appropriate communication mechanism and/or combinations thereof to provide the functionality described herein.
  • Referring to FIG. 2A, the client 104 is shown as being coupled to the servers 102 and to one or more other network(s). The other network(s) may include a local area network (LAN). Thus, the client 104 may be a gateway between the servers 102 and a LAN to which one or more other devices (not shown) may also be coupled. The client 104 may act as a local file server to the one or more other devices coupled to the LAN by providing data from the servers 102 to the one or more other devices. Of course, it is possible for one or more other clients to simultaneously act as gateways to the same or different other network(s). Generally, for the discussion herein, reference to a particular one of the clients 104-106 may be understood to include reference to any or all of the clients 104-106 coupled to the servers 102 unless otherwise indicated.
  • Referring to FIG. 2B, a diagram shows the client 104 being coupled to the servers 102 and one or more other network(s) (e.g., a LAN) in a configuration that is different from that shown in FIG. 2A. In the configuration of FIG. 2B, a router 108 is coupled between the servers 102 and the client 104. The router 108 may be any conventional router that may be accessed by the client 104. In the configuration of FIG. 2B, the client 104 uses only a single connection point to both the servers 102 and to the other network(s). In the configuration of FIG. 2B, the client 104 may act as a local file server and gateway between the servers 102 and one or more other devices (not shown) coupled to the other network(s). Of course, any other appropriate connection configurations may be used by any of the clients 104-106 coupled to the servers 102 and/or to other network(s).
  • Referring to FIG. 3, the client 104 is shown in more detail having server operations software 122, client software 124, and an interface layer 125 that includes a plurality of interfaces 126-128 between the server operations software 122 and the client software 124. The server operations software 122 facilitates the exchange of information/data between the client 104 and the servers 102 to provide the functionality described herein. The server operations software 122 is described in more detail elsewhere herein. The client software 124 represents any software that may be run on the client 104, including application software, operating system software, Web server software, etc., that is not part of the server operations software 122 or the interface layer 125. As described in more detail elsewhere herein, it is possible to have the client software 124 interact with the servers 102 through different ones of the interfaces 126-128 at the same time.
  • The file services described herein may be implemented by the servers 102 using a set of file objects where a file that is accessed by the client software includes a metadata file object which points to one or more data file objects that contain the data for the file. Accessing the file would involve first accessing the metadata file object to locate the corresponding data file objects for the file. Doing this is described in more detail elsewhere herein. Note, however, that any appropriate file object mechanism may be used for the system described herein.
  • Referring to FIG. 4, a file 130 is shown as including a metadata file object 132 and a plurality of data file objects. The metadata file object 132 contains information that points to each of the data file objects 134-136. Accessing the file includes first accessing the metadata file object 132 and then using information therein to locate the appropriate one or more of the corresponding data file objects 134-136.
  • Referring to FIG. 5, the metadata file object 132 is shown in more detail as including an object attributes section 142 and a Layout Storage Object (LSO) tree section 144. The object attributes section contains conventional file-type attributes such as owner id, group id, access control list, last modification time, last access time, last change time, creation time, file size, and link count. Many of the attributes are self-explanatory. The last modification time corresponds to the last time that the data for the data objects 134-136 had been modified while the last change time corresponds to when the object metadata had last been changed. The link count indicates the number of other objects that reference a particular file (e.g., aliases that point to the same file). In an embodiment herein, a file and its related objects are deleted when the link count is decremented to zero.
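  • As an illustrative aid only, the following Python sketch shows one possible in-memory shape for the metadata file object of FIG. 5; the class and field names are assumptions for exposition and do not reflect the system's actual layout.

```python
# A minimal sketch of the metadata file object of FIG. 5. All names here
# are illustrative assumptions, not the system's actual representation.
from dataclasses import dataclass
from typing import List

@dataclass
class ObjectAttributes:
    owner_id: int
    group_id: int
    access_control_list: List[str]
    last_modification_time: float  # last time the file data was modified
    last_access_time: float
    last_change_time: float        # last time the object metadata changed
    creation_time: float
    file_size: int
    link_count: int                # object is deleted when this reaches zero

@dataclass
class MetadataFileObject:
    attributes: ObjectAttributes   # the object attributes section 142
    lso_tree: object               # the LSO tree section 144 (see FIGS. 6-8)
```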
  • The LSO tree section 144 includes a data structure that includes one or more maps for mapping the logical space of the file to particular data file objects. The LSO tree section 144 may also indicate any mirrors for the data and whether the mirrors are synchronous or asynchronous. LSO trees and mirrors are described in more detail elsewhere herein.
  • Referring to FIG. 6, a simple LSO tree 160 is shown as including an LSO root node 162 and a single map 164. The LSO root node 162 is used to identify the LSO tree 160 and includes links to one or more map(s) used in connection with the file corresponding to the LSO tree 160. The map 164 maps logical locations within the file to actual data storage locations. A process that accesses logical storage space of a file represented by the LSO tree 160 first uses the LSO root node 162 to find the map 164 and then uses the map 164 to translate logical addresses within the file to actual data storage locations.
  • Referring to FIG. 7, an LSO tree 170 is shown as including an LSO root node 172 and a plurality of maps 174-176. Each of the maps 174-176 may represent a different range of logical offsets within the file corresponding to the LSO tree 170. For example, the map 174 may correspond to a first range of logical offsets in the file. The map 174 may map logical locations in the first range to a first actual storage device. The map 175 may correspond to a second range of logical offsets in the file, different than the first range, which may be mapped to a different actual storage device or may be mapped to the same actual storage device as the map 174. Similarly, the map 176 may correspond to a third range of logical offsets in the file, different than the first range and the second range, which may be mapped to a different actual storage device or may be mapped to the same actual storage device as the map 174 and/or the map 175.
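  • To make the mapping concrete, the hedged Python sketch below models an LSO root node with several maps, each covering a distinct range of logical offsets as in FIG. 7; the field names and the translate method are hypothetical.

```python
# Hypothetical model of an LSO tree with multiple maps (FIG. 7); each map
# covers one range of logical offsets and points at one storage device.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LSOMap:
    start: int           # first logical offset covered by this map
    end: int             # one past the last logical offset covered
    storage_device: str  # identifier of the actual storage device
    device_offset: int   # base location of this range on that device

@dataclass
class LSORootNode:
    maps: List[LSOMap]

    def translate(self, logical_offset: int) -> Tuple[str, int]:
        """Translate a logical file offset to (device, physical offset)."""
        for m in self.maps:
            if m.start <= logical_offset < m.end:
                return m.storage_device, m.device_offset + (logical_offset - m.start)
        raise ValueError("logical offset not mapped")
```

  • For example, with one map covering offsets 0-1000 on a first device and another covering 1000-2000 on a second device, translate(1500) returns the second device and a physical offset 500 past that map's base.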
  • Referring to FIG. 8, an LSO tree 180 is shown as including an LSO root node 181 and a pair of replication nodes 182 a, 182 b, which indicate that the underlying data is to be mirrored (replicated) and which indicate whether the mirror is synchronous or asynchronous. Synchronous and asynchronous mirrors are discussed in more detail elsewhere herein. The node 182 a has a plurality of children maps 183-185 associated therewith while the node 182 b has a plurality of children maps 186-188 associated therewith. The replication nodes 182 a, 182 b indicate that the data corresponding to the maps 183-185 is a mirror of data corresponding to the maps 186-188. In some embodiments, the nodes 182 a, 182 b may be implemented using a single node 189 to indicate replication.
  • A process accessing a file having the LSO tree 180 would traverse the tree 180 and determine that data is mirrored. As discussed in more detail elsewhere herein, depending upon the type of mirroring, the process accessing the LSO tree 180 would either write the data to the children of both of the nodes 182 a, 182 b or would provide a message to another process/server (e.g., the servers 102) that would perform the asynchronous mirroring. Mirroring is discussed in more detail elsewhere herein.
  • For the system described herein, file objects are accessed by one of the clients 104-106 by first requesting, and obtaining, a lease from the servers 102. The lease corresponds to the file objects for the particular file being accessed and to the type of access. A lease may be for reading, writing, and/or some other operation (e.g., changing file attributes). In an embodiment herein, for objects corresponding to any particular file, the servers 102 may issue only one write lease at a time to any of the clients 104-106 but may issue multiple read leases simultaneously and may issue read lease(s) at the same time as issuing a write lease. However, in some embodiments it may be possible to obtain a lease for a specified logical range of a file for operations only on that range. Thus, for example, it may be possible for a first client to obtain a lease for writing to a first logical range of a file while a second client may, independently, obtain a lease for writing to a second and separate logical range of the same file. The two write leases for different logical ranges may overlap in time without violating the general rule that the system never issues overlapping write leases for the same data.
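  • The non-overlap rule for write leases can be captured in a few lines; the sketch below is a simplified illustration under assumed names (Lease, can_grant) and does not model lease expiration or recall.

```python
# Sketch of the rule that write leases for the same bytes never overlap,
# while read leases may coexist with each other and with a write lease.
from dataclasses import dataclass
from typing import List

@dataclass
class Lease:
    client_id: str
    mode: str   # "read" or "write"
    start: int  # first byte of the leased logical range
    end: int    # one past the last leased byte

def can_grant(existing: List[Lease], requested: Lease) -> bool:
    for lease in existing:
        ranges_overlap = requested.start < lease.end and lease.start < requested.end
        if ranges_overlap and lease.mode == "write" and requested.mode == "write":
            return False  # never issue overlapping write leases for the same data
    return True
```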
  • The lease provided to the clients 104-106 from the servers 102 includes security information (security token) that allows the client appropriate access to the data. The security token may expire after a certain amount of time. In an embodiment herein, a client accesses data by providing an appropriate security token for the data as well as client users/ownership information. Thus, for example, a user wishing to access data would first obtain a lease and then would provide the access request to the servers 102 along with the security token and information identifying the owner (client) accessing the data. The servers 102 would then determine whether the access requested by the client was permissible. After the lease expires (the security token expires), the user requests the lease again. Data security may be implemented using conventional data security mechanisms.
  • After obtaining a lease for accessing a file, a client may then cache the corresponding metadata, including the LSO tree, into local storage of the client. The client may then use and manipulate the local cached version of the metadata and may use the metadata to obtain access to the data. As described in more detail elsewhere herein, a client does not directly modify metadata stored by the servers 102 but, instead, sends update messages to the servers 102 to signal that metadata for a file may need to be modified by the servers 102.
  • Referring to FIG. 9, a flowchart 200 illustrates steps performed by a client in connection with requesting a lease for a file (objects associated with a file) for performing operations thereon. Processing begins at a first step 202 where the client requests the lease for the file. As discussed in more detail elsewhere herein, a client requesting a lease includes specifying the type of access (e.g., read, write, etc.). Following the step 202 is a test step 204 where it is determined if the request has been granted. If not, then control transfers from the test step 204 to a step 206 where processing is performed in connection with the lease not being granted to the client. The particular processing performed at the step 206 may include, for example, providing an error message to the client process requesting access to the file corresponding to the lease and/or waiting for an amount of time and then retrying the request. Note that it is possible that a lease for a particular file is not available at one time but is subsequently available at another time because, for example, the lease is released by another client in between the first request and the second request. In any event, any appropriate processing may be performed at the step 206. Following the step 206, processing is complete.
  • If it is determined at the test step 204 that the lease requested at the step 202 has been granted, then control transfers from the test step 204 to a step 208 where the client performs an operation using the file for which the lease was granted. Operations performed at the step 208 include reading data and/or writing data. Different types of processing that may be performed at the step 208 are described in more detail elsewhere herein.
  • Following the step 208 is a test step 212 where it is determined if the operations performed at the step 208 require an update. In some instances, a client may obtain a lease and perform operations that do not affect the file or the underlying file objects. For example, a client may acquire a lease for reading a file and the operation performed at the step 208 may include the client reading the file. In such a case, no update may be necessary since the file and corresponding file objects (metadata, data objects, etc.) have not changed. On the other hand, if the client obtains a lease for writing data to the file and the operation performed at the step 208 includes writing data to the file, then the underlying file objects will have been changed and an update message needs to be sent to the servers 102. If it is determined at the test step 212 that an update is necessary, then control passes from the test step 212 to a step 214 where an update message is sent by the client to the servers 102.
  • Following the step 214, or following the step 212 if no update is necessary, control passes to a test step 216 where it is determined if the client is finished with the file. In some instances, the client may perform a small number of operations on the file, after which the client would be finished with the file at the step 216. In other cases, the client may be performing a series of operations and may not yet have completed all of the operations.
  • If it is determined at the test step 216 that the client is not finished with the file, then control passes from the test step 216 to a test step 218 where it is determined if the lease for the file has expired. Note that a lease may be provided by the servers 102 to the client with a particular expiration time and/or the associated security token may expire. In addition, it may be possible for the servers 102 to recall leases provided to clients under certain circumstances. In either case, the lease may no longer be valid. Accordingly, if it is determined at the step 218 that the lease has expired (and/or has been recalled by the servers 102), then control passes from the test step 218 back to the step 202 to request the lease again. Otherwise, if the lease has not expired, then control passes from the test step 218 back to the step 208 to perform another iteration.
  • If it is determined at the test step 216 that the client is finished with the file, then control passes from the test step 216 to a step 222 where the client releases the lease by sending a message to the servers 102 indicating that the client no longer needs the lease. Once the client releases the lease, it may be available for other clients. Following the step 222, processing is complete.
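  • Taken together, the steps of the flowchart 200 amount to an acquire/use/renew/release loop. The Python sketch below traces that loop; the servers object and its methods (request_lease, send_update, release_lease) are hypothetical stand-ins for the client/server exchange described above.

```python
# Client-side lease loop of FIG. 9; all server methods are hypothetical.
def use_file_with_lease(servers, file_id, mode, operations):
    lease = servers.request_lease(file_id, mode)          # step 202
    if lease is None:
        raise RuntimeError("lease not granted")           # steps 204/206
    for op in operations:
        result = op(lease)                                # step 208
        if result.modified_objects:                       # step 212
            servers.send_update(file_id, result)          # step 214
        if lease.expired():                               # step 218 (or recalled)
            lease = servers.request_lease(file_id, mode)  # back to step 202
            if lease is None:
                raise RuntimeError("lease not granted on renewal")
    servers.release_lease(lease)                          # step 222
```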
  • In an embodiment herein, data file objects may be indicated as having one of four possible states: current, stale, immutable, or empty. The current state indicates that the data object is up to date and current. The stale state indicates that the data is not valid but, instead, requires updating, perhaps by some other process. In some instances, the stale state may be used only in connection with mirror copies of data (explained in more detail elsewhere herein). Data may be stale because it is a mirror of other data that was recently written but not yet copied. The immutable state indicates that the corresponding data is write protected, perhaps in connection with a previous clone (snapshot) operation. The empty state indicates that no actual storage space has yet been allocated for the data.
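  • For reference in the sketches that follow, the four states may be written as a simple enumeration; the Enum itself is an illustrative convenience, not the system's actual encoding.

```python
# The four data file object states described above.
from enum import Enum

class ObjectState(Enum):
    CURRENT = "current"      # up to date; may be read and written
    STALE = "stale"          # not valid; awaiting update (e.g., a lagging mirror)
    IMMUTABLE = "immutable"  # write protected, e.g., after a clone (snapshot)
    EMPTY = "empty"          # no actual storage space allocated yet
```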
  • Referring to FIG. 10, a flow chart 240 illustrates steps performed by a client in connection with performing read operations after obtaining a read lease for a file. Processing begins at a first test step 242 where it is determined if the data object being read is in the current state. If not, then control transfers from the test step 242 to a step 244 where it is determined if the data object being read is in the immutable state. If it is determined at the step 244 that the data object being read is in the immutable state or if it is determined at the test step 242 that the data object being read is in the current state, then control transfers to a step 246 where the read operation is performed. A client reads file data by providing the appropriate data file object identifier to the servers 102 as well as providing appropriate security credentials. Accordingly, the read operation performed at the step 246 includes the client sending an appropriate request to the servers 102 and waiting for a result therefrom.
  • Following the step 246 is a test step 248 where it is determined if the servers 102 have returned a result indicating that the data file object is unavailable. In some cases, a data file object that is otherwise current or immutable may nevertheless become unavailable. For example, the physical storage space that holds the data file object may become temporarily disconnected and/or temporarily busy doing some other operation. If it is determined at the test step 248 that the data file object is available, then control transfers from the test step 248 to a test step 252 where it is determined if the read operation was successful. If so, then control transfers from the test step 252 to a step 254 where the result of the read operation is returned to the process at the client that caused the read operation to be performed. The result may include the data that was read and a status indicator. Following the step 254, processing is complete.
  • If it is determined at the test step 252 that the read operation performed at the step 246 was not successful, then control transfers from the test step 252 to a step 256 where error processing is performed. The particular error processing performed at the step 256 is implementation dependent and may include, for example, reporting the error to a calling process and/or possibly retrying the read operation a specified number of times. Following the step 256, processing is complete.
  • If it is determined at the test step 244 that the data object being read is not in the immutable state, then control transfers from the test step 244 to a test step 258 where it is determined if the data object is in the stale state. If not, then, by virtue of the test steps 242, 244, 258 and process of elimination, the data object is in the empty state. In an embodiment herein, reading a data object in the empty state causes zeros to be returned to the calling process. Accordingly, if it is determined at the test step 258 that the data object is not in the stale state, then control transfers from the test step 258 to a step 262 where zeros are returned in response to the read operation. Following the step 262, processing is complete.
  • If it is determined at the test step 258 that the data file object is in the stale state, or if it is determined at the test step 248 that the data file object is not available, then control transfers to a test step 264 to determine if an alternative version of the data file object is available for reading. As discussed in more detail elsewhere herein, there may be multiple versions of the same data file objects that exist at the same time due to mirroring. Accordingly, if the data file object being read is in the stale state or otherwise unavailable, it may be possible to read a mirror copy of the data file object that may be in the current state. The test performed at the step 264 is described in more detail elsewhere herein.
  • If it is determined at the test step 264 that an alternative version of the data file object is available, then control transfers from the test step 264 to a step 266 where the alternative version of the data file object is selected for use. Following the step 266, control transfers back to the test step 242 for another iteration with the alternative data file object.
  • If it is determined at the test step 264 that an alternative version of the data file object is not available, then control transfers from the test step 264 to a step 268 where the client process waits. In an embodiment herein, it may be desirable to wait for a data file object to become current and/or available. Following the step 268, control transfers back to the step 242 for another iteration. Note that, instead of waiting at the step 268, processing may proceed from the step 264 to the step 256 to perform error processing if there is no alternative data file object available. In other embodiments, it may be possible to perform the step 268 a certain number of times and then, if the data file object is still unavailable or in the stale state and there is no alternative data file object, then perform the error processing at the step 256.
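  • The read logic of the flowchart 240 can be condensed as below, reusing the ObjectState enumeration above. The sketch takes the error path rather than waiting at the step 268, which the text notes is an acceptable variation; find_alternative stands in for the mirror search of FIG. 12 (sketched later), and the server read call and object attributes are hypothetical.

```python
# Condensed client read logic of FIG. 10 (error path instead of waiting).
def read_object(servers, obj, credentials):
    while True:
        if obj.state in (ObjectState.CURRENT, ObjectState.IMMUTABLE):
            result = servers.read(obj.identifier, credentials)  # step 246
            if result.unavailable:                              # step 248
                obj = _alternative_or_fail(obj)                 # steps 264/266
                continue
            if not result.ok:                                   # step 252
                raise IOError("read failed")                    # step 256
            return result.data                                  # step 254
        if obj.state is ObjectState.EMPTY:                      # step 258
            return bytes(obj.length)                            # step 262: zeros
        obj = _alternative_or_fail(obj)                         # stale: steps 264/266

def _alternative_or_fail(obj):
    alt = find_alternative(obj)  # the mirror search of FIG. 12
    if alt is None:
        raise IOError("no current alternative copy available")  # step 256
    return alt
```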
  • Referring to FIG. 11, a flow chart 280 illustrates steps performed by a client in connection with performing write operations after obtaining a write lease for a file. Processing begins at a first test step 282 where it is determined if the data file object to which the write is being performed is in the immutable state. If so, then control transfers from the step 282 to a step 284 where new actual storage space is allocated for the data file object to avoid overwriting the immutable data. Allocating new storage space for a data object may include providing an appropriate request to the servers 102. Following the step 284, control transfers back to the step 282 to begin the processing for the write operation again.
  • If it is determined at the step 282 that the data file object to which the write is being performed is not in the immutable state, then control transfers from the step 282 to a step 286 where it is determined if the data file object to which the write is being performed is in the stale state. If not, then control transfers from the test step 286 to a test step 288 where it is determined if the data file object to which the write is being performed is in the empty state. If so, then control transfers from the step 288 to the step 284, discussed above, where new physical storage space is allocated. Following the step 284, control transfers back to the step 282 to begin the processing for the write operation again.
  • If it is determined at the step 288 that the data file object to which the write is being performed is not in the empty state, then control transfers from the test step 288 to a step 292 where the write operation is performed. Note that the step 292 is reached if the data file object to which the write operation is being performed is not in the immutable state, not in the stale state, and not in the empty state (and thus is in the current state). A client writes file data by providing the appropriate data file object location identifier to the servers 102 as well as providing appropriate security credentials. Accordingly, the write operation performed at the step 292 includes the client sending an appropriate request to the servers 102 and waiting for a result therefrom.
  • Following the step 292 is a test step 294 where it is determined if the write operation performed at the step 292 was successful. If so, then control transfers from the test step 294 to a test step 296 where it is determined if there are synchronous mirrors of the data file object to which the write is being performed. The test performed at the step 296 may include, for example, determining if a parent node of the data file object in the file LSO tree indicates replication. If not, then control transfers from the test step 296 to a step 298 where an update (message) is sent to the servers 102 indicating that the write had been performed. Following the step 298, processing is complete.
  • If it is determined at the test step 296 that there are synchronous mirrors of the data file object to which the write is being performed, then control passes from the test step 296 to a step 302 where the data that was written at the step 292 is also written to the synchronous mirror(s). The processing performed at the step 302 is discussed in more detail elsewhere herein. Following the step 302, control transfers to the step 298, discussed above, where an update (message) is sent to the servers 102. Following the step 298, processing is complete.
  • If it is determined at the test step 294 that the write operation performed at the step 292 was not successful, or if it is determined at the test step 286 that the data file object to which the write operation is being performed is in the stale state, then control transfers to a step 304 where the data file object to which the write is attempting to be performed is removed from the client's local copy of the LSO tree. At the end of the write operation illustrated by the flow chart 280, the client may inform the servers 102 (at the step 298) of the difficulty in writing to the data object so that the servers 102 can take appropriate action, if necessary.
  • Following the step 304 is a test step 306 where it is determined if an alternative version of the data is available. As discussed in more detail elsewhere herein, there may be multiple versions of the same data file objects that exist at the same time due to mirroring. Accordingly, if the data file object to which the write operation is being performed is stale or otherwise cannot be written to, it may be possible to write to a mirror copy of the data. The test performed at the step 306 is like the test performed at the step 264 and is described in more detail elsewhere herein. If it is determined at the test step 306 that an alternative version of the data corresponding to the data file object is available, then control transfers from the test step 306 to a step 308 where the alternative version is selected for writing. Following the step 308, control transfers back to the test step 282 for another iteration with the alternative data file object.
  • If it is determined at the test step 306 that an alternative version of the data corresponding to the data file object is not available, then control transfers from the test step 306 to a step 312 where error processing is performed. The particular error processing performed at the step 312 is implementation dependent and may include, for example, reporting the error to a calling process and/or possibly retrying the write operation a specified number of times before reporting the error. Following the step 312, control transfers to the step 298, discussed above, to send update information to the servers 102. Following the step 298, processing is complete.
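  • The write logic of the flowchart 280 follows the same state-driven pattern. In the hedged sketch below, allocate_space is assumed to leave the object in the current state, write_synchronous_mirrors corresponds to the step 302 (sketched with FIG. 13), find_alternative to the test at the step 306, and all server calls are hypothetical.

```python
# Condensed client write logic of FIG. 11; server calls are hypothetical.
def write_object(servers, obj, data, credentials):
    while True:
        if obj.state in (ObjectState.IMMUTABLE, ObjectState.EMPTY):
            servers.allocate_space(obj)  # step 284; assumed to leave obj current
            continue                     # retry from step 282
        if obj.state is not ObjectState.STALE:
            result = servers.write(obj.identifier, data, credentials)  # step 292
            if result.ok:                                              # step 294
                if obj.has_synchronous_mirrors():                      # step 296
                    write_synchronous_mirrors(servers, obj, data)      # step 302
                break
        obj.remove_from_local_lso_tree()  # step 304: stale or failed write
        alt = find_alternative(obj)       # step 306
        if alt is None:
            servers.send_update(obj)      # step 298: report the difficulty
            raise IOError("write failed and no alternative copy")  # step 312
        obj = alt                         # step 308
    servers.send_update(obj)              # step 298
```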
  • Referring to FIG. 12, a flow chart 320 illustrates in more detail steps performed in connection with the alternative available test step 264 of FIG. 10 and/or the alternative available test step 306 of FIG. 11. Processing begins at a first test step 322 where it is determined if the file has any mirror data file objects at all. In some instances, a file may not use mirrors, in which case there would be no alternative copy available. Accordingly, if it is determined at the test step 322 that the file does not have any mirror data file objects, then control transfers from the test step 322 to a step 324 where a value is returned indicating that no alternative copies are available. Following the step 324, processing is complete.
  • If it is determined at the test step 322 that mirror copies are available, then control transfers from the test step 322 to a step 326 where a pointer is made to point to a first mirror data file object. For the processing discussed herein, a pointer may be used to iterate through mirror data file objects to find a useable data file object. Following the step 326 is a test step 328 where it is determined if the pointer is past the end of the list of mirror data file objects (has iterated through all of the mirror data file objects). If so, then control passes from the test step 328 to the step 324, discussed above, to return a value that indicates that no alternatives are available.
  • If it is determined at the test step 328 that the pointer is not past the end of a list of mirror data file objects, then control transfers from the test step 328 to a test step 332 where it is determined if the pointer points to a data file object in a stale state. If so, then control transfers from the test step 332 to a step 334 where the pointer is made to point to the next data file object to be examined. Following the step 334, control transfers back to the step 328, discussed above, for another iteration. If it is determined at the test step 332 that the pointer does not point to a data file object in the stale state, then control transfers from the test step 332 to a step 336 where the data file object that is pointed to by the pointer is returned as an alternative data file object that may be used by the calling process. Following the step 336, processing is complete.
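  • In Python form, the search of the flowchart 320 reduces to a short scan; the mirrors attribute is an assumed representation of the file's mirror data file objects.

```python
# Alternative-copy search of FIG. 12: return the first non-stale mirror.
def find_alternative(obj):
    for mirror in getattr(obj, "mirrors", []):     # step 322: no mirrors -> none
        if mirror.state is not ObjectState.STALE:  # steps 328-332: iterate
            return mirror                          # step 336: usable copy
    return None                                    # step 324: none available
```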
  • Referring to FIG. 13, a flow chart 350 illustrates in more detail operations performed in connection with the step 302 of the flow chart 280 of FIG. 11 where data that has been written is copied to a number of synchronous mirrors (mirror data file objects). Processing begins at a first step 352 where a pointer that is used to iterate through the mirror data file objects is set to point to the first one of the mirror data file objects. Following the step 352 is a test step 354 where it is determined if the pointer used for iterating through the mirror data file objects points past the end (i.e., if all of the mirror data file objects have been processed). If so, then processing is complete. Otherwise, control transfers from the test step 354 to a test step 356 where it is determined if the status of the mirror data file object pointed to by the pointer indicates that the mirror data file object is current. If not, then control passes from the test step 356 to a test step 358 where it is determined if the status of the mirror data file object pointed to by the pointer indicates that the mirror data file object is in the stale state. If so, then control passes from the test step 358 to a step 362 where the mirror data file object is removed from the client's local copy of the LSO tree. In an embodiment herein, a synchronous mirror data file object should not be in a stale state and, if that occurs, it may indicate an error condition. Accordingly, following the step 362 is a step 364 where information about the stale mirror is sent to the servers 102, which may perform recovery processing in response thereto.
  • Note that if a mirror data file object is neither in the stale state nor in the current state, then the mirror data file object is either in the empty state or in the immutable state. In either case, it may be necessary to allocate new space for a data file object to which the data is to be written. Accordingly, if it is determined at the test step 358 that the data file object is not in the stale state, then control passes from the test step 358 to a step 366 where new space is allocated for the mirror data file object. Following the step 366 is a step 368 where the data that is being copied across synchronous mirror data file objects is written to the mirror data file object pointed to by the pointer used to iterate through the mirror data file objects. Note that the step 368 may also be reached from the test step 356 if it is determined that the mirror data file object is current. Following the step 368 is a step 372 where the pointer used to iterate through the mirror data file objects is made to point to the next one. Note that the step 372 is also reached following the step 364. Following the step 372, control transfers back to the test step 354 for another iteration.
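  • The synchronous mirror copy of the flowchart 350 may be sketched as follows; report_stale_mirror and allocate_space are hypothetical server calls, and the mirrors attribute is the assumed representation used above.

```python
# Synchronous mirror copy loop of FIG. 13.
def write_synchronous_mirrors(servers, obj, data):
    for mirror in obj.mirrors:                       # steps 352/354/372: iterate
        if mirror.state is ObjectState.STALE:        # step 358: error condition
            mirror.remove_from_local_lso_tree()      # step 362
            servers.report_stale_mirror(mirror)      # step 364: trigger recovery
            continue
        if mirror.state is not ObjectState.CURRENT:  # empty or immutable
            servers.allocate_space(mirror)           # step 366: new space
        servers.write(mirror.identifier, data)       # step 368: copy the data
```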
  • The system described herein may access file objects using object identifiers. In an embodiment herein, each file object that is stored among the servers 102 may be assigned a unique object identifier that identifies each file object and distinguishes each file object from other file objects in the system. However, many applications use a file naming structure and/or a hierarchical directory to access files and data therein. For example, a file name “C:\ABC\DEF\GHI.doc” indicates a file called “GHI.doc” stored in a sub-directory “DEF” that is stored in another directory “ABC” located on a root volume “C”. A nested directory structure may be provided by implementing directories as special files that are stored in other directories. In the example given above, the sub-directory “DEF” may be implemented as a file stored in the directory “ABC”.
  • The system described herein may present to applications a conventional naming structure and directory hierarchy by translating conventional file names into file object identifiers. Such a translation service may be used by other services in connection with file operations. In an embodiment herein, each directory may include a table that correlates file names and sub-directory names with file object identifiers. The system may examine one directory at a time and traverse sub-directories until a target file is reached.
  • Referring to FIG. 14, a flow chart 380 illustrates steps performed in connection with providing a file name translation service (file name service) that translates a conventional hierarchical file name into a file object identifier. The file name service may receive a conventional hierarchical file name as an input and may return an object identifier (or, in some cases, an error). Processing begins at a first step 382 where the file name service receives a file name, such as a conventional hierarchical file name. Following the step 382 is a test step 384 where it is determined if the syntax of the file name is OK. Checking the syntax of a hierarchical file name is known and includes, for example, checking that only appropriate characters have been used. If it is determined at the test step 384 that the syntax is not OK, then control transfers from the test step 384 to a step 386 where an error indicator (error message) is returned to the calling process. Following the step 386, processing is complete.
  • If it is determined at the test step 384 that the syntax of the provided name is OK, then control transfers from the test step 384 to a step 388 where the root directory is read. In an embodiment herein, all file name paths begin at a single common root directory used for all file objects stored in the servers 102. In other embodiments, there may be multiple root directories where specification of a particular root directory may be provided by any appropriate means, such as using a volume identifier, specifically selecting a particular root directory, etc.
  • Following the step 388 is a test step 392 where it is determined if the target file (or sub-directory that is part of the file name path) is in the directory that has been read. If not, then control passes from the test step 392 to the step 386, discussed above, where an error is returned. In some embodiments, the file-not-found error that results from the test at the step 392 may be different from the syntax error that results from the test at the step 384.
  • If it is determined that the target file or a sub-directory that is part of the file name path is in the directory that has just been read, then control passes from the test step 392 to a test step 394 where it is determined if the directory that has just been read contains the target file (as opposed to containing a sub-directory that is part of the file name path). If so, then control passes from the test step 394 to a step 396 where the object identifier of the target file object is returned to the calling process. Following the step 396, processing is complete.
  • If it is determined at the test step 394 that the directory that has just been read contains a sub-directory that is part of the file name path, then control transfers from the test step 394 to a step 398 where the sub-directory is read so that the sub-directory becomes the directory being examined. In effect, processing at the step 398 traverses the chain of subdirectories to eventually get to the target file. Following the step 398, control transfers back to the step 392, discussed above, for a next iteration.
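  • The traversal of the flowchart 380 may be sketched as a loop over path components; read_directory is a hypothetical helper returning a directory's table of names to object identifiers, and the syntax check shown is deliberately minimal.

```python
# File name service of FIG. 14: translate a name like C:\ABC\DEF\GHI.doc
# into an object identifier, one directory table at a time.
def resolve(path: str, read_directory) -> str:
    components = [c for c in path.strip("\\/").split("\\") if c]
    if not components:                            # step 384: (minimal) syntax check
        raise ValueError("bad file name syntax")  # step 386
    table = read_directory("ROOT")                # step 388: common root directory
    for i, name in enumerate(components):
        if name not in table:                     # step 392: not found
            raise FileNotFoundError(name)         # step 386
        object_id = table[name]
        if i == len(components) - 1:              # step 394: reached target file
            return object_id                      # step 396
        table = read_directory(object_id)         # step 398: descend one level
```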
  • Referring to FIG. 15, a diagram shows the client 104 as including user address memory space and kernel address memory space. In an embodiment herein, user address memory space is memory space that is generally used by user applications and related processes while kernel address memory space is memory space that is generally accessible only by system processes, such as an operating system kernel and related processes. As discussed in more detail herein, it is possible to have different portions of the system described herein reside and operate in the user memory space and/or the kernel memory space. In addition, it is possible for the client 104 to have multiple different interfaces to access file objects at the servers.
  • In FIG. 15, the client 104 is shown as including an application in the user memory address space and a virtual file system (VFS), file name services, kernel I/O drivers, a layout manager, and a communication interface in the kernel memory address space. The VFS is an abstraction layer on top of a more concrete file system. The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way. The VFS allows the application running on the client 104 to access file objects on the servers 102 without the application needing to understand the details of the underlying file system. The VFS may be implemented in a conventional fashion by translating file system calls by the application into file object manipulations and vice versa. For example, the VFS may translate file system calls such as open, read, write, close, etc. into file object calls such as create object, delete object, etc.
  • The VFS may use the file name services, described elsewhere herein, to translate file names into object identifiers. The kernel I/O drivers provide an interface to low-level object level I/O operations. The kernel I/O drivers may be modeled after, and be similar to, Linux I/O drivers. The layout manager may perform some of the processing on LSO trees corresponding to files, as discussed in more detail elsewhere herein. The communication interface provides communication between the client 104 and the servers 102. The communication interface may be implemented using any appropriate communication mechanism. For example, if the client 104 communicates with the servers 102 via an Internet connection, then the communication interface may use TCP/IP to facilitate communication between the servers 102 and the client 104.
  • The application of FIG. 15 may correspond to the client software 124 of FIG. 3. The VFS of FIG. 15 may correspond to one of the interfaces 126-128 of FIG. 3. The file name services, kernel I/O drivers, layout manager, and communication interface of FIG. 15 may correspond to the server operations software 122 of FIG. 3. Similar correlation between components of FIG. 3 and other figures may also be found.
  • Referring to FIG. 16, a flow chart 410 illustrates steps performed by a VFS to provide file services in connection with an application running on the client 104. Processing begins at a first step 412 where a file system operation requested by an application may be translated into one or more object operations. For example, a file operation to open a file for reading may be converted to object operations that include obtaining an object lease for reading as discussed elsewhere herein. Following the step 412 is a step 414 where the VFS translates the file name into an object identifier using the file name services discussed above in connection with FIG. 14. Operations that follow may be performed using the object identifier obtained at the step 414.
  • Following the step 414 is a test step 416 where it is determined if the requested operation requires the LSO tree. As discussed elsewhere herein, operations such as read, write, etc. use LSO trees corresponding to file objects. However, some possible file operations may not require accessing a corresponding LSO tree. If it is determined at the test step 416 that the LSO tree is needed, then control transfers from the test step 416 to a step 418 where the VFS accesses the LSO manager to perform the necessary operations. For example, for a read operation, the LSO manager may perform processing like that illustrated in the flow chart 240 of FIG. 10. Following the step 418, or following the step 416 if the LSO is not needed, is a step 422 where the operations are passed to low level kernel I/O drivers (e.g., via one or more appropriate API's). The kernel I/O drivers use the communication module to communicate between the client 104 and the servers 102 in connection with performing the requested operation(s). In instances where the application running on the client 104 has requested data and/or other information from the servers 102, the data and/or information may be passed back up through the communication interface, kernel I/O drivers, etc. to the VFS and ultimately to the application.
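  • As a small end-to-end illustration, a read through the VFS might chain the pieces above roughly as follows; every object and method name here is a hypothetical stand-in for the layers of FIG. 15, and resolve is the name service sketched with FIG. 14.

```python
# Rough shape of a VFS-mediated read (FIG. 16): translate, lease, LSO, I/O.
def vfs_read_file(path, servers, lso_manager, kernel_io, credentials):
    object_id = resolve(path, servers.read_directory)  # step 414: name service
    lease = servers.request_lease(object_id, "read")   # step 412: object ops
    obj = lso_manager.load(object_id, lease)           # step 418: LSO processing
    return kernel_io.read(obj, credentials)            # step 422: kernel drivers
```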
  • Referring to FIG. 17, the client 104 is shown as having an application, file name services, user level I/O drivers, and a layout manager all provided in user memory address space. The functionality of the VFS that was shown in FIG. 15 and described above may instead be performed by library routines linked to the application, and thus be part of the application. These routines would provide functionality like that discussed above in connection with FIG. 16. Accordingly, it is the application that uses the file name services and makes calls to the user level I/O drivers (like the kernel I/O drivers) and to the layout manager. The communication interface is still maintained in the kernel memory address space.
  • Note that, for the configuration of FIG. 15, modifications are provided by modifying system processes (the operating system), which is disadvantageous for a number of reasons. For example, if the client 104 is a multiuser computing system, then modifying the operating system may involve restarting the entire system and thus disrupting all of the users. In contrast, the configuration of FIG. 17 is advantageous since it allows modification of the system in the application/user memory address space so that the operating system of the client 104 does not need to be modified. However, the configuration of FIG. 17 does not use a VFS, and thus does not obtain the advantageous separation of the application from the file system that is provided by the VFS in FIG. 15.
  • Referring to FIG. 18, the client 104 is shown as having an application in user memory address space that accesses file objects through a VFS in kernel memory address space like that illustrated in FIG. 15. However, the file name services, I/O drivers, and the layout manager all reside in the user memory address space like the system illustrated in FIG. 17. The VFS communicates with components in the user memory address space through a bridge between kernel memory address space and user memory address space, such as a FUSE (or similar) interface. The bridge allows file system components to be provided in user memory space instead of kernel address memory space while still preserving the VFS in the kernel address memory space. Thus, the configuration illustrated by FIG. 18 provides the advantages of using a VFS, as illustrated in the configuration of FIG. 15, along with the advantages of having file system components in the user address memory space, as illustrated in the configuration of FIG. 17.
  • It is possible in some instances to have applications and/or other processing in the user memory address space of the client 104 access file objects directly, rather than through a file services layer like the VFS and/or equivalent functionality provided by user linkable libraries (e.g., the configuration illustrated in FIG. 17). Accessing file objects directly may include invoking routines that create objects, read objects, modify objects, delete objects, etc. In such a case, the application would need to know how to interpret and/or manipulate the object data, which may not always be desirable. For example, an application that accesses file objects through the VFS may not need to take into account (or even know about) the structure of an LSO tree while an application that accesses objects directly may need to use the LSO tree. On the other hand, removing the file services layer may provide an opportunity for optimizations not otherwise available. Note that, since the servers 102 exchange object information/operations with the clients 104-106, the servers 102 may not need to distinguish, or be able to distinguish, between applications on the clients 104-106 using a file system interface (file services like the VFS) and those that are not.
  • Referring to FIG. 19, the client 104 is shown as including an application in the user memory address space and kernel I/O drivers, a layout manager, and file name services in the kernel memory address space. The configuration illustrated in FIG. 19 is like that illustrated in FIG. 15, except that the VFS is not used. In the configuration illustrated in FIG. 19, the application could directly access the file name services, the kernel I/O drivers, and the layout manager. The communication interface in the kernel memory address space communicates with the servers 102 just as in other configurations. The direct access illustrated in FIG. 19 allows applications to manipulate file objects (via, for example, appropriate API's) while access via the VFS (or similar) allows applications to access file objects indirectly through file system calls to the VFS.
  • Referring to FIG. 20, the client 104 is shown as having an application, user level I/O drivers, a layout manager, and file name services all provided in user memory address space. The configuration shown in FIG. 20 is like that shown in FIG. 17. However, as set forth above, the configuration of FIG. 17 includes file service libraries that are linked into, and thus part of, the application. In contrast, in the configuration of FIG. 20, the application is not linked into libraries with extensive file services. Instead, like the application of the configuration illustrated in FIG. 19, the application in the configuration of FIG. 20 uses minimal file services and, instead, uses and operates upon file objects directly using the user level I/O drivers, the layout manager and, if a file name translation is needed, the file name services.
  • Referring to FIG. 21, the client 104 is shown as having an application in user memory address space and a bridge in the kernel memory address space. File name services, user level I/O drivers, and a layout manager are provided in user memory address space. However, unlike the configuration of FIG. 20, the application does not make direct calls to the file system components in the user memory address space. Instead, the application calls the file system components indirectly through the bridge. Just as with the configuration illustrated in FIG. 18, the configuration of FIG. 21 advantageously locates file system components in the user memory address space and, at the same time, provides a kernel memory address space layer between the application and the file system components.
  • Referring to FIG. 22, the client 104 is shown as having an application in user memory address space and a Web Services module in kernel memory address space. The application may be a Web server application or any application that handles communication with the Web. In an embodiment herein, the application allows communication with the client 104, which acts as a Web server to other computing devices (not shown) that access the client 104 through a Web connection.
  • The configuration illustrated in FIG. 22 provides Web Services in a manner similar to the file services and/or file object access provided by other configurations. However, the Web Services receives requests/data via a Web data protocol, such as HTTP, and provides responses/data also in a Web data protocol, which may be the same or different from the protocol used for requests/data. Operations handled by the Web Services may include object-level operations such as create object, delete object, read object, modify object, modify object metadata, etc. It is also possible to provide more file system level operations, via the Web Services, that open files, read data from files, etc. by including at least some of the functionality of the file services, described elsewhere herein, with the Web Services. The Web Services may present to the other computing devices a conventional well-known Web Services protocol, such as REST or SOAP, or may provide any other appropriate protocol.
  • Referring to FIG. 23, the client 104 is shown as having an application, Web Services, user level I/O drivers, and a layout manager in user memory address space. The application may include a Web connection that allows communication with the client 104, which acts as a Web server to other computing devices (not shown) that access the client 104 through the Web connection. The configuration of FIG. 23 is like that of FIG. 17 and FIG. 20. The advantage of the configuration shown in FIG. 23 over the configuration shown in FIG. 22 is that, generally, changes to the configuration shown in FIG. 23 do not require reconfiguring kernel memory address space processes.
  • Referring to FIG. 24, the client 104 is shown as having an application, Web Services, user level I/O drivers, and a layout manager in user memory address space. The application may include a Web connection that allows communication with the client 104, which acts as a Web server to other computing devices (not shown) that access the client 104 through the Web connection. A bridge is provided in the kernel memory address space. The configuration of FIG. 24 has similar advantages to the configuration shown in FIG. 23, but also has the advantages provided by the bridge, discussed elsewhere herein.
  • Referring to FIG. 25, the client 104 is shown as having a plurality of applications in user memory address space, each of which may use a different interface to access file objects of the servers 102. Each of the applications shown in FIG. 25 is meant to represent one or more applications. Accordingly, APP1 may represent one or more applications that access file objects at the servers 102 using a Web Services interface. The APP1 application may include a Web connection that allows communication with the client 104, which acts as a Web server to other computing devices (not shown) that access the client 104 through the Web connection. APP2 may represent one or more applications that access file objects at the servers 102 using the VFS, and APP3 may represent one or more applications that directly operate on file objects at the servers 102. The different interfaces may operate at the client 104 at the same time.
  • Note that many other combinations of configurations, including the illustrated configurations, are possible so that the client 104 may simultaneously present different interfaces to applications thereon. For example, it is possible to combine the configurations illustrated in FIGS. 15, 19, and 22 and/or combine the configurations of FIGS. 17, 20, and 23. Other combinations, including combinations of only two illustrated configurations, are also possible. The servers 102 provide the file objects to the clients 104-106 provided: 1) the requesting client has appropriate authorization for whatever operation is requested for the file objects; and 2) there is no conflict with any previous request. For example, in systems where only one client is allowed to write to an object at any one time, the servers 102 would not allow one of the clients 104-106 to modify a particular object while another one of the clients 104-106 is also modifying the object.
  • Referring to FIG. 26, the servers 102 are shown in more detail as including one or more policy manager servers 402, one or more security manager servers 403, one or more audit servers 404, one or more metadata servers 405, one or more resource manager servers 406, one or more data storage servers 407, and one or more metadata location servers 408. Each of the servers 402-408 may be implemented as one or more unitary processing devices capable of providing the functionality described herein. For the discussion herein, reference to servers should be understood as a reference to one or more servers. The servers 402-408 may be interconnected using any appropriate data communication mechanism, such as TCP/IP, and may be coupled to the clients 104-106 (not shown in FIG. 26) using any appropriate data communication mechanism, such as TCP/IP.
  • The servers 102 may include a user management interface 412 that facilitates system management. The user management interface 412 exchanges data with the policy management servers 402, the security management servers 403, and the audit servers 404 to affect how the servers 102 interact with the clients 104-106 and corresponding users. The data may be provided through the user management interface 412 in any one of a number of ways, including conventional interactive computer screen input and data file input (e.g., a text file having user management commands). The data may include information that correlates classes of users and storage parameters such as Quality of Service (QOS), RAID protection level, number and geographic location(s) of mirrors, etc. For example, an administrator may specify through the user management interface 412 that users of a particular class (users belonging to a particular group) store data file objects on storage devices having a particular RAID level protection.
  • The servers 102 also include physical storage 414 coupled to the data storage servers 407. Although the physical storage 414 is shown as a single item in FIG. 26, there may be any number of separate physical storage units that may be geographically dispersed. In addition, there may be different types of physical storage units having different capabilities. Accordingly, the physical storage 414 generically represents one or more instances of physical data storage for the system that is managed by the data storage servers 407, as explained in more detail below.
  • Data modifications, including modifications of metadata file objects and/or policies that affect handling/creation of metadata file objects, require appropriate security credentials. Accordingly, the security manager servers 403 may restrict/inhibit the ability of certain administrators (users) to modify and/or create policies for classes of users.
  • Referring to FIG. 27, a flow chart 430 illustrates steps performed by the user management interface 412 to obtain and use security credentials for accessing the policy manager servers 402. Processing begins at a first step 432 where the user management interface 412 sends a request to the security manager servers 403 to obtain a token (or other appropriate security credentials) for the operation to be performed by the user management interface 412. Following the step 432 is a test step 434 where it is determined if the token has been granted (provided). In some instances, the security manager servers 403 may not issue a security token at all, for example, if the administrator (user) does not have sufficient rights to perform the desired function.
  • If the security token is not granted, then control passes from the step 434 to a step 436 where processing is performed in connection with the security token not being granted. The operations performed at the step 436 may include providing a message to the administrator (user) through the user management interface 412 indicating that the administrator does not have sufficient rights to perform the desired operation. Following the step 436, processing is complete.
  • If it is determined at the test step 434 that a security token has been granted (provided) by the security manager servers 403, then control passes from the test step 434 to a step 438 where the user management interface 412 provides the security token, and user id information, to the policy manager servers 402. Of course, information indicating the desired operation/modification may also be provided at the step 438. Following the step 438 is a test step 442 where it is determined if the policy manager servers 402 have allowed the requested operation. Note that, in some instances, the policy manager servers 402 may not allow a particular operation even though the security manager servers 403 have provided a security token, for example, if the user id and the user indicated by the security token do not match and/or if the requested operation and the operation indicated by the security token do not match.
  • If it is determined at the test step 442 that the requested operation is not allowed, then control passes from the test step 442 to the step 436, described above, where processing is performed to indicate that there are security issues. The processing performed at the step 436 may include providing a message to an administrator (user) indicating that the operation cannot be performed because of insufficient security rights. The message provided when the step 436 is reached from the step 442 may be different than the message provided when the step 436 is reached from the step 434.
  • If it is determined at the test step 442 that the requested operation is allowed, then control passes from the test step 442 to a step 444 where the operation is performed. Performing the operation at the step 444 may include modifying policy data, as described in more detail elsewhere herein. Following the step 444, processing is complete.
  • Thus, an administrator (user) accessing the policy manager servers 402 would first provide identification information to the security manager servers 403 that would return a security token (perhaps having an expiration time). The administrator presents the token and identification information to the policy manager servers 402, which would decide to grant or deny access based on the token and the identification information. Note that the security mechanism illustrated by the flow chart 430 of FIG. 27 may be extended to be used in connection with accessing any of the servers 402-408 and/or other data. For example, one of the clients 104-106 could obtain/modify file objects by first requesting a security token from the security manager servers 403 prior to performing an operation that includes operations with file objects. Accordingly, for the discussion herein, it can be assumed that access to file objects, servers, etc. includes appropriate security procedures like those illustrated in FIG. 27.
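  • The two-step exchange of the flowchart 430 can be summarized as below; both server interfaces and their methods are hypothetical stand-ins for the token grant and access check described above.

```python
# Token-then-operation security exchange of FIG. 27.
def perform_secured_operation(security_servers, policy_servers, user_id, operation):
    token = security_servers.request_token(user_id, operation)  # step 432
    if token is None:                                            # steps 434/436
        raise PermissionError("security token not granted")
    if not policy_servers.check(token, user_id, operation):      # steps 438/442
        raise PermissionError("operation not allowed")           # step 436
    return policy_servers.execute(operation)                     # step 444
```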
  • The policy manager servers 402 handle placement and protection of file objects. An administrator (user) may input, through the user management interface 412, different policy templates that may be assigned to different ones of the clients 104-106, different users, different classes of users, or any other appropriate group. For example, a policy template may indicate that, for a particular group of users, whenever a new file is created, a mirror will be created that is geographically farther from the initial data set by at least a certain distance. In such a case, when a first user of the group creates an initial data set in New York, a mirror may be automatically created in Los Angeles while, when a second user creates an initial data set in Los Angeles, a mirror may be created in New York.
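  • A policy template of the kind described might look like the following; every key and value is hypothetical and shown only to make the idea concrete.

```python
# Illustrative policy template: new files for this group get a distant,
# asynchronous mirror and a particular RAID protection level.
policy_template = {
    "applies_to": "group:media-producers",
    "on_file_create": {
        "mirror": {"min_distance_km": 1000, "mode": "asynchronous"},
        "raid_level": 5,
    },
}
```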
  • The audit servers 404 may be used to provide system auditing capability. A user may communicate to the audit servers 404 through the user management interface 412. The user may indicate the type of information to be audited (tracked).
  • The resource manager servers 406 keep track of available system resources. In some instances, the resource manager servers 406 may interact with the policy manager servers 402 in connection with establishing policy templates and/or assigning policy templates. In some cases, a user may attempt to construct a policy template that is impossible to fulfill if assigned to a group. For example, if all of the physical data storage is in a single geographic location, then it would not be appropriate to have a policy template indicating that new files should include a mirror that is geographically distant from the initial data set.
  • The resource manager servers 406 receive information from other components of the system in order to be able to keep track of which resources are available. Whenever a resource is added to the system, the resource or another component reports that information to the resource manager servers 406. For example, if new physical storage is added to the system, the new physical storage itself, or a corresponding one of the data storage servers 407, sends a message to the resource manager servers 406. Similarly, if a resource becomes full (e.g., a physical disk is full) or is removed from the system (planned removal or unplanned resource failure), information is provided to the resource manager servers 406. In an embodiment herein, system resources may correspond to portions of the physical storage 414 and/or the data storage servers 407 that manage the physical storage 414.
  • Referring to FIG. 28, a resource table 460 is shown as including a plurality of entries 462-464, each of which corresponds to a particular storage resource. Although only three entries are shown, the table 460 may contain any number of entries. The table 460 may be implemented using any appropriate technique, including an array, linked list, etc.
  • Each of the entries 462-464 includes a resource field identifying a particular resource corresponding to the entry. In an embodiment herein, each of the entries 462-464 may correspond to a particular one of the data storage servers 407 and/or a portion thereof. Each of the entries 462-464 includes a status field corresponding to the status of the corresponding resource. In an embodiment herein, the status field may indicate that a resource is on-line (available) or off-line (unavailable). The status field may also indicate the percentage of used space of a resource, and perhaps indicate any performance degradation.
  • Each of the entries 462-464 may also include a capabilities field that indicates the capabilities of the corresponding resource. In an embodiment herein, when the resources represent storage areas, the capabilities field may indicate particular capabilities of a corresponding storage area. Particular capabilities may include the resource being green (low energy use through, for example, spinning disks down when not in use), capable of data deduplication (maintaining only a single copy of data that is otherwise duplicated), capable of various RAID configurations, etc. The capabilities field may indicate any appropriate data storage capabilities.
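  • The resource table of FIG. 28 might be represented as sketched below. This is a minimal illustration only; the field encodings (status strings, capability flags, identifiers) are assumptions, since the description leaves them open.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceEntry:
    resource_id: str            # identifies one of the data storage servers 407 (or a portion)
    status: str = "on-line"     # "on-line" (available) or "off-line" (unavailable)
    percent_used: int = 0       # percentage of used space for the resource
    capabilities: set = field(default_factory=set)   # e.g., {"green", "dedup", "raid5"}

# The table 460 may be an array, linked list, etc.; a simple list suffices here.
resource_table = [
    ResourceEntry("dss-a", capabilities={"green", "dedup"}),
    ResourceEntry("dss-b", status="off-line"),
    ResourceEntry("dss-c", percent_used=97, capabilities={"raid5"}),
]
```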
  • Referring to FIG. 29, a flow chart 480 indicates operation of the resource manager servers 406 in connection with maintaining information about system resources. Processing begins at a first step 482 where the resource manager servers 406 are initialized with information about resources. The initialization processing performed at the step 482 may take any form, including loading a fixed table of initially available resources, having the resource manager servers 406 poll system resources, etc.
  • Following the step 482 is a test step 484 where the resource manager servers 406 wait for new information to be provided. In an embodiment herein, after initialization, the resource manager servers 406 wait to receive information from other system components. In other embodiments, it may be possible to have the resource manager servers 406 periodically poll system components to see if anything has changed. If it is determined at the test step 484 that no new information is available, control loops back to the test step 484 to continue polling.
  • Once it is determined at the test step 484 that new information is available, then control transfers from the test step 484 to a test step 486 where it is determined if the new information relates to a new resource added to the system. If so, then control transfers from the test step 486 to a step 488 where the new entry is added to the resource table that is managed by the resource manager servers 406. Following the step 488, control transfers back to the step 484 to continue waiting for new information.
  • If it is determined at the step 486 that the received resource information does not relate to a new resource (and thus relates to a change of an existing resource), then control transfers from the step 486 to a step 492 where the existing entry is located in the resource table. Following the step 492 is a test step 494 where it is determined if the capability is being changed for the modified resource. The capability of a resource may change under many different circumstances. For example, a resource may degrade and lose capabilities, a resource may be modified/enhanced and gain capabilities, a local manager of a resource may decide to make certain capabilities available/unavailable, etc.
  • If it is determined at the step 494 that the capabilities of a resource have changed, then control transfers from the test step 494 to a step 496 to change the capabilities field for the resource being modified. Otherwise, control transfers from the test step 494 to a step 498 to change the status field of the resource being modified (e.g., resource is full, resource is off-line, resource is on-line, etc.). Following either the step 496 or the step 498, control transfers back to the step 484, discussed above, for another iteration.
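  • A compact sketch of the maintenance loop of the flow chart 480 follows, assuming the ResourceEntry sketch above and a blocking message queue named inbox; the message fields are hypothetical.

```python
def resource_manager_loop(table, inbox):
    # Initialization (step 482) is assumed to have populated `table`,
    # here a dict keyed by resource identifier.
    while True:
        info = inbox.get()                     # step 484: wait for new information
        if info["new_resource"]:               # step 486: new resource added?
            table[info["id"]] = ResourceEntry(info["id"])          # step 488
        else:
            entry = table[info["id"]]          # step 492: locate the existing entry
            if "capabilities" in info:         # step 494: capability change?
                entry.capabilities = info["capabilities"]          # step 496
            else:
                entry.status = info["status"]  # step 498: status change (full, off-line, etc.)
```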
  • Note that the resource manager servers 406 may represent a plurality of separate computing devices that may be dispersed throughout the system. Furthermore, each of the separate computing devices may maintain its own copy of the resource table. The separate computing devices that are used to implement the resource manager servers 406 may or may not share resource information and may or may not receive the same resource status messages. In instances where information sharing and/or receipt of status messages is not perfect, then each of the computing devices may have a somewhat different version of the resource table and it is possible for no one version of the resource table to reflect a completely accurate picture of the exact state of all of the resources of the system.
  • The physical storage 414 may be provided using relatively inexpensive off-the-shelf mass produced storage hardware. In an embodiment herein, at least some of the physical storage 414 may be implemented using serial ATA disk drives, which are available from a number of manufacturers such as Seagate and Western Digital. As discussed elsewhere herein, the physical storage may be geographically dispersed. However, each portion of the physical storage may be managed/controlled by at least one of the data storage servers 407, which may be implemented using conventional computing devices local to the corresponding portion of the physical storage 414.
  • In an embodiment herein, the data storage servers 407 may present an OSD Standard interface to the system. Thus, the servers 102 and/or the clients 104-106 may access physical storage 414 through the data storage servers 407 using OSD calls and may receive information/data according to the OSD protocol. In addition, the data storage servers 407 may handle managing/posting the capabilities and status of different portions of the physical storage 414. Thus, for example, when a portion of the physical storage 414 is managed by a particular server of the data storage servers 407, the particular server may send a message to the resource manager servers 406 indicating the new status.
  • Referring to FIG. 30, a flow chart 510 illustrates steps performed by the resource manager servers 406 in connection with servicing an inquiry for a resource with particular capabilities (i.e., finding a resource with particular capabilities). Processing begins at a first step 512 where a pointer, used to iterate through each entry of the resource table, is set to point to the first entry. Following the step 512 is a test step 514 where it is determined if the pointer points past the end of the table (i.e., all entries have been examined). If so, then control passes from the test step 514 to a step 516 where a result indicating no match for the requested capabilities is returned by the resource manager servers 406. Following the step 516, processing is complete.
  • If it is determined at the test step 514 that the pointer used to iterate through the entries does not point past the end of the table, then control transfers from the test step 514 to a test step 518 where it is determined if the entry currently indicated by the pointer is a match for the requested capability. Note that the test at the step 518 may include checking the status of a resource to ensure that the resource is on-line and not full or otherwise unusable. If it is determined at the step 518 that the resource indicated by the pointer has the requested capability, then control transfers from the test step 518 to a step 522 where the resource manager servers 406 return an indicator indicating the matching resource. Following the step 522, processing is complete.
  • If it is determined at the step 518 that the resource indicated by the pointer does not have the requested capability (or is off-line, full, etc.), then control transfers from the test step 518 to a step 524 where the pointer is incremented. Following the step 524, control transfers back to the step 514, discussed above, for another iteration.
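  • The search of the flow chart 510 amounts to a linear scan of the resource table. A sketch, again assuming entries shaped like the ResourceEntry example above:

```python
def find_resource(resource_table, requested_capability):
    # Steps 512/514/524: iterate through each entry of the resource table.
    for entry in resource_table:
        # Step 518: the match test also checks status, so that a resource
        # that is off-line or full is not returned.
        if (requested_capability in entry.capabilities
                and entry.status == "on-line"
                and entry.percent_used < 100):
            return entry        # step 522: indicate the matching resource
    return None                 # step 516: no match for the requested capabilities
```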
  • The LSO trees that are part of the metadata objects for files are created, maintained, and manipulated by the metadata servers 405. The metadata servers 405 handle updates from the clients 104-106 in connection with manipulation of file objects (e.g., at the step 214 of the flow chart 200 of FIG. 9, at the step 298 of the flow chart 280 of FIG. 11, etc.). The metadata servers 405 may also handle any actions, besides modifying the LSO tree, that may need to be performed in connection with the updates. The metadata servers 405 also handle leases obtained for file objects.
  • Referring to FIG. 31, a flow chart 550 illustrates steps performed by the metadata servers 405 in connection with servicing a request for a lease for a file. As discussed elsewhere herein, a lease may be requested by one of the clients 104-106. However, it is also possible for one of the components of the servers 102 to request a lease. It is even possible for one of the metadata servers 405 to request a lease. One of the metadata servers 405 may request a lease in connection with file maintenance (e.g., repairing mirrors), as described in more detail elsewhere herein. In an embodiment herein, leases are maintained by the metadata servers 405 in a table that indicates the entity that has acquired the lease, the type of lease (e.g., read or write), and possibly an expiration time.
  • In addition, as discussed elsewhere herein, it is possible to perform lease operations on ranges of logical addresses in a file so that, for example, one set of lease operations may be performed on logical addresses A-B for a file while another set of lease operations may be independently performed for logical addresses C-D for the same file, where A-B does not overlap C-D. In a system where only one write lease is issued at a time, it may still be possible for one entity to acquire a write lease for the A-B portion of a file while another independent entity simultaneously acquires a write lease for the C-D portion of the same file. Accordingly, for the discussion herein, in appropriate instances, a reference to a file or files should be understood to include non-overlapping portions of a file or files.
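  • The non-overlap condition for independent range leases is simple to state in code. A sketch, assuming inclusive logical-address ranges:

```python
def ranges_overlap(a_start, a_end, b_start, b_end):
    # Ranges [a_start, a_end] and [b_start, b_end] overlap unless one
    # ends before the other begins.
    return not (a_end < b_start or b_end < a_start)

# Two write leases may coexist on one file only for disjoint ranges.
assert not ranges_overlap(0, 99, 100, 199)   # A-B and C-D disjoint: both grantable
assert ranges_overlap(0, 150, 100, 199)      # overlapping: second request refused
```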
  • Processing begins at a first test step 552 where it is determined if the requested lease is available. The test at the step 552 determines if the requestor has appropriate security credentials, if the corresponding data file exists, etc. Also, as discussed in more detail elsewhere herein, leases may be purposely made unavailable in certain circumstances. If it is determined at the test step 552 that the lease is not available, then control transfers from the test step 552 to a step 554 where a failure indicator is returned to the requestor. The failure indicator may include a reason for the failure (e.g., improper security credentials, file does not exist, etc.). Following the step 554, processing is complete.
  • If it is determined at the test step 552 that the requested lease is available, then control transfers from the step 552 to a test step 556 where it is determined if the lease request is for writing data to the file corresponding to the lease. In an embodiment herein, multiple users (e.g., clients, servers) may read from the same file simultaneously while only one user may write to the same file. Accordingly, if it is determined at the test step 556 that a user is not requesting write access, then control transfers from the test step 556 to a step 558 where the metadata servers 405 return the lease (i.e., return an appropriate indicator/identifier corresponding to granting the lease). In an embodiment herein, leases may also be provided with a predetermined expiration time after which the leases are no longer valid. Lease expirations and lease recalls are discussed in more detail elsewhere herein. In addition, leases may be provided along with security credentials (generated, perhaps, by the security manager servers 403) that only allow for the requested operation (e.g., read only, read and write, etc.). The security credentials may also expire at or around the same time that the lease expires in order to enforce lease expirations. Following the step 558, processing is complete.
  • If it is determined at the test step 556 that the user is requesting a write lease, then control transfers from the test step 556 to a test step 562 where it is determined if another user has already obtained a write lease for the same file. As discussed elsewhere herein, only one write lease at a time is granted for a file. If it is determined at the test step 562 that another write lease has already been granted, then control transfers from the test step 562 to a step 564 where a failure indicator is returned. Just as with the step 554, the failure indicator returned at the step 564 may include information identifying the nature of the failure. Following the step 564, processing is complete. If it is determined at the test step 562 that another write lease has not been granted, then control transfers from the test step 562 to a step 566 where the metadata servers 405 return the lease, possibly along with an expiration. Following the step 566, processing is complete.
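  • The grant logic of the flow chart 550 may be sketched as follows. The lease representation (a dict with entity, type, and expiration) and the ttl parameter are illustrative assumptions, not part of the description.

```python
import time

def request_lease(leases, file_id, entity, lease_type, checks_ok=True, ttl=60.0):
    # Step 552: availability test (security credentials, file existence, etc.).
    if not checks_ok:
        return ("failure", "improper security credentials or file does not exist")
    outstanding = leases.setdefault(file_id, [])
    # Steps 562/564: only one write lease at a time is granted for a file.
    if lease_type == "write" and any(l["type"] == "write" for l in outstanding):
        return ("failure", "another write lease is already outstanding")
    # Steps 558/566: grant the lease, with a predetermined expiration time.
    lease = {"entity": entity, "type": lease_type, "expires": time.time() + ttl}
    outstanding.append(lease)
    return ("granted", lease)
```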
  • As discussed elsewhere herein, it may be desirable in some instances to issue leases with expiration dates. In an embodiment herein, a particular one of the metadata servers 405 may be responsible for a particular file and corresponding file objects. The responsible one of the metadata servers 405 issues leases for the file and corresponding file objects and handles lease expiration processing. The lease information may be stored in appropriate data structures (e.g., table(s), linked list(s), etc.) by the responsible one of the metadata servers 405. In addition, it is possible to have more than one of the metadata servers 405 be responsible for a particular file or set of files, where one of the metadata servers 405 is a primary server and other responsible metadata servers are secondary servers that maintain appropriate information, but do not otherwise provide services unless the primary server fails.
  • Referring to FIG. 32, a flow chart 580 illustrates steps performed by a responsible one of the metadata servers 405 in connection with monitoring lease expiration. Processing begins at a first step 582 where a pointer that iterates through all of the leases (file and lease combinations) for which the server is responsible is made to point to the first one. Following the step 582 is a test step 584 where it is determined if the lease has expired. The processing performed at the step 584 may use any appropriate technique, including comparing lease expiration times with the current time. If it is determined at the step 584 that a lease has expired, then control transfers from the step 584 to a step 586 where the lease is recalled. Recalling the lease at the step 586 is discussed in more detail elsewhere herein.
  • Following the step 586, or following the step 584 if the lease has not expired, is a step 588 where the pointer that iterates through the files and leases for which the server is responsible is incremented. Following the step 588 is a test step 592 where it is determined if the pointer points past the end (i.e., all files and corresponding leases have been processed). If so, then control transfers from the step 592 back to the step 582, discussed above, to reset the pointer to point to the first one and begin another pass to check for expired leases. If it is determined at the step 592 that the pointer does not point past the end, then control transfers from the test step 592 back to the step 584, discussed above, for another iteration.
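  • The expiration scan of the flow chart 580 reduces to the loop below; the one-second pause between passes is an assumption added for pacing and is not part of the description.

```python
import time

def scan_for_expired_leases(lease_table, recall):
    # Steps 582/588/592: iterate through all file/lease combinations for
    # which this server is responsible, restarting after each full pass.
    while True:
        for file_id, lease_list in list(lease_table.items()):
            for lease in list(lease_list):            # copy: recall() mutates the list
                if lease["expires"] <= time.time():   # step 584: compare with current time
                    recall(file_id, lease)            # step 586: recall the lease
        time.sleep(1.0)
```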
  • In an embodiment herein, the system may provide close-to-open consistency where data consistency is provided after an entity has released write access. Said differently, consistency is provided for a file when no entity has an active write lease for the file. Conversely, while any entity has an active write lease, the state of the data may not be guaranteed for any entity reading the data. In the system described herein, leases may be recalled after expiration or may be recalled for other reasons. Recalling the leases may improve the consistency of the data being accessed by other entities.
  • Referring to FIG. 33, a flow chart 610 illustrates steps performed in connection with a particular one of the metadata servers 405 recalling a lease. Processing begins at a first step 612 where a message is sent to the entity holding the lease (e.g., one of the clients 104-106) to alert the entity that the lease is being recalled. As discussed elsewhere herein, any appropriate entity may hold a lease, including clients 104-106, other servers, or even one of the metadata servers 405. In some embodiments, the step 612 may include ensuring receipt of the message by the entity (e.g., waiting for an acknowledgement) while in other embodiments the message is simply sent and it is assumed that the message is received by any operational recipient. Of course, in systems that wait for an acknowledgement, there may be a timeout and/or a limit on the number of attempts.
  • Following the step 612 is a step 614 where the appropriate tables are adjusted to reflect that the lease that has been recalled is no longer outstanding. Tables and other data structures used with leases are discussed in more detail elsewhere herein. Following the step 614 is a test step 616 where it is determined if the lease that was recalled was a write lease (lease to allow writing data). As discussed elsewhere herein, the system may provide close-to-open consistency so that, when a write lease is released, data reads are made consistent. This may be facilitated by recalling all read leases when a write lease is recalled. Entities for whom a read lease is recalled may flush their internal buffers prior to reacquiring the read lease after the recall. Note, by the way, that an entity for which a write lease is recalled may also flush buffers by writing unsaved data to the physical storage 414 (through the data storage servers 407) in response to receiving a recall notification. Accordingly, in some embodiments, an entity receiving a recall message for a write lease may be provided with a certain amount of time in which to write any unsaved data to physical storage. For this purpose, the security credentials provided along with a write lease may be set to expire a predetermined amount of time after the write lease expires.
  • If it is determined at the step 616 that the lease that is being recalled is not a write lease, then processing is complete. Otherwise, control transfers from the test step 616 to a step 618 where a pointer, used to iterate through all of the outstanding leases for the file for which the write lease is being recalled, is made to point to the first outstanding lease. Following the step 618 is a test step 622 where it is determined if the pointer points past the end (i.e., all outstanding leases have been recalled). If so, then processing is complete. Otherwise, control transfers from the test step 622 to a step 624 where the lease is recalled. The lease may be recalled by calling the processing illustrated by the flow chart 610, and thus may be recursive. Following the step 624 is a step 626 where the pointer used to iterate through entities having outstanding leases for a file is incremented. Following the step 626, control transfers back to the step 622 for another iteration.
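  • The recall processing of the flow chart 610, including its recursive recall of all outstanding leases when a write lease is recalled, might look like the following; the notify callback stands in for the (possibly acknowledged) recall message of the step 612.

```python
def recall_lease(lease_table, file_id, lease, notify):
    # Step 612: alert the entity holding the lease that it is being recalled.
    notify(lease["entity"], file_id, lease)
    # Step 614: adjust tables so the lease is no longer outstanding.
    lease_table[file_id].remove(lease)
    # Steps 616-626: if a write lease was recalled, recall every other
    # outstanding lease for the file as well (note the recursion).
    if lease["type"] == "write":
        for other in list(lease_table[file_id]):
            recall_lease(lease_table, file_id, other, notify)
```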
  • Referring to FIG. 34, a table 630 is shown as including a plurality of entries used to keep track of files for which a particular one of the metadata servers 405 is responsible. Of course, other appropriate data structures may be used instead of the table 630, such as a linked list, a doubly linked list, etc. Each primary one of the metadata servers 405, and any corresponding secondary one(s) of the metadata servers 405, may contain data corresponding to specific file sets for which the particular one(s) of the metadata servers 405 are responsible. Thus, for example, a first primary one of the metadata servers 405 may contain a version of the table 630 for a first set of files for which the first primary one of the metadata servers is responsible while a second primary one of the metadata servers 405 may contain a completely different version of the table 630 for a second (different) set of files for which the second primary one of the metadata servers 405 is responsible.
  • Each entry of the table 630 includes a file identifier field (FID) that uniquely identifies the file corresponding to an entry. In an embodiment herein, the FID field may be the object id of the metadata object for the file (for example, the metadata object 132 in the diagram 130 of FIG. 4). Note that, as discussed elsewhere herein, the metadata object for a file may be used to locate all of the other data objects for the file.
  • The MD LOC field may describe the data storage location of the metadata object for the file. In an embodiment herein, the MD LOC field may contain an identifier for the one of the data storage servers 407 that stores the metadata object for the file. The MD LOC field may also contain a unique identifier (perhaps initially assigned by the one of the data storage servers 407) that may be used to retrieve and store data at the one of the data storage servers 407.
  • The LEASE LIST field may contain information about all entities that have active leases outstanding for the corresponding file. In an embodiment herein, the LEASE LIST field may contain a pointer to a linked list of elements corresponding to entities having outstanding leases. Of course, any other appropriate data structure (e.g., array) may be used.
  • Referring to FIG. 35, a diagram 640 illustrates a linked list that may be used to keep track of entities having an outstanding active lease for a file. Each element of the list contains a NEXT field that points to the next element in the list. The element at the end of the list contains a null indicator. Thus, the list may be traversed by starting with the element pointed to by the LEASE LIST pointer and subsequently visiting the elements pointed to by the NEXT field (e.g., in connection with the processing illustrated in the flow chart 580 of FIG. 32). Similarly, conventional linked list operations may be used to add and remove elements.
  • Each element of the list also contains a TYPE field that indicates the type of lease (e.g., read or write) and includes an EXP field that indicates when the lease corresponding to the element expires. Each element also contains an ENT field that indicates the entity (e.g., one of the clients 104-106, another server, etc.) that holds the corresponding lease.
  • Manipulation of the linked list is fairly straightforward. When a lease is granted, the fields of an element are populated with the type, expiration, and entity corresponding to the lease and the element is then added to the list. Similarly, when a lease is recalled or otherwise returned, the corresponding element is removed from the list. Of course, other data structures may be used instead of a linked list.
  • Referring to FIG. 36, a table 650 is shown as containing a plurality of entries that correlate object identifiers (OID) with location information (LOC). In an embodiment herein, object identifiers are numerical values. Each of the metadata servers 405 is provided with a unique range of object identifiers to use/manage. In some embodiments, object identifiers may be reused while in other embodiments object identifiers are not reused. Of course, in instances where object identifiers are not reused, each of the metadata servers 405 needs to be provided with a sufficient range of usable object identifiers.
  • The LOC field is like the MD LOC field of the table 630: it describes the data storage location of the corresponding object. In an embodiment herein, the LOC field may contain an identifier for the one of the data storage servers 407 containing (handling) the object, as well as a unique identifier (perhaps assigned by the one of the data storage servers 407) that may be used to retrieve and store data for the object. Thus, if one of the metadata servers 405 has a table entry for a particular object, an entity can pass the object identifier to the one of the metadata servers 405 and receive in return the corresponding LOC information to allow the entity to access the appropriate one of the data storage servers 407 directly.
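  • The OID-to-LOC mapping of the table 650 is, in effect, a dictionary lookup. A sketch with hypothetical identifiers and an assumed two-part LOC encoding:

```python
# Each metadata server manages its own range of object identifiers and
# maps each OID to location information (LOC).
oid_table = {
    1001: {"dss": "dss-a", "local_id": "obj-88f1"},   # hypothetical values
    1002: {"dss": "dss-b", "local_id": "obj-12c9"},
}

def locate_object(object_id):
    # An entity passes an OID and receives the LOC information in return,
    # allowing it to access the appropriate data storage server directly.
    return oid_table.get(object_id)   # None if this server has no entry
```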
  • Having a number of metadata servers 405 allows for distributed servicing of file operations (and thus significant scalability) as well as providing failover/redundancy capability. In some instances, objects may be reassigned from one of the metadata servers 405 to another. However, since each of the metadata servers 405 contains information for only a subset of files (and corresponding file objects), it may be necessary to provide a mechanism for locating an appropriate one of the metadata servers 405 in connection with performing operations.
  • The metadata location servers 408 provide location services for an entity seeking the appropriate one of the metadata servers 405 for operations on a particular file. In an embodiment herein, each of the metadata location servers 408 may receive a call having an object identifier and can return a specific one of the metadata servers 405 that handles the particular object. In addition, as discussed in more detail elsewhere herein, the metadata location servers 408 may assist in connection with the creation of new objects by indicating to a calling entity (e.g., one of the clients 104-106) a specific one of the metadata servers 405 to be used for the new object. The metadata location servers 408 may operate like Domain Name Servers on the Web, and each of the clients 104-106 (and other entities) may be provided with a primary and a secondary one of the metadata location servers 408 to consult.
  • Referring to FIG. 37, a table 670 is shown as containing entries for use by one of the metadata location servers 408. Each entry includes an OID RANGE field, which indicates a range of object identifiers, and an MDS ID field, which identifies a particular one of the metadata servers 405, or possibly a group of the metadata servers 405, with one being primary and the remainder being secondary. An entity may provide a particular object identifier to the metadata location server 408, which may then consult the table 670 and return the corresponding value from the MDS ID field.
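  • The lookup against the table 670 may be implemented as a search over sorted, disjoint OID ranges, for example:

```python
import bisect

# Each entry maps an OID RANGE to an MDS ID (a metadata server, or a
# primary/secondary group). Ranges are assumed sorted and non-overlapping.
oid_ranges = [(0, 9999, "mds-1"), (10000, 19999, "mds-2"), (20000, 29999, "mds-3")]

def find_metadata_server(object_id):
    starts = [low for low, _high, _mds in oid_ranges]
    i = bisect.bisect_right(starts, object_id) - 1   # candidate range index
    if i >= 0:
        low, high, mds_id = oid_ranges[i]
        if low <= object_id <= high:
            return mds_id        # the corresponding MDS ID field value
    return None                  # no range covers this object identifier
```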
  • In addition, the metadata location servers 408 may assign a particular one of the metadata servers 405 in connection with creation of a new object. The assignment may be based on any appropriate metric, including random assignment, assignment based on geographic proximity, load balancing, and/or a policy input by a user through the user management interface 412, discussed above. A policy may indicate, for example, that new objects created by a particular client are provided on a particular metadata server.
  • Referring to FIG. 38, a flow chart 680 illustrates processing by one of the metadata location servers 408 to assign a particular one of the metadata servers in connection with creation of a new file object. Processing begins at a first step 682 where the policy may be consulted. As discussed elsewhere herein, it may be possible to input policies through the user management interface 412. The policies may dictate (directly or indirectly) which of the metadata servers 405 are to be used for which of the clients 104-106. Note that other policies are possible. For example, the policies may indicate which of the metadata servers 405 are to be used at different times of the day (independent of the clients 104-106) or based on the load, the user, etc. The policy information may be stored at the user management interface 412 and accessed in connection with the step 682, or may be stored at the one of the metadata location servers 408 after having been previously passed thereto. Following the step 682 is a step 684 where identification information for a specific one of the metadata servers 405 is returned to the calling entity. Following the step 684, processing is complete.
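  • In the simplest case, the assignment of the flow chart 680 is a policy lookup with a fallback. A sketch, where the policy table and server names are purely illustrative:

```python
def assign_metadata_server(policy, client_id, default_mds):
    # Step 682: consult the policy, which may dictate (directly or
    # indirectly) which metadata server serves which client.
    # Step 684: return identification information for the chosen server.
    return policy.get(client_id, default_mds)

policy = {"client-104": "mds-2"}   # hypothetical policy input via the interface 412
assert assign_metadata_server(policy, "client-104", "mds-1") == "mds-2"
assert assign_metadata_server(policy, "client-105", "mds-1") == "mds-1"
```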
  • Referring to FIG. 39, a flow chart 700 illustrates steps performed by one of the metadata servers 405 in connection with deleting a file for which the one of the metadata servers 405 is responsible. Processing begins at a first step 702 where all leases for the file are recalled (e.g., by iterating through the lease list 640 and providing recall processing like that illustrated by the flow chart 610 of FIG. 33). Following the step 702 is a step 704 where the leases are made unavailable for other processes (e.g., by setting an appropriate flag that prevents granting further leases for the file). Following the step 704 is a step 706 where the metadata for the file is obtained (e.g., by retrieving the metadata object for the file). Note that, as discussed elsewhere herein, the metadata object for a file contains information about the location of all the file objects used in connection with the file.
  • Following the step 706 is a step 708 where a pointer, used to iterate through the objects used in connection with a file, is made to point to the first object. The pointer and subsequent iterative processing uses information obtained at the step 706 to determine the objects for the file. Following the step 708 is a test step 712 where it is determined if the pointer, used to iterate through the objects, points past the end (i.e., all of the file objects have been processed). If so, then control transfers from the test step 712 to a step 714 where the table entry corresponding to the file (i.e., the entry in the table 630 of FIG. 34) is deleted (e.g., set to null). Following the step 714, processing is complete.
  • If it is determined at the step 712 that there are more file objects to process, then control transfers from the step 712 to a step 716 where the LOC information is obtained for the object. The LOC information is like the information stored in the table 650 of FIG. 36, discussed above. In some instances, the LOC information will be local to the one of the metadata servers 405 performing the processing. In other instances, it may be necessary to call one of the metadata location servers 408 to get the location information for the object. Following the step 716 is a step 718 where a message is sent to the appropriate one of the data storage servers 407 (i.e., the one handling the object) to cause the object to be deleted.
  • In an embodiment herein, it may be possible for different files to use the same object (e.g., deduplication, file aliasing, etc.), in which case the one of the data storage servers 407 would simply decrement a counter for the object indicating the number of users thereof. When the counter is decremented to zero, the data storage server may delete the data corresponding to the object. Note that the object(s) associated with a file may be deleted asynchronously. Following the step 718 is a step 722 where the pointer used to iterate through the file objects is incremented. Following the step 722, control transfers back to the step 712, discussed above, for another iteration.
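  • Combining the flow chart 700 with the reference-count behavior just described gives a deletion sketch along these lines. Lease recall and the metadata fetch are elided, and refcounts and delete_on_dss are hypothetical stand-ins for the per-object counters and the messages to the data storage servers 407.

```python
def delete_file(file_table, file_id, refcounts, delete_on_dss):
    # Steps 702-706 (recall leases, make leases unavailable, fetch the
    # metadata object) are assumed to have produced this metadata.
    metadata = file_table[file_id]
    for oid in metadata["objects"]:        # steps 708-722: iterate the file objects
        # An object may be shared among files (deduplication, aliasing), so
        # the counter is decremented and the data deleted only at zero.
        refcounts[oid] -= 1
        if refcounts[oid] == 0:
            delete_on_dss(oid)             # step 718: message the data storage server
    del file_table[file_id]                # step 714: delete the table 630 entry
```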
  • Referring to FIG. 40, a flow chart 740 illustrates steps performed by one of the metadata servers 405 in connection with creating a new file. Note that, prior to performing the processing illustrated in FIG. 40, the entity creating the file (e.g., one of the clients 104-106) may first consult the metadata location servers 408 to determine the proper one of the metadata servers 405 to use to create the file.
  • Processing begins at a first step 742 where the policy manager servers 402 are consulted to obtain policy information for new files (e.g., new files for client X have a mirror geographically located at least a certain distance from the primary data set). Following the step 742 is a step 744 where the resource manager servers 406 are consulted to determine the available resources to meet the dictates of the policy obtained at the step 742. Following the step 744 is a test step 746 where it is determined if it is possible to meet the dictates of the policy given the available resources. For example, it may not be possible to satisfy the policy of having geographically separated mirrors if all of the remaining physical storage in a system is in one geographic location. If it is determined at the test step 746 that it is not possible to fulfill a policy, then control transfers from the test step 746 to a step 748 where alternative processing is performed. Any appropriate processing may be performed at the step 748, including returning an error indicator to the calling entity, creating the file with the next best available resources, etc. Following the step 748, processing is complete.
  • If it is determined at the step 746 that it is possible to fulfill the policy with available resources, then control transfers from the test step 746 to a step 752 where the metadata object for the file is created. Creating the metadata object may include populating the data fields of the metadata object and obtaining storage from an appropriate one of the data storage servers 407. In an embodiment herein, the data storage servers 407 may be a pool and, absent any other specific requirements, may provide storage space at any appropriate portion of the physical storage 414 upon request. The metadata objects created at the step 752 will be like those described herein. See, for example, FIGS. 4-8 and the corresponding description. Following the step 752 is a step 754 where a table entry is created in the table 630 for the new file. Following the step 754 is a step 756 where the object id of the metadata object for the file is returned to the calling entity. Following the step 756, processing is complete.
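  • A condensed sketch of the creation path of the flow chart 740 follows, with the policy manager, resource manager, and storage pool reduced to hypothetical callables.

```python
def create_file(policy_for, matching_resources, allocate_object, file_table, client_id):
    policy = policy_for(client_id)             # step 742: consult the policy managers
    resources = matching_resources(policy)     # step 744: consult the resource managers
    if not resources:                          # step 746: policy cannot be met
        raise RuntimeError("policy cannot be fulfilled with available resources")  # step 748
    metadata_oid = allocate_object(resources)  # step 752: create the metadata object
    file_table[metadata_oid] = {"objects": [], "policy": policy}   # step 754: table entry
    return metadata_oid                        # step 756: return the object id
```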
  • As discussed elsewhere herein, when a client or other entity unsuccessfully attempts a write operation, a message (update) is sent to the servers 102 by the client or other entity. Similarly, a message (update) may also be sent to the servers 102 in connection with finding a stale mirror in connection with a synchronous mirror copy (see the step 358 of the flow chart 350 of FIG. 13), and/or writing to data having an asynchronous mirror.
  • Referring to FIG. 41, a flow chart 760 illustrates steps performed by one of the metadata servers 405 in connection with handling a message that a write operation was unsuccessful. Processing begins at a first step 762 where all of the leases for the file are recalled. Following the step 762 is a step 764 where leases for the file are made unavailable. Following the step 764 is a step 766 where new storage space is allocated to replace the old storage space to which the write operation was unsuccessful. Following the step 766 is a step 768 where the appropriate information in the metadata object for the file is adjusted. Following the step 768 is a step 772 where a data copy operation is begun to repopulate the new data storage space with, for example, data from one of the mirrors. While the data copy operation is being performed, the data may be indicated as being stale at the step 772. Of course, if there are no mirrors or other data that can be used to repopulate the new storage space, then the processing at the step 772 would not be performed. Following the step 772 is a step 774 where the leases for the file are made available. Following the step 774, processing is complete.
  • Referring to FIG. 42, a flow chart 780 illustrates processing performed in connection with one of the metadata servers 405 receiving an indication that a synchronous mirror was stale. Processing begins at a first test step 782 where it is determined if the mirror is currently in the process of being populated with data (perhaps in connection with a previous bad write operation). If so, then the data population operation is allowed to continue and processing is complete. Otherwise, control transfers from the test step 782 to a step 784 where processing like that illustrated in the flow chart 760 of FIG. 41, discussed above, is performed. Following the step 784, processing is complete.
  • Referring to FIG. 43, a flow chart 790 illustrates steps performed in connection with the servers 102 receiving a message that data corresponding to asynchronous copies has been written, thus necessitating update of the asynchronous copies. Processing begins at a first step 792 where information is added to a queue (e.g., a job queue provided at the affected one of the metadata servers 405) indicating that asynchronous data needs to be copied. As discussed in more detail elsewhere herein, a process at each of the metadata servers 405 services the corresponding queue. Following the step 792, processing is complete.
  • Referring to FIG. 44, a flow chart 800 illustrates processing performed by a process at each of the metadata servers 405 that services the corresponding queue that is populated by the processing illustrated by the flow chart 790 of FIG. 43. Processing begins at a first test step 802 where it is determined if the queue is empty. If so, then control transfers back to the test step 802 to continue to poll the queue. If the queue is not empty, then control transfers from the test step 802 to a step 804 where the next entry in the queue is removed. In an embodiment herein, queue elements may be processed on a first-in, first-out basis. However, it is also possible to selectively remove elements from the queue in any order. For example, elements associated with files that receive higher priority may be removed before other elements (e.g., in connection with a flush operation for the file). As another example, elements may be removed according to size (of the needed write operation) or according to any other appropriate criteria.
  • Following the step 804 is a step 806 where the write lease for the affected file is recalled. Following the step 806 is a step 808 where the write lease is obtained. Following the step 808 is a step 812 where the write operation is performed to write the asynchronous data to the mirror. Following the step 812 is a step 814 where the write lease is released. Following the step 814, control transfers back to the step 802 to continue to poll the queue.
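  • The queue-servicing process of the flow chart 800 is essentially a worker loop. A sketch, assuming a blocking queue (e.g., Python's queue.Queue) and hypothetical lease helpers:

```python
import queue

def async_mirror_worker(job_queue: "queue.Queue", leases, write_mirror):
    while True:
        job = job_queue.get()                         # steps 802/804: next queue entry
        leases.recall_write_lease(job["file_id"])     # step 806: recall the write lease
        lease = leases.obtain_write_lease(job["file_id"])   # step 808: obtain it
        try:
            write_mirror(job)                         # step 812: write data to the mirror
        finally:
            leases.release(lease)                     # step 814: release the write lease
```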
  • The system described herein may be used with any server, or any group of servers, capable of providing file objects to clients. The particular form of the file objects may vary without departing from the spirit and scope of the invention. In some instances, the order of steps in the flow charts may be modified, where appropriate. The system described herein may be implemented using a computer program product/software provided in a computer-readable storage medium.
  • While the invention has been disclosed in connection with various embodiments, modifications thereon will be readily apparent to those skilled in the art. Accordingly, the spirit and scope of the invention is set forth in the following claims.

Claims (13)

1.-25. (canceled)
26. A method of managing objects for a data file, comprising:
determining a specific metadata server that handles a particular metadata object for the data file by providing a call to a metadata location server which returns the specific metadata server that handles the particular object according to an identifier for the object;
obtaining the metadata object for the data file from the specific metadata server;
determining if the metadata object indicates that a portion of the data file includes synchronous mirrors; and
in response to a write of new data to the data file, writing the new data to each synchronous mirror.
27. A method, according to claim 26, wherein data objects of the data file may be in one of four states: current, stale, empty, and immutable.
28. A method, according to claim 27, wherein, in response to a read operation and the corresponding data object being in a stale state, data is read for a corresponding synchronous mirror that is in a current state.
29. A method, according to claim 26, wherein the metadata object is provided through a Web Services interface that includes at least one of: a SOAP interface and a REST interface.
30. A method, according to claim 26, wherein the metadata object is provided through a direct file object interface that allows an application to directly access file objects at the same time that file objects are being accessed using at least one other interface.
31. A method, according to claim 26, further comprising:
providing file name services that translate file names into file object identifiers, wherein the file name services are accessed by at least one of: a file system interface and a Web Services interface.
32. Computer software, provided in a computer-readable medium, that manages objects for a data file, the software comprising:
executable code that determines a specific metadata server that handles a particular metadata object for the data file by providing a call to a metadata location server which returns the specific metadata server that handles the particular object according to an identifier for the object;
executable code that obtains a metadata object for the data file from the specific metadata server;
executable code that determines if the metadata object indicates that a portion of the data file includes synchronous mirrors; and
executable code that writes new data to each synchronous mirror in response to a write of new data to the data file.
33. Computer software, according to claim 32, wherein data objects of the data file may be in one of four states: current, stale, empty, and immutable.
34. Computer software, according to claim 33, wherein, in response to a read operation and a corresponding one of the data objects being in a stale state, executable code causes data to be read for a corresponding synchronous mirror that is in a current state.
35. Computer software, according to claim 32, wherein the metadata object is provided through a Web Services interface that includes at least one of: a SOAP interface and a REST interface.
36. Computer software, according to claim 32, wherein the metadata object is provided through a direct file object interface that allows an application to directly access file objects at the same time that file objects are being accessed using at least one other interface.
37. Computer software, according to claim 32, further comprising:
executable code that provides file name services that translate file names into file object identifiers, wherein the file name services are accessed by at least one of: a file system interface and a Web Services interface.

Priority Applications (1)

• US13/295,485 (US20120059799A1), priority date 2007-10-31, filed 2011-11-14: Managing objects for a data file

Applications Claiming Priority (2)

• US11/981,624 (US9413825B2), priority date 2007-10-31, filed 2007-10-31: Managing file objects in a data storage system
• US13/295,485 (US20120059799A1), priority date 2007-10-31, filed 2011-11-14: Managing objects for a data file

Related Parent Applications (1)

• US11/981,624 (US9413825B2), continuation parent, priority date 2007-10-24, filed 2007-10-31: Managing file objects in a data storage system

Publications (1)

• US20120059799A1, published 2012-03-08

Family ID: 40584215

Family Applications (3)

• US11/981,624 (US9413825B2), priority date 2007-10-24, filed 2007-10-31, status Active (anticipated expiration 2034-05-02): Managing file objects in a data storage system
• US13/111,285 (US8296340B2), priority date 2007-10-31, filed 2011-05-19, status Active: Managing files using layout storage objects
• US13/295,485 (US20120059799A1), priority date 2007-10-31, filed 2011-11-14, status Abandoned: Managing objects for a data file

Country Status (1): US



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020083183A1 (en) * 2000-11-06 2002-06-27 Sanjay Pujare Conventionally coded application conversion system for streamed delivery and execution
US20030028623A1 (en) * 2001-08-04 2003-02-06 Hennessey Wade L. Method and apparatus for facilitating distributed delivery of content across a computer network
US20030172145A1 (en) * 2002-03-11 2003-09-11 Nguyen John V. System and method for designing, developing and implementing internet service provider architectures
US7281032B2 (en) * 2000-06-30 2007-10-09 Hitachi, Ltd. File sharing system with data mirroring by storage systems
US20070276838A1 (en) * 2006-05-23 2007-11-29 Samy Khalil Abushanab Distributed storage
US20080178298A1 (en) * 2001-02-14 2008-07-24 Endeavors Technology, Inc. Intelligent network streaming and execution system for conventionally coded applications

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5675785A (en) * 1994-10-04 1997-10-07 Hewlett-Packard Company Data warehouse which is accessed by a user using a schema of virtual tables
US6098108A (en) * 1997-07-02 2000-08-01 Sitara Networks, Inc. Distributed directory for enhanced network communication
JPH11123683A (en) * 1997-10-22 1999-05-11 Fanuc Ltd Power control robot having drawing and disassembling function
US6131120A (en) * 1997-10-24 2000-10-10 Directory Logic, Inc. Enterprise network management directory containing network addresses of users and devices providing access lists to routers and servers
US6360262B1 (en) * 1997-11-24 2002-03-19 International Business Machines Corporation Mapping web server objects to TCP/IP ports
US6085188A (en) * 1998-03-30 2000-07-04 International Business Machines Corporation Method of hierarchical LDAP searching with relational tables
US6240416B1 (en) * 1998-09-11 2001-05-29 Ambeo, Inc. Distributed metadata system and method
US7725558B2 (en) * 2000-07-26 2010-05-25 David Dickenson Distributive access controller
JP2005530242A (en) * 2000-09-11 2005-10-06 Agami Systems, Inc. Storage system having partitioned movable metadata
WO2003107209A1 (en) * 2000-09-19 2003-12-24 Zambeel, Inc. File storage system having separation of components
US7120675B1 (en) * 2000-09-26 2006-10-10 Microsoft Corporation Information location service
US20020124137A1 (en) * 2001-01-29 2002-09-05 Ulrich Thomas R. Enhancing disk array performance via variable parity based load balancing
US7945551B1 (en) * 2001-06-14 2011-05-17 Oracle International Corporation Redirection of misses in queryable caches
US6889309B1 (en) * 2002-04-15 2005-05-03 Emc Corporation Method and apparatus for implementing an enterprise virtual storage system
US8671132B2 (en) * 2003-03-14 2014-03-11 International Business Machines Corporation System, method, and apparatus for policy-based data management
US7523275B2 (en) * 2003-04-10 2009-04-21 International Business Machines Corporation Method, system, and program for maintaining a copy relationship between primary volumes and corresponding secondary volumes
US7457982B2 (en) * 2003-04-11 2008-11-25 Network Appliance, Inc. Writable virtual disk of read-only snapshot file objects
US7072917B2 (en) * 2003-04-24 2006-07-04 Neopath Networks, Inc. Extended storage capacity for a network file server
US7139781B2 (en) * 2003-04-29 2006-11-21 International Business Machines Corporation Managing filesystem versions
US7380039B2 (en) * 2003-12-30 2008-05-27 3Tera, Inc. Apparatus, method and system for aggregating computing resources
US7634566B2 (en) * 2004-06-03 2009-12-15 Cisco Technology, Inc. Arrangement in a network for passing control of distributed data between network nodes for optimized client access based on locality
US7895264B2 (en) * 2004-07-15 2011-02-22 Yhc Corporation Storage cluster server network
US20060080321A1 (en) * 2004-09-22 2006-04-13 Whenu.Com, Inc. System and method for processing requests for contextual information
US20060167838A1 (en) * 2005-01-21 2006-07-27 Z-Force Communications, Inc. File-based hybrid file storage scheme supporting multiple file switches
US7730038B1 (en) * 2005-02-10 2010-06-01 Oracle America, Inc. Efficient resource balancing through indirection
US20070038697A1 (en) * 2005-08-03 2007-02-15 Eyal Zimran Multi-protocol namespace server
JP2007149223A (en) * 2005-11-28 2007-06-14 Toshiba Corp Disk storage device and magnetic head
US7716180B2 (en) * 2005-12-29 2010-05-11 Amazon Technologies, Inc. Distributed storage system with web services client interface
US8151323B2 (en) * 2006-04-12 2012-04-03 Citrix Systems, Inc. Systems and methods for providing levels of access and action control via an SSL VPN appliance
US9413825B2 (en) * 2007-10-31 2016-08-09 Emc Corporation Managing file objects in a data storage system

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150228012A1 (en) * 2009-12-17 2015-08-13 American Express Travel Related Services Company, Inc. System and method for enabling product development
US20120185510A1 (en) * 2011-01-14 2012-07-19 International Business Machines Corporation Domain based isolation of objects
US8429191B2 (en) * 2011-01-14 2013-04-23 International Business Machines Corporation Domain based isolation of objects
US11580071B2 (en) 2011-03-30 2023-02-14 Splunk Inc. Monitoring changes to data items using associated metadata
US11042515B2 (en) 2011-03-30 2021-06-22 Splunk Inc. Detecting and resolving computer system errors using fast file change monitoring
US10083190B2 (en) 2011-03-30 2018-09-25 Splunk Inc. Adaptive monitoring and processing of new data files and changes to existing data files
US20140025655A1 (en) * 2011-03-30 2014-01-23 Splunk Inc. File identification management and tracking
US10860537B2 (en) * 2011-03-30 2020-12-08 Splunk Inc. Periodically processing data in files identified using checksums
US9430488B2 (en) 2011-03-30 2016-08-30 Splunk Inc. File update tracking
US8977638B2 (en) * 2011-03-30 2015-03-10 Splunk Inc. File identification management and tracking
US9767112B2 (en) 2011-03-30 2017-09-19 Splunk Inc. File update detection and processing
US20170344576A1 (en) * 2011-03-30 2017-11-30 Splunk Inc. Periodically Processing Data In Files Identified Using Checksums
US11914552B1 (en) 2011-03-30 2024-02-27 Splunk Inc. Facilitating existing item determinations
US8375439B2 (en) 2011-04-29 2013-02-12 International Business Machines Corporation Domain aware time-based logins
US9189643B2 (en) 2012-11-26 2015-11-17 International Business Machines Corporation Client based resource isolation with domains
WO2015119596A1 (en) * 2014-02-05 2015-08-13 American Express Travel Related Services Company, Inc. System and method for duplicating an intellectual property transaction deal room
US9910609B2 (en) 2015-05-21 2018-03-06 International Business Machines Corporation Determining adjustments of storage device timeout values based on synchronous or asynchronous remote copy state
US9513827B1 (en) * 2015-05-21 2016-12-06 International Business Machines Corporation Determining adjustments of storage device timeout values based on synchronous or asynchronous remote copy state
US10152527B1 (en) 2015-12-28 2018-12-11 EMC IP Holding Company LLC Increment resynchronization in hash-based replication
US10324635B1 (en) 2016-03-22 2019-06-18 EMC IP Holding Company LLC Adaptive compression for data replication in a storage system
US10310951B1 (en) 2016-03-22 2019-06-04 EMC IP Holding Company LLC Storage system asynchronous data replication cycle trigger with empty cycle detection
US9959073B1 (en) 2016-03-30 2018-05-01 EMC IP Holding Company LLC Detection of host connectivity for data migration in a storage system
US10095428B1 (en) * 2016-03-30 2018-10-09 EMC IP Holding Company LLC Live migration of a tree of replicas in a storage system
US10565058B1 (en) 2016-03-30 2020-02-18 EMC IP Holding Company LLC Adaptive hash-based data replication in a storage system
US9959063B1 (en) 2016-03-30 2018-05-01 EMC IP Holding Company LLC Parallel migration of multiple consistency groups in a storage system
US10152232B1 (en) 2016-06-29 2018-12-11 EMC IP Holding Company LLC Low-impact application-level performance monitoring with minimal and automatically upgradable instrumentation in a storage system
US10083067B1 (en) 2016-06-29 2018-09-25 EMC IP Holding Company LLC Thread management in a storage system
US10048874B1 (en) 2016-06-29 2018-08-14 EMC IP Holding Company LLC Flow control with a dynamic window in a storage system with latency guarantees
US10013200B1 (en) 2016-06-29 2018-07-03 EMC IP Holding Company LLC Early compression prediction in a storage system with granular block sizes
US9983937B1 (en) 2016-06-29 2018-05-29 EMC IP Holding Company LLC Smooth restart of storage clusters in a storage system
US10599611B1 (en) * 2017-04-24 2020-03-24 EMC IP Holding Company LLC Base object selection and creation in data storage system management

Also Published As

Publication number Publication date
US20090112880A1 (en) 2009-04-30
US20110219032A1 (en) 2011-09-08
US8296340B2 (en) 2012-10-23
US9413825B2 (en) 2016-08-09

Similar Documents

Publication Publication Date Title
US8296340B2 (en) Managing files using layout storage objects
US20090112789A1 (en) Policy based file management
US20090112921A1 (en) Managing files using layout storage objects
US9122397B2 (en) Exposing storage resources with differing capabilities
US7836018B2 (en) Simultaneously accessing file objects through web services and file services
US9684659B1 (en) Fine-grain policy-based snapshots
US9462056B1 (en) Policy-based meta-data driven co-location of computation and datasets in the cloud
US7574443B2 (en) Scalable clustered storage system
US7480677B2 (en) System and program for maintaining a namespace of filesets accessible to clients over a network
US8255420B2 (en) Distributed storage
US20210200641A1 (en) Parallel change file tracking in a distributed file server virtual machine (fsvm) architecture
US8977659B2 (en) Distributing files across multiple, permissibly heterogeneous, storage devices
US9817832B1 (en) Unified framework for policy-based metadata-driven storage services
EP3296895B1 (en) File storage
US20120254111A1 (en) Global indexing within an enterprise object store file system
US20060212746A1 (en) Selection of migration methods including partial read restore in distributed storage management
US8627446B1 (en) Federating data between groups of servers
US10579597B1 (en) Data-tiering service with multiple cold tier quality of service levels
US20230056217A1 (en) Failover and failback of distributed file servers
US8082230B1 (en) System and method for mounting a file system on multiple host computers
US9122688B1 (en) Naming scheme for different computer systems
US20240070032A1 (en) Application level to share level replication policy transition for file server disaster recovery systems
US20230056425A1 (en) File server managers and systems for managing virtualized file servers
US20240045774A1 (en) Self-service restore (ssr) snapshot replication with share-level file system disaster recovery on virtualized file servers
US20230237170A1 (en) Consistent access control lists across file servers for local users in a distributed file server environment

Legal Events

Date Code Title Description
AS Assignment

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:OLIVEIRA, FERNANDO;FRIDELLA, STEPHEN;DIMITROV, ROSSEN;AND OTHERS;SIGNING DATES FROM 20080118 TO 20080207;REEL/FRAME:027221/0566

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION