US20160041996A1 - System and method for developing and implementing a migration plan for migrating a file system - Google Patents
- Publication number
- US20160041996A1 (U.S. application Ser. No. 14/456,991)
- Authority
- US
- United States
- Prior art keywords
- file system
- destination
- source
- namespace
- migration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/214—Database migration support (under G06F16/21—Design, administration or maintenance of databases)
- G06F16/119—Details of migration of file systems (under G06F16/11—File system administration, e.g. details of archiving or snapshots; formerly G06F17/30079)
- G06F16/185—Hierarchical storage management [HSM] systems, e.g. file migration or policies thereof (under G06F16/18—File system types; formerly G06F17/30203)
- H—ELECTRICITY; H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/0893—Assignment of logical groups to network elements
- H04L41/0894—Policy-based network configuration management
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Definitions
- Examples described herein relate to network-based file systems, and more specifically, to a system and method for planning and configuring migrations amongst file systems.
- Network-based file systems include distributed file systems which use network protocols to regulate access to data.
- Network File System (NFS) protocol is one example of a protocol for regulating access to data stored with a network-based file system.
- The specification for the NFS protocol has gone through numerous iterations, with recent versions including NFS version 3 (1995) (see, e.g., RFC 1813) and version 4 (2000) (see, e.g., RFC 3010).
- The NFS protocol allows a user on a client terminal to access files over a network in a manner similar to how local files are accessed.
- The NFS protocol uses the Open Network Computing Remote Procedure Call (ONC RPC) to implement various file access operations over a network.
- ONC RPC: Open Network Computing Remote Procedure Call
- SMB: Server Message Block
- AFP: Apple Filing Protocol
- NCP: NetWare Core Protocol
- network-based file systems are implemented through software products, such as operating systems marketed under DATA ONTAP 7G, GX, or 8 (sometimes referred to as “7-MODE”) and CLUSTERED DATA ONTAP (sometimes referred to as “cDOT”), which are manufactured by NETAPP INC.
- ISILON and VNX are manufactured by EMC.
- FIG. 1 illustrates a system for implementing migration of a source file system to a destination file system using a migration plan, according to an embodiment.
- FIG. 2 illustrates a migration planning system for use in migrating file system objects from a source filer to a destination filer, according to an embodiment.
- FIG. 3 illustrates an example method for using a migration plan to migrate a file system onto a destination filer, according to an embodiment.
- FIG. 4 illustrates an example method for developing a migration plan using operator input, according to an embodiment.
- FIG. 5 illustrates an example method for implementing a selective migration of a source filer with alterations to how container objects are migrated, according to an embodiment.
- FIG. 6 illustrates a method for mapping policies of a source filer to that of a destination filer, according to an embodiment.
- FIG. 7 illustrates a method for allocating resources of a destination filer based on a migration plan, according to an embodiment.
- FIG. 8A through 8D illustrate example interfaces that are generated for an operator to enter input for configuring or defining a migration plan, according to one or more embodiments.
- FIG. 9A through FIG. 9E illustrate the provisioning of a destination filer in which changes are made to container types.
- FIG. 10 is a block diagram that illustrates a computer system upon which embodiments described herein may be implemented.
- Embodiments described herein provide for a computer system and method for creating a migration plan for migrating a source file system to a destination file system.
- the migration plan can incorporate operator input for configuring the structure, organization and policy implementation of the destination file system.
- a planner operates to generate a user interface for enabling operator input to specify structure, organization and policy input for file system objects that are to be migrated to the destination filer.
- the migration plan facilitates the migration of file system objects as between source and destination file systems that have different architectures. Furthermore, a migration plan can be created that enables the migration to be optimized for the architecture of the destination file system.
- a migration plan is created that is based at least in part on an operator input.
- the resources of a destination file system are provisioned based on the migration plan.
- One or more processes to migrate the source file system for the provisioned resources of the destination file system are then configured based on the migration plan.
- a migration plan is determined for a file system migration.
- The migration plan maps each of (i) a set of file system objects from the source file system to a corresponding set of file system objects at the destination file system, and (ii) a source policy collection for the set of file system objects to a destination policy collection for the corresponding set of file system objects at the destination file system.
- the migration plan includes a set of parameters that are based at least in part on an operator input.
- the migration of the file system objects can be implemented using the migration plan, with the migration plan specifying, for at least a first object container of the corresponding set of file system objects at the destination file system, at least one of a file contextual path, type, or policy implementation relating to the first object container, that is different as compared to a file path, type or policy implementation of a corresponding source object container of the set of file system objects of the source file system.
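The per-container mapping described above, in which the destination path, container type, or policy implementation may differ from the source, can be sketched as a small data structure. This is a minimal illustration only; the class and field names are assumptions, not identifiers from the described system:

```python
from dataclasses import dataclass, field

@dataclass
class ContainerMapping:
    """Maps one source object container to its destination counterpart.

    Any of the destination path, type, or policies may differ from the
    source (e.g., a q-tree promoted to a volume under a new file path).
    """
    source_path: str
    source_type: str           # e.g., "qtree", "directory", "volume"
    dest_path: str
    dest_type: str
    dest_policies: list = field(default_factory=list)

@dataclass
class MigrationPlan:
    """A migration plan: container mappings plus operator-supplied parameters."""
    mappings: list
    parameters: dict = field(default_factory=dict)

    def changed_containers(self):
        """Return mappings whose path or type differs from the source."""
        return [m for m in self.mappings
                if m.source_path != m.dest_path or m.source_type != m.dest_type]

plan = MigrationPlan(
    mappings=[
        ContainerMapping("/vol/eng/builds", "qtree", "/eng/builds", "volume",
                         dest_policies=["export_ro_lan"]),
        ContainerMapping("/vol/home", "volume", "/vol/home", "volume"),
    ],
    parameters={"convert_qtrees_to_volumes": True},
)
```

Given such a plan, a provisioning step could iterate over `changed_containers()` to see which destination containers need to be created with a different type or location than their source counterparts.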
- a migration plan is determined for the file system migration.
- the migration plan includes a set of parameters and a set of rules, including one or more parameters or rules that are based on an operator input.
- a namespace is determined for the source file system.
- the source namespace identifies a set of file system objects and a file path for each of the file system objects in the set of file system objects.
- a collection of source policies that are implemented for file system objects identified in the namespace of the source file system is also determined.
- the migration plan is used to create a namespace for the destination file system based at least in part on the namespace of the source file system.
- The migration plan is used to determine a destination collection of policies for implementation on the destination file system.
- The destination collection of policies can be based at least in part on the collection of source policies. At least one of the namespace or the collection of destination policies includes an operator-specified configuration that is specified by the one or more rules or parameters of the operator input.
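The derivation of a destination namespace from a source namespace, with operator-specified configuration applied, can be sketched as follows. The function name and the prefix-rewrite rule format are illustrative assumptions; the described system may express operator rules differently:

```python
def build_destination_namespace(source_namespace, rename_rules=None):
    """Derive a destination namespace from a source namespace.

    source_namespace: dict mapping source file path -> object type.
    rename_rules: optional dict of {source path prefix: destination prefix},
    standing in for operator-specified configuration; paths that match no
    rule are carried over unchanged.
    """
    rename_rules = rename_rules or {}
    dest = {}
    for path, obj_type in source_namespace.items():
        new_path = path
        for prefix, replacement in rename_rules.items():
            if path.startswith(prefix):
                new_path = replacement + path[len(prefix):]
                break
        dest[new_path] = obj_type
    return dest

src = {"/vol/eng": "volume", "/vol/eng/builds": "qtree"}
dst = build_destination_namespace(src, rename_rules={"/vol/": "/"})
# dst == {"/eng": "volume", "/eng/builds": "qtree"}
```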
- “Programmatic” means through execution of code, programming or other logic.
- A programmatic action may be performed with software, firmware or hardware, and generally without user intervention, albeit not necessarily automatically, as the action may be manually triggered.
- “Optimize” means to make more optimal, through the use of intelligent considerations, than would otherwise be the case without such considerations.
- The use of the term “optimize” or “optimized”, for example, in reference to a given process or data structure does not necessarily mean a process or structure that is the most optimal or the best.
- The basis for optimization can include performance, efficiency, effectiveness (e.g., elimination of redundancy) and/or cost.
- An “architecture” of a network file system can be characterized by an operating system level component that structures the file system and implements the protocol(s) for operating the file system.
- the operating software of a given architecture structures the namespace and the policies that affect the namespace.
- “Structure” and variants thereof refer to both a format and a logical structure.
- One or more embodiments described herein may be implemented using programmatic elements, often referred to as modules or components, although other names may be used.
- Such programmatic elements may include a program, a subroutine, a portion of a program, or a software component or a hardware component capable of performing one or more stated tasks or functions.
- a module or component can exist in a hardware component independently of other modules/components or a module/component can be a shared element or process of other modules/components, programs or machines.
- a module or component may reside on one machine, such as on a client or on a server, or may alternatively be distributed among multiple machines, such as on multiple clients or server machines.
- Any system described may be implemented in whole or in part on a server, or as part of a network service.
- a system such as described herein may be implemented on a local computer or terminal, in whole or in part.
- implementation of a system may use memory, processors and network resources (including data ports and signal lines (optical, electrical etc.)), unless stated otherwise.
- one or more embodiments described herein may be implemented through the use of instructions that are executable by one or more processors. These instructions may be carried on a non-transitory computer-readable medium.
- Machines shown in figures below provide examples of processing resources and non-transitory computer-readable mediums on which instructions for implementing one or more embodiments can be executed and/or carried.
- a machine shown for one or more embodiments includes processor(s) and various forms of memory for holding data and instructions.
- Examples of computer-readable mediums include permanent memory storage devices, such as hard drives on personal computers or servers.
- Other examples of computer storage mediums include portable storage units, such as CD or DVD units, flash memory (such as carried on many cell phones and tablets) and magnetic memory.
- Computers, terminals, and network-enabled devices are all examples of machines and devices that use processors, memory, and instructions stored on computer-readable mediums.
- FIG. 1 illustrates a system for implementing migration of a source file system (“source filer”) to a destination file system (“destination filer”) using a migration plan, according to an embodiment.
- FIG. 1 enables a migration to be performed in which a resulting destination filer is optimized with respect to characteristics such as organizational structure, policy application, and resource allocation.
- an example of FIG. 1 enables migration of source filer to the destination filer when the source and destination filers operate under different architectures.
- Programmatic components operate to automatically identify and optimize the manner in which file system objects are mapped from a source filer to a destination filer, so as to account for semantic and logical characteristics of the destination filer.
- FIG. 1 enables an operator to further structure and configure the destination filer to implement changes based on need or preference in order to account for features of the destination filer and/or resources of the destination filer.
- a system 10 includes a data migration system 100 and a planner 110 .
- the data migration system 100 and planner 110 operate to migrate file system objects 101 from a source filer 20 to a destination filer 50 .
- the planner 110 operates to (i) discover structure, organization and policies of the source filer 20 from which migration is to be initiated, (ii) provision the destination filer 50 for structure and organization, such as to type and location of object containers, and (iii) implement a policy collection 119 on the destination filer 50 that is based on the discovered policies of the source filer 20 .
- the planner 110 can publish a migration plan for use by the data migration system 100 to perform the migration as session-based tasks.
- The planner 110 provides a mechanism to enable operator input to (i) select portions of the source filer for migration, (ii) configure or alter the provisioning of the destination filer, such as by way of changing container types or file paths, and (iii) alter the destination's policy collection 119 relative to a policy collection 109 of the source filer 20.
- the planner 110 can include rules and other logic to optimize the destination filer 50 .
- the resources of the destination filer can be provisioned in a manner that optimizes implementation of the migrated file system to account for characteristics and functionality of the destination filer 50 , as well as preferences and needs of the operator.
- the source filer 20 and destination filer 50 can optionally operate under different system architectures.
- The planner 110 can provision the destination filer 50 to, for example, change the type of select containers when the migration is performed, in order to leverage additional configurability and functionality available to the particular container type on the architecture of the destination filer 50, and to enable more operator configurability.
- the planner 110 can also define policies of a destination policy collection 119 in order to account for the architecture of the destination filer 50 .
- the planner 110 receives, as input, a namespace 107 and policy collection 109 of the source file system 20 .
- The namespace 107 can be determined from, for example, one or more programmatic resources that perform discovery on the source filer 20.
- the data migration system 100 can include one or more components that implement operations to discover file system objects, including interfacing with log files and other resources of the source file system 20 in order to determine the namespace 107 .
- the planner 110 can execute processes to interface with export files and other policy stores or resources of the source filer 20 in order to determine the source policy collection 109 .
- the planner 110 can also receive operator input 115 (e.g., via a user-interface), which identifies preferences of an operator (e.g., administrator).
- the operator input 115 can include input to (i) select containers or portions of the source filer 20 to migrate, (ii) specify a container type and/or organization for a select or global number of containers in the destination filer 50 , and/or (iii) specify policy configurations and changes for the file system when migrated onto the destination filer 50 .
- the planner 110 can include a default set of rules and parameters 125 , which are selected or otherwise based on the architecture of the source filer 20 and the architecture of the destination filer 50 .
- the default set of rules and parameters 125 can determine how file system objects from the source filer 20 map to the destination filer 50 .
- the default set of rules and parameters 125 can include considerations for optimizing the destination architecture.
- the default set of rules and parameters 125 (i) determine the structure and organization of the migrated file system objects on the destination filer 50 , and (ii) define the policies of the destination policy collection 119 for application on the file system objects of the destination filer 50 .
- the default set of rules and parameters 125 can include logic for provisioning the destination filer 50 and for implementing the migration as between the source filer 20 and the destination filer 50 when the source and destination utilize different architectures.
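One simple way to realize the interaction just described, where defaults chosen for the architecture pair govern the mapping unless operator input overrides them, is a parameter-merge step. This is a sketch under that assumption; the parameter names are invented for illustration:

```python
def resolve_plan_parameters(defaults, operator_input):
    """Merge operator input over architecture-specific defaults.

    Operator-supplied values win; anything the operator leaves as None
    falls back to the default selected for the source/destination
    architecture pair.
    """
    resolved = dict(defaults)
    resolved.update({k: v for k, v in operator_input.items() if v is not None})
    return resolved

# Hypothetical defaults for a 7-MODE -> cDOT migration pair.
defaults = {"container_type_map": {"qtree": "qtree"}, "preserve_paths": True}
# Operator chooses to promote q-trees to volumes, says nothing about paths.
operator = {"container_type_map": {"qtree": "volume"}, "preserve_paths": None}
params = resolve_plan_parameters(defaults, operator)
```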
- planner 110 operates to generate the migration plan 112 , which in turn is used to provision the destination filer 50 .
- the structure and organization of the file system objects in the destination filer 50 can be dictated by the manner in which the destination filer 50 was provisioned through use of the migration plan 112 .
- the policies of the destination policy collection 119 can be formatted and logically structured based on operator input 115 and/or optimization considerations of the destination filer 50 .
- the optimization considerations for the destination policy collection 119 can be based in part on optimization characteristics of the architecture of the destination filer 50 .
- the destination policy collection 119 can differ in format and structure from the corresponding policy collection 109 of the source filer 20 .
- the policy collections 109 , 119 of the respective source and destination filers 20 , 50 can include policy sets for exports, quotas, storage efficiency and backup.
- the planner 110 operates to generate the migration plan 112 based on the default set of rules and parameters 125 and/or the operator input 115 .
- the migration plan 112 is communicated in whole or in part to the data migration system 100 .
- the data migration system 100 uses the migration plan to implement the migration.
- the migration plan 112 is published for the data migration system 100 , and the data migration system replicates file system objects for inclusion in containers that are identified or specified by the migration plan.
- the data migration system 100 can operate on a session-basis that is defined in part by specific containers provisioned on the destination filer 50 .
- the sessions of the data migration system 100 can also be automated and/or sequenced, so that some or all of the migration can be performed in autonomous fashion once the migration plan 112 is published.
- The planner 110 and data migration system 100 combine to allocate aggregates of the source filer 20, as they are migrated, amongst hardware resources in accordance with default settings or preferences of the operator.
- the data migration system 100 can determine the logical interfaces (LIF) for accessing aggregates of the source file system 20 being migrated.
- LIF determination can specify routes and resources for aggregates based on considerations such as expected traffic for aggregates and minimizing communication hops.
- the data migration system 100 can replicate file system objects 101 of the source filer 20 as destination file system objects 121 .
- the data migration system 100 uses information from the migration plan 112 (e.g., destination namespace) in order to replicate file system objects for the containers of the destination filer 50 .
- the replication of the file system objects 101 results in the generation of destination file system objects 121 which are stored in containers of type and organization specified by the migration plan 112 .
- the object containers of the destination file system 50 can include volumes, quota trees (“q-tree”) or directories.
- the object containers map in type and organization to that which is discovered from the source filer 20 , unless operator input 115 is recorded to alter the default set of rules and parameters 125 of the migration plan 112 .
- the operator-input can change both type and relative location of individual containers on the destination filer 50 , as compared to provisioning using the default set of rules and parameters 125 .
- the destination filer 50 is provisioned with the destination namespace, as provided by the migration plan 112 . Additionally, the data migration system 100 uses the object containers of the destination filer 50 to replicate file system objects onto the destination filer.
- file system objects 101 can also be determined on the destination file system by either default or by operator input.
- the containers of the destination filer 50 can be structured and organized to preserve the granularity and organization of the source filer 20 .
- the organizational aspects of the containers in the destination filer 50 can be based on their respective file path, as provided on the source filer 20 .
- the destination filer 50 can be provisioned to generate the containers so that they include equivalent context file paths, meaning that the file paths provided with the container objects of the destination filer 50 include a common segment (which includes the leaf node of the file path) with the file path of the counterpart object container at the source filer 20 .
- The migration plan 112 can use operator input 115 or other input in order to alter the context file paths of the destination containers. For example, when the file path of an object container at the source filer 20 is shown to embed that object container within another container type, operator input 115 can specify that the two containers occupy the same hierarchy level when provisioned on the destination filer 50.
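The two behaviors described above, preserving an equivalent context file path (a common trailing segment that includes the leaf node) versus lifting an embedded container to its parent's level, can be sketched as path transformations. Function names and the two-segment default are illustrative assumptions:

```python
def equivalent_context_path(source_path, dest_root, keep_segments=2):
    """Build a destination path that preserves the trailing segments of
    the source path (including the leaf node), re-rooted under dest_root.
    """
    parts = [p for p in source_path.split("/") if p]
    return dest_root.rstrip("/") + "/" + "/".join(parts[-keep_segments:])

def flatten_embedded(source_path, dest_root):
    """Operator alteration: lift an embedded container so it sits at the
    same hierarchy level as its former parent on the destination."""
    leaf = source_path.rstrip("/").split("/")[-1]
    return dest_root.rstrip("/") + "/" + leaf

p1 = equivalent_context_path("/vol/projects/builds", "/data")
# p1 == "/data/projects/builds"  (common segment "projects/builds" kept)
p2 = flatten_embedded("/vol/projects/builds", "/data")
# p2 == "/data/builds"           (no longer embedded under "projects")
```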
- the data migration system 100 can also implement the policy collection specified by the migration plan 112 using for example, one or more controllers (shown as “controller 58 ”) of the destination filer 50 .
- the controller 58 can represent one or multiple hardware and/or software components for controlling aspects of the destination filer 50 .
- the policy management may be conducted through use of a Storage Virtual Machine (or “SVM”).
- one implementation provides for the migration plan 112 to be communicated to the controller 58 of the destination filer 50 in order to create the policy rules and statements that comprise the policy collection 119 , specified by the migration plan 112 for the destination filer 50 .
- the controller 58 applies the destination policy collection 119 .
- FIG. 2 illustrates a migration planning system for use in migrating file system objects from a source filer to a destination filer, according to an embodiment.
- FIG. 2 provides example components for implementing the planner 110, as also described with an example of FIG. 1. The planner 110 includes a user interface 220, a namespace component 230, a policy component 240 and a device or resource planner 250.
- a walker 210 can operate as part of, or with the planner 110 , in order to determine source information 215 .
- the walker 210 can execute a process that interfaces with various resources of the source filer 20 in determining the source information stores 215 .
- the walker 210 can implement a REST interface to access log or health files, such as generated through “autosupport” (or “ASUP”) mechanisms of ONTAP-based network file systems.
- the walker 210 can utilize log or health files to identify file system objects and structural (e.g. type) or organizational information (e.g., file paths) about the file system objects of the source filer 20 .
- the walker 210 interfaces with policy servers and stores of the source file system 20 in order to determine the policies of the file system objects being migrated.
- the walker 210 includes a process that executes through the planner 110 and executes a Zephyr Application Program Interface (ZAPI) to locate and extract export files of the source filer 20 .
- the export files can identify policies that exist on the source filer for file system objects and portions (e.g., volumes) of the source filer 20 .
- the source information stores 215 can include a source namespace 208 , an export policy store 212 , an efficiency policy store 214 , a snapshot policy store 216 and a quota policy store 218 .
- the namespace 208 identifies file system objects (e.g., by inode) and their respective type, as well as file path information for the objects.
- The information of the namespace is determined primarily from log or health files of the source filer 20.
- Information for the export policy store 212 , efficiency policy store 214 , snapshot policy store 216 and quota policy store 218 can be obtained from, for example, the walker 210 locating and obtaining export files and related data from controllers and policy resources of the source filer 20 .
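The walker's aggregation of source information into per-category stores can be sketched as below. The real discovery interfaces (REST access to ASUP logs, ZAPI extraction of export files) are stubbed out as callables; names and record shapes are assumptions:

```python
def walk_source(fetch_namespace, fetch_exports, fetch_quotas):
    """Aggregate source-filer information into per-category stores.

    The fetch_* callables stand in for the real discovery interfaces
    (e.g., REST access to autosupport health logs for the namespace,
    ZAPI extraction of export files for policies); here they simply
    return parsed records.
    """
    return {
        "namespace": fetch_namespace(),  # {path: {"inode": ..., "type": ...}}
        "exports": fetch_exports(),      # list of export policy statements
        "quotas": fetch_quotas(),        # {path: quota limit}
    }

info = walk_source(
    fetch_namespace=lambda: {"/vol/eng": {"inode": 64, "type": "volume"}},
    fetch_exports=lambda: ["/vol/eng -sec=sys,rw=10.0.0.0/8"],
    fetch_quotas=lambda: {"/vol/eng": "500G"},
)
```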
- the planner 110 includes a plan store 225 for aggregating parameters and data corresponding to a migration plan. Some of the parameters and data of the plan store 225 can be determined by default, based on, for example, the source and destination filers 20 , 50 . Other parameters and data of plan store 225 can be obtained from operator input 223 , provided through, for example, the user interface 220 . In one implementation, the user interface 220 utilizes source information 215 , such as information from the namespace store 208 , in order to generate prompts and contextual information for enabling operator input 223 . The operator input 223 can be stored as parameters in the plan store 225 .
- plan store 225 can be used to (i) generate a destination namespace, (ii) create and configure or modify file system objects (e.g., containers) identified for the destination filer 50 , (iii) create policy statements for application of a policy collection on file system objects at the destination filer 50 (as identified in the destination namespace), and/or (iv) provision the destination filer 50 and/or its resources for migration of file system objects from the source filer 20 .
- file system objects e.g., containers
- the namespace component 230 of the planner 110 receives source namespace information 231 from the namespace store 208 . Additionally, the namespace component 230 receives migration planning parameters 233 from the plan store 225 .
- the namespace component 230 can include programming and logic to enable establishment of a destination namespace 261 for the destination filer 50 using the source namespace information 231 (e.g., namespace entries). In one implementation, the destination namespace 261 is generated for a migration in which the destination filer 50 is of a different architecture than that of the source filer 20 .
- the namespace component 230 can include a format component 232 , container logic 234 and namespace builder 236 .
- the format component 232 implements formatting rules for reformatting namespace entries specified in the source namespace information 231 into namespace entries 235 that have a format of the destination architecture.
- an individual source namespace entry includes a file path and identifier in a first format (e.g., 7-MODE) of the source architecture, and the format component 232 restructures the file path and identifier into a second format of the destination architecture (shown by entries 235 ).
- the reformatting of the file path and identifiers can include both syntax and semantic changes.
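A reformatting step of this kind might, for instance, rewrite a 7-MODE-style "/vol/<volume>/..." path into a destination path rooted at a volume's junction point. The path formats here are illustrative assumptions; actual syntax and semantics vary by architecture and version:

```python
def reformat_entry(source_path, volume_junctions):
    """Reformat a source namespace entry into a destination-style path.

    Assumes (for illustration) that source paths look like 7-MODE
    "/vol/<volume>/<rest>" and that the destination mounts each volume
    at a junction path given by volume_junctions.
    """
    parts = source_path.split("/")
    if len(parts) < 3 or parts[1] != "vol":
        return source_path                       # not a /vol/... path; pass through
    volume, rest = parts[2], parts[3:]
    junction = volume_junctions.get(volume, "/" + volume)
    return "/".join([junction.rstrip("/")] + rest) if rest else junction

new_path = reformat_entry("/vol/eng/builds/x.o", {"eng": "/engineering"})
# new_path == "/engineering/builds/x.o"
```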
- the container logic 234 includes rules and other logic for specifying a change to an object container.
- the container logic 234 can be configured by parameters to change container types of select object containers identified in the source namespace information 231 .
- a container as identified by the source namespace information 231 can be changed in type based on planning parameters 233 , which can be determined by the operator input 223 .
- operator-selected object containers specified in the source namespace information 231 can be converted from q-trees or directories to volumes.
- the parametric configuration of the container logic 234 can be set by default (e.g., container logic 234 keeps original container types) or by operator input 223 .
- the operator input 223 can, for example, specify specific (e.g., container-specific) or global changes to container types (e.g., change some or all q-trees to volumes). Accordingly, in one implementation, parameters that specify conversion of select object containers are received by user input provided through the user interface 220 . Alternatively, the parameters can correspond to a global selection that applies to object containers of a specific type in a particular source.
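The container logic described above, defaults that keep the original type, optionally overridden by a global conversion (e.g., all q-trees become volumes) or by container-specific operator selections, can be sketched as:

```python
def apply_container_logic(containers, global_map=None, per_container=None):
    """Decide the destination type for each source container.

    containers: {path: source container type}. global_map applies a
    blanket conversion (e.g., every "qtree" becomes a "volume");
    per_container overrides win for individually selected paths.
    With neither supplied, original types are kept (the default).
    """
    global_map = global_map or {}
    per_container = per_container or {}
    return {
        path: per_container.get(path, global_map.get(ctype, ctype))
        for path, ctype in containers.items()
    }

dest_types = apply_container_logic(
    {"/vol/a/q1": "qtree", "/vol/a/q2": "qtree", "/vol/b": "volume"},
    global_map={"qtree": "volume"},
    per_container={"/vol/a/q2": "qtree"},  # operator keeps this one a q-tree
)
```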
- the namespace builder 238 receives (i) reformatted namespace entries 235 provided by the format component 232 , and (ii) identification of object containers and type from the container logic 234 .
- the reformatted namespace entries and object containers can be determined in part from the source namespace information 231 , using the namespace migration parameters 233 as provided from the plan store 225 .
- the namespace builder 238 can generate a destination namespace 261 .
- the namespace builder 238 can also include logic to optimize the destination namespace 261 by structure or formatting.
- the destination namespace 261 can be used to provision the destination filer 50 . Additionally, the destination namespace 261 can configure the data migration system 100 in populating the containers of type and organization specified in the destination namespace 261 .
- the policy component 240 of the planner 110 includes a format component 242 , a logical component 244 and a policy mapper 246 .
- the policy component 240 receives policy information 243 from the source information 215 .
- the policy component 240 receives planning parameters 241 from the plan store 225 .
- the planning parameters 241 can be based on user input 223 , received through the user interface 220 .
- the planning parameters 241 can correspond to a default set of parameters.
- the policy component 240 generates policy collections 263 of different types for the destination filer 50 .
- the policy collections 263 can include export policies, efficiency policies, snapshot policies and quota policies.
- the architecture of the destination filer 50 can require different formatting and logical structures for each kind of policy. For some types of policies, the format and logical structure as between the source and destination architectures may be the same, and for other policy types, the format and logical structure may be different.
- the policy format component 242 operates to restructure and format policies 243 as provided from the source information 215 .
- the policies 243 can be reformatted or structured for syntax and semantic structure.
- the logical component 244 can apply cluster or group operations on select kinds of policy collections in order to determine logically equivalent policy sets.
- the logical component 244 can include logic to combine, for example, equivalent or identical policies, thereby reducing the policy count for the destination filer 50 .
- a cluster of export policy statements can be represented by a single policy statement in the corresponding policy collection 263 .
- the policy mapper 246 can store the policy collections 263 on the destination filer 50 when, for example, the destination filer 50 is provisioned.
- the resource planner 250 can plan for hardware and resource allocation of the destination filer 50 , given resource information 255 and operator-specified allocation parameters 251 .
- the resource information 255 can include usage information, which can be determined or estimated from, for example, the operator for aggregates of the file system being migrated.
- the resource information 255 can be determined from health logs of the source filer 20 or from manual input (e.g., operator identifies aggregates of the source filer with the most traffic or least traffic).
- the allocation parameters 251 can include parameters specified by the operator via the user interface 220 (operator input 223 ).
- the allocation parameters 251 can include default parameters or rules which can be provided by the plan store 225 .
- the resource planner 250 can generate resource allocation data 265 (e.g., instructions, parameters) to allocate and configure hardware and logical resources on the destination filer 50 .
- the allocation and configuration of hardware resources can be to optimize performance and balance loads.
- the resource allocation data 265 can assign specific objects to aggregates of the file system being migrated based on expected traffic. For example, those objects on the source filer 20 with the most traffic can be assigned to high-performance resources when migrated to the destination filer 50 , while other objects that had a history of less demand can receive lower-cost resources on the destination filer 50 .
- the allocation instructions 265 can also generate a topology that identifies the location of aggregates on nodes of the destination filer 50 .
- the topology can specify hardware driven paths to aggregates of the destination filer, which can be determined by the presence of logical interfaces (LIFs) amongst the nodes of the destination architecture.
- each LIF on the destination filer 50 provides an address (e.g., IP address) to a physical port, and the allocation instructions 265 can select paths amongst physical ports of the destination filer through LIF selection.
- the network paths can be shortened (e.g., fewer hops), for file system objects and containers that are most heavily used.
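- The traffic-driven allocation described above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the described system; the aggregate names and the `allocate_resources` function are hypothetical:

```python
# Illustrative sketch: rank objects by expected traffic and assign the
# busiest ones to high-performance aggregates, the rest to lower-cost
# aggregates. All names here are hypothetical.

def allocate_resources(objects, high_perf_count):
    """objects: list of dicts with 'name' and 'traffic' (expected usage);
    high_perf_count: how many objects get high-performance resources."""
    ranked = sorted(objects, key=lambda o: o["traffic"], reverse=True)
    plan = {}
    for i, obj in enumerate(ranked):
        # busiest objects first; the cutoff index decides the resource tier
        plan[obj["name"]] = "aggr_high_perf" if i < high_perf_count else "aggr_low_cost"
    return plan

objects = [
    {"name": "vol01", "traffic": 900},
    {"name": "vol02", "traffic": 120},
    {"name": "vol03", "traffic": 450},
]
plan = allocate_resources(objects, high_perf_count=1)
```

The traffic figures would in practice come from health logs or operator input, as noted above.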
- the planner 110 operates to provision and ready the destination filer 50 , and to configure the data migration system 100 .
- the destination filer 50 is intelligently configured when put in use after migration, with minimal effort from the operator.
- the data migration system 100 can operate asynchronously and seamlessly, so as to support multiple clients that actively use file system objects being migrated while those file system objects are replicated onto the destination filer 50 .
- the data migration system 100 can be implemented in accordance with migration systems such as described with U.S. patent application Ser. Nos. 14/011,699, 14/011,696, 14/011,719, 14/011,718 and 14/011,723; all of the aforementioned applications being owned by the assignee of this application and further being incorporated by reference in their entirety.
- FIG. 3 illustrates an example method for using a migration plan to migrate a file system onto a destination filer, according to an embodiment.
- FIG. 4 illustrates an example method for developing a migration plan using operator input, according to an embodiment.
- FIG. 5 illustrates an example method for implementing a selective migration of a source filer with alterations to how container objects are migrated, according to an embodiment.
- FIG. 6 illustrates a method for mapping policies of a source filer to that of a destination filer, according to an embodiment.
- FIG. 7 illustrates a method for allocating resources of a destination filer based on a migration plan, according to an embodiment. Methods such as described with examples of FIG. 3 through FIG. 7 can be implemented using, for example, a system such as described with examples of FIG. 1 and FIG. 2 .
- FIG. 3 through FIG. 7 can be implemented in context such as described with FIG. 1 or FIG. 2 , including an example in which a source and destination filer operate under different architectures.
- In describing the examples of FIG. 3 through FIG. 7 , reference may be made to examples of FIG. 1 or FIG. 2 for purposes of illustrating a suitable component for performing a step or sub-step being described.
- a migration plan is determined ( 310 ).
- the planner 110 can generate the migration plan 112 using a combination of default rules and logic, as well as operator-specified input.
- the migration plan 112 can also be specific to the architecture type of the source and destination filers 20 , 50 , to enable the migration to occur between architectures of different types.
- the planner 110 includes a user interface 220 that prompts for and receives operator input ( 312 ).
- FIG. 8A through FIG. 8D illustrate examples of a user interface that can enable the operator to specify planner input that is object-specific or global.
- the planner 110 can utilize default rules and parameters for the migration plan 112 , which can be specific to the destination filer 50 and/or the conversions needed as between the source and destination architectures ( 314 ). The operator input can change the default rules and parameters.
- the migration plan 112 can then be used to implement the migration from the source filer 20 to the destination filer 50 ( 320 ).
- the migration plan 112 is used to provision the destination filer 50 , and the data migration system 100 populates the destination filer 50 based on the provisioning ( 322 ).
- the migration plan can define structures and organization for containers at the destination filer ( 324 ).
- the type of container can be changed (e.g., from q-tree to volume or directory).
- the context portion of file path of the containers at the destination filer 50 can be modified based on the settings of the migration plan 112 .
- an operator can promote a q-tree to a volume, then move the promoted volume relative to other containers.
- the migration plan can generate policy definitions for policy collections of the destination filer ( 326 ).
- the policy component 240 can generate policy collections 263 which provide for intelligently grouping or clustering policy entries of the source filer into policy definitions of the destination filer 50 .
- the migration plan 112 can be published to the data migration system 100 , so as to configure operations of the data migration system ( 328 ).
- the data migration system 100 can create tasks which serve to populate containers of the destination namespace (as provided with the migration plan 112 ) with file system objects of the source filer 20 .
- the implementation of the task approach enables the data migration system 100 to access containers that have been modified by type and path when replicating the file system objects of those containers.
- the migration plan enables the data migration system 100 to automate, time and/or sequence (including implement in parallel or back-to-back) initiation of tasks.
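- The task creation and sequencing described above can be sketched as follows. This is an illustrative Python sketch only; the functions `plan_tasks` and `run_sequenced` are hypothetical and stand in for the data migration system's task machinery:

```python
# Illustrative sketch: one task per destination container; tasks are then
# initiated in sequence (back-to-back). Names here are hypothetical.

def plan_tasks(destination_namespace):
    """Create one replication task per container of the destination
    namespace, each of which will populate that container with file
    system objects from the source."""
    return [{"container": e["path"], "state": "queued"} for e in destination_namespace]

def run_sequenced(tasks):
    """Run tasks back-to-back; a real system could also dispatch some
    tasks in parallel, as the migration plan specifies."""
    order = []
    for task in tasks:
        task["state"] = "done"
        order.append(task["container"])
    return order

namespace = [{"path": "/vol/vol05/00"}, {"path": "/vol/vol05/01"}]
tasks = plan_tasks(namespace)
order = run_sequenced(tasks)
```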
- one or more discovery processes are implemented to discover a namespace of the source filer and the policy collections of the source namespace ( 410 ).
- the planner 110 can implement one or more walker processes 210 that can be utilized in the discovery process.
- the discovery process can identify the source namespace and the policy collections that are implemented on the source namespace.
- the planner can provide a user interface for the operator to specify input that configures the migration plan 112 ( 420 ).
- information from the source namespace is used to prompt the operator for intelligent input that configures the migration plan 112 .
- a user interface is generated to enable the user to provide input for creating and configuring the migration plan 112 .
- the user interface can be generated from the source namespace, which can be discovered through one or more walker processes 210 .
- the user interface can display container objects (e.g., volumes, q-trees and directories) from the information of the source namespace.
- One embodiment enables the user to select through the user interface a specific container from the source namespace ( 422 ).
- the selection can correspond to a juncture within the namespace, and serves to make the migration of the source filer selective. With the selection, the migration may be limited to file system objects that are embedded within the selected container.
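- The selective migration by juncture described above can be sketched as a subtree filter. This is an illustrative Python sketch only; the function `select_subtree` is hypothetical:

```python
# Illustrative sketch: limit the migration to file system objects that
# are embedded within a selected container (juncture). The name
# select_subtree is hypothetical.

def select_subtree(entries, juncture):
    """entries: list of file paths from the source namespace;
    juncture: the selected container's path."""
    prefix = juncture.rstrip("/") + "/"
    # keep the juncture itself plus everything embedded beneath it;
    # the trailing "/" avoids matching siblings like /vol/vol10
    return [e for e in entries if e == juncture or e.startswith(prefix)]

entries = ["/vol/vol1", "/vol/vol1/q0", "/vol/vol10", "/vol/vol2"]
selected = select_subtree(entries, "/vol/vol1")
```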
- the operator can also provide exclusion input through the interface, which identifies file system objects that the operator does not want to migrate.
- the operator can provide input that specifies container input, and specifically input to change an existing container type to a different container type ( 424 ).
- the user can use the user interface to provide input that promotes the q-tree to a volume.
- the cDOT architecture permits specification of, for example, select policies.
- the operator can specify input that changes the path of a given object container or other file system object when migration is performed ( 426 ).
- the user interface can generate an organization that reflects the organization and structure of the source namespace. Absent input from the user, the structure and organization of the source namespace is replicated for the destination namespace, meaning the object containers identified in the destination namespace include file path segments (context file paths) which are the same as corresponding containers of the source namespace. With input, the file paths of containers or other objects in the destination namespace 261 can be modified as compared to file paths that would otherwise be used with the default settings (e.g., to create one-to-one mapping with the source namespace).
- the operator can elect to un-embed a volume that has been converted from a q-tree, in which case the corresponding portion of the destination namespace 261 (i.e., the context file path) is modified accordingly.
- policies can be displayed in interfaces that reflect existing source policy collections, and the interface can include rules for enabling the user to specify or configure policies differently for the destination namespace.
- an embedded q-tree can be associated with the security policy of the volume in its junction.
- the operator can provide input to promote the q-tree and further a new security policy that is specific for the promoted volume.
- the migration plan is determined ( 430 ). At least a portion of the migration plan 112 can be based on default parameters ( 432 ).
- the default parameters can select to migrate file system objects with the same structure, organization, and granularity.
- the policies can be replicated from the source filer 20 to the destination filer 50 with the default parameters set to achieve the same granularity (e.g., 1:1 mapping), in terms of logical equivalence, as that provided with the source filer 20 .
- the operator input can be received to alter the default parameters, thereby enabling the migration plan 112 to be configured per the preferences and needs of the operator ( 434 ).
- the alteration to the migration plan 112 can select junctures or containers for migration, specify alternative container types and paths for migrated containers. Additionally, the alteration to the migration plan 112 can select or modify policies from the default setting. In particular, the operator can specify container-specific policies with ability to change container types and position in order to achieve desired policies for select containers of the file system.
- the migration plan 112 can be used to provision the destination filer 50 and configure the data migration system 100 ( 440 ).
- a destination namespace 261 is generated and used for provisioning the destination filer 50 ( 442 ).
- the destination namespace can define the structure and organization of the destination filer 50 .
- policy collections can be implemented on the policy server and resources of the destination filer 50 based on default settings and/or operator-specified parameters
- the data migration system 100 can be configured to execute sessions where junctures or other portions of the source filer 20 are replicated on the destination filer 50 to populate the corresponding containers (which may have been modified by file path or type) ( 450 ). Additionally the data migration system 100 can be configured by the migration plan to sequence and queue the sessions automatically so that the destination filer is populated by junctures over time.
- the source namespace is discovered ( 510 ).
- one or more walker processes 210 can discover the containers and junctures of the source filer 20 , from which the remainder of the source namespace is determined.
- the migration plan 112 can be created in part by mapping the containers of the source namespace to a destination namespace ( 520 ) that is under development.
- the planner 110 can create the migration plan with default settings that create the destination namespace using the same or equivalent container structure, organization and granularity as present with the source namespace. For example, the planner 110 can implement the migration with settings that, by default, substantially replicate the source namespace as the destination namespace.
- the planner 110 can generate the migration plan 112 to include default settings that optimize the destination namespace ( 530 ). For example, containers on the source namespace can be combined or consolidated (e.g., multiple q-trees can be consolidated into one volume) ( 532 ). Additionally, the planner 110 can include operator settings to enable greater adjustments to the migration plan 112 , such as changes to container types and file paths ( 534 ).
- the collection of policies on the source filer 20 can be identified in the context of the source namespace ( 610 ).
- one or more walker processes 210 can be implemented to determine various kinds of policies for the source namespace, including export policy, efficiency policy, snapshot policy and quota policy, as well as other policies (e.g., security).
- the policy component 240 of the planner 110 can map the collection of policies to the destination namespace using default rules and/or operator-specified parameters ( 620 ). For example, under default, the collection of policies for the source namespace can be formatted and converted into logically equivalent policy entries, which are subsequently implemented on the destination namespace.
- the default parameters can be set to achieve the same granularity (e.g., 1:1 mapping), in terms of logical equivalence, as that provided with the source filer 20 .
- the operator input can permit configurations, selections and other changes to the policy collection, as provided by other examples described herein.
- the policy component 240 can implement one or more optimization processes in order to configure the manner in which policies are stated and implemented, given, for example, the architecture of the destination filer 50 ( 630 ).
- the optimization can include de-duplicating policy entries present for the source filer 20 in order to create an equivalent set of policy statements ( 632 ).
- the de-duplication can be based on the architecture of the destination filer, which can differ by format, structure and syntax as to how policy entries are applied to defined file system objects.
- some types of policy entries from the source namespace can be grouped as a result of the destination architecture permitting (or requiring) logically different structures to the policy entries ( 634 ).
- the policy entries of the source filer can be restated, individually or by group, using policy entries of the destination architecture which are logically equivalent ( 636 ).
- the following examples illustrate how policy entries for a destination collection can be reformatted or structured by the planner 110 in order to optimize a corresponding subset of policy collections.
- the following example illustrates two export policy entries for q-trees in 7-MODE, in which the same policy is applied to two different q-trees.
- a policy definition in cDOT consists of defining the policy (first line in following paragraph) and then adding in the access rules (last 2 lines). Accordingly, when converting to cDOT, the policy component 240 of the planner 110 can recognize the application of identical policies to different containers, and further carry application of the policy to corresponding volumes (if the q-trees are promoted). This results in the creation of one policy on the destination that applies to both volumes.
- export-policy create -policyname secpol-1 -vserver ausvs
- export-policy rule create -policyname secpol-1 -clientmatch .mobstor.sp2.
- export-policy rule create -policyname secpol-1 -clientmatch .ymdb.sp2.
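- The recognition of identical policies described above can be sketched as follows. This is an illustrative Python sketch only, not the patented implementation; the container names and the function `dedupe_export_policies` are hypothetical, while the generated command strings follow the cDOT example shown above:

```python
# Illustrative sketch: group containers sharing an identical rule set
# under a single cDOT export policy, emitting one policy definition plus
# its access rules. Function and container names are hypothetical.

def dedupe_export_policies(container_rules):
    """container_rules: dict mapping container name -> tuple of
    client-match rules. Returns (commands, policy_of_container)."""
    policies = {}    # rule set -> generated policy name
    assignment = {}  # container -> policy name
    commands = []
    for container, rules in sorted(container_rules.items()):
        key = tuple(sorted(rules))
        if key not in policies:
            name = "secpol-%d" % (len(policies) + 1)
            policies[key] = name
            commands.append("export-policy create -policyname %s -vserver ausvs" % name)
            for clientmatch in key:  # one rule entry per client match
                commands.append(
                    "export-policy rule create -policyname %s -clientmatch %s"
                    % (name, clientmatch))
        assignment[container] = policies[key]
    return commands, assignment

rules = {
    "vol05-00": (".mobstor.sp2.", ".ymdb.sp2."),
    "vol05-01": (".mobstor.sp2.", ".ymdb.sp2."),
}
commands, assignment = dedupe_export_policies(rules)
```

As in the example above, two containers with identical rules yield a single policy (one `create` plus two `rule create` entries) that applies to both.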
- An example of another type of policy collection is snapshot policies.
- In 7-MODE, policy entries for snapshot scheduling are defined per volume.
- In cDOT, a single policy entry for snapshot scheduling can be applied to all volumes that are to implement the policy, and the policy entry can carry additional information for implementing the policy on those volumes.
- the planner 110 can generate the destination policy collection for such policies with logical equivalences that are more optimal for the destination architecture:
- snapshot policies as between 7-MODE and cDOT can differ by syntax, rather than application.
- a snapshot reserve policy can differ by syntax but not logic.
- Storage efficiency policies are generally governed by a policy that is applied to a volume.
- a storage efficiency policy can include a deduplication process that is applied to a volume on a scheduled basis.
- the policy component 240 can specify a job that performs the efficiency task on a schedule, and then assign the job to a particular volume.
- Quota policy as between 7-MODE and cDOT provides an example in which the mapping onto the destination filer 50 requires policy generation and considerations which are not present for the source filer 20 .
- In 7-MODE, for example, a new default user quota is derived on a per-volume basis.
- In cDOT, the quota policy is a collection of quota rules for all volumes of a virtual server.
- the SVM can have 5 policies, of which 1 is active, and the activation of the quota policy is on a volume-by-volume basis.
- the policy component 240 can generate the policy collection for quotas on the destination filer 50 by (i) generating additional policies, (ii) specifying values for the generated policies, and (iii) selectively activating a policy.
- the steps of generating quota policy for the destination filer in this manner can be performed programmatically and/or with user input (such as activation parameter or quota value).
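- The quota policy generation steps (i) through (iii) above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the described system; the policy names and the function `build_quota_collection` are hypothetical:

```python
# Illustrative sketch: fold per-volume 7-MODE quota values into an
# SVM-level cDOT-style quota policy collection, generating the policies,
# specifying their values, and selectively activating one of them.

def build_quota_collection(volume_quotas):
    """volume_quotas: dict mapping volume name -> quota value string."""
    active = {
        "name": "qpol-active",
        "rules": [{"volume": v, "limit": limit}
                  for v, limit in sorted(volume_quotas.items())],
        "active": True,   # selectively activated policy
    }
    # additional generated policy, held inactive (an SVM may hold several
    # policies of which only one is active)
    spare = {"name": "qpol-spare", "rules": [], "active": False}
    return [active, spare]

volume_quotas = {"vol05-00": "10GB", "vol05-01": "20GB"}
collection = build_quota_collection(volume_quotas)
```

A real implementation would also accept operator input for activation parameters and quota values, as noted above.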
- a migration plan 112 is determined ( 710 ), and the migration plan is used to determine the destination namespace ( 720 ). Additionally, the resources of the destination filer 50 can be determined through, for example, operator input ( 730 ). The identified resources can include the tier or level of hardware components for implementing the storage environment, as well as the arrangement of logical interfaces (LIFs) for physical ports that interconnect the hardware resources.
- the destination namespace can be a basis for allocating the identified resources for implementing the destination filer ( 740 ).
- the resource selection can be based on operator input ( 742 ), and/or based on optimization parameters ( 744 ).
- the operator provides input that identifies the aggregates that are to have, for example, the highest or lowest performance, based on expected traffic or usage ( 746 ).
- the determination can be made programmatically by resource planner 250 , which can access, for example, health logs or usage data in the source information 215 .
- those volumes or junctures that have highest use can be interconnected across resources with relatively shortened paths based on LIF alignment ( 748 ).
- an active portion of a namespace can be located on hardware resources that are co-located or separated by a single LIF distance.
- FIG. 8A through 8D illustrate example interfaces that are generated for an operator to enter input for configuring or defining a migration plan, according to one or more embodiments.
- the interfaces provided with examples of FIG. 8A through FIG. 8D can be generated by, for example, the user interface 220 of the planner 110 (as shown in FIG. 2 ).
- the interfaces of FIG. 8A through FIG. 8D can be created using the source namespace, which can be discovered using walker processes 210 .
- FIG. 8A illustrates an example interface 810 that enables the operator to specify global parameters for configuring the migration plan.
- the global parameters can, for example, enable the operator to globally specify conversions of containers by type, and also to specify which aggregate promoted volumes should occupy (e.g., same).
- the example assumes that, for example, a q-tree that is being promoted to a volume may need a different aggregate as it is likely to become more significant.
- a pull down menu can be generated which identifies the conversions possible as between the source and destination architecture.
- FIG. 8B illustrates an example interface 820 that enables an operator to select the volumes or portions of the source filer 20 that are to be migrated.
- the interface 820 can be generated in part based on the source namespace, so as to identify actual containers of the source namespace, including the organization of the containers on the source filer 20 .
- the interface of FIG. 8B enables the operator to specify selective migration, based on knowledge of the source namespace.
- the planner 110 can also incorporate migration of multiple source filers into one destination filer, and an example interface of FIG. 8B enables the operator to specify portions of multiple sources to aggregate.
- FIG. 8C illustrates an interface 830 that illustrates how objects of the source namespace map to objects of the destination namespace, based on global policies (which can be selected or set by default).
- FIG. 8D illustrates an interface 840 for enabling the user to view a destination namespace. Based on the interfaces 830 , 840 , the user can select policies and/or provide input parameters.
- FIG. 9A through FIG. 9E illustrate a representative flow for the provisioning of a destination filer in which changes are made to container types.
- a 7-MODE source namespace is identified which includes four volumes under a root volume 902 , and each volume includes two embedded q-trees (not shown).
- the example assumes that the destination namespace is for cDOT implementation that promotes the q-trees to volumes.
- the provisioning of the destination filer 50 includes generating a placeholder volume 904 under the root volume 902 , which in FIG. 9A can be implemented using the destination namespace entry:
- the volumes 906 which in the source namespace were previously mounted under the root node 902 , are mounted under the placeholder volume using the following namespace entries:
- vol-pol
- volume create vol05-00 -aggregate n02_sata01 -size 100GB -vserver ausvs -autosize true -junction-path /vol/vol05/00 -policy vol-pol
- volume create vol05-01 -aggregate n02_sata01 -size 100GB -vserver ausvs -autosize true -junction-path /vol/vol05/01 -policy vol-pol
- FIG. 9E shows the result. Where the source namespace included 4 volumes with 8 embedded q-trees, in the destination namespace the q-trees are now promoted to volumes 908 , with the addition of the placeholder volume.
- the example provided converts the source namespace with 12 containers (4 volumes and 8 q-trees) into a destination namespace with 13 containers (all volumes).
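- The container arithmetic of this example can be checked with a short sketch. This is illustrative only; the function `container_counts` is hypothetical:

```python
# Illustrative sketch: count containers before and after q-tree
# promotion. Source: volumes plus embedded q-trees. Destination: every
# q-tree becomes a volume, plus one placeholder volume under the root.

def container_counts(num_volumes, qtrees_per_volume):
    source = num_volumes + num_volumes * qtrees_per_volume
    destination = source + 1  # promotion preserves the count; add placeholder
    return source, destination

# 4 volumes, each with 2 embedded q-trees, as in FIG. 9A through FIG. 9E
src_count, dst_count = container_counts(4, 2)
```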
- FIG. 10 is a block diagram that illustrates a computer system upon which embodiments described herein may be implemented.
- planner 110 may be implemented using one or more computer systems such as described by FIG. 10 .
- methods such as described with FIG. 3 through FIG. 7 can be implemented using a computer such as described with an example of FIG. 10 .
- computer system 1000 includes processor 1004 , memory 1006 (including non-transitory memory), and a communication interface 1018 .
- Computer system 1000 includes at least one processor 1004 for processing information.
- Computer system 1000 also includes a memory 1006 , such as a random access memory (RAM) or other dynamic storage device, for storing information and instructions to be executed by processor 1004 .
- the memory 1006 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1004 .
- Computer system 1000 may also include a read only memory (ROM) or other static storage device for storing static information and instructions for processor 1004 .
- a storage device 1010 such as a magnetic disk or optical disk, is provided for storing information and instructions.
- the communication interface 1018 may enable the computer system 1000 to communicate with one or more networks through use of the network link 1020 (wireless or wireline).
- memory 1006 may store instructions for implementing functionality such as described with an example of FIG. 1 or FIG. 2 , or implemented through an example method such as described with FIG. 3 through FIG. 7 .
- the processor 1004 may execute the instructions in providing functionality as described with FIG. 1 or FIG. 2 , or performing operations as described with an example method of FIG. 3 through FIG. 7 .
- Embodiments described herein are related to the use of computer system 1000 for implementing the techniques described herein. According to one embodiment, those techniques are performed by computer system 1000 in response to processor 1004 executing one or more sequences of one or more instructions contained in the memory 1006 . Such instructions may be read into memory 1006 from another machine-readable medium, such as storage device 1010 . Execution of the sequences of instructions contained in memory 1006 causes processor 1004 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement embodiments described herein. Thus, embodiments described are not limited to any specific combination of hardware circuitry and software.
Abstract
A migration plan is used to create a namespace for the destination file system based at least in part on a namespace of the source file system. The migration plan is also used to determine a destination collection of policies for implementation on the destination file system. At least one of the namespace or the collection of destination policies include an operator-specific configuration that is specified by the one or more rules or parameters of the operator input.
Description
- Examples described herein relate to network-based file systems, and more specifically, to a system and method for planning and configuring migrations amongst file systems.
- Network-based file systems include distributed file systems which use network protocols to regulate access to data. Network File System (NFS) protocol is one example of a protocol for regulating access to data stored with a network-based file system. The specification for the NFS protocol has had numerous iterations, with recent versions NFS version 3 (1995) (See e.g., RFC 1813) and version 4 (2000) (See e.g., RFC 3010). In general terms, the NFS protocol allows a user on a client terminal to access files over a network in a manner similar to how local files are accessed. The NFS protocol uses the Open Network Computing Remote Procedure Call (ONC RPC) to implement various file access operations over a network.
- Other examples of remote file access protocols for use with network-based file systems include the Server Message Block (SMB), Apple Filing Protocol (AFP), and NetWare Core Protocol (NCP). Generally, such protocols support synchronous message-based communications amongst programmatic components.
- Commercially available network-based file systems are implemented through software products, such as operating systems marketed under DATA ONTAP 7G, GX, or 8 (sometimes referred to as “7-MODE”) and CLUSTERED DATA ONTAP (sometimes referred to as “cDOT”), which are manufactured by NETAPP INC. Other commercial examples of network based file systems include ISILON and VNX, which are manufactured by EMC.
- With changes to business needs and development in technology, enterprise operators sometimes elect to change their network file system. For example, an operator may elect to do a technical refresh of hardware resources on a network file system, and in the process, elect to implement a different file system architecture for the updated hardware resources. Under conventional approaches, tools are available to assist an operator of a network file system in migrating to a new network file system architecture. Many approaches require the source file system to incur some downtime as the clients are redirected to the new file system. Typically, file system migration amongst file systems of different architectures is very labor-intensive, as administrators are required to implement various tasks and processes in order to preserve the original namespace and the various policies (including export policies) in the new destination.
-
FIG. 1 illustrates a system for implementing migration of a source file system to a destination file system using a migration plan, according to an embodiment. -
FIG. 2 illustrates a migration planning system for use in migrating file system objects from a source filer to a destination filer, according to an embodiment. -
FIG. 3 illustrates an example method for using a migration plan to migrate a file system onto a destination filer, according to an embodiment. -
FIG. 4 illustrates an example method for developing a migration plan using operator input, according to an embodiment. -
FIG. 5 illustrates an example method for implementing a selective migration of a source filer with alterations to how container objects are migrated, according to an embodiment. -
FIG. 6 illustrates a method for mapping policies of a source filer to that of a destination filer, according to an embodiment. -
FIG. 7 illustrates a method for allocating resources of a destination filer based on a migration plan, according to an embodiment. -
FIG. 8A through 8D illustrate example interfaces that are generated for an operator to enter input for configuring or defining a migration plan, according to one or more embodiments. -
FIG. 9A through FIG. 9E illustrate the provisioning of a destination filer in which changes are made to container types. -
FIG. 10 is a block diagram that illustrates a computer system upon which embodiments described herein may be implemented. - Embodiments described herein provide for a computer system and method for creating a migration plan for migrating a source file system to a destination file system.
- Among other uses, the migration plan can incorporate operator input for configuring the structure, organization and policy implementation of the destination file system. In some embodiments, a planner operates to generate a user interface for enabling operator input to specify structure, organization and policy input for file system objects that are to be migrated to the destination filer.
- Still further, in some embodiments, the migration plan facilitates the migration of file system objects as between source and destination file systems that have different architectures. Furthermore, a migration plan can be created that enables the migration to be optimized for the architecture of the destination file system.
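Where the destination architecture offers richer configurability for certain container types, a migration plan may convert containers during migration (e.g., promoting q-trees to volumes). The sketch below is one hedged illustration of how such a conversion decision might be parameterized; the function name, parameter keys, and container representation are illustrative assumptions, not part of the embodiments described herein.

```python
def resolve_container_type(container, params):
    """Decide the destination type of a source container.

    Operator input may name specific containers to convert
    ('container_overrides'), or set a global rule mapping one
    container type to another ('global_type_map'); absent either,
    the original type is preserved. All keys are hypothetical.
    """
    # A per-container override takes precedence over any global rule.
    overrides = params.get("container_overrides", {})   # name -> type
    if container["name"] in overrides:
        return overrides[container["name"]]
    # Otherwise apply a global type mapping, if one was specified.
    global_map = params.get("global_type_map", {})      # type -> type
    return global_map.get(container["type"], container["type"])
```

For example, an operator-specified global rule such as `{"global_type_map": {"qtree": "volume"}}` would convert every q-tree to a volume, while a per-container override leaves the remaining containers untouched.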
- According to one aspect, a migration plan is created that is based at least in part on an operator input. The resources of a destination file system are provisioned based on the migration plan. One or more processes to migrate the source file system for the provisioned resources of the destination file system are then configured based on the migration plan.
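One plausible reading of this aspect is that the plan's parameters begin from defaults keyed to the source/destination architecture pair and are then overridden by operator input. The sketch below assumes a simple dictionary merge; the rule names and architecture labels are invented for illustration and do not reflect any actual implementation.

```python
# Hypothetical defaults keyed by (source architecture, destination
# architecture). Rule names are illustrative assumptions only.
DEFAULT_RULES = {
    ("7-MODE", "cDOT"): {
        "preserve_container_types": True,   # e.g., keep q-trees as q-trees
        "preserve_file_paths": True,        # keep context file paths intact
        "dedupe_export_policies": True,     # cluster equivalent policies
    },
}

def build_plan_parameters(source_arch, dest_arch, operator_input=None):
    """Merge architecture-specific defaults with operator overrides.

    Operator input wins over defaults, mirroring the notion that the
    plan is based at least in part on operator input.
    """
    params = dict(DEFAULT_RULES.get((source_arch, dest_arch), {}))
    params.update(operator_input or {})
    return params
```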
- In still another embodiment, a migration plan is determined for a file system migration. The migration plan maps each of (i) a set of file system objects from the source file system to a corresponding set of file system objects at the destination file system, and (ii) a source policy collection for the set of file system objects to a destination policy collection for the corresponding set of file system objects at the destination file system. Additionally, the migration plan includes a set of parameters that are based at least in part on an operator input. The migration of the file system objects can be implemented using the migration plan, with the migration plan specifying, for at least a first object container of the corresponding set of file system objects at the destination file system, at least one of a contextual file path, type, or policy implementation relating to the first object container that is different as compared to a file path, type or policy implementation of a corresponding source object container of the set of file system objects of the source file system.
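As a hedged sketch of one part of this mapping: a destination file path might preserve the contextual (trailing) segment of the source path while changing the container's location, such as when an embedded container is promoted to a higher hierarchy level. The function and its arguments are illustrative assumptions, not a prescribed interface.

```python
def destination_path(source_path, promote_to_parent=None):
    """Compute a destination file path that preserves the trailing
    ("context") segment of the source path.

    If promote_to_parent is given, the container is re-homed under
    that parent instead of its original one (e.g., a container
    promoted out of its enclosing container). Illustrative only.
    """
    # The leaf segment is the part of the context that must survive.
    leaf = source_path.rstrip("/").split("/")[-1]
    if promote_to_parent is not None:
        return promote_to_parent.rstrip("/") + "/" + leaf
    # Default: preserve the full source path unchanged.
    return source_path
```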
- In still another embodiment, a migration plan is determined for the file system migration. The migration plan includes a set of parameters and a set of rules, including one or more parameters or rules that are based on an operator input. A namespace is determined for the source file system. The source namespace identifies a set of file system objects and a file path for each of the file system objects in the set of file system objects. A collection of source policies that are implemented for file system objects identified in the namespace of the source file system is also determined. The migration plan is used to create a namespace for the destination file system based at least in part on the namespace of the source file system. Additionally, the migration plan is used to determine a destination collection of policies for implementation on the destination file system. The collection of destination policies can be based at least in part on the collection of source policies. At least one of the namespace or the collection of destination policies includes an operator-specific configuration that is specified by the one or more rules or parameters of the operator input.
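As one hedged illustration of building a destination namespace entry from a source entry, the sketch below rewrites a source-style path into a destination-style path. The specific rewrite shown (dropping a leading "/vol" segment) is an invented example of a syntax change between architectures, not the actual path format of either file system.

```python
def reformat_entry(source_entry):
    """Restructure a source namespace entry into a destination format.

    The mapping shown (a source-style '/vol/<name>/...' path rewritten
    without the leading '/vol' segment) is an illustrative guess at a
    cross-architecture syntax change, not a documented format.
    """
    path = source_entry["path"]
    if path.startswith("/vol/"):
        path = path[len("/vol"):]   # drop the leading /vol segment
    # Preserve the object identifier; only the path syntax changes.
    return {"id": source_entry["id"], "path": path}
```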
- As used herein, the terms “programmatic”, “programmatically” or variations thereof mean through execution of code, programming or other logic. A programmatic action may be performed with software, firmware or hardware, and generally without user-intervention, albeit not necessarily automatically, as the action may be manually triggered.
- The term “optimal” (or variants such as “optimized” or “optimum”) means through the use of intelligent considerations to make more optimal than otherwise without such considerations. Thus, the use of the term “optimize” or “optimized”, for example, in reference to a given process or data structure does not necessarily mean a process or structure that is the most optimal or the best. The basis for optimum can include performance, efficiency, effectiveness (e.g., elimination of redundancy) and/or cost.
- An “architecture” of a network file system can be characterized by an operating system level component that structures the file system and implements the protocol(s) for operating the file system. Among other tasks, the operating software of a given architecture structures the namespace and the policies that affect the namespace. In the context provided, the term “structuring” and variants thereof refers to both a format and a logical structure.
- One or more embodiments described herein may be implemented using programmatic elements, often referred to as modules or components, although other names may be used. Such programmatic elements may include a program, a subroutine, a portion of a program, or a software component or a hardware component capable of performing one or more stated tasks or functions. As used herein, a module or component can exist in a hardware component independently of other modules/components or a module/component can be a shared element or process of other modules/components, programs or machines. A module or component may reside on one machine, such as on a client or on a server, or may alternatively be distributed among multiple machines, such as on multiple clients or server machines. Any system described may be implemented in whole or in part on a server, or as part of a network service. Alternatively, a system such as described herein may be implemented on a local computer or terminal, in whole or in part. In either case, implementation of a system may use memory, processors and network resources (including data ports and signal lines (optical, electrical etc.)), unless stated otherwise.
- Furthermore, one or more embodiments described herein may be implemented through the use of instructions that are executable by one or more processors. These instructions may be carried on a non-transitory computer-readable medium. Machines shown in figures below provide examples of processing resources and non-transitory computer-readable mediums on which instructions for implementing one or more embodiments can be executed and/or carried. For example, a machine shown for one or more embodiments includes processor(s) and various forms of memory for holding data and instructions. Examples of computer-readable mediums include permanent memory storage devices, such as hard drives on personal computers or servers. Other examples of computer storage mediums include portable storage units, such as CD or DVD units, flash memory (such as carried on many cell phones and tablets) and magnetic memory. Computers, terminals, and network-enabled devices (e.g. portable devices such as cell phones) are all examples of machines and devices that use processors, memory, and instructions stored on computer-readable mediums.
- System Overview
-
FIG. 1 illustrates a system for implementing migration of a source file system (“source filer”) to a destination file system (“destination filer”) using a migration plan, according to an embodiment. Among other benefits, an example of FIG. 1 enables a migration to be performed in which the resulting destination filer is optimized with respect to characteristics such as organizational structure, policy application, and resource allocation. In particular, an example of FIG. 1 enables migration of the source filer to the destination filer when the source and destination filers operate under different architectures. According to an example of FIG. 1, programmatic components operate to automatically identify and optimize the manner in which file system objects are mapped from a source filer to a destination filer, so as to account for semantic and logical characteristics of the destination filer. As an addition or alternative, an example of FIG. 1 enables an operator to further structure and configure the destination filer to implement changes based on need or preference in order to account for features of the destination filer and/or resources of the destination filer. - With further reference to
FIG. 1, a system 10 includes a data migration system 100 and a planner 110. The data migration system 100 and planner 110 operate to migrate file system objects 101 from a source filer 20 to a destination filer 50. Among other functions, the planner 110 operates to (i) discover the structure, organization and policies of the source filer 20 from which migration is to be initiated, (ii) provision the destination filer 50 for structure and organization, such as the type and location of object containers, and (iii) implement a policy collection 119 on the destination filer 50 that is based on the discovered policies of the source filer 20. Additionally, the planner 110 can publish a migration plan for use by the data migration system 100 to perform the migration as session-based tasks. - In some embodiments, the
planner 110 provides a mechanism to enable operator input to (i) select portions of the source filer for migration, (ii) configure or alter the provisioning of the destination filer, such as by way of changing container types or file paths, and (iii) alter the destination's policy collection 119 relative to a policy collection 109 of the source filer 20. - Still further, the
planner 110 can include rules and other logic to optimize the destination filer 50. In particular, the resources of the destination filer can be provisioned in a manner that optimizes implementation of the migrated file system to account for characteristics and functionality of the destination filer 50, as well as preferences and needs of the operator. As described with some examples, the source filer 20 and destination filer 50 can optionally operate under different system architectures. In such implementations, the planner 110 can provision the destination filer 50 to, for example, change the type of select containers when the migration is performed in order to leverage additional configurability and functionality available to the particular container type on the architecture of the destination filer 50, and to enable more operator configurability. The planner 110 can also define policies of a destination policy collection 119 in order to account for the architecture of the destination filer 50. - In more detail, the
planner 110 receives, as input, a namespace 107 and policy collection 109 of the source file system 20. The namespace 107 can be determined from, for example, one or more programmatic resources that perform discovery on the source filer 20. In the example of FIG. 1, the data migration system 100 can include one or more components that implement operations to discover file system objects, including interfacing with log files and other resources of the source file system 20 in order to determine the namespace 107. In a variation, the planner 110 can execute processes to interface with export files and other policy stores or resources of the source filer 20 in order to determine the source policy collection 109. - The
planner 110 can also receive operator input 115 (e.g., via a user interface), which identifies preferences of an operator (e.g., administrator). As described below, the operator input 115 can include input to (i) select containers or portions of the source filer 20 to migrate, (ii) specify a container type and/or organization for a select or global number of containers in the destination filer 50, and/or (iii) specify policy configurations and changes for the file system when migrated onto the destination filer 50. Still further, in some embodiments, the planner 110 can include a default set of rules and parameters 125, which are selected or otherwise based on the architecture of the source filer 20 and the architecture of the destination filer 50. The default set of rules and parameters 125 can determine how file system objects from the source filer 20 map to the destination filer 50. The default set of rules and parameters 125 can include considerations for optimizing the destination architecture. In particular, the default set of rules and parameters 125 (i) determine the structure and organization of the migrated file system objects on the destination filer 50, and (ii) define the policies of the destination policy collection 119 for application on the file system objects of the destination filer 50. Accordingly, the default set of rules and parameters 125 can include logic for provisioning the destination filer 50 and for implementing the migration as between the source filer 20 and the destination filer 50 when the source and destination utilize different architectures. - In this way,
planner 110 operates to generate the migration plan 112, which in turn is used to provision the destination filer 50. Once the migration is complete, the structure and organization of the file system objects in the destination filer 50 can be dictated by the manner in which the destination filer 50 was provisioned through use of the migration plan 112. - As described with other examples, the policies of the
destination policy collection 119 can be formatted and logically structured based on operator input 115 and/or optimization considerations of the destination filer 50. The optimization considerations for the destination policy collection 119 can be based in part on optimization characteristics of the architecture of the destination filer 50. Thus, the destination policy collection 119 can differ in format and structure from the corresponding policy collection 109 of the source filer 20. By way of example, the policy collections 109, 119 can differ in this manner when the source and destination filers 20, 50 operate under different architectures. - In operation, the
planner 110 operates to generate the migration plan 112 based on the default set of rules and parameters 125 and/or the operator input 115. In one implementation, the migration plan 112 is communicated in whole or in part to the data migration system 100. The data migration system 100 uses the migration plan to implement the migration. In one implementation, the migration plan 112 is published for the data migration system 100, and the data migration system replicates file system objects for inclusion in containers that are identified or specified by the migration plan. The data migration system 100 can operate on a session basis that is defined in part by specific containers provisioned on the destination filer 50. The sessions of the data migration system 100 can also be automated and/or sequenced, so that some or all of the migration can be performed in autonomous fashion once the migration plan 112 is published. - Among other tasks, the
planner 110 and data migration system 100 combine to allocate aggregates of the source filer 20 being migrated amongst hardware resources in accordance with default settings or preferences of the user. For example, the data migration system 100 can determine the logical interfaces (LIFs) for accessing aggregates of the source file system 20 being migrated. The LIF determination can specify routes and resources for aggregates based on considerations such as expected traffic for aggregates and minimizing communication hops. - The
data migration system 100 can replicate file system objects 101 of the source filer 20 as destination file system objects 121. According to some embodiments, the data migration system 100 uses information from the migration plan 112 (e.g., the destination namespace) in order to replicate file system objects for the containers of the destination filer 50. In this way, the replication of the file system objects 101 results in the generation of destination file system objects 121, which are stored in containers of the type and organization specified by the migration plan 112. - In one embodiment, the object containers of the
destination file system 50 can include volumes, quota trees (“q-trees”) or directories. In one implementation, the object containers map in type and organization to that which is discovered from the source filer 20, unless operator input 115 is recorded to alter the default set of rules and parameters 125 of the migration plan 112. In one implementation, the operator input can change both the type and relative location of individual containers on the destination filer 50, as compared to provisioning using the default set of rules and parameters 125. Still further, in one implementation, the destination filer 50 is provisioned with the destination namespace, as provided by the migration plan 112. Additionally, the data migration system 100 uses the object containers of the destination filer 50 to replicate file system objects onto the destination filer. - Other organization aspects of file system objects 101 can also be determined on the destination file system by either default or by operator input. For example, absent operator input, the containers of the
destination filer 50 can be structured and organized to preserve the granularity and organization of the source filer 20. The organizational aspects of the containers in the destination filer 50 can be based on their respective file paths, as provided on the source filer 20. The destination filer 50 can be provisioned to generate the containers so that they include equivalent context file paths, meaning that the file paths provided with the container objects of the destination filer 50 include a common segment (which includes the leaf node of the file path) with the file path of the counterpart object container at the source filer 20. The migration plan 112 can use operator input 115 or other input in order to alter the context file paths of the destination containers. For example, when the file path of an object container at the source filer 20 is shown to embed that object container within another container type, operator input 115 can specify that the two containers occupy the same hierarchy level when provisioned on the destination filer 50. - The
data migration system 100 can also implement the policy collection specified by the migration plan 112 using, for example, one or more controllers (shown as “controller 58”) of the destination filer 50. The controller 58 can represent one or multiple hardware and/or software components for controlling aspects of the destination filer 50. For example, in an implementation in which the destination filer 50 is of a cDOT architecture, the policy management may be conducted through use of a Storage Virtual Machine (or “SVM”). Accordingly, one implementation provides for the migration plan 112 to be communicated to the controller 58 of the destination filer 50 in order to create the policy rules and statements that comprise the policy collection 119 specified by the migration plan 112 for the destination filer 50. When the migration is performed by the data migration system 100, the controller 58 applies the destination policy collection 119. -
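Elsewhere in this description, equivalent source policies can be grouped so that a cluster of policy statements is represented by a single destination policy statement, reducing the policy count at the destination. The sketch below illustrates that clustering idea under the assumption that two policies are equivalent when their rule sets are identical; the data shapes are invented for illustration.

```python
def cluster_policies(policies):
    """Collapse logically equivalent policy entries into one
    destination policy applied to all of their targets.

    A policy's identity here is its frozen rule set; this simple
    equivalence test is an assumption made for illustration, not a
    documented equivalence rule.
    """
    clusters = {}
    for p in policies:
        key = frozenset(p["rules"].items())
        # First occurrence of a rule set creates the cluster; later
        # equivalent policies only contribute their target.
        clusters.setdefault(key, {"rules": p["rules"], "targets": []})
        clusters[key]["targets"].append(p["target"])
    return list(clusters.values())
```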
FIG. 2 illustrates a migration planning system for use in migrating file system objects from a source filer to a destination filer, according to an embodiment. In particular, FIG. 2 provides example components for implementing the planner 110, as also described with an example of FIG. 1. The planner 110 includes a user interface 220, a namespace component 230, a policy component 240 and a device or resource planner 250. - A
walker 210 can operate as part of, or with, the planner 110, in order to determine source information 215. The walker 210 can execute a process that interfaces with various resources of the source filer 20 in order to determine the source information stores 215. In one implementation, the walker 210 can implement a REST interface to access log or health files, such as those generated through “autosupport” (or “ASUP”) mechanisms of ONTAP-based network file systems. By way of example, the walker 210 can utilize log or health files to identify file system objects and structural (e.g., type) or organizational information (e.g., file paths) about the file system objects of the source filer 20. As an addition or alternative, the walker 210 interfaces with policy servers and stores of the source file system 20 in order to determine the policies of the file system objects being migrated. By way of example, in one implementation, the walker 210 includes a process that executes through the planner 110 and executes a Zephyr Application Program Interface (ZAPI) to locate and extract export files of the source filer 20. The export files can identify policies that exist on the source filer for file system objects and portions (e.g., volumes) of the source filer 20. - In one implementation, the source information stores 215 can include a
source namespace 208, an export policy store 212, an efficiency policy store 214, a snapshot policy store 216 and a quota policy store 218. The namespace 208 identifies file system objects (e.g., by inode) and their respective types, as well as file path information for the objects. In one implementation, the information of the namespace is determined primarily from log or health files of the source filer 20. Information for the export policy store 212, efficiency policy store 214, snapshot policy store 216 and quota policy store 218 can be obtained from, for example, the walker 210 locating and obtaining export files and related data from controllers and policy resources of the source filer 20. - In one implementation, the
planner 110 includes a plan store 225 for aggregating parameters and data corresponding to a migration plan. Some of the parameters and data of the plan store 225 can be determined by default, based on, for example, the source and destination filers 20, 50. Other parameters and data of the plan store 225 can be obtained from operator input 223, provided through, for example, the user interface 220. In one implementation, the user interface 220 utilizes source information 215, such as information from the namespace store 208, in order to generate prompts and contextual information for enabling operator input 223. The operator input 223 can be stored as parameters in the plan store 225. Among other applications, the plan store 225 can be used to (i) generate a destination namespace, (ii) create and configure or modify file system objects (e.g., containers) identified for the destination filer 50, (iii) create policy statements for application of a policy collection on file system objects at the destination filer 50 (as identified in the destination namespace), and/or (iv) provision the destination filer 50 and/or its resources for migration of file system objects from the source filer 20. - In an embodiment, the
namespace component 230 of the planner 110 receives source namespace information 231 from the namespace store 208. Additionally, the namespace component 230 receives migration planning parameters 233 from the plan store 225. The namespace component 230 can include programming and logic to enable establishment of a destination namespace 261 for the destination filer 50 using the source namespace information 231 (e.g., namespace entries). In one implementation, the destination namespace 261 is generated for a migration in which the destination filer 50 is of a different architecture than that of the source filer 20. - In one embodiment, the
namespace component 230 can include a format component 232, container logic 234 and namespace builder 236. The format component 232 implements formatting rules for reformatting namespace entries specified in the source namespace information 231 into namespace entries 235 that have a format of the destination architecture. In one implementation, an individual source namespace entry includes a file path and identifier in a first format (e.g., 7-MODE) of the source architecture, and the format component 232 restructures the file path and identifier into a second format of the destination architecture (shown by entries 235). The reformatting of the file path and identifiers can include both syntax and semantic changes. - The
container logic 234 includes rules and other logic for specifying a change to an object container. In particular, the container logic 234 can be configured by parameters to change the container types of select object containers identified in the source namespace information 231. For example, a container as identified by the source namespace information 231 can be changed in type based on planning parameters 233, which can be determined by the operator input 223. In one implementation, operator-selected object containers specified in the source namespace information 231 can be converted from q-trees or directories to volumes. The parametric configuration of the container logic 234 can be set by default (e.g., the container logic 234 keeps original container types) or by operator input 223. In one implementation, the operator input 223 can, for example, specify specific (e.g., container-specific) or global changes to container types (e.g., change some or all q-trees to volumes). Accordingly, in one implementation, parameters that specify conversion of select object containers are received by user input provided through the user interface 220. Alternatively, the parameters can correspond to a global selection that applies to object containers of a specific type in a particular source. - In one implementation, the
namespace builder 238 receives (i) reformatted namespace entries 235 provided by the format component 232, and (ii) identification of object containers and types from the container logic 234. The reformatted namespace entries and object containers can be determined in part from the source namespace information 231, using the namespace migration parameters 233 as provided from the plan store 225. Based on the input, the namespace builder 238 can generate a destination namespace 261. The namespace builder 238 can also include logic to optimize the destination namespace 261 by structure or formatting. - The
destination namespace 261 can be used to provision the destination filer 50. Additionally, the destination namespace 261 can configure the data migration system 100 in populating the containers of the type and organization specified in the destination namespace 261. - According to one example, the
policy component 240 of the planner 110 includes a format component 242, a logical component 244 and a policy mapper 246. The policy component 240 receives policy information 243 from the source information 215. Additionally, the policy component 240 receives planning parameters 241 from the plan store 225. The planning parameters 241 can be based on user input 223, received through the user interface 220. As an addition or variation, the planning parameters 241 can correspond to a default set of parameters. - The
policy component 240 generates policy collections 263 of different types for the destination filer 50. For example, the policy collections 263 can include export policies, efficiency policies, snapshot policies and quota policies. The architecture of the destination filer 50 can require different formatting and logical structures for each kind of policy. For some types of policies, the format and logical structure as between the source and destination architectures may be the same, and for other policy types, the format and logical structure may be different. - The
policy format component 242 operates to restructure and format policies 243 as provided from the source information 215. The policies 243 can be reformatted or restructured for syntax and semantics. The logical component 244 can apply cluster or group operations on select kinds of policy collections in order to determine logically equivalent policy sets. In particular, the logical component 244 can include logic to combine, for example, equivalent or identical policies, thereby reducing the policy count for the destination filer 50. For example, a cluster of export policy statements can be represented by a single policy statement in the corresponding policy collection 263. The policy mapper 246 can store the policy collections 263 on the destination filer 50 when, for example, the destination filer 50 is provisioned. - The
resource planner 250 can plan for hardware and resource allocation of the destination filer 50, given resource information 253 and operator-specified allocation parameters 251. The resource information 253 can include usage information, which can be determined or estimated from, for example, the operator for aggregates of the file system being migrated. For example, the resource information 253 can be determined from health logs of the source filer 20 or from manual input (e.g., the operator identifies aggregates of the source filer with the most traffic or least traffic). The allocation parameters 251 can include parameters specified by the operator via the user interface 220 (operator input 223). As an addition or alternative, the allocation parameters 251 can include default parameters or rules, which can be provided by the plan store 225. - The
resource planner 250 can generate resource allocation data 265 (e.g., instructions, parameters) to allocate and configure hardware and logical resources on the destination filer 50. The allocation and configuration of hardware resources can serve to optimize performance and balance loads. In one implementation, the resource allocation data 265 assigns specific objects to aggregates of the file system being migrated based on expected traffic. For example, those objects on the source filer 20 with the most traffic can be assigned to high-performance resources when migrated to the destination filer 50, while other objects with a history of less demand receive lower-cost resources on the destination filer 50. - As an addition or alternative, the
allocation instructions 265 can also generate a topology that identifies the location of aggregates on nodes of the destination filer 50. The topology can specify hardware-driven paths to aggregates of the destination filer, which can be determined by the presence of logical interfaces (LIFs) amongst the nodes of the destination architecture. Specifically, each LIF on the destination filer 50 provides an address (e.g., IP address) to a physical port, and the allocation instructions 265 can select paths amongst physical ports of the destination filer through LIF selection. By way of example, the network paths can be shortened (e.g., fewer hops) for file system objects and containers that are most heavily used. - As shown by an example of
FIG. 2, the planner 110 operates to provision and ready the destination filer 50, and to configure the data migration system 100. As a result, the destination filer 50 is intelligently configured when put in use after migration, with minimal effort from the operator. - With respect to examples such as described with
FIG. 1 and FIG. 2, the data migration system 100 can operate asynchronously and seamlessly, so as to support multiple clients that actively use file system objects being migrated while those file system objects are replicated to the destination filer 50. According to some embodiments, the data migration system 100 can be implemented in accordance with migration systems such as described with U.S. patent application Ser. Nos. 14/011,699, 14/011,696, 14/011,719, 14/011,718 and 14/011,723; all of the aforementioned applications being owned by the assignee of this application and further being incorporated by reference in their entirety. - Methodology
-
FIG. 3 illustrates an example method for using a migration plan to migrate a file system onto a destination filer, according to an embodiment. FIG. 4 illustrates an example method for developing a migration plan using operator input, according to an embodiment. FIG. 5 illustrates an example method for implementing a selective migration of a source filer with alterations to how container objects are migrated, according to an embodiment. FIG. 6 illustrates a method for mapping policies of a source filer to that of a destination filer, according to an embodiment. FIG. 7 illustrates a method for allocating resources of a destination filer based on a migration plan, according to an embodiment. Methods such as described with examples of FIG. 3 through FIG. 7 can be implemented using, for example, a system such as described with an example of FIG. 1 or FIG. 2. Accordingly, examples described by FIG. 3 through FIG. 7 can be implemented in contexts such as described with FIG. 1 or FIG. 2, including an example in which a source and destination filer operate under different architectures. In describing examples of FIG. 3 through FIG. 7, reference may be made to examples of FIG. 1 or FIG. 2 for purpose of illustrating a suitable component for performing a step or sub-step being described. - With reference to
FIG. 3, a migration plan is determined (310). The planner 110 can generate the migration plan 112 using a combination of default rules and logic, as well as operator-specified input. The migration plan 112 can also be specific to the architecture types of the source and destination filers. - In one implementation, the
planner 110 includes a user interface 220 that prompts for and receives operator input (312). FIG. 8A through FIG. 8D illustrate examples of a user interface that can enable the operator to specify planner input that is object-specific or global. As an addition or alternative, the planner 110 can utilize default rules and parameters for the migration plan 112, which can be specific to the destination filer 50 and/or the conversions needed as between the source and destination architectures (314). The operator input can change the default rules and parameters. - The
migration plan 112 can then be used to implement the migration from the source filer 20 to the destination filer 50 (320). In one implementation, the migration plan 112 is used to provision the destination filer 50, and the data migration system 100 populates the destination filer 50 based on the provisioning (322). In provisioning the destination filer 50, the migration plan can define structures and organization for containers at the destination filer (324). In one implementation, the type of a container can be changed (e.g., from q-tree to volume or directory). Still further, the context portion of the file path of the containers at the destination filer 50 can be modified based on the settings of the migration plan 112. Thus, for example, an operator can promote a q-tree to a volume, then move the promoted volume relative to other containers. - Additionally, the migration plan can generate policy definitions for policy collections of the destination filer (326). For example, the
policy component 240 can generate policy collections 263, which provide for intelligently grouping or clustering policy entries of the source filer into policy definitions of the destination filer 50. - Additionally, the
migration plan 112 can be published to the data migration system 100, so as to configure operations of the data migration system (328). With the migration plan 112, the data migration system 100 can create tasks which serve to populate containers of the destination namespace (as provided with the migration plan 112) with file system objects of the source filer 20. The implementation of the task approach enables the data migration system 100 to access containers that have been modified by type and path when replicating the file system objects of those containers. Additionally, the migration plan enables the data migration system 100 to automate, time and/or sequence (including implementing in parallel or back-to-back) the initiation of tasks. - With reference to
FIG. 4, one or more discovery processes are implemented to discover a namespace of the source filer and the policy collections of the source namespace (410). As described with an example of FIG. 2, the planner 110 can implement one or more walker processes 210 that can be utilized in the discovery process. The discovery process can identify the source namespace and the policy collections that are implemented on the source namespace. - The planner can provide a user interface for the operator to specify input that configures the migration plan 112 (420). In one implementation, information from the source namespace is used to prompt the operator for intelligent input that configures the
migration plan 112. For example, as shown with examples of FIG. 8A through FIG. 8D, a user interface is generated to enable the user to provide input for creating and configuring the migration plan 112. The user interface can be generated from the source namespace, which can be discovered through one or more walker processes 210. The user interface can display container objects (e.g., volumes, q-trees and directories) from the information of the source namespace. - One embodiment enables the user to select, through the user interface, a specific container from the source namespace (422). The selection can correspond to a juncture within the namespace, and serves to make the migration of the source filer selective. With the selection, the migration may be limited to file system objects that are embedded within the selected container. In a variation, the interface can enable the operator to provide exclusion input, which identifies file system objects that the operator does not want to migrate.
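The selection and exclusion inputs described above amount to filtering the source namespace by a selected juncture. The following is a minimal sketch of such a filter; the record format and path-prefix convention are assumptions for illustration, not the patent's implementation:

```python
# Illustrative sketch: limit a migration to objects embedded within a
# selected container (juncture), minus operator-specified exclusions.
def select_for_migration(source_paths, selected_container, exclusions=()):
    """Return the subset of source_paths under selected_container,
    dropping any path that falls under an excluded path."""
    prefix = selected_container.rstrip("/") + "/"
    chosen = []
    for path in source_paths:
        if path != selected_container and not path.startswith(prefix):
            continue  # outside the selected juncture
        if any(path == ex or path.startswith(ex.rstrip("/") + "/")
               for ex in exclusions):
            continue  # operator excluded this object
        chosen.append(path)
    return chosen
```

For example, selecting `/vol/vol01` while excluding `/vol/vol01/q2` would keep the selected volume and its remaining embedded objects, and drop everything else.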
- In another implementation, the operator can provide container input, specifically input to change an existing container type to a different container type (424). For example, in cDOT, there is limited ability to specify independent policies for embedded q-trees. Should the operator wish to specify a policy for the q-tree container, the operator can use the user interface to provide input that promotes the q-tree to a volume. Once promoted to a volume, the container can receive, for example, select policies that the cDOT architecture permits for volumes.
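A promotion of this kind can be modeled as a small transformation on a container record; the sketch below is illustrative only, and the record format is an assumption rather than the patent's data model:

```python
# Illustrative sketch: promote a q-tree container record to a volume
# so that volume-level (e.g., export or security) policies can be
# attached to it on a cDOT destination.
def promote_qtree(container):
    """container: dict with 'type', 'path', and 'policies' keys."""
    if container["type"] != "qtree":
        return dict(container)  # only q-trees are promoted
    promoted = dict(container)
    promoted["type"] = "volume"
    # A volume can carry its own policy set; start from the source's.
    promoted["policies"] = list(container["policies"])
    return promoted
```

The returned record can then be given a container-specific policy, which the embedded q-tree could not carry on its own.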
- Still further, the operator can specify input that changes the path of a given object container or other file system object when migration is performed (426). In one implementation, the user interface presents an organization that reflects the organization and structure of the source namespace. Absent input from the user, the structure and organization of the source namespace is replicated for the destination namespace, meaning the object containers identified in the destination namespace include file path segments (context file paths) which are the same as those of the corresponding containers of the source namespace. With input, the file paths of containers or other objects in the
destination namespace 261 can be modified as compared to the file paths that would otherwise be used with the default settings (e.g., to create a one-to-one mapping with the source namespace). For example, in the preceding example where the container type is modified, the operator can elect to un-embed a volume that has been converted from a q-tree. The corresponding portion of the destination namespace 261 (i.e., the context file path) can be modified based on the input. - Additionally, the user interface enables the operator to specify policy changes and additions for portions of the destination namespace (428). As shown with examples of
FIG. 8A through FIG. 8D, policies can be displayed in interfaces that reflect existing source policy collections, and the interface can include rules for enabling the user to specify or configure policies differently for the destination namespace. For example, for a source filer running 7-MODE, an embedded q-tree can be associated with the security policy of the volume in its junction. For a destination running cDOT, the operator can provide input to promote the q-tree and further specify a new security policy that is specific to the promoted volume. - With operator input, the migration plan is determined (430). At least a portion of the
migration plan 112 can be based on default parameters (432). In one implementation, for example, the default parameters can select to migrate file system objects with the same structure, organization, and granularity. Additionally, the policies can be replicated from the source filer 20 to the destination filer 50 with the default parameters set to achieve the same granularity (e.g., 1:1 mapping), in terms of logical equivalence, as that provided with the source filer 20. - The operator input can be received to alter the default parameters, thereby enabling the
migration plan 112 to be configured per the preferences and needs of the operator (434). The alterations to the migration plan 112 can select junctures or containers for migration, and can specify alternative container types and paths for migrated containers. Additionally, the alterations to the migration plan 112 can select or modify policies from the default setting. In particular, the operator can specify container-specific policies, with the ability to change container types and positions in order to achieve desired policies for select containers of the file system. - The
migration plan 112 can be used to provision the destination filer 50 and configure the data migration system 100 (440). In one implementation, a destination namespace 261 is generated and used for provisioning the destination filer 50 (442). The destination namespace can define the structure and organization of the destination filer 50. Additionally, as described in greater detail with an example of FIG. 6, policy collections can be implemented on the policy server and resources of the destination filer 50 based on default settings and/or operator-specified parameters. - Additionally, the
data migration system 100 can be configured to execute sessions in which junctures or other portions of the source filer 20 are replicated on the destination filer 50 to populate the corresponding containers (which may have been modified by file path or type) (450). Additionally, the data migration system 100 can be configured by the migration plan to sequence and queue the sessions automatically, so that the destination filer is populated juncture by juncture over time. - With reference to
FIG. 5, the source namespace is discovered (510). For example, one or more walker processes 210 can discover the containers and junctures of the source filer 20, from which the remainder of the source namespace is determined. - The
migration plan 112 can be created in part by mapping the containers of the source namespace to a destination namespace (520) that is under development. The planner 110 can create the migration plan with default settings that create the destination namespace using the same or equivalent container structure, organization and granularity as present with the source namespace. For example, the planner 110 can implement the migration with default settings that substantially replicate the source namespace as the destination namespace. - However, examples recognize that operators typically desire to upgrade or change the technological resources of the filer, in which case the source namespace is likely not optimal for the destination filer. In one implementation, the
planner 110 can generate the migration plan 112 to include default settings that optimize the destination namespace (530). For example, containers on the source namespace can be combined or consolidated (e.g., multiple q-trees can be consolidated into one volume) (532). Additionally, the planner 110 can include operator settings to enable greater adjustments to the migration plan 112, such as changes to container types and file paths (534). - With reference to
FIG. 6, the collection of policies on the source filer 20 can be identified in the context of the source namespace (610). As described with an example of FIG. 2, one or more walker processes 210 can be implemented to determine various kinds of policies for the source namespace, including export policy, efficiency policy, snapshot policy and quota policy, as well as other policies (e.g., security). - The
policy component 240 of the planner 110 can map the collection of policies to the destination namespace using default rules and/or operator-specified parameters (620). For example, by default, the collection of policies for the source namespace can be formatted and converted into logically equivalent policy entries, which are subsequently implemented on the destination namespace. The default parameters can be set to achieve the same granularity (e.g., 1:1 mapping), in terms of logical equivalence, as that provided with the source filer 20. The operator input can permit configurations, selections and other changes to the policy collection, as provided by other examples described herein. - Once mapped, the
policy component 240 can implement one or more optimization processes in order to configure the manner in which policies are stated and implemented, given, for example, the architecture of the destination filer 50 (630). As shown by examples provided below, the optimization can include de-duplicating policy entries present on the source filer 20 in order to create an equivalent set of policy statements (632). The de-duplication can be based on the architecture of the destination filer, which can differ by format, structure and syntax as to how policy entries are applied to defined file system objects. Likewise, some types of policy entries from the source namespace can be grouped as a result of the destination architecture permitting (or requiring) logically different structures for the policy entries (634). Still further, the policy entries of the source filer can be restated, individually or by group, using logically equivalent policy entries of the destination architecture (636). - The following examples illustrate how policy entries for a destination collection can be reformatted or structured by the
planner 110 in order to optimize a corresponding subset of policy collections. - The following example illustrates two export policy entries for q-trees in 7-MODE, in which the same policy is applied to two different q-trees.
-
/vol/vol01/p17d4 -sec=sys,rw=.mobstor.sp2.yahoo.com:.ymdb.sp2.yahoo.com,anon=0
/vol/vol01/p17d5 -sec=sys,rw=.mobstor.sp2.yahoo.com:.ymdb.sp2.yahoo.com,anon=0
- A policy definition in cDOT consists of defining the policy (the first line in the following example) and then adding the access rules (the last two lines). Accordingly, when converting to cDOT, the
policy component 240 of the planner 110 can recognize the application of identical policies to different containers, and further carry the application of the policy to the corresponding volumes (if the q-trees are promoted). This results in the creation of one policy on the destination that applies to both volumes.
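This recognize-and-merge step can be pictured as grouping source entries by identical rule payload; the sketch below is a simplified illustration, and the entry format is an assumption rather than the patent's representation:

```python
# Illustrative sketch: de-duplicate export entries that apply an
# identical rule payload to different containers, yielding one
# destination policy per distinct payload.
def deduplicate_policies(entries):
    """entries: iterable of (container_path, rule_string) pairs.
    Returns (policy_name, rule_string, container_paths) tuples."""
    groups = {}
    for path, rule in entries:
        groups.setdefault(rule, []).append(path)
    return [
        ("secpol-%d" % (i + 1), rule, paths)
        for i, (rule, paths) in enumerate(groups.items())
    ]
```

Applied to the two 7-MODE q-tree entries above, both containers share one rule payload, so a single destination policy would cover both promoted volumes.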
export-policy create -policyname secpol-1 -vserver ausvs
export-policy rule create -policyname secpol-1 -clientmatch .mobstor.sp2.yahoo.com -rorule sys -rwrule sys -anon pcuser -vserver ausvs
export-policy rule create -policyname secpol-1 -clientmatch .ymdb.sp2.yahoo.com -rorule sys -rwrule sys -anon pcuser -vserver ausvs
- An example of another type of policy collection is snapshot policies. For the
source filer 20 implemented under 7-MODE, policy entries for snapshot scheduling are defined per volume. Conversely, in cDOT, a single policy entry for snapshot scheduling can be applied to all volumes that are to implement the policy, and the policy entry can carry additional information for implementing the policy on the volumes. The following example illustrates how the planner 110 can generate the destination policy collection for such policies with logical equivalences that are more optimal for the destination architecture:
- 7-MODE: snap sched vol0
Volume vol0: 0 2 6
- cDOT: volume snapshot policy create -vserver ausvs -policy sspol -enabled true -schedule1 hourly -count1 6 -schedule2 daily -count2 2 -schedule3 weekly -count3 0
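The per-volume 7-MODE schedule (weekly, daily, and hourly counts) can be folded into a single reusable cDOT policy along these lines; the parsing convention below is an assumption for illustration, not the patent's converter:

```python
# Illustrative sketch: convert a 7-MODE per-volume snapshot schedule
# ("<weekly> <daily> <hourly>" counts) into one cDOT snapshot policy
# command that can be shared by all volumes with that schedule.
def sched_to_cdot(policy, vserver, sched):
    weekly, daily, hourly = (int(n) for n in sched.split())
    return (
        "volume snapshot policy create -vserver %s -policy %s "
        "-enabled true -schedule1 hourly -count1 %d "
        "-schedule2 daily -count2 %d -schedule3 weekly -count3 %d"
        % (vserver, policy, hourly, daily, weekly)
    )
```

With the schedule "0 2 6" from the example above, the emitted command carries the hourly count 6, daily count 2, and weekly count 0.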
- Other snapshot policies as between 7-MODE and cDOT can differ by syntax, rather than application. For example, a snapshot reserve policy can differ by syntax but not logic.
-
- 7-MODE: snap reserve vm_images
Volume vm_images: current snapshot reserve is 20% or 41943040 k-bytes.
- cDOT: volume modify -vserver ausvs -volume vol01 -percent-snapshot-space 20
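A syntax-only mapping like the snapshot reserve above can be expressed as a one-line rewrite; the helper below is an illustrative assumption, not the patent's converter:

```python
# Illustrative sketch: restate a 7-MODE snapshot reserve setting as
# the logically equivalent cDOT volume modify command.
def reserve_to_cdot(vserver, volume, percent):
    return (
        "volume modify -vserver %s -volume %s -percent-snapshot-space %d"
        % (vserver, volume, percent)
    )
```

Because only the syntax differs, no grouping or de-duplication is needed for this policy type.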
- Storage efficiency policies are generally governed by a policy that is applied to a volume. In 7-MODE, for example, a storage efficiency policy can include a deduplication process that is applied to a volume on a scheduled basis. In order to replicate such a policy in cDOT, the
policy component 240 can specify a job that performs the efficiency task on a schedule, and then assign the job to a particular volume. - Quota policy as between 7-MODE and cDOT provides an example in which the mapping onto the
destination filer 50 requires policy generation and considerations which are not present for the source filer 20. In 7-MODE, for example, a new default user quota is derived on a per-volume basis. Conversely, in cDOT, the quota policy is a collection of quota rules for all volumes of a virtual server. For example, the SVM can have 5 policies, of which 1 is active, and the activation of the quota policy is on a volume-by-volume basis. Accordingly, the policy component 240 can generate the policy collection for quotas on the destination filer 50 by (i) generating additional policies, (ii) specifying values for the generated policies, and (iii) selectively activating a policy. The steps of generating the quota policy for the destination filer in this manner can be performed programmatically and/or with user input (such as an activation parameter or quota value). - With reference to
FIG. 7, a migration plan 112 is determined (710), and the migration plan is used to determine the destination namespace (720). Additionally, the resources of the destination filer 50 can be determined through, for example, operator input (730). The identified resources can include the tier or level of hardware components for implementing the storage environment, as well as the arrangement of logical interfaces (LIFs) for physical ports that interconnect the hardware resources. - The destination namespace can be a basis for allocating the identified resources for implementing the destination filer (740). The resource selection can be based on operator input (742), and/or based on optimization parameters (744). In one implementation, the operator provides input that identifies the aggregates that are to have, for example, the best or lowest performance, based on expected traffic or usage (746). Alternatively, the determination can be made programmatically by
resource planner 250, which can access, for example, health logs or usage data in the source information 215. In similar fashion, those volumes or junctures that have the highest use can be interconnected across resources with relatively shortened paths based on LIF alignment (748). For example, an active portion of a namespace can be located on hardware resources that are co-located or separated by a single LIF distance. - User Interface
-
FIG. 8A through FIG. 8D illustrate example interfaces that are generated for an operator to enter input for configuring or defining a migration plan, according to one or more embodiments. The interfaces provided with examples of FIG. 8A through FIG. 8D can be generated by, for example, the user interface 220 of the planner 110 (as shown in FIG. 2). The interfaces of FIG. 8A through FIG. 8D can be created using the source namespace, which can be discovered using walker processes 210.
FIG. 8A illustrates an example interface 810 that enables the operator to specify global parameters for configuring the migration plan. The global parameters can, for example, enable the operator to globally specify conversions of containers by type, and also to specify which aggregate promoted volumes should occupy (e.g., the same aggregate). The example assumes that a q-tree being promoted to a volume may need a different aggregate, as it is likely to become more significant. A pull-down menu can be generated which identifies the conversions possible as between the source and destination architectures.
FIG. 8B illustrates an example interface 820 that enables an operator to select the volumes or portions of the source filer 20 that are to be migrated. The interface 820 can be generated in part based on the source namespace, so as to identify actual containers of the source namespace, including the organization of the containers on the source filer 20. Thus, the interface of FIG. 8B enables the operator to specify selective migration, based on knowledge of the source namespace. According to some variations, the planner 110 can also incorporate migration of multiple source filers into one destination filer, and an example interface of FIG. 8B enables the operator to specify portions of multiple sources to aggregate.
FIG. 8C illustrates an interface 830 that shows how objects of the source namespace map to objects of the destination namespace, based on global policies (which can be selected or set by default). FIG. 8D illustrates an interface for enabling the user to view a destination namespace. Based on the interfaces -
FIG. 9A through FIG. 9E illustrate a representative flow for the provisioning of a destination filer in which changes are made to container types. In the specific example provided, a 7-MODE source namespace is identified which includes four volumes under a root volume 902, and each volume includes two embedded q-trees (not shown). The example assumes that the destination namespace is for a cDOT implementation that promotes the q-trees to volumes. In FIG. 9A, the provisioning of the destination filer 50 includes generating a placeholder volume 904 under the root volume 902, which can be implemented using the destination namespace entry:
- volume create -volume vol -aggregate n02_sata01 -vserver ausvs -junction-path /vol
- In FIG. 9B, the volumes 906, which in the source namespace were previously mounted under the root node 902, are mounted under the placeholder volume using the following namespace entries:
volume create vol05 -aggregate n02_sata01 -size 100GB -vserver ausvs -autosize true -junction-path /vol/vol05 -policy vol-pol2
volume create vol06 -aggregate n02_sata01 -size 100GB -vserver ausvs -autosize true -junction-path /vol/vol06 -policy vol-pol2
volume create vol07 -aggregate n02_sata01 -size 100GB -vserver ausvs -autosize true -junction-path /vol/vol07 -policy vol-pol2
volume create vol08 -aggregate n02_sata01 -size 100GB -vserver ausvs -autosize true -junction-path /vol/vol08 -policy vol-pol2
FIG. 9C and FIG. 9D, the remaining q-trees present in the source namespace are now mounted within the placeholder volume structure as follows (note that only the first two q-trees are shown for brevity):
volume create vol05-00 -aggregate n02_sata01 -size 100GB -vserver ausvs -autosize true -junction-path /vol/vol05/00 -policy vol-pol
volume create vol05-01 -aggregate n02_sata01 -size 100GB -vserver ausvs -autosize true -junction-path /vol/vol05/01 -policy vol-pol
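The promotion pattern in these entries (q-tree 00 of vol05 becomes volume vol05-00, mounted at /vol/vol05/00) can be sketched generically; the helper and its defaults below simply mirror the example entries and are not prescribed by the patent:

```python
# Illustrative sketch: derive destination "volume create" entries for
# promoted q-trees, following the vol05-00 naming pattern above.
def promote_qtree_entries(volume, qtrees, aggregate="n02_sata01",
                          vserver="ausvs", policy="vol-pol"):
    commands = []
    for qtree in qtrees:
        name = "%s-%s" % (volume, qtree)           # e.g. vol05-00
        junction = "/vol/%s/%s" % (volume, qtree)  # e.g. /vol/vol05/00
        commands.append(
            "volume create %s -aggregate %s -size 100GB -vserver %s "
            "-autosize true -junction-path %s -policy %s"
            % (name, aggregate, vserver, junction, policy)
        )
    return commands
```

Running this for each of the four source volumes and their two q-trees apiece would yield the eight promoted-volume entries of the example, alongside the one placeholder volume.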
FIG. 9E shows the result. Where the source namespace included 4 volumes with 8 embedded q-trees, in the destination namespace the q-trees are now modified to volumes 908, with the addition of the placeholder volume. Thus, the example provided converts a source namespace with 12 containers (4 volumes and 8 q-trees) into a destination namespace with 13 containers (all volumes). - Computer System
-
FIG. 10 is a block diagram that illustrates a computer system upon which embodiments described herein may be implemented. For example, in the context of FIG. 1 or FIG. 2, the planner 110 may be implemented using one or more computer systems such as described by FIG. 10. Still further, methods such as described with FIG. 3 through FIG. 7 can be implemented using a computer such as described with an example of FIG. 10.
computer system 1000 includes a processor 1004, memory 1006 (including non-transitory memory), and a communication interface 1018. The computer system 1000 includes at least one processor 1004 for processing information. The computer system 1000 also includes the memory 1006, such as a random access memory (RAM) or other dynamic storage device, for storing information and instructions to be executed by the processor 1004. The memory 1006 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor 1004. The computer system 1000 may also include a read only memory (ROM) or other static storage device for storing static information and instructions for the processor 1004. A storage device 1010, such as a magnetic disk or optical disk, is provided for storing information and instructions. The communication interface 1018 may enable the computer system 1000 to communicate with one or more networks through use of the network link 1020 (wireless or wireline).
memory 1006 may store instructions for implementing functionality such as described with an example of FIG. 1 or FIG. 2, or implemented through an example method such as described with FIG. 3 through FIG. 7. Likewise, the processor 1004 may execute the instructions in providing functionality as described with FIG. 1 or FIG. 2, or performing operations as described with an example method of FIG. 3 through FIG. 7.
computer system 1000 for implementing the techniques described herein. According to one embodiment, those techniques are performed by the computer system 1000 in response to the processor 1004 executing one or more sequences of one or more instructions contained in the memory 1006. Such instructions may be read into the memory 1006 from another machine-readable medium, such as the storage device 1010. Execution of the sequences of instructions contained in the memory 1006 causes the processor 1004 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement embodiments described herein. Thus, embodiments described are not limited to any specific combination of hardware circuitry and software. - Although illustrative embodiments have been described in detail herein with reference to the accompanying drawings, variations to specific embodiments and details are encompassed by this disclosure. It is intended that the scope of embodiments described herein be defined by claims and their equivalents. Furthermore, it is contemplated that a particular feature described, either individually or as part of an embodiment, can be combined with other individually described features, or parts of other embodiments. Thus, absence of describing combinations should not preclude the inventor(s) from claiming rights to such combinations.
Claims (20)
1. A method for implementing a file system migration as between a source file system and a destination file system, the method being implemented by one or more processors and comprising:
determining a migration plan for the file system migration, the migration plan mapping each of (i) a set of file system objects from the source file system to a corresponding set of file system objects at the destination file system, and (ii) a source policy collection for the set of file system objects to a destination policy collection for the corresponding set of file system objects at the destination file system, wherein the migration plan includes a set of parameters that are based at least in part on an operator input; and
implementing the migration of the file system objects using the migration plan, the migration plan specifying, for at least a first object container of the corresponding set of file system objects at the destination file system, at least one of a file contextual path, type, or policy implementation relating to the first object container, that is different as compared to a file path, type or policy implementation of a corresponding source object container of the set of file system objects of the source file system.
2. The method of claim 1 , further comprising providing a user interface to receive the operator input, the user interface including a plurality of feature sets, the user interface including at least a type feature set to enable the operator to specify a parameter for implementing a first conversion rule to migrate one or more source object containers of a first container type as a corresponding one or more destination object containers of a second container type on the destination file system.
3. The method of claim 2 , wherein the parameter is a global parameter that implements the first conversion rule to migrate all source object containers of the first container type in at least a given portion of the source file system as corresponding destination file system objects of the second type on the destination file system.
4. The method of claim 2 , wherein the parameter is a selective parameter that implements the first conversion rule on an input specified set of source object containers of the first type in at least a given portion of the source file system as corresponding destination file system objects of the second type on the destination file system.
5. The method of claim 1 , further comprising providing a user interface to receive the operator input, the user interface including a plurality of feature sets, including at least a location feature set to enable the operator to specify a parameter for implementing a mapping rule to specify a path for a set of destination object containers as compared to a corresponding set of source object containers.
6. The method of claim 1 , further comprising providing a user interface to receive the operator input, to enable an operator to specify an input that specifies (i) a first parameter to migrate at least a first source object container of a first container type as a corresponding destination object container of a second container type, and (ii) a second parameter to implement a container-specific policy on the corresponding destination object container, the container-specific policy being available on the destination file system for the second container type and not the first container type.
7. The method of claim 1 , further comprising providing a user interface to receive the operator input, to enable an operator to specify an input that specifies (i) a first parameter to migrate at least a first source object container of a first container type as a corresponding destination object container of a second container type, and (ii) a second parameter for implementing a mapping rule to specify a file path for the corresponding destination object container that is contextually different than the file path of the first source object container.
8. The method of claim 1 , wherein the migration plan includes a set of parameters that are implemented by default and alterable by input from an operator.
9. The method of claim 1 , wherein implementing the migration of the file system objects using the migration plan includes causing a quota tree or directory container on the source file system to be migrated as a volume container on the destination file system.
10. The method of claim 6 , wherein the first object container of the destination file system is a volume, and the corresponding source object container of the source file system is a quota tree or directory.
11. The method of claim 1 , wherein the migration plan includes a set of conversion rules for implementing the migration as between a source architecture of the source file system and a destination architecture of the destination file system, wherein the source and destination architectures are different.
12. A method for implementing a file system migration as between a source file system and a destination file system, the method being implemented by one or more processors and comprising:
determining a migration plan for the file system migration, the migration plan including a set of parameters and a set of rules, including one or more parameters or rules that are based on an operator input;
determining each of (i) a namespace for the source file system, the namespace identifying a set of file system objects and a file path for each of the file system objects in the set of file system objects, and (ii) a collection of source policies that are implemented for file system objects identified in the namespace of the source file system;
using the migration plan to create a namespace for the destination file system based at least in part on the namespace of the source file system; and
using the migration plan to determine a collection of destination policies for implementation on the destination file system, the collection of destination policies being based at least in part on the collection of source policies;
wherein at least one of the namespace or the collection of destination policies includes an operator-specific configuration that is specified by the one or more rules or parameters of the operator input.
13. The method of claim 12 , wherein using the migration plan to create the namespace for the destination file system includes creating the namespace of the destination file system in accordance with a destination architecture of the destination file system, the destination architecture being different than a source architecture.
14. The method of claim 13 , wherein using the migration plan to create the namespace of the destination file system includes converting a set of entries for the namespace of the source file system, which individually specify one or more of a file path or policy for individual file system objects in the set of file system objects, into a corresponding set of entries for the namespace of the destination file system.
15. The method of claim 14 , wherein using the migration plan to convert the set of entries includes converting individual entries in the set of entries from a source format and structure into a destination format and structure.
16. The method of claim 15 , wherein converting the set of entries includes determining the set of entries for the namespace of the destination file system that are logically equivalent to the set of entries for the namespace of the source file system.
17. The method of claim 14 , wherein using the migration plan to determine a collection of destination policies includes grouping multiple policy entries in the destination format into a single logically equivalent policy entry.
18. The method of claim 14 , wherein using the migration plan to create the namespace of the destination file system includes generating one or more entries in the corresponding set of entries for the namespace of the destination file system to specify that a given destination object container is of a different container type than that of a corresponding source object container.
19. The method of claim 14 , wherein using the migration plan to create the namespace of the destination file system includes generating one or more entries in the corresponding set of entries for the namespace of the destination file system to specify that a given destination object container has a different contextual file path or policy than that of a source object container.
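Claims 14 through 17 describe converting individual namespace entries from a source format and structure into logically equivalent destination entries, and grouping multiple policy entries into a single logically equivalent policy entry. One way this could be sketched is shown below; the path layouts, policy names, and conversion function are hypothetical, introduced only for illustration.

```python
from collections import defaultdict

# Hypothetical source namespace entries: (file path, policy name) pairs.
source_entries = [
    ("/src/projects/a", "export_rw_eng"),
    ("/src/projects/b", "export_rw_eng"),
    ("/src/archive", "export_ro_all"),
]

def convert_entry(path: str, policy: str) -> tuple:
    """Convert one source entry into an (illustrative) destination format:
    a remapped file path plus the same logical policy."""
    dest_path = path.replace("/src", "/dst", 1)  # hypothetical path mapping
    return dest_path, policy

def build_destination_namespace(entries):
    """Convert every entry, then group entries that share a policy into a
    single logically equivalent policy entry covering multiple paths."""
    grouped = defaultdict(list)
    for path, policy in entries:
        dest_path, dest_policy = convert_entry(path, policy)
        grouped[dest_policy].append(dest_path)
    return dict(grouped)

dest = build_destination_namespace(source_entries)
# The two project paths collapse under one policy entry; archive keeps its own.
assert dest["export_rw_eng"] == ["/dst/projects/a", "/dst/projects/b"]
assert dest["export_ro_all"] == ["/dst/archive"]
```

The grouping step mirrors claim 17: rather than emitting one destination policy entry per source entry, entries that resolve to the same policy are merged into a single logically equivalent entry.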
20. A non-transitory computer-readable medium that stores instructions for implementing a file system migration as between a source file system and a destination file system, the instructions being executable by one or more processors of a computer system to cause the computer system to perform operations that comprise:
determining a migration plan for the file system migration, the migration plan including a set of parameters and a set of rules, including one or more parameters or rules that are based on an operator input;
determining each of (i) a namespace for the source file system, the namespace identifying a set of file system objects and a file path for each of the file system objects in the set of file system objects, and (ii) a collection of source policies that are implemented for file system objects identified in the namespace of the source file system;
using the migration plan to create a namespace for the destination file system based at least in part on the namespace of the source file system; and
using the migration plan to determine a collection of destination policies for implementation on the destination file system, the collection of destination policies being based at least in part on the collection of source policies;
wherein at least one of the namespace or the collection of destination policies includes an operator-specific configuration that is specified by the one or more rules or parameters of the operator input.
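Taken together, claims 12 and 20 describe starting from a migration plan whose parameters and rules may be altered by operator input (see also claim 8), then using that plan to derive both the destination namespace and the destination policies from their source counterparts. A hedged sketch of that flow follows; every field name, default value, and helper here is invented for illustration.

```python
# Illustrative sketch: a migration plan with default parameters and rules,
# overridable by operator input, applied to derive a destination namespace
# and policy collection from the source. All names are hypothetical.

DEFAULT_PLAN = {
    "parameters": {"container_type": "volume"},
    "rules": {"policy_prefix": ""},
}

def make_plan(operator_input: dict) -> dict:
    """Merge operator-supplied parameters and rules over the defaults."""
    plan = {section: dict(values) for section, values in DEFAULT_PLAN.items()}
    for section in ("parameters", "rules"):
        plan[section].update(operator_input.get(section, {}))
    return plan

def plan_migration(source_namespace: dict, source_policies: dict,
                   operator_input: dict):
    """Produce a destination namespace and destination policies from the
    source, shaped by the (possibly operator-modified) migration plan."""
    plan = make_plan(operator_input)
    prefix = plan["rules"]["policy_prefix"]
    dest_namespace = {path: plan["parameters"]["container_type"]
                      for path in source_namespace}
    dest_policies = {prefix + name: body
                     for name, body in source_policies.items()}
    return dest_namespace, dest_policies

ns, pol = plan_migration(
    {"/projects": "qtree"},
    {"rw_eng": {"access": "rw"}},
    {"rules": {"policy_prefix": "migrated_"}},  # operator-specific configuration
)
assert ns == {"/projects": "volume"}
assert pol["migrated_rw_eng"] == {"access": "rw"}
```

The operator input here supplies the "operator-specific configuration" of the wherein clause: it flows through the plan's rules into the destination policy names while the defaults govern everything the operator did not override.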
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/456,991 US20160041996A1 (en) | 2014-08-11 | 2014-08-11 | System and method for developing and implementing a migration plan for migrating a file system |
US14/981,818 US10853333B2 (en) | 2013-08-27 | 2015-12-28 | System and method for developing and implementing a migration plan for migrating a file system |
US17/060,519 US11681668B2 (en) | 2014-08-11 | 2020-10-01 | System and method for developing and implementing a migration plan for migrating a file system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/456,991 US20160041996A1 (en) | 2014-08-11 | 2014-08-11 | System and method for developing and implementing a migration plan for migrating a file system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/456,978 Continuation US10860529B2 (en) | 2013-08-27 | 2014-08-11 | System and method for planning and configuring a file system migration |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/981,818 Continuation US10853333B2 (en) | 2013-08-27 | 2015-12-28 | System and method for developing and implementing a migration plan for migrating a file system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160041996A1 true US20160041996A1 (en) | 2016-02-11 |
Family
ID=55267542
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/456,991 Abandoned US20160041996A1 (en) | 2013-08-27 | 2014-08-11 | System and method for developing and implementing a migration plan for migrating a file system |
US14/981,818 Active 2034-12-22 US10853333B2 (en) | 2013-08-27 | 2015-12-28 | System and method for developing and implementing a migration plan for migrating a file system |
US17/060,519 Active US11681668B2 (en) | 2014-08-11 | 2020-10-01 | System and method for developing and implementing a migration plan for migrating a file system |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/981,818 Active 2034-12-22 US10853333B2 (en) | 2013-08-27 | 2015-12-28 | System and method for developing and implementing a migration plan for migrating a file system |
US17/060,519 Active US11681668B2 (en) | 2014-08-11 | 2020-10-01 | System and method for developing and implementing a migration plan for migrating a file system |
Country Status (1)
Country | Link |
---|---|
US (3) | US20160041996A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160179919A1 (en) * | 2014-12-18 | 2016-06-23 | International Business Machines Corporation | Asynchronous data replication using an external buffer table |
US10089371B2 (en) * | 2015-12-29 | 2018-10-02 | Sap Se | Extensible extract, transform and load (ETL) framework |
US10365976B2 (en) * | 2015-07-28 | 2019-07-30 | Vmware, Inc. | Scheduling and managing series of snapshots |
US10853333B2 (en) | 2013-08-27 | 2020-12-01 | Netapp Inc. | System and method for developing and implementing a migration plan for migrating a file system |
US10860529B2 (en) | 2014-08-11 | 2020-12-08 | Netapp Inc. | System and method for planning and configuring a file system migration |
US20220179983A1 (en) * | 2020-12-04 | 2022-06-09 | Vmware, Inc. | System and method for matching, grouping and recommending computer security rules |
US11403134B2 (en) * | 2020-01-31 | 2022-08-02 | Hewlett Packard Enterprise Development Lp | Prioritizing migration of data associated with a stateful application based on data access patterns |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10671435B1 (en) | 2017-10-19 | 2020-06-02 | Pure Storage, Inc. | Data transformation caching in an artificial intelligence infrastructure |
US11861423B1 (en) * | 2017-10-19 | 2024-01-02 | Pure Storage, Inc. | Accelerating artificial intelligence (‘AI’) workflows |
US11455168B1 (en) | 2017-10-19 | 2022-09-27 | Pure Storage, Inc. | Batch building for deep learning training workloads |
US10360214B2 (en) | 2017-10-19 | 2019-07-23 | Pure Storage, Inc. | Ensuring reproducibility in an artificial intelligence infrastructure |
US11494692B1 (en) | 2018-03-26 | 2022-11-08 | Pure Storage, Inc. | Hyperscale artificial intelligence and machine learning infrastructure |
WO2019127234A1 (en) * | 2017-12-28 | 2019-07-04 | 华为技术有限公司 | Object migration method, device, and system |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6377958B1 (en) * | 1998-07-15 | 2002-04-23 | Powerquest Corporation | File system conversion |
US20040133577A1 (en) * | 2001-01-11 | 2004-07-08 | Z-Force Communications, Inc. | Rule based aggregation of files and transactions in a switched file system |
US20040267830A1 (en) * | 2003-04-24 | 2004-12-30 | Wong Thomas K. | Transparent file migration using namespace replication |
US20050192918A1 (en) * | 2004-02-12 | 2005-09-01 | International Business Machines Corporation | Method for supporting multiple filesystem implementations |
US20060010445A1 (en) * | 2004-07-09 | 2006-01-12 | Peterson Matthew T | Apparatus, system, and method for managing policies on a computer having a foreign operating system |
US7197490B1 (en) * | 2003-02-10 | 2007-03-27 | Network Appliance, Inc. | System and method for lazy-copy sub-volume load balancing in a network attached storage pool |
US20070198722A1 (en) * | 2005-12-19 | 2007-08-23 | Rajiv Kottomtharayil | Systems and methods for granular resource management in a storage network |
US20080028007A1 (en) * | 2006-07-27 | 2008-01-31 | Yohsuke Ishii | Backup control apparatus and method eliminating duplication of information resources |
US20090106255A1 (en) * | 2001-01-11 | 2009-04-23 | Attune Systems, Inc. | File Aggregation in a Switched File System |
US20100274981A1 (en) * | 2009-04-23 | 2010-10-28 | Hitachi, Ltd. | Method and system for migration between physical and virtual systems |
US20130110778A1 (en) * | 2010-05-03 | 2013-05-02 | Panzura, Inc. | Distributing data for a distributed filesystem across multiple cloud storage systems |
US20160246648A1 (en) * | 2013-10-09 | 2016-08-25 | Harish Bantwal Kamath | Information technology resource planning |
Family Cites Families (164)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0264839A (en) | 1988-08-31 | 1990-03-05 | Toshiba Corp | Channel device |
US5592611A (en) | 1995-03-14 | 1997-01-07 | Network Integrity, Inc. | Stand-in computer server |
US5710885A (en) | 1995-11-28 | 1998-01-20 | Ncr Corporation | Network management system with improved node discovery and monitoring |
US5778389A (en) | 1996-05-23 | 1998-07-07 | Electronic Data Systems Corporation | Method and system for synchronizing computer file directories |
US5893140A (en) | 1996-08-14 | 1999-04-06 | Emc Corporation | File server having a file system cache and protocol for truly safe asynchronous writes |
US5937406A (en) | 1997-01-31 | 1999-08-10 | Informix Software, Inc. | File system interface to a database |
US6324581B1 (en) | 1999-03-03 | 2001-11-27 | Emc Corporation | File server system using file system storage, data movers, and an exchange of meta data among data movers for file locking and direct access to shared file systems |
US6081900A (en) | 1999-03-16 | 2000-06-27 | Novell, Inc. | Secure intranet access |
US6401093B1 (en) | 1999-03-31 | 2002-06-04 | International Business Machines Corporation | Cross file system caching and synchronization |
US6658650B1 (en) | 2000-02-18 | 2003-12-02 | International Business Machines Corporation | Service entry point for use in debugging multi-job computer programs |
US7203731B1 (en) | 2000-03-03 | 2007-04-10 | Intel Corporation | Dynamic replication of files in a network storage system |
US6658540B1 (en) | 2000-03-31 | 2003-12-02 | Hewlett-Packard Development Company, L.P. | Method for transaction command ordering in a remote data replication system |
US8463912B2 (en) | 2000-05-23 | 2013-06-11 | Media Farm, Inc. | Remote displays in mobile communication networks |
US6938039B1 (en) | 2000-06-30 | 2005-08-30 | Emc Corporation | Concurrent file across at a target file server during migration of file systems between file servers using a network file system access protocol |
US7346702B2 (en) | 2000-08-24 | 2008-03-18 | Voltaire Ltd. | System and method for highly scalable high-speed content-based filtering and load balancing in interconnected fabrics |
US7035907B1 (en) | 2000-09-13 | 2006-04-25 | Jibe Networks, Inc. | Manipulating content objects to control their display |
DE60131900T2 (en) | 2000-10-26 | 2008-12-04 | Flood, James C. jun., Portland | METHOD AND SYSTEM FOR MANAGING DISTRIBUTED CONTENT AND RELATED METADATA |
US6970939B2 (en) | 2000-10-26 | 2005-11-29 | Intel Corporation | Method and apparatus for large payload distribution in a network |
US7865596B2 (en) | 2000-11-02 | 2011-01-04 | Oracle America, Inc. | Switching system for managing storage in digital networks |
US7165096B2 (en) | 2000-12-22 | 2007-01-16 | Data Plow, Inc. | Storage area network file system |
US7444335B1 (en) | 2001-02-28 | 2008-10-28 | Oracle International Corporation | System and method for providing cooperative resource groups for high availability applications |
US7307962B2 (en) | 2001-03-02 | 2007-12-11 | Hewlett-Packard Development Company, L.P. | System for inference of presence of network infrastructure devices |
US7555561B2 (en) | 2001-03-19 | 2009-06-30 | The Aerospace Corporation | Cooperative adaptive web caching routing and forwarding web content data broadcasting method |
US6971044B2 (en) | 2001-04-20 | 2005-11-29 | Egenera, Inc. | Service clusters and method in a processing system with failover capability |
US7020687B2 (en) | 2001-05-18 | 2006-03-28 | Nortel Networks Limited | Providing access to a plurality of e-mail and voice message accounts from a single web-based interface |
US6950833B2 (en) | 2001-06-05 | 2005-09-27 | Silicon Graphics, Inc. | Clustered filesystem |
US7640582B2 (en) | 2003-04-16 | 2009-12-29 | Silicon Graphics International | Clustered filesystem for mix of trusted and untrusted nodes |
US20030028514A1 (en) | 2001-06-05 | 2003-02-06 | Lord Stephen Philip | Extended attribute caching in clustered filesystem |
US6889233B2 (en) | 2001-06-18 | 2005-05-03 | Microsoft Corporation | Selective file purging for delete or rename |
US7249150B1 (en) | 2001-07-03 | 2007-07-24 | Network Appliance, Inc. | System and method for parallelized replay of an NVRAM log in a storage appliance |
JP3879594B2 (en) | 2001-11-02 | 2007-02-14 | 日本電気株式会社 | Switch method, apparatus and program |
US6959310B2 (en) | 2002-02-15 | 2005-10-25 | International Business Machines Corporation | Generating data set of the first file system by determining a set of changes between data stored in first snapshot of the first file system, and data stored in second snapshot of the first file system |
US7890554B2 (en) | 2002-03-14 | 2011-02-15 | International Business Machines Corporation | Apparatus and method of exporting file systems without first mounting the file systems |
US6993539B2 (en) | 2002-03-19 | 2006-01-31 | Network Appliance, Inc. | System and method for determining changes in two snapshots and for transmitting changes to destination snapshot |
ES2420583T3 (en) | 2002-07-11 | 2013-08-26 | Panasonic Corporation | Post-decoder buffer management for an MPEG H.264-SVC bit stream |
EP1566008A2 (en) | 2002-07-22 | 2005-08-24 | Trusted Media Networks Inc. | System and method for validating security access across a network layer and a local file layer |
US7953926B2 (en) | 2002-08-15 | 2011-05-31 | Board Of Governors For Higher Education, State Of Rhode Island And Providence Plantations | SCSI-to-IP cache storage device and method |
US7020758B2 (en) | 2002-09-18 | 2006-03-28 | Ortera Inc. | Context sensitive storage management |
US7457822B1 (en) | 2002-11-01 | 2008-11-25 | Bluearc Uk Limited | Apparatus and method for hardware-based file system |
US7334064B2 (en) | 2003-04-23 | 2008-02-19 | Dot Hill Systems Corporation | Application server blade for embedded storage appliance |
US7653699B1 (en) | 2003-06-12 | 2010-01-26 | Symantec Operating Corporation | System and method for partitioning a file system for enhanced availability and scalability |
WO2005013136A1 (en) | 2003-08-04 | 2005-02-10 | Mitsubishi Denki Kabushiki Kaisha | Video information device and module unit |
WO2005015361A2 (en) | 2003-08-08 | 2005-02-17 | Jp Morgan Chase Bank | System for archive integrity management and related methods |
US7953819B2 (en) | 2003-08-22 | 2011-05-31 | Emc Corporation | Multi-protocol sharable virtual storage objects |
US8539081B2 (en) | 2003-09-15 | 2013-09-17 | Neopath Networks, Inc. | Enabling proxy services using referral mechanisms |
US20050075856A1 (en) | 2003-10-01 | 2005-04-07 | Sbc Knowledge Ventures, L.P. | Data migration using SMS simulator |
US7149858B1 (en) | 2003-10-31 | 2006-12-12 | Veritas Operating Corporation | Synchronous replication for system and data security |
US7814131B1 (en) | 2004-02-02 | 2010-10-12 | Network Appliance, Inc. | Aliasing of exported paths in a storage system |
US7383463B2 (en) | 2004-02-04 | 2008-06-03 | Emc Corporation | Internet protocol based disaster recovery of a server |
US7685128B2 (en) | 2004-06-10 | 2010-03-23 | International Business Machines Corporation | Remote access agent for caching in a SAN file system |
US20060015584A1 (en) | 2004-07-13 | 2006-01-19 | Teneros, Inc. | Autonomous service appliance |
GB0416074D0 (en) | 2004-07-17 | 2004-08-18 | Ibm | Controlling data consistency guarantees in storage apparatus |
US7284043B2 (en) | 2004-09-23 | 2007-10-16 | Centeris Corporation | System and method for automated migration from Linux to Windows |
JP4281658B2 (en) | 2004-09-24 | 2009-06-17 | 日本電気株式会社 | File access service system, switching device, quota management method and program |
US7603372B1 (en) | 2004-10-29 | 2009-10-13 | Netapp, Inc. | Modeling file system operation streams |
US8229985B2 (en) | 2005-02-07 | 2012-07-24 | Cisco Technology, Inc. | Arrangement for a distributed file system having data objects mapped independent of any data object attribute |
US7747836B2 (en) | 2005-03-08 | 2010-06-29 | Netapp, Inc. | Integrated storage virtualization and switch system |
US7546431B2 (en) | 2005-03-21 | 2009-06-09 | Emc Corporation | Distributed open writable snapshot copy facility using file migration policies |
US7467282B2 (en) * | 2005-04-05 | 2008-12-16 | Network Appliance, Inc. | Migrating a traditional volume to a virtual volume in a storage system |
US20070022129A1 (en) | 2005-07-25 | 2007-01-25 | Parascale, Inc. | Rule driven automation of file placement, replication, and migration |
US20070038697A1 (en) | 2005-08-03 | 2007-02-15 | Eyal Zimran | Multi-protocol namespace server |
US20070055703A1 (en) | 2005-09-07 | 2007-03-08 | Eyal Zimran | Namespace server using referral protocols |
US8504597B2 (en) | 2005-09-09 | 2013-08-06 | William M. Pitts | Distributed file system consistency mechanism extension for enabling internet video broadcasting |
US20070088702A1 (en) | 2005-10-03 | 2007-04-19 | Fridella Stephen A | Intelligent network client for multi-protocol namespace redirection |
US20070083570A1 (en) | 2005-10-11 | 2007-04-12 | Fineberg Samuel A | File system versioning using a log |
US8141125B2 (en) | 2005-11-30 | 2012-03-20 | Oracle International Corporation | Orchestration of policy engines and format technologies |
ES2582364T3 (en) | 2005-12-19 | 2016-09-12 | Commvault Systems, Inc. | Systems and methods to perform data replication |
US7651593B2 (en) | 2005-12-19 | 2010-01-26 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US7552300B2 (en) | 2006-01-03 | 2009-06-23 | International Business Machines Corporation | Method for migrating objects in content management systems through exploitation of file properties, temporal locality, and spatial locality |
US7769719B2 (en) | 2006-01-05 | 2010-08-03 | International Business Machines Corporation | File system dump/restore by node numbering |
US8285817B1 (en) | 2006-03-20 | 2012-10-09 | Netapp, Inc. | Migration engine for use in a logical namespace of a storage system environment |
GB0606639D0 (en) | 2006-04-01 | 2006-05-10 | Ibm | Non-disruptive file system element reconfiguration on disk expansion |
US7769723B2 (en) | 2006-04-28 | 2010-08-03 | Netapp, Inc. | System and method for providing continuous data protection |
EP1857946B1 (en) | 2006-05-16 | 2018-04-04 | Sap Se | Systems and methods for migrating data |
US7844584B1 (en) | 2006-06-23 | 2010-11-30 | Netapp, Inc. | System and method for persistently storing lock state information |
US8281360B2 (en) | 2006-11-21 | 2012-10-02 | Steven Adams Flewallen | Control of communication ports of computing devices using policy-based decisions |
JP4931660B2 (en) * | 2007-03-23 | 2012-05-16 | 株式会社日立製作所 | Data migration processing device |
US7925629B2 (en) | 2007-03-28 | 2011-04-12 | Netapp, Inc. | Write ordering style asynchronous replication utilizing a loosely-accurate global clock |
WO2008128194A2 (en) | 2007-04-12 | 2008-10-23 | Rutgers, The State University Of New Jersey | System and method for controlling a file system |
WO2008138008A1 (en) | 2007-05-08 | 2008-11-13 | Riverbed Technology, Inc | A hybrid segment-oriented file server and wan accelerator |
US20080294748A1 (en) | 2007-05-21 | 2008-11-27 | William Boyd Brown | Proxy between network file system version three and network file system version four protocol |
US20080306986A1 (en) | 2007-06-08 | 2008-12-11 | Accenture Global Services Gmbh | Migration of Legacy Applications |
US8346966B1 (en) | 2007-07-19 | 2013-01-01 | Blue Coat Systems, Inc. | Transparent file system access for wide area network file system acceleration |
US8908700B2 (en) | 2007-09-07 | 2014-12-09 | Citrix Systems, Inc. | Systems and methods for bridging a WAN accelerator with a security gateway |
US8117244B2 (en) | 2007-11-12 | 2012-02-14 | F5 Networks, Inc. | Non-disruptive file migration |
US20100312861A1 (en) | 2007-11-30 | 2010-12-09 | Johan Kolhi | Method, network, and node for distributing electronic content in a content distribution network |
US8375190B2 (en) | 2007-12-11 | 2013-02-12 | Microsoft Corporation | Dynamtic storage hierarachy management |
JP2009146106A (en) | 2007-12-13 | 2009-07-02 | Hitachi Ltd | Storage system having function which migrates virtual communication port which is added to physical communication port |
US7904466B1 (en) | 2007-12-21 | 2011-03-08 | Netapp, Inc. | Presenting differences in a file system |
US8805949B2 (en) | 2008-01-16 | 2014-08-12 | Netapp, Inc. | System and method for populating a cache using behavioral adaptive policies |
US9143566B2 (en) | 2008-01-16 | 2015-09-22 | Netapp, Inc. | Non-disruptive storage caching using spliced cache appliances with packet inspection intelligence |
US9130968B2 (en) | 2008-01-16 | 2015-09-08 | Netapp, Inc. | Clustered cache appliance system and methodology |
US20090222509A1 (en) | 2008-02-29 | 2009-09-03 | Chao King | System and Method for Sharing Storage Devices over a Network |
TWI476610B (en) | 2008-04-29 | 2015-03-11 | Maxiscale Inc | Peer-to-peer redundant file server system and methods |
US8910255B2 (en) | 2008-05-27 | 2014-12-09 | Microsoft Corporation | Authentication for distributed secure content management system |
US7941591B2 (en) | 2008-07-28 | 2011-05-10 | CacheIQ, Inc. | Flash DIMM in a standalone cache appliance system and methodology |
US8385202B2 (en) * | 2008-08-27 | 2013-02-26 | Cisco Technology, Inc. | Virtual switch quality of service for virtual machines |
US20100083673A1 (en) | 2008-10-02 | 2010-04-08 | Island Sky Corporation | Water production system and method with air bypass |
US20100095348A1 (en) * | 2008-10-10 | 2010-04-15 | Ciphent, Inc. | System and method for management and translation of technical security policies and configurations |
EP2340491B1 (en) | 2008-10-24 | 2019-11-27 | Hewlett-Packard Development Company, L.P. | Direct-attached/network-attached storage device |
US9426213B2 (en) | 2008-11-11 | 2016-08-23 | At&T Intellectual Property Ii, L.P. | Hybrid unicast/anycast content distribution network system |
US20110055299A1 (en) | 2008-12-18 | 2011-03-03 | Virtual Computer, Inc. | Managing User Data in a Layered Virtual Workspace |
US8417816B2 (en) | 2009-02-20 | 2013-04-09 | Alcatel Lucent | Topology aware cache cooperation |
DE112009004503T5 (en) | 2009-03-10 | 2012-05-31 | Hewlett-Packard Development Co., L.P. | OPTIMIZING THE ACCESS TIME OF SAVED FILES |
US8078816B1 (en) * | 2009-04-07 | 2011-12-13 | Netapp, Inc. | Transparent transfer of qtree and quota metadata in conjunction with logical replication of user data |
JP5331555B2 (en) * | 2009-04-23 | 2013-10-30 | 株式会社日立製作所 | Data migration system and data migration method |
US8655848B1 (en) | 2009-04-30 | 2014-02-18 | Netapp, Inc. | Unordered idempotent logical replication operations |
CN102460393B (en) | 2009-05-01 | 2014-05-07 | 思杰系统有限公司 | Systems and methods for establishing a cloud bridge between virtual storage resources |
JP4990322B2 (en) | 2009-05-13 | 2012-08-01 | 株式会社日立製作所 | Data movement management device and information processing system |
US8205035B2 (en) | 2009-06-22 | 2012-06-19 | Citrix Systems, Inc. | Systems and methods for integration between application firewall and caching |
US20100333116A1 (en) | 2009-06-30 | 2010-12-30 | Anand Prahlad | Cloud gateway system for managing data storage to cloud storage sites |
US20110016085A1 (en) | 2009-07-16 | 2011-01-20 | Netapp, Inc. | Method and system for maintaining multiple inode containers in a storage server |
WO2011023134A1 (en) | 2009-08-28 | 2011-03-03 | Beijing Innovation Works Technology Company Limited | Method and system for managing distributed storage system through virtual file system |
US8326798B1 (en) * | 2009-09-14 | 2012-12-04 | Network Appliance, Inc. | File system agnostic replication |
US9235595B2 (en) | 2009-10-02 | 2016-01-12 | Symantec Corporation | Storage replication systems and methods |
US8484164B1 (en) | 2009-10-23 | 2013-07-09 | Netapp, Inc. | Method and system for providing substantially constant-time execution of a copy operation |
US20110184907A1 (en) | 2010-01-27 | 2011-07-28 | Sun Microsystems, Inc. | Method and system for guaranteed traversal during shadow migration |
US9015129B2 (en) | 2010-02-09 | 2015-04-21 | Veeam Software Ag | Cross-platform object level restoration from image level backups |
US8332351B2 (en) * | 2010-02-26 | 2012-12-11 | Oracle International Corporation | Method and system for preserving files with multiple links during shadow migration |
EP2542985A1 (en) * | 2010-03-01 | 2013-01-09 | Hitachi, Ltd. | File level hierarchical storage management system, method, and apparatus |
JP5551245B2 (en) | 2010-03-19 | 2014-07-16 | 株式会社日立製作所 | File sharing system, file processing method, and program |
CN102209087B (en) | 2010-03-31 | 2014-07-09 | 国际商业机器公司 | Method and system for MapReduce data transmission in data center having SAN |
CN102754084B (en) | 2010-05-18 | 2015-10-07 | 株式会社日立制作所 | Memory storage and data managing method |
EP2579157B1 (en) | 2010-05-27 | 2016-12-07 | Hitachi, Ltd. | Local file server operative to transfer file to remote file server via communication network, and storage system including those file servers |
US20120011176A1 (en) | 2010-07-07 | 2012-01-12 | Nexenta Systems, Inc. | Location independent scalable file and block storage |
US8423646B2 (en) | 2010-07-09 | 2013-04-16 | International Business Machines Corporation | Network-aware virtual machine migration in datacenters |
US8452856B1 (en) | 2010-08-04 | 2013-05-28 | Netapp, Inc. | Non-disruptive storage server migration |
US8458040B2 (en) | 2010-08-13 | 2013-06-04 | Cox Communications, Inc. | Systems and methods for managing rights to broadband content |
US8463788B2 (en) | 2010-09-03 | 2013-06-11 | Marvell World Trade Ltd. | Balancing caching load in a peer-to-peer based network file system |
US9720606B2 (en) * | 2010-10-26 | 2017-08-01 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Methods and structure for online migration of data in storage systems comprising a plurality of storage devices |
US8595451B2 (en) | 2010-11-04 | 2013-11-26 | Lsi Corporation | Managing a storage cache utilizing externally assigned cache priority tags |
US8762668B2 (en) * | 2010-11-18 | 2014-06-24 | Hitachi, Ltd. | Multipath switching over multiple storage systems |
US9824091B2 (en) * | 2010-12-03 | 2017-11-21 | Microsoft Technology Licensing, Llc | File system backup using change journal |
US8856073B2 (en) | 2010-12-14 | 2014-10-07 | Hitachi, Ltd. | Data synchronization among file storages using stub files |
US8676980B2 (en) | 2011-03-22 | 2014-03-18 | Cisco Technology, Inc. | Distributed load balancer in a virtual machine environment |
EP2689568A4 (en) | 2011-03-25 | 2014-11-26 | Hewlett Packard Development Co | Network topology discovery |
US9201751B1 (en) | 2011-04-18 | 2015-12-01 | American Megatrends, Inc. | Data migration between multiple tiers in a storage system using policy based ILM for QOS |
EP2701070A4 (en) | 2011-04-22 | 2014-11-05 | Nec Corp | Content distribution system, control device, and content distribution method |
EP2722765B1 (en) | 2011-06-14 | 2016-02-03 | NEC Corporation | Content delivery system, controller and content delivery method |
US8504723B2 (en) | 2011-06-15 | 2013-08-06 | Juniper Networks, Inc. | Routing proxy for resource requests and resources |
US9176773B2 (en) | 2011-06-29 | 2015-11-03 | Microsoft Technology Licensing, Llc | Virtual machine migration tool |
US8856191B2 (en) * | 2011-08-01 | 2014-10-07 | Infinidat Ltd. | Method of migrating stored data and system thereof |
US20130041985A1 (en) | 2011-08-10 | 2013-02-14 | Microsoft Corporation | Token based file operations |
US8484161B2 (en) | 2011-08-29 | 2013-07-09 | Oracle International Corporation | Live file system migration |
US10200493B2 (en) | 2011-10-17 | 2019-02-05 | Microsoft Technology Licensing, Llc | High-density multi-tenant distributed cache as a service |
US20130132544A1 (en) | 2011-11-23 | 2013-05-23 | Telefonaktiebolaget L M Ericsson (Publ) | Precise geolocation for content caching in evolved packet core networks |
US9088584B2 (en) | 2011-12-16 | 2015-07-21 | Cisco Technology, Inc. | System and method for non-disruptive management of servers in a network environment |
US8762477B2 (en) | 2012-02-28 | 2014-06-24 | Futurewei Technologies, Inc. | Method for collaborative caching for content-oriented networks |
US9258262B2 (en) * | 2012-04-30 | 2016-02-09 | Racemi, Inc. | Mailbox-based communications system for management communications spanning multiple data centers and firewalls |
US9305004B2 (en) | 2012-06-05 | 2016-04-05 | International Business Machines Corporation | Replica identification and collision avoidance in file system replication |
US9973468B2 (en) | 2012-06-15 | 2018-05-15 | Citrix Systems, Inc. | Systems and methods for address resolution protocol (ARP) resolution over a link aggregation of a cluster channel |
WO2014029419A1 (en) | 2012-08-21 | 2014-02-27 | Nec Europe Ltd. | Method and system for performing mobile cdn request routing |
US8856086B2 (en) | 2012-08-24 | 2014-10-07 | International Business Machines Corporation | Ensuring integrity of security event log upon download and delete |
EP2802118B1 (en) | 2012-12-07 | 2021-02-03 | Duvon Corporation | File sharing system and method |
US10311151B2 (en) | 2013-02-21 | 2019-06-04 | Hitachi Vantara Corporation | Object-level replication of cloned objects in a data storage system |
US10275593B2 (en) | 2013-04-01 | 2019-04-30 | Uniquesoft, Llc | Secure computing device using different central processing resources |
US9747311B2 (en) * | 2013-07-09 | 2017-08-29 | Oracle International Corporation | Solution to generate a scriptset for an automated database migration |
US9098364B2 (en) | 2013-07-09 | 2015-08-04 | Oracle International Corporation | Migration services for systems |
US10860529B2 (en) | 2014-08-11 | 2020-12-08 | Netapp Inc. | System and method for planning and configuring a file system migration |
US20160041996A1 (en) | 2014-08-11 | 2016-02-11 | Netapp, Inc. | System and method for developing and implementing a migration plan for migrating a file system |
US9305071B1 (en) * | 2013-09-30 | 2016-04-05 | Emc Corporation | Providing virtual storage processor (VSP) mobility with induced file system format migration |
WO2015087442A1 (en) | 2013-12-13 | 2015-06-18 | 株式会社日立製作所 | Transfer format for storage system, and transfer method |
EP3149606B1 (en) * | 2014-05-30 | 2019-05-08 | Hitachi Vantara Corporation | Metadata favored replication in active topologies |
US9298734B2 (en) | 2014-06-06 | 2016-03-29 | Hitachi, Ltd. | Storage system, computer system and data migration method |
- 2014-08-11 US US14/456,991 patent/US20160041996A1/en not_active Abandoned
- 2015-12-28 US US14/981,818 patent/US10853333B2/en active Active
- 2020-10-01 US US17/060,519 patent/US11681668B2/en active Active
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6377958B1 (en) * | 1998-07-15 | 2002-04-23 | Powerquest Corporation | File system conversion |
US20040133577A1 (en) * | 2001-01-11 | 2004-07-08 | Z-Force Communications, Inc. | Rule based aggregation of files and transactions in a switched file system |
US20090106255A1 (en) * | 2001-01-11 | 2009-04-23 | Attune Systems, Inc. | File Aggregation in a Switched File System |
US7197490B1 (en) * | 2003-02-10 | 2007-03-27 | Network Appliance, Inc. | System and method for lazy-copy sub-volume load balancing in a network attached storage pool |
US20040267830A1 (en) * | 2003-04-24 | 2004-12-30 | Wong Thomas K. | Transparent file migration using namespace replication |
US20050192918A1 (en) * | 2004-02-12 | 2005-09-01 | International Business Machines Corporation | Method for supporting multiple filesystem implementations |
US20060010445A1 (en) * | 2004-07-09 | 2006-01-12 | Peterson Matthew T | Apparatus, system, and method for managing policies on a computer having a foreign operating system |
US20070198722A1 (en) * | 2005-12-19 | 2007-08-23 | Rajiv Kottomtharayil | Systems and methods for granular resource management in a storage network |
US20080028007A1 (en) * | 2006-07-27 | 2008-01-31 | Yohsuke Ishii | Backup control apparatus and method eliminating duplication of information resources |
US20100274981A1 (en) * | 2009-04-23 | 2010-10-28 | Hitachi, Ltd. | Method and system for migration between physical and virtual systems |
US20130110778A1 (en) * | 2010-05-03 | 2013-05-02 | Panzura, Inc. | Distributing data for a distributed filesystem across multiple cloud storage systems |
US20160246648A1 (en) * | 2013-10-09 | 2016-08-25 | Harish Bantwal Kamath | Information technology resource planning |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10853333B2 (en) | 2013-08-27 | 2020-12-01 | Netapp Inc. | System and method for developing and implementing a migration plan for migrating a file system |
US10860529B2 (en) | 2014-08-11 | 2020-12-08 | Netapp Inc. | System and method for planning and configuring a file system migration |
US11681668B2 (en) | 2014-08-11 | 2023-06-20 | Netapp, Inc. | System and method for developing and implementing a migration plan for migrating a file system |
US20160179919A1 (en) * | 2014-12-18 | 2016-06-23 | International Business Machines Corporation | Asynchronous data replication using an external buffer table |
US9811577B2 (en) * | 2014-12-18 | 2017-11-07 | International Business Machines Corporation | Asynchronous data replication using an external buffer table |
US9817879B2 (en) | 2014-12-18 | 2017-11-14 | International Business Machines Corporation | Asynchronous data replication using an external buffer table |
US10365976B2 (en) * | 2015-07-28 | 2019-07-30 | Vmware, Inc. | Scheduling and managing series of snapshots |
US10089371B2 (en) * | 2015-12-29 | 2018-10-02 | Sap Se | Extensible extract, transform and load (ETL) framework |
US11403134B2 (en) * | 2020-01-31 | 2022-08-02 | Hewlett Packard Enterprise Development Lp | Prioritizing migration of data associated with a stateful application based on data access patterns |
US20220179983A1 (en) * | 2020-12-04 | 2022-06-09 | Vmware, Inc. | System and method for matching, grouping and recommending computer security rules |
US11847240B2 (en) * | 2020-12-04 | 2023-12-19 | Vmware, Inc. | System and method for matching, grouping and recommending computer security rules |
Also Published As
Publication number | Publication date |
---|---|
US10853333B2 (en) | 2020-12-01 |
US11681668B2 (en) | 2023-06-20 |
US20210026819A1 (en) | 2021-01-28 |
US20160179795A1 (en) | 2016-06-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11681668B2 (en) | System and method for developing and implementing a migration plan for migrating a file system |
US20210109890A1 (en) | System and method for planning and configuring a file system migration | |
US10250461B2 (en) | Migrating legacy non-cloud applications into a cloud-computing environment | |
US11882177B2 (en) | Orchestration of data services in multiple cloud infrastructures | |
US9959105B2 (en) | Configuration of an application in a computing platform | |
US9444896B2 (en) | Application migration between clouds | |
CN112099938A (en) | Determining resource allocation in a distributed computing environment using multi-dimensional metadata tag sets | |
US9170797B2 (en) | Automated deployment of an application in a computing platform | |
US9262238B2 (en) | Connection management for an application in a computing platform | |
EP2675127B1 (en) | Method and device for automatically migrating system configuration item | |
US9733971B2 (en) | Placement of virtual machines on preferred physical hosts | |
US10540162B2 (en) | Generating service images having scripts for the deployment of services | |
US9378039B2 (en) | Virtual machine storage replication schemes | |
US11588698B2 (en) | Pod migration across nodes of a cluster | |
US9959157B1 (en) | Computing instance migration | |
EP3005113B1 (en) | Improved deployment of virtual machines by means of differencing disks | |
EP2920691B1 (en) | A network-independent programming model for online processing in distributed systems | |
CN110275775A (en) | Resource allocation method, system and the storage medium of container application | |
JP6423752B2 (en) | Migration support apparatus and migration support method | |
US10572412B1 (en) | Interruptible computing instance prioritization | |
US11836125B1 (en) | Scalable database dependency monitoring and visualization system | |
US11947555B1 (en) | Intelligent query routing across shards of scalable database tables | |
US20240118800A1 (en) | Graphical user interface for workload migration | |
US11954469B2 (en) | Bases for pattern-based cloud computing | |
US20230393876A1 (en) | Landing zones for pattern-based cloud computing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: NETAPP, INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PRINZ, ALFRED G., III;RAY, FOUNTAIN L., III;HEATH, DOUGLAS THARON;SIGNING DATES FROM 20140827 TO 20140922;REEL/FRAME:033844/0292 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |