US20070250302A1 - Simulated storage area network - Google Patents
- Publication number: US20070250302A1
- Authority: United States (US)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/10—Program control for peripheral devices
- G06F13/105—Program control for peripheral devices where the programme performs an input/output emulation function
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/22—Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
- G06F11/26—Functional testing
- G06F11/261—Functional testing by simulating additional hardware, e.g. fault simulation
Description
- Two ways to test a storage management application are to deploy a storage area network device or to use a hardware-based emulator. Testing a storage management application in either of these ways, however, has several disadvantages.
- For example, new industry standards are sometimes created that no existing storage system supports.
- A test automation engine instructs a simulator to simulate a storage device, such as a storage area network, having certain characteristics.
- the test automation engine then tests an application against the simulated storage device.
- Tests may include storage management requests and storage access requests.
- a provider may translate a request to one or more operations suitable to perform the request on the underlying simulated device. Shadow copies and the results of other storage management-related operations may be shared across computers via aspects of a simulation framework described herein.
- FIG. 1 is a block diagram representing an exemplary general-purpose computing environment into which aspects of the subject matter described herein may be incorporated;
- FIG. 2 is a block diagram representing an exemplary arrangement of components of a system in which aspects of the subject matter described herein may operate;
- FIG. 3 is a block diagram that generally represents some exemplary components that may be involved when storing and retrieving data in accordance with aspects of the subject matter described herein;
- FIG. 4 is a block diagram of an exemplary environment in which aspects of the subject matter described herein may be implemented.
- FIG. 5 is a block diagram that generally represents an exemplary testing environment in accordance with aspects of the subject matter described herein;
- FIGS. 6A and 6B are a flow diagram that generally represents exemplary actions that may occur in testing a storage device in accordance with aspects of the subject matter described herein.
- FIG. 1 illustrates an example of a suitable computing system environment 100 on which aspects of the subject matter described herein may be implemented.
- the computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of aspects of the subject matter described herein. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100 .
- aspects of the subject matter described herein are operational with numerous other general purpose or special purpose computing system environments or configurations.
- Examples of well known computing systems, environments, and/or configurations that may be suitable for use with aspects of the subject matter described herein include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microcontroller-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- aspects of the subject matter described herein may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
- program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types.
- aspects of the subject matter described herein may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
- program modules may be located in both local and remote computer storage media including memory storage devices.
- an exemplary system for implementing aspects of the subject matter described herein includes a general-purpose computing device in the form of a computer 110 .
- Components of the computer 110 may include, but are not limited to, a processing unit 120 , a system memory 130 , and a system bus 121 that couples various system components including the system memory to the processing unit 120 .
- the system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
- Computer 110 typically includes a variety of computer-readable media.
- Computer-readable media can be any available media that can be accessed by the computer 110 and includes both volatile and nonvolatile media, and removable and non-removable media.
- Computer-readable media may comprise computer storage media and communication media.
- Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 110 .
- Communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
- the system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132 .
- A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within the computer 110, is typically stored in ROM 131.
- RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120 .
- FIG. 1 illustrates operating system 134 , application programs 135 , other program modules 136 , and program data 137 .
- the computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
- FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152 , and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media.
- removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
- The hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140 .
- magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150 .
- hard disk drive 141 is illustrated as storing operating system 144 , application programs 145 , other program modules 146 , and program data 147 . Note that these components can either be the same as or different from operating system 134 , application programs 135 , other program modules 136 , and program data 137 . Operating system 144 , application programs 145 , other program modules 146 , and program data 147 are given different numbers herein to illustrate that, at a minimum, they are different copies.
- A user may enter commands and information into the computer 110 through input devices such as a keyboard 162 and pointing device 161 , commonly referred to as a mouse, trackball, or touch pad.
- Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, a touch-sensitive screen of a handheld PC or other writing tablet, or the like.
- These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB).
- a monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190 .
- Computers may also include other peripheral output devices such as speakers 197 and printer 196 , which may be connected through an output peripheral interface 195 .
- the computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180 .
- the remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110 , although only a memory storage device 181 has been illustrated in FIG. 1 .
- the logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173 , but may also include other networks.
- Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
- When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170 .
- When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173 , such as the Internet.
- The modem 172 , which may be internal or external, may be connected to the system bus 121 via the user input interface 160 or other appropriate mechanism.
- program modules depicted relative to the computer 110 may be stored in the remote memory storage device.
- FIG. 1 illustrates remote application programs 185 as residing on memory device 181 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
- FIG. 2 is a block diagram representing an exemplary arrangement of components of a system in which aspects of the subject matter described herein may operate.
- The system includes a test application 205 , a virtual shadow copy service (VSS) and a virtual disk service (VDS) 210 , VSS/VDS providers 215 , a stub 220 , and a virtual storage simulator 225 .
- In some embodiments, the test application 205 , the VSS/VDS 210 , the VSS/VDS providers 215 , and the stub 220 execute in user mode while the virtual storage simulator 225 executes in kernel mode. In other embodiments, the virtual storage simulator 225 may execute in user mode.
- the test application 205 is any program that is capable of testing one or more features of a storage area network (SAN), distributed storage, or other storage.
- a SAN may include storage elements, storage devices, computer systems, and/or storage appliances plus control software communicating over a network.
- a computer issues a request for specific blocks or data segments from a specific logical unit number (LUN).
- A LUN may refer to a physical disk drive on the SAN or to a virtual partition (or volume) including portions or all of one or more physical disk drives of the SAN.
- the test application 205 may include a specialized application such as a storage manager for managing a SAN, or an application capable of backup from and restore to a storage device, or any other application capable of reading from, writing to, or managing a storage device.
- the test application 205 may or may not be structured primarily to test storage devices.
- the test application 205 may be controlled by another application that instructs the test application 205 as to what operations to perform. Such an embodiment is described in further detail in conjunction with FIG. 5 .
- the VSS/VDS 210 are services that may be used in disk management.
- the VSS/VDS 210 are part of the same component and may have a common interface.
- the VSS and the VDS are separate components and may have separate interfaces.
- the VSS may coordinate the creation of shadow copies.
- a shadow copy may be thought of as a “snapshot” of a volume.
- a shadow copy is a duplicate of a volume at a given point in time, even though the volume may not be entirely copied (e.g., via copy-on-write) in creating the shadow copy.
- a shadow copy may be viewed as a separate volume by the operating system and any executing applications.
- a shadow copy may have a volume device, a volume name, a drive letter, a mount point, and any other attribute of an actual volume.
- the VSS provides an interface through which an application (e.g., the test application 205 ) may request shadow copy operations for any supported storage device.
- the VSS works in conjunction with a VSS provider to perform a requested shadow copy operation.
- the VSS shields the application from nuances of the supported storage device.
- the interface provided by the VSS may be an industry standard interface for managing storage devices or may be a proprietary interface.
- The VSS may provide an interface that corresponds to the VSS standards provided by Microsoft Corporation of Redmond, Wash.
- the VSS provides an interface that corresponds to a Storage Network Industry Association (SNIA) storage management standard.
- the VSS provides a proprietary interface for managing storage devices.
- When the VSS receives a request to create a shadow copy, various actions may occur to prepare for the shadow copy. After the actions are completed and the VSS receives an instruction (e.g., a commit) to finalize the shadow copy, the VSS communicates with the VSS provider to create the shadow copy.
- a VSS provider is responsible for causing the appropriate actions to occur on the underlying storage hardware to implement the shadow copy operations received by the VSS.
- the VSS provider may be considered as a translator that translates the requests received from the VSS into a set of one or more requests needed to fulfill the request using the virtual storage simulator 225 .
- Shadow copy operations may include creating or deleting a shadow copy, importing a shadow copy, exposing a shadow copy, and so forth.
- the VSS provider may take advantage of specialized storage features when performing a shadow copy operation. For example, some storage hardware may mirror a volume on another volume such that each data write to one volume is also written to the other volume. In such platforms, creating a shadow copy of a volume may involve requesting that the storage hardware split the mirror and expose the shadow copy volume for read/write access. Other storage hardware may not have such built-in features. In such storage hardware, the VSS provider may make a copy of the volume on another volume or may use a differential technique (e.g., copy-on-write) to create a shadow copy.
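The translation strategy described above can be sketched as follows. This is a hedged illustration only: the function name, the `supports_mirror_split` capability flag, and the operation strings are hypothetical and are not part of any real VSS provider API.

```python
# Hypothetical sketch of a provider translating a shadow copy request into
# device-level operations, mirroring the two strategies described above.

def translate_shadow_copy_request(device_capabilities):
    """Translate a 'create shadow copy' request into device-level operations.

    If the simulated hardware can split a mirror, the shadow copy is made
    that way; otherwise a differential (copy-on-write) technique is used.
    """
    if device_capabilities.get("supports_mirror_split"):
        return ["split_mirror", "expose_shadow_volume"]
    return ["start_copy_on_write", "expose_shadow_volume"]
```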
- There may be multiple VSS providers associated with a single VSS service.
- the VSS service may determine which VSS provider to use depending on the storage device indicated by the application.
- The VDS (of the VSS/VDS 210 ) allows a program to query, or proactively informs the program, as to what storage devices are attached to a computer.
- the VDS may also allow a program to configure and manage attached storage devices.
- the VDS presents an interface that allows programs to configure, manage, and otherwise interact with storage devices.
- the programs may communicate with the VDS using a standard set of methods while the VDS communicates with the storage device via a VDS provider that translates instructions depending on the storage device's characteristics.
- the interface provided by the VDS may be an industry standard interface for managing storage devices or may be a proprietary interface.
- In communicating with the storage device, the VDS selects a VDS provider (of the VSS/VDS Providers 215 ) and issues commands to it. The VDS provider then takes the actions needed by the underlying storage device (which may be different across storage devices) to carry out the commands.
- the VDS provider may be considered as a translator that translates requests received by the VDS into a set of one or more requests to send to the virtual storage simulator 225 .
- Some exemplary commands that the VDS may allow include commands to enable or disable automatic mount of the file system for new volumes, to mark a partition as primary (i.e. bootable) or inactive (i.e., unbootable), to repair a RAID-5 volume by replacing a failed member or mirror with a specified dynamic disk, to remove a drive letter or mount point assignment, to grow or shrink volumes, to assign or change a drive letter, and other commands.
- the VDS may include commands to select a subsystem, controller, drive, or LUN, to create or delete a LUN, to rescan to locate any new disks that may have been added to the storage array, to unmask/mask a LUN to make it accessible/un-accessible to specific computers for use, to extend or shrink a LUN, and so forth.
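The LUN state manipulated by these commands can be modeled as a small sketch. `SimulatedLun` and its members are invented names for illustration; they do not come from the patent or any real VDS provider.

```python
# Illustrative-only model of LUN masking and resizing, as the VDS commands
# above describe them.

class SimulatedLun:
    def __init__(self, lun_id, size_blocks):
        self.lun_id = lun_id
        self.size_blocks = size_blocks
        self.visible_to = set()  # computers the LUN is unmasked to

    def unmask(self, computer):
        """Make the LUN accessible to a specific computer for use."""
        self.visible_to.add(computer)

    def mask(self, computer):
        """Make the LUN inaccessible to a specific computer."""
        self.visible_to.discard(computer)

    def extend(self, extra_blocks):
        """Grow the LUN by a number of blocks."""
        self.size_blocks += extra_blocks
```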
- the VSS/VDS providers 215 are created to interact with the virtual storage driver 225 through the stub 220 .
- the VSS/VDS providers 215 still provide the same interfaces to the VSS/VDS 210 , but some of the VSS/VDS providers 215 are structured so that they issue commands to the virtual storage simulator 225 instead of a storage driver associated with an actual storage device.
- the VSS/VDS providers 215 that interact with the virtual storage simulator 225 may include a complete set of operations that are available on sophisticated or even non-existent storage devices.
- the stub 220 is a component that provides an interface library to communicate with the virtual storage simulator 225 .
- a user mode process cannot directly call methods in the virtual storage simulator 225 .
- a complex set of actions may be involved to pass information back and forth to the virtual storage simulator. These actions may be encapsulated in the stub 220 so that the VSS/VDS providers 215 do not need this functionality. Instead, the VSS/VDS providers 215 may communicate with the stub 220 , which then handles the complexities of communicating with the virtual storage simulator 225 across the user mode/kernel mode boundary.
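The stub's role can be sketched under a message-passing view of the user mode/kernel mode boundary. In reality the stub would use driver I/O mechanisms; here a `transport` callable stands in for that boundary, and the JSON message format is invented for illustration.

```python
import json

# Hedged sketch: the stub marshals requests so that providers never deal
# with the wire format or the boundary crossing.

class SimulatorStub:
    def __init__(self, transport):
        self._transport = transport  # local or remote simulator endpoint

    def call(self, operation, **params):
        # Marshal the request, dispatch it, and unmarshal the reply.
        request = json.dumps({"op": operation, "params": params})
        reply = self._transport(request)
        return json.loads(reply)
```

A provider could then issue, for example, `stub.call("create_shadow_copy", volume="V1")` without knowing whether the simulator runs locally or on another machine.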
- the stub 220 may also allow communication with a virtual storage simulator on another machine. This may be done, for example, to create a simulated storage device on another server. This may be helpful, for example, in testing a remote backup feature where a backup server imports a shadow copy remotely from another server.
- The virtual storage simulator 225 simulates a storage device. Such storage devices may include actual storage devices that are currently available as well as storage devices that are not available. A non-available storage device may be simulated, for example, to provide a test bed for a proposed or actual standard that is not currently supported by any available device. Although the virtual storage simulator 225 may be structured to simulate any storage device, it will be readily recognized that the virtual storage simulator 225 may be particularly advantageous in simulating expensive storage devices such as SANs.
- the virtual storage simulator 225 simulates a block level storage device.
- a block level storage device the storage on the storage device is divided into fixed sized blocks that are individually accessible.
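A minimal sketch of such a fixed-block device follows. The 512-byte block size and the in-memory backing store are illustrative choices, not details taken from the patent.

```python
# Minimal block-level device sketch: storage is divided into fixed-size
# blocks that are individually addressable.

BLOCK_SIZE = 512

class BlockDeviceSimulator:
    def __init__(self, num_blocks):
        self._store = bytearray(num_blocks * BLOCK_SIZE)

    def write_block(self, block_no, data):
        # Writes are whole fixed-size blocks.
        assert len(data) == BLOCK_SIZE
        offset = block_no * BLOCK_SIZE
        self._store[offset:offset + BLOCK_SIZE] = data

    def read_block(self, block_no):
        offset = block_no * BLOCK_SIZE
        return bytes(self._store[offset:offset + BLOCK_SIZE])
```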
- the virtual storage simulator 225 may inform the VSS/VDS provider 215 when a new simulated disk is available.
- the virtual storage simulator 225 may include an API that allows another program to add, remove, or reconfigure simulated disks.
- FIG. 3 is a block diagram that generally represents some exemplary components that may be involved when storing and retrieving data in accordance with aspects of the subject matter described herein.
- the components include a file system and I/O manager 305 , a physical storage driver 310 , a hard disk 315 , and a virtual storage simulator 225 .
- the file system and I/O manager 305 may receive data access requests (e.g., reads and/or writes) from a user mode process (e.g., stub 220 of FIG. 2 ). Based on the storage driver associated with the request, the file system and I/O manager 305 may direct the request to the physical storage driver 310 or the virtual storage simulator 225 .
- the virtual storage simulator 225 may access a file on the hard disk 315 by submitting a data access request to the file system and I/O manager 305 and referencing a file on the hard disk 315 .
- the file system and I/O manager 305 may direct the request from the virtual storage simulator 225 to the physical storage driver 310 which may read or write the data to or from the file on the hard disk 315 .
- the results of the data access are reported to the virtual storage simulator 225 which may then respond appropriately via the file system and I/O manager 305 to the process that requested the data access.
- the process that requested the data access need not know that the data it has accessed was on the hard disk 315 . As far as the process is concerned, the data access was handled by a storage device simulated by the virtual storage simulator 225 .
- The virtual storage simulator 225 may also simulate the “mirror” ability of a storage area network or other storage device. Some storage devices mirror the data of a first volume on a second volume. When a write operation is received for the first volume, the write operation is sent to both the first volume and the second volume so that the two volumes remain in sync with each other. In such storage configurations, a shadow copy may be created by breaking the mirror and exposing the second volume. Breaking the mirror comprises ceasing to cause writes to the first volume to also be sent to the second volume. At the time of the break, the second volume is a duplicate of the first volume. It will be appreciated that creating a full shadow copy on storage devices having a mirror ability may be a relatively fast operation.
- the virtual storage simulator 225 may simulate the quickness of the mirror ability of a storage device by writing to two or more files for each write request received by the virtual storage simulator 225 . To create a shadow copy at a point in time, the virtual storage simulator 225 may cease writing to one of the files for subsequent write operations. The virtual storage simulator 225 may then expose the shadow copy if desired for access by programs seeking to access the shadow copy.
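The two-file mirroring technique above can be sketched as follows, with Python dicts standing in for the backing files; the class and method names are hypothetical.

```python
# Sketch of simulated mirroring: every write goes to both backing stores
# until the mirror is broken, at which point the frozen mirror becomes the
# shadow copy.

class MirroredVolume:
    def __init__(self):
        self._primary = {}
        self._mirror = {}
        self._broken = False

    def write(self, block_no, data):
        self._primary[block_no] = data
        if not self._broken:
            self._mirror[block_no] = data  # keep the mirror in sync

    def break_mirror(self):
        """Cease mirroring; the frozen mirror now serves as the shadow copy."""
        self._broken = True
        return self._mirror
```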
- FIG. 4 is a block diagram of an exemplary environment in which aspects of the subject matter described herein may be implemented.
- the environment includes a storage area network 205 and computers 210 - 215 connected via the network 220 .
- the storage area network 205 may include special hardware or software components that take advantage of the configuration of the storage area network 205 to create shadow copies.
- Each of the computers 210 - 215 may include components (e.g., VSSs) for creating shadow copies on the storage area network 205 .
- In a simulated virtual storage device, however, sharing shadow copies of a simulated storage area network among computers poses additional challenges. For example, if the storage area network 205 is replaced by a computer having a virtual storage area network and that computer creates a shadow copy of a volume of a simulated storage area network, certain actions may be performed to allow other computers (e.g., computers 210 - 215 ) to access the shadow copy.
- the file representing the shadow copy and metadata associated therewith is copied to a destination computer that seeks to access the shadow copy.
- the metadata includes information that precisely defines the shadow copy. This information may include the storage ID (e.g., volume, disk, or LUN ID), and information that defines the type of storage device that is being simulated (e.g., the characteristics of a storage area network).
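The metadata described above can be sketched as a small record; the field names are assumptions made for this sketch, not a documented format.

```python
from dataclasses import dataclass, asdict

# Illustrative shadow-copy metadata: the storage ID and the characteristics
# of the simulated device, as described above.

@dataclass
class ShadowCopyMetadata:
    storage_id: str    # e.g., volume, disk, or LUN ID
    device_type: str   # type of storage device being simulated
    backing_file: str  # file representing the shadow copy

def export_metadata(meta):
    """Serialize the metadata for copying to a destination computer."""
    return asdict(meta)
```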
- the destination computer may import the shadow copy using a VSS method.
- the virtual storage simulator may be informed as to the characteristics of the simulated device associated with the shadow copy.
- the virtual storage simulator 225 may then simulate the simulated device based on these characteristics.
- the destination computer may access the shadow copy through a virtual storage simulator as if the destination computer were accessing the shadow copy on a physical storage device.
- The file representing the shadow copy may be made available to the destination computer through the use of a file share. Metadata regarding the shadow copy may be stored as part of the file share or may be passed directly to the destination computer. After the shadow copy and its metadata are imported, the virtual storage simulator may then use the share to access the shadow copy. To a test application using VSS on the destination computer, the virtual storage simulator makes it appear that the shadow copy is being accessed on a storage area network.
- Similar actions as described above may also be used to share simulated volumes other than shadow copies. For example, if a destination computer seeks to gain access to a simulated volume of a SAN, the volume and metadata associated therewith may be copied to the destination computer, or a file share to such information may be provided. The destination computer may then import the volume and its associated metadata so that the virtual storage simulator on the destination computer may be informed as to the characteristics of the volume. After the import, the virtual storage simulator may then simulate the volume appropriately.
- FIG. 5 is a block diagram that generally represents an exemplary testing environment in accordance with aspects of the subject matter described herein.
- the testing environment may include a test automation engine 505 , an application to be tested 510 , a storage management API 515 , a simulated provider 520 , and a simulator simulating a storage device 525 .
- The application to be tested 510 , the storage management API 515 , the simulated provider 520 , and the simulator simulating a storage device 525 correspond to the test application 205 , the VSS/VDS 210 , the VSS/VDS Providers 215 , and the virtual storage simulator 225 , respectively, of FIG. 2 and provide similar functionality.
- the test automation engine 505 drives the test process and may communicate with the simulator simulating a storage device 525 to create a simulated LUN, storage area network, or other storage device having the desired characteristics.
- the test automation engine 505 may simulate error conditions by instructing the simulated provider 520 to return an error to the application to be tested 510 or instructing the virtual storage simulator 525 to return an error to the simulated provider 520 .
- the test automation engine 505 may instruct the application to be tested 510 to attempt to perform any number of storage management and file access operations to test the application.
- the test automation engine 505 may query the simulator simulating a storage device 525 to determine whether storage management commands issued by the application to be tested 510 have their desired effect.
- the test automation engine 505 may also have components on other machines that test other properties of the simulated storage environment including simulated SAN properties, for example.
- multiple servers may be hosted as virtual servers on a single physical machine. This may allow, for example, the simulated testing of a backup application without the need for multiple physical devices connected over a network.
- FIGS. 6A and 6B are a flow diagram that generally represents exemplary actions that may occur in testing a storage device in accordance with aspects of the subject matter described herein. Turning to FIG. 6A , at block 605 , the actions begin.
- A virtual storage simulator is configured to simulate a storage device having particular characteristics (e.g., number of LUNs, mirroring capability, shadow copy capabilities, and so forth). For example, referring to FIG. 5 , the test automation engine 505 configures the simulator 525 to simulate a storage area network having sophisticated storage management capabilities.
- an application is instructed to issue a storage request.
- the storage request may be a storage management request or a storage access request, for example.
- the test automation engine 505 instructs the application 510 to issue a storage management request to create a shadow copy.
- the storage management request is received at an interface (e.g., such as the interface of VSS).
- the application 510 uses a method of the storage management API 515 to request the creation of a shadow copy.
- the storage management request is translated to a set of one or more operations suitable for the storage simulated by the virtual storage simulator. For example, referring to FIG. 5 , if the storage the simulator 525 is simulating supports breaking a mirror, the simulator provider 520 translates the shadow copy request into a request to break the mirror to create the shadow copy.
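The translation idea — choosing a set of device operations based on what the simulated device advertises — can be sketched as follows, with all capability and operation names hypothetical:

```python
def translate_shadow_copy_request(volume, capabilities):
    """Translate a generic 'create shadow copy' request into device operations.

    `capabilities` is a set of feature names the simulated device supports
    (illustrative names, not drawn from the patent).
    """
    if "break_mirror" in capabilities:
        # Mirrored device: splitting the mirror yields an instant shadow copy.
        return [("break_mirror", volume)]
    if "copy_on_write" in capabilities:
        # Differential device: preserve original blocks as they are overwritten.
        return [("start_copy_on_write", volume)]
    # Fallback: a full block-by-block copy onto a fresh volume.
    return [("allocate_volume", volume + "_shadow"),
            ("copy_blocks", volume, volume + "_shadow")]


ops = translate_shadow_copy_request("vol0", {"break_mirror", "copy_on_write"})
```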
- the test automation engine 505 may instruct the provider 520 to return an error to the application 510 .
- the provider is caused to return an error.
- the set of operations are sent to the virtual storage simulator.
- the provider 520 sends a command to break the mirror to the simulator 525 .
- the simulator simulates the output of the simulated storage device. For example, referring to FIG. 5 , the simulator 525 returns a code that indicates that the mirror was successfully broken and a shadow copy created.
- information derived from the output is forwarded to the interface.
- when a provider uses multiple operations to fulfill a request, it may receive multiple responses from the virtual storage simulator.
- the application that issued the storage request, however, may expect a single response.
- information may be derived from the responses (e.g., all succeeded) to provide a suitable response (e.g., success) to the application.
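Deriving one response from several sub-operation responses can be sketched as below; the `(status, detail)` tuple shape is an assumption for illustration, not the patent's format:

```python
def summarize_responses(responses):
    """Collapse per-operation results into the single status the caller expects.

    The combined result succeeds only if every sub-operation succeeded, and
    the first failure encountered is surfaced to the application.
    """
    for status, detail in responses:
        if status != "ok":
            return ("error", detail)
    return ("ok", None)


result = summarize_responses([("ok", None), ("ok", None)])
```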
- the actions continue at block 655 .
- the derived information is provided to the application.
- the storage management API 515 may indicate to the application 510 whether the request succeeded and may also provide additional information regarding the request (e.g., the name of the volume containing the shadow copy).
- the correctness of the results of the storage request may be verified.
- the test automation engine 505 may query the simulator 525 to determine whether a shadow copy was indeed created or whether an error resulted.
- the test automation engine 505 may not perform any more tests. Otherwise, the test automation engine 505 may configure the simulator 525 to simulate a storage device having different characteristics (e.g., block 610 of FIG. 6A ) and run a suite of tests against the new simulated storage device or may begin another test against a currently simulated storage device (e.g., block 615 of FIG. 6A ).
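The overall loop — reconfigure the simulator (block 610), drive the application (block 615), then verify by querying the simulated state — might be sketched as follows, with all names hypothetical:

```python
def run_suite(simulator_state, profiles, tests):
    """For each device profile, reconfigure the shared simulator state and run
    every test against it, recording pass/fail by querying that state
    directly rather than trusting the application under test."""
    results = {}
    for profile in profiles:
        simulator_state.clear()               # block 610: simulate a new device
        simulator_state.update(profile)
        for name, test in tests.items():      # block 615: drive the application
            test(simulator_state)
            results[(profile["kind"], name)] = simulator_state.get(
                "shadow_copy_created", False)
    return results


def create_shadow_copy_test(state):
    # Stand-in for instructing the application under test to create a shadow copy.
    state["shadow_copy_created"] = True


outcome = run_suite({}, [{"kind": "SAN"}, {"kind": "JBOD"}],
                    {"shadow": create_shadow_copy_test})
```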
- the actions described in conjunction with FIGS. 6A and 6B are not all-inclusive of all the actions that may be taken in testing a storage device. Furthermore, although the actions are described as occurring in a particular order, in other embodiments, some of the actions may occur in parallel, may be performed with other actions, or may be performed in another order without departing from the spirit or scope of the subject matter described herein.
Abstract
Description
- This application claims the benefit of U.S. Provisional Application No. 60/794,324, filed Apr. 21, 2006, entitled SIMULATED STORAGE AREA NETWORK FOR TESTING STORAGE MANAGEMENT APPLICATIONS, which application is incorporated herein in its entirety.
- Two ways to test a storage management application are to deploy a storage area network device or to use a hardware-based emulator. Testing a storage management application in either of these ways, however, has several disadvantages. First, storage area network devices or even hardware-based emulators can be very expensive and may involve dedicated personnel for setup and maintenance. Second, to test the application for each available storage array to ensure compatibility and correctness may be time consuming and add to the expense as a storage array or emulator for each vendor may be needed. Finally, sometimes new industry standards are created that no existing storage system supports.
- Briefly, aspects of the subject matter described herein relate to simulating storage devices. In aspects, a test automation engine instructs a simulator to simulate a storage device having certain characteristics such as a storage area network. The test automation engine then tests an application against the simulated storage device. Tests may include storage management requests and storage access requests. A provider may translate a request to one or more operations suitable to perform the request on the underlying simulated device. Shadow copies and the results of other storage management-related operations may be shared across computers via aspects of a simulation framework described herein.
- This Summary is provided to briefly identify some aspects of the subject matter that is further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
- The phrase “subject matter described herein” refers to subject matter described in the Detailed Description unless the context clearly indicates otherwise. The term “aspects” should be read as “at least one aspect.” Identifying aspects of the subject matter described in the Detailed Description is not intended to identify key or essential features of the claimed subject matter.
- The aspects described above and other aspects of the subject matter described herein are illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
- FIG. 1 is a block diagram representing an exemplary general-purpose computing environment into which aspects of the subject matter described herein may be incorporated;
- FIG. 2 is a block diagram representing an exemplary arrangement of components of a system in which aspects of the subject matter described herein may operate;
- FIG. 3 is a block diagram that generally represents some exemplary components that may be involved when storing and retrieving data in accordance with aspects of the subject matter described herein;
- FIG. 4 is a block diagram of an exemplary environment in which aspects of the subject matter described herein may be implemented;
- FIG. 5 is a block diagram that generally represents an exemplary testing environment in accordance with aspects of the subject matter described herein; and
- FIGS. 6A and 6B are a flow diagram that generally represents exemplary actions that may occur in testing a storage device in accordance with aspects of the subject matter described herein.
FIG. 1 illustrates an example of a suitable computing system environment 100 on which aspects of the subject matter described herein may be implemented. The computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of aspects of the subject matter described herein. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100. - Aspects of the subject matter described herein are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with aspects of the subject matter described herein include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microcontroller-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- Aspects of the subject matter described herein may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types. Aspects of the subject matter described herein may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
- With reference to
FIG. 1 , an exemplary system for implementing aspects of the subject matter described herein includes a general-purpose computing device in the form of a computer 110. Components of the computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120. The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus. -
Computer 110 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 110 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 110. Communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. - The
system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, FIG. 1 illustrates operating system 134, application programs 135, other program modules 136, and program data 137. - The
computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150. - The drives and their associated computer storage media, discussed above and illustrated in
FIG. 1 , provide storage of computer-readable instructions, data structures, program modules, and other data for the computer 110. In FIG. 1 , for example, hard disk drive 141 is illustrated as storing operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as or different from operating system 134, application programs 135, other program modules 136, and program data 137. Operating system 144, application programs 145, other program modules 146, and program data 147 are given different numbers herein to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 110 through input devices such as a keyboard 162 and pointing device 161, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, a touch-sensitive screen of a handheld PC or other writing tablet, or the like. These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190. In addition to the monitor, computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 190. - The
computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in FIG. 1 . The logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. - When used in a LAN networking environment, the
computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160 or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 1 illustrates remote application programs 185 as residing on memory device 181. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used. -
FIG. 2 is a block diagram representing an exemplary arrangement of components of a system in which aspects of the subject matter described herein may operate. The system includes a test application 205, a virtual shadow copy service (VSS) and a virtual disk service (VDS) 210, VSS/VDS providers 215, a stub 220, and a virtual storage driver 225. - In one embodiment, the
test application 205, the VSS/VDS 210, the VSS/VDS Providers 215, and thestub 220 execute in user mode while thevirtual storage driver 225 executes in kernel mode. In other embodiments, thevirtual storage simulator 225 may execute in user mode. - The
test application 205 is any program that is capable of testing one or more features of a storage area network (SAN), distributed storage, or other storage. A SAN may include storage elements, storage devices, computer systems, and/or storage appliances plus control software communicating over a network. In one embodiment, in a SAN, a computer issues a request for specific blocks or data segments from a specific logical unit number (LUN). A LUN may refer to a physical disk drive on the SAN or to a virtual partition (or volume) including portions or all of one or more physical disk drives of the SAN. - The
test application 205 may include a specialized application such as a storage manager for managing a SAN, or an application capable of backup from and restore to a storage device, or any other application capable of reading from, writing to, or managing a storage device. The test application 205 may or may not be structured primarily to test storage devices. The test application 205 may be controlled by another application that instructs the test application 205 as to what operations to perform. Such an embodiment is described in further detail in conjunction with FIG. 5 . - The VSS/
VDS 210 are services that may be used in disk management. In one embodiment, the VSS/VDS 210 are part of the same component and may have a common interface. In other embodiments, the VSS and the VDS are separate components and may have separate interfaces. As one of its functions, the VSS may coordinate the creation of shadow copies. A shadow copy may be thought of as a “snapshot” of a volume. Logically, a shadow copy is a duplicate of a volume at a given point in time, even though the volume may not be entirely copied (e.g., via copy-on-write) in creating the shadow copy. A shadow copy may be viewed as a separate volume by the operating system and any executing applications. For example, a shadow copy may have a volume device, a volume name, a drive letter, a mount point, and any other attribute of an actual volume. - The VSS provides an interface through which an application (e.g., the test application 205) may request shadow copy operations for any supported storage device. The VSS works in conjunction with a VSS provider to perform a requested shadow copy operation. By providing an interface that may be used for any storage device, the VSS shields the application from nuances of the supported storage device.
- The interface provided by the VSS may be an industry standard interface for managing storage devices or may be a proprietary interface. In one embodiment, the VSS provides an interface that corresponds to the VSS standards provided by Microsoft Corporation of Redmond Wash. In another embodiment, the VSS provides an interface that corresponds to a Storage Network Industry Association (SNIA) storage management standard. In other embodiments, the VSS provides a proprietary interface for managing storage devices.
- When the VSS receives a request to create a shadow copy, various actions may occur to prepare for the shadow copy. After the actions are completed and the VSS receives an instruction (e.g., a commit) to finalize the shadow copy, the VSS communicates with the VSS provider to create a shadow copy.
- A VSS provider is responsible for causing the appropriate actions to occur on the underlying storage hardware to implement the shadow copy operations received by the VSS. In one sense, the VSS provider may be considered as a translator that translates the requests received from the VSS into a set of one or more requests needed to fulfill the request using the
virtual storage simulator 225. - Shadow copy operations may include creating or deleting a shadow copy, importing a shadow copy, exposing a shadow copy, and so forth. The VSS provider may take advantage of specialized storage features when performing a shadow copy operation. For example, some storage hardware may mirror a volume on another volume such that each data write to one volume is also written to the other volume. In such platforms, creating a shadow copy of a volume may involve requesting that the storage hardware split the mirror and expose the shadow copy volume for read/write access. Other storage hardware may not have such built-in features. In such storage hardware, the VSS provider may make a copy of the volume on another volume or may use a differential technique (e.g., copy-on-write) to create a shadow copy.
- As there may be many types of storage devices associated with a single computer, there may also be many VSS providers associated with a single VSS service. The VSS service may determine which VSS provider to use depending on the storage device indicated by the application.
- The VDS (of VSS/VDS 210 ) allows a program to query, or proactively informs the program, as to what storage devices are attached to a computer. In addition, the VDS may also allow a program to configure and manage attached storage devices. In one embodiment, the VDS presents an interface that allows programs to configure, manage, and otherwise interact with storage devices. The programs may communicate with the VDS using a standard set of methods while the VDS communicates with the storage device via a VDS provider that translates instructions depending on the storage device's characteristics. Similar to the VSS, the interface provided by the VDS may be an industry standard interface for managing storage devices or may be a proprietary interface.
- In communicating with the storage device, the VDS selects a VDS provider and issues commands to the VDS provider (of VSS/VDS Providers 215). The VDS provider then takes the actions needed by the underlying storage device (which may be different across storage devices) to carry out the commands. In aspects, similar to the VSS provider, the VDS provider may be considered as a translator that translates requests received by the VDS into a set of one or more requests to send to the
virtual storage simulator 225. - Some exemplary commands that the VDS may allow include commands to enable or disable automatic mount of the file system for new volumes, to mark a partition as primary (i.e. bootable) or inactive (i.e., unbootable), to repair a RAID-5 volume by replacing a failed member or mirror with a specified dynamic disk, to remove a drive letter or mount point assignment, to grow or shrink volumes, to assign or change a drive letter, and other commands.
- For systems having multiple disks in a storage array, the VDS may include commands to select a subsystem, controller, drive, or LUN, to create or delete a LUN, to rescan to locate any new disks that may have been added to the storage array, to unmask/mask a LUN to make it accessible/un-accessible to specific computers for use, to extend or shrink a LUN, and so forth.
- In an embodiment, the VSS/
VDS providers 215 are created to interact with the virtual storage driver 225 through the stub 220 . In this embodiment, the VSS/VDS providers 215 still provide the same interfaces to the VSS/VDS 210 , but some of the VSS/VDS providers 215 are structured so that they issue commands to the virtual storage simulator 225 instead of a storage driver associated with an actual storage device. In this embodiment, the VSS/VDS providers 215 that interact with the virtual storage simulator 225 may include a complete set of operations that are available on sophisticated or even non-existent storage devices. - The
stub 220 is a component that provides an interface library to communicate with the virtual storage simulator 225 . In one embodiment, because the virtual storage simulator 225 executes in kernel mode, a user mode process cannot directly call methods in the virtual storage simulator 225 . Instead, a complex set of actions may be involved to pass information back and forth to the virtual storage simulator. These actions may be encapsulated in the stub 220 so that the VSS/VDS providers 215 do not need this functionality. Instead, the VSS/VDS providers 215 may communicate with the stub 220 , which then handles the complexities of communicating with the virtual storage simulator 225 across the user mode/kernel mode boundary. - The
stub 220 may also allow communication with a virtual storage simulator on another machine. This may be done, for example, to create a simulated storage device on another server. This may be helpful, for example, in testing a remote backup feature where a backup server imports a shadow copy remotely from another server. - The
virtual storage simulator 225 simulates a storage device. Such storage devices may include actual storage devices that are currently available and non-available storage devices. A non-available storage device may be simulated, for example, to provide a test bed for a proposed or actual standard that is not currently supported by any available device. Although the virtual storage simulator 225 may be structured to simulate any storage device, it will be readily recognized that the virtual storage simulator 225 may be particularly advantageous in simulating expensive storage devices such as SANs. - In an embodiment, the
virtual storage simulator 225 simulates a block level storage device. In a block level storage device, the storage on the storage device is divided into fixed-size blocks that are individually accessible. - The
virtual storage simulator 225 may inform the VSS/VDS provider 215 when a new simulated disk is available. The virtual storage simulator 225 may include an API that allows another program to add, remove, or reconfigure simulated disks. - As part of simulating a physical storage device, the
virtual simulator 225 may allow data to be stored and retrieved. FIG. 3 is a block diagram that generally represents some exemplary components that may be involved when storing and retrieving data in accordance with aspects of the subject matter described herein. The components include a file system and I/O manager 305 , a physical storage driver 310 , a hard disk 315 , and a virtual storage simulator 225 . - The file system and I/
O manager 305 may receive data access requests (e.g., reads and/or writes) from a user mode process (e.g., stub 220 of FIG. 2 ). Based on the storage driver associated with the request, the file system and I/O manager 305 may direct the request to the physical storage driver 310 or the virtual storage simulator 225 . - When the
virtual storage simulator 225 receives a data access request, the virtual storage simulator 225 may access a file on the hard disk 315 by submitting a data access request to the file system and I/O manager 305 and referencing a file on the hard disk 315 . The file system and I/O manager 305 may direct the request from the virtual storage simulator 225 to the physical storage driver 310 which may read or write the data to or from the file on the hard disk 315 . The results of the data access are reported to the virtual storage simulator 225 which may then respond appropriately via the file system and I/O manager 305 to the process that requested the data access. The process that requested the data access need not know that the data it has accessed was on the hard disk 315 . As far as the process is concerned, the data access was handled by a storage device simulated by the virtual storage simulator 225 . - The
virtual storage simulator 225 may also simulate the “mirror” ability of a storage area network or other storage device. Some storage devices mirror the data of a first volume on a second volume. When a write operation is received for the first volume, the write operation is sent to the first volume and the second volume so that the two volumes remain in sync with each other. In such storage configurations, a shadow copy may be created by breaking the mirror and exposing the second volume. Breaking the mirror comprises ceasing to cause writes to the first volume to also be sent to the second volume. At the time of the break, the second volume is a duplicate of the first volume. It will be appreciated that creating a full shadow copy on storage devices having a mirror ability may be a relatively fast operation. - The
virtual storage simulator 225 may simulate the quickness of the mirror ability of a storage device by writing to two or more files for each write request received by the virtual storage simulator 225 . To create a shadow copy at a point in time, the virtual storage simulator 225 may cease writing to one of the files for subsequent write operations. The virtual storage simulator 225 may then expose the shadow copy if desired for access by programs seeking to access the shadow copy. -
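The two ideas above — backing a simulated volume with ordinary files, and simulating a mirror by sending every block write to two of them — can be combined in a sketch like the following; the file layout and API are assumptions for illustration:

```python
import os
import tempfile

BLOCK = 512  # fixed block size of the simulated block device


class MirroredVolume:
    """Simulates a mirrored volume using two ordinary files as backing store.
    Breaking the mirror stops propagating writes to the second file, which
    then serves as a frozen shadow copy."""

    def __init__(self, directory, nblocks):
        self.paths = [os.path.join(directory, n) for n in ("primary.img", "mirror.img")]
        for p in self.paths:
            with open(p, "wb") as f:
                f.write(b"\x00" * BLOCK * nblocks)
        self.mirrored = True

    def write_block(self, i, data):
        # While mirrored, every write lands in both backing files.
        targets = self.paths if self.mirrored else self.paths[:1]
        for p in targets:
            with open(p, "r+b") as f:
                f.seek(i * BLOCK)
                f.write(data)

    def read_block(self, path_index, i):
        with open(self.paths[path_index], "rb") as f:
            f.seek(i * BLOCK)
            return f.read(BLOCK)

    def break_mirror(self):
        self.mirrored = False  # the mirror file is now the shadow copy
        return self.paths[1]


tmp = tempfile.mkdtemp()
mvol = MirroredVolume(tmp, nblocks=8)
mvol.write_block(0, b"A" * BLOCK)
shadow_path = mvol.break_mirror()
mvol.write_block(0, b"B" * BLOCK)
```

After the break, the primary file reflects subsequent writes while the mirror file retains the point-in-time contents.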
FIG. 4 is a block diagram of an exemplary environment in which aspects of the subject matter described herein may be implemented. The environment includes a storage area network 205 and computers 210-215 connected via the network 220 . - The
storage area network 205 may include special hardware or software components that take advantage of the configuration of the storage area network 205 to create shadow copies. Each of the computers 210-215 may include components (e.g., VSSs) for creating shadow copies on the storage area network 205 . When a shadow copy is created, it may be exposed (e.g., through VSS commands) to more computers than just the computer that requested that the shadow copy be created. For the storage area network 205 this poses no special problem as the storage area network 205 may just assign the shadow copy a LUN and expose the LUN to other computers. - In a simulated virtual storage device, however, sharing shadow copies of a simulated storage area network among computers poses additional challenges. For example, if the
storage area network 205 is replaced by a computer having a virtual storage area network and that computer creates a shadow copy of a volume of a simulated storage area network, certain actions may be performed to allow other computers (e.g., computers 210-215) to access the shadow copy. - In particular, to share the shadow copy, in one embodiment, the file representing the shadow copy and metadata associated therewith is copied to a destination computer that seeks to access the shadow copy. The metadata includes information that precisely defines the shadow copy. This information may include the storage ID (e.g., volume, disk, or LUN ID), and information that defines the type of storage device that is being simulated (e.g., the characteristics of a storage area network). Upon receiving the copy of the shadow copy, the destination computer may import the shadow copy using a VSS method. In importing the shadow copy and its associated metadata, the virtual storage simulator may be informed as to the characteristics of the simulated device associated with the shadow copy. The
virtual storage simulator 225 may then simulate the simulated device based on these characteristics. - After the shadow copy and its associated metadata are imported, the destination computer may access the shadow copy through a virtual storage simulator as if the destination computer were accessing the shadow copy on a physical storage device.
- In another embodiment, the file representing the shadow copy may be made available to the destination computer through the use of a file share. Metadata regarding the shadow copy may be stored as part of the file share or may be passed directly to the destination computer. After the shadow copy and its metadata are imported, the virtual storage simulator may then use the share to access the shadow copy. To a test application using VSS on the destination computer, the virtual storage simulator makes it appears that the shadow copy is being accessed on a storage area network.
- Similar actions as described above may also be used to share simulated volumes other than shadow copies. For example, if a destination computer seeks to gain access to a simulated volume of a SAN, the volume and the metadata associated therewith may be copied to the destination computer, or a file share to such information may be provided. The destination computer may then import the volume and its associated metadata so that the virtual storage simulator on the destination computer may be informed as to the characteristics of the volume. After the import, the virtual storage simulator may then simulate the volume appropriately.
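The sharing mechanism described above — copy the file backing the volume or shadow copy together with the metadata that precisely defines it, then import both on the destination so its simulator learns the device's characteristics — can be sketched as follows. All names here (`VolumeMetadata`, `VirtualStorageSimulator`, `share_volume`) are hypothetical illustrations, not APIs from the patent or from VSS.

```python
import json
import shutil
import tempfile
from dataclasses import dataclass, asdict, field
from pathlib import Path

@dataclass
class VolumeMetadata:
    """Information that precisely defines a shared volume or shadow copy."""
    storage_id: str        # e.g., volume, disk, or LUN ID
    device_type: str       # type of storage device being simulated
    characteristics: dict = field(default_factory=dict)  # e.g., SAN traits

class VirtualStorageSimulator:
    """Destination-side simulator that learns a device's characteristics
    from imported metadata and then serves the backing file as if it
    were a physical storage device."""
    def __init__(self):
        self._volumes = {}  # storage_id -> (metadata, backing file path)

    def import_volume(self, backing_file: Path, meta: VolumeMetadata):
        # The import informs the simulator of the simulated device's
        # characteristics so it can simulate the device appropriately.
        self._volumes[meta.storage_id] = (meta, backing_file)

    def read(self, storage_id: str) -> bytes:
        # Access through the simulator, as if on a physical device.
        _meta, backing = self._volumes[storage_id]
        return backing.read_bytes()

def share_volume(src_file: Path, meta: VolumeMetadata,
                 dest_dir: Path, dest_sim: VirtualStorageSimulator) -> Path:
    """Copy the file representing the volume plus its metadata to the
    destination, then import both into the destination's simulator."""
    dest_file = Path(dest_dir) / src_file.name
    shutil.copy(src_file, dest_file)
    dest_file.with_suffix(".meta.json").write_text(json.dumps(asdict(meta)))
    dest_sim.import_volume(dest_file, meta)
    return dest_file
```

The file-share variant would change only `share_volume`: instead of copying, the destination simulator would import the share path and read the backing file in place.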
-
FIG. 5 is a block diagram that generally represents an exemplary testing environment in accordance with aspects of the subject matter described herein. The testing environment may include a test automation engine 505, an application to be tested 510, a storage management API 515, a simulated provider 520, and a simulator simulating a storage device 525. - The application to be tested 510, the
storage management API 515, the simulated provider 520, and the simulator simulating a storage device 525 correspond to the test application 205, the VSS/VDS 210, the VSS/VDS Providers 215, and the virtual storage simulator 225, respectively, of FIG. 2 and provide similar functionality. - The
test automation engine 505 drives the test process and may communicate with the simulator simulating a storage device 525 to create a simulated LUN, storage area network, or other storage device having the desired characteristics. The test automation engine 505 may simulate error conditions by instructing the simulated provider 520 to return an error to the application to be tested 510 or by instructing the virtual storage simulator 525 to return an error to the simulated provider 520. The test automation engine 505 may instruct the application to be tested 510 to attempt to perform any number of storage management and file access operations to test the application. The test automation engine 505 may query the simulator simulating a storage device 525 to determine whether storage management commands issued by the application to be tested 510 have their desired effect. - The
test automation engine 505 may also have components on other machines that test other properties of the simulated storage environment including simulated SAN properties, for example. - If needed, multiple servers may be hosted as virtual servers on a single physical machine. This may allow, for example, the simulated testing of a backup application without the need for multiple physical devices connected over a network.
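Assuming one component per box in FIG. 5, the call chain from the application under test down to the simulator might be wired as below. The class names and the single `create_shadow_copy` operation are invented for illustration; they are not the patent's implementation or the real VSS/VDS interfaces.

```python
class Simulator:
    """Stands in for the simulator simulating a storage device (525)."""
    def execute(self, op: str) -> dict:
        return {"status": "ok", "op": op}

class SimulatedProvider:
    """Stands in for the simulated provider (520). The test automation
    engine can instruct it to return an error to the application."""
    def __init__(self, sim: Simulator):
        self.sim = sim
        self.forced_error = None

    def create_shadow_copy(self) -> dict:
        if self.forced_error is not None:
            # Simulate an error condition once, then clear it.
            code, self.forced_error = self.forced_error, None
            return {"status": "error", "code": code}
        return self.sim.execute("break_mirror")

class StorageManagementAPI:
    """Stands in for the storage management API (515)."""
    def __init__(self, provider: SimulatedProvider):
        self.provider = provider

    def create_shadow_copy(self) -> dict:
        return self.provider.create_shadow_copy()

class ApplicationUnderTest:
    """Stands in for the application to be tested (510)."""
    def __init__(self, api: StorageManagementAPI):
        self.api = api

    def backup(self) -> dict:
        return self.api.create_shadow_copy()

def build_environment():
    """Wire the components together, exposing the provider so a test
    automation engine could inject errors."""
    provider = SimulatedProvider(Simulator())
    return ApplicationUnderTest(StorageManagementAPI(provider)), provider
```

A test automation engine would drive `backup()` repeatedly, setting `provider.forced_error` when it wants to observe how the application handles failures.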
-
FIGS. 6A and 6B are a flow diagram that generally represents exemplary actions that may occur in testing a storage device in accordance with aspects of the subject matter described herein. Turning to FIG. 6A, at block 605, the actions begin. - At
block 610, a virtual storage simulator is configured to simulate a storage device having particular characteristics (e.g., number of LUNs, mirroring capability, shadow copy capabilities, and so forth). For example, referring to FIG. 5, the test automation engine 505 configures the simulator 525 to simulate a storage area network having sophisticated storage management capabilities. - At
block 615, an application is instructed to issue a storage request. The storage request may be a storage management request or a storage access request, for example. Referring to FIG. 5, the test automation engine 505 instructs the application 510 to issue a storage management request to create a shadow copy. - At
block 620, the storage management request is received at an interface (e.g., the interface of VSS). For example, referring to FIG. 5, the application 510 uses a method of the storage management API 515 to request the creation of a shadow copy. - At
block 625, the storage management request is translated to a set of one or more operations suitable for the storage simulated by the virtual storage simulator. For example, referring to FIG. 5, if the storage the simulator 525 is simulating supports breaking a mirror, the simulated provider 520 translates the shadow copy request into a request to break the mirror to create the shadow copy. - At
block 630, a determination is made as to whether an error is to be introduced. If so, the actions continue at block 635; otherwise, the actions continue at block 640. For example, referring to FIG. 5, the test automation engine 505 may instruct the provider 520 to return an error to the application 510. At block 635, the provider is caused to return an error. - At
block 640, the set of operations is sent to the virtual storage simulator. For example, referring to FIG. 5, the provider 520 sends a command to break the mirror to the simulator 525. - At
block 645, the simulator simulates the output of the simulated storage device. For example, referring to FIG. 5, the simulator 525 returns a code that indicates that the mirror was successfully broken and a shadow copy created. - At
block 650, information derived from the output is forwarded to the interface. When a provider uses multiple operations to fulfill a request, it may receive multiple responses from the virtual storage simulator. The application that issued the storage request, however, may expect a single response. To fulfill the expectations of the application, information may be derived from the responses (e.g., all succeeded) to provide a suitable response (e.g., success) to the application. - After
block 650, the actions continue at block 655. At block 655, the derived information is provided to the application. For example, referring to FIG. 5, the storage management API 515 may indicate to the application 510 whether the request succeeded and may also provide additional information regarding the request (e.g., the name of the volume containing the shadow copy). - At
block 660, the correctness of the results of the storage request may be verified. For example, referring to FIG. 5, the test automation engine 505 may query the simulator 525 to determine whether a shadow copy was indeed created or whether an error resulted. - At
block 665, a determination is made as to whether the test or a suite of tests has completed. If so, the actions continue at block 670; otherwise, the actions continue at block 610 or block 615 of FIG. 6A. For example, referring to FIG. 5, if the test automation engine 505 determines that all tests have been completed, the test automation engine 505 may not perform any more tests. Otherwise, the test automation engine 505 may configure the simulator 525 to simulate a storage device having different characteristics (e.g., block 610 of FIG. 6A) and run a suite of tests against the new simulated storage device or may begin another test against a currently simulated storage device (e.g., block 615 of FIG. 6A). - At
block 670, the actions end. The actions described in conjunction with FIGS. 6A and 6B may be repeated to perform additional tests. - In one embodiment, the actions described in conjunction with
FIGS. 6A and 6B are not all-inclusive of all the actions that may be taken in testing a storage device. Furthermore, although the actions are described as occurring in a particular order, in other embodiments, some of the actions may occur in parallel, may be performed with other actions, or may be performed in another order without departing from the spirit or scope of the subject matter described herein. - It will be appreciated that aspects of the subject matter described above allow a storage device to be tested on virtually any computer. Furthermore, by using virtual machines, tests that may need multiple machines (e.g., backing up remote shadow copies), may be performed on a single physical machine.
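The request path of blocks 610 through 655 — translate the management request into device-appropriate operations, simulate each operation, then collapse the per-operation responses into the single response the application expects — might be sketched as follows. Function names and the capability-based translation strategy are assumptions for illustration, not the patent's implementation.

```python
def translate_request(request: str, characteristics: dict) -> list:
    """Block 625: translate a storage management request into a set of
    operations suitable for the simulated storage device."""
    if request == "create_shadow_copy":
        if characteristics.get("mirroring"):
            # A device that supports mirroring creates a shadow copy
            # by breaking a mirror.
            return ["break_mirror"]
        # Hypothetical fallback: a copy-on-write style snapshot.
        return ["allocate_diff_area", "start_copy_on_write"]
    raise ValueError(f"unsupported request: {request}")

def simulate(operations: list) -> list:
    """Blocks 640-645: the simulator returns one response per operation."""
    return [{"status": "ok", "op": op} for op in operations]

def derive_single_response(responses: list) -> dict:
    """Blocks 650-655: derive the single response the application expects
    from the possibly many per-operation responses."""
    if all(r["status"] == "ok" for r in responses):
        return {"status": "ok"}
    # Surface the first failure to the application.
    return next(r for r in responses if r["status"] != "ok")

def run_request(request: str, characteristics: dict) -> dict:
    """One pass through the pipeline for a given simulated device."""
    return derive_single_response(simulate(translate_request(request, characteristics)))
```

A test automation engine would call `run_request` with different `characteristics` per configured device (block 610) and then query the simulator to verify the result (block 660).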
- As can be seen from the foregoing detailed description, aspects have been described related to simulating storage devices. While aspects of the subject matter described herein are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit aspects of the claimed subject matter to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of various aspects of the subject matter described herein.
Claims (19)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/525,690 US7552044B2 (en) | 2006-04-21 | 2006-09-22 | Simulated storage area network |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US79432406P | 2006-04-21 | 2006-04-21 | |
US11/525,690 US7552044B2 (en) | 2006-04-21 | 2006-09-22 | Simulated storage area network |
Publications (2)
Publication Number | Publication Date |
---|---|
US20070250302A1 true US20070250302A1 (en) | 2007-10-25 |
US7552044B2 US7552044B2 (en) | 2009-06-23 |
Family
ID=38620542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/525,690 Expired - Fee Related US7552044B2 (en) | 2006-04-21 | 2006-09-22 | Simulated storage area network |
Country Status (1)
Country | Link |
---|---|
US (1) | US7552044B2 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8924933B2 (en) * | 2008-03-25 | 2014-12-30 | Barclays Capital Inc. | Method and system for automated testing of computer applications |
US8032352B2 (en) * | 2008-05-08 | 2011-10-04 | International Business Machines Corporation | Device, system, and method of storage controller simulating data mirroring |
US8719635B2 (en) | 2012-01-06 | 2014-05-06 | International Business Machines Corporation | Cost effective use of simulated storage in a storage subsystem test environment |
US10248705B2 (en) | 2015-01-30 | 2019-04-02 | Dropbox, Inc. | Storage constrained synchronization of shared content items |
US9413824B1 (en) * | 2015-01-30 | 2016-08-09 | Dropbox, Inc. | Storage constrained synchronization of content items based on predicted user access to shared content items using retention scoring |
US9563638B2 (en) | 2015-01-30 | 2017-02-07 | Dropbox, Inc. | Selective downloading of shared content items in a constrained synchronization system |
US9185164B1 (en) | 2015-01-30 | 2015-11-10 | Dropbox, Inc. | Idle state triggered constrained synchronization of shared content items |
US9361349B1 (en) | 2015-01-30 | 2016-06-07 | Dropbox, Inc. | Storage constrained synchronization of shared content items |
US10831715B2 (en) | 2015-01-30 | 2020-11-10 | Dropbox, Inc. | Selective downloading of shared content items in a constrained synchronization system |
US10719532B2 (en) | 2016-04-25 | 2020-07-21 | Dropbox, Inc. | Storage constrained synchronization engine |
US10049145B2 (en) | 2016-04-25 | 2018-08-14 | Dropbox, Inc. | Storage constrained synchronization engine |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5604889A (en) * | 1994-06-15 | 1997-02-18 | Texas Instruments Incorporated | Memory management system for checkpointed logic simulator with increased locality of data |
US5615335A (en) * | 1994-11-10 | 1997-03-25 | Emc Corporation | Storage system self-test apparatus and method |
US5752005A (en) * | 1996-01-22 | 1998-05-12 | Microtest, Inc. | Foreign file system establishing method which uses a native file system virtual device driver |
US5978576A (en) * | 1995-12-22 | 1999-11-02 | Ncr Corporation | Computer performance modeling system and method |
US20050154576A1 (en) * | 2004-01-09 | 2005-07-14 | Hitachi, Ltd. | Policy simulator for analyzing autonomic system management policy of a computer system |
US7051092B2 (en) * | 1999-12-30 | 2006-05-23 | International Business Machines Corporation | Request scheduler for automated software configuration |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060020623A1 (en) * | 2003-04-10 | 2006-01-26 | Fujitsu Limited | Relation management control program, device, and system |
US8380823B2 (en) * | 2003-04-10 | 2013-02-19 | Fujitsu Limited | Storage medium storing relation management control program, device, and system |
US7941621B1 (en) * | 2007-03-30 | 2011-05-10 | Symantec Corporation | Automatically mounting and unmounting a volume during a backup or restore operation |
US20090190454A1 (en) * | 2008-01-25 | 2009-07-30 | Kunihito Onoue | Information apparatus |
US8341468B2 (en) * | 2008-01-25 | 2012-12-25 | Fujitsu Limited | Information apparatus |
US20100268689A1 (en) * | 2009-04-15 | 2010-10-21 | Gates Matthew S | Providing information relating to usage of a simulated snapshot |
US9559903B2 (en) | 2010-09-30 | 2017-01-31 | Axcient, Inc. | Cloud-based virtual machines and offices |
US9213607B2 (en) | 2010-09-30 | 2015-12-15 | Axcient, Inc. | Systems, methods, and media for synthesizing views of file system backups |
US10284437B2 (en) | 2010-09-30 | 2019-05-07 | Efolder, Inc. | Cloud-based virtual machines and offices |
US9235474B1 (en) * | 2011-02-17 | 2016-01-12 | Axcient, Inc. | Systems and methods for maintaining a virtual failover volume of a target computing system |
US10402393B2 (en) * | 2012-03-02 | 2019-09-03 | Pure Storage, Inc. | Slice migration in a dispersed storage network |
US11934380B2 (en) | 2012-03-02 | 2024-03-19 | Pure Storage, Inc. | Migrating slices in a storage network |
US11232093B2 (en) | 2012-03-02 | 2022-01-25 | Pure Storage, Inc. | Slice migration in a dispersed storage network |
US9785647B1 (en) | 2012-10-02 | 2017-10-10 | Axcient, Inc. | File system virtualization |
US9852140B1 (en) | 2012-11-07 | 2017-12-26 | Axcient, Inc. | Efficient file replication |
US11169714B1 (en) | 2012-11-07 | 2021-11-09 | Efolder, Inc. | Efficient file replication |
US9292153B1 (en) | 2013-03-07 | 2016-03-22 | Axcient, Inc. | Systems and methods for providing efficient and focused visualization of data |
US9397907B1 (en) | 2013-03-07 | 2016-07-19 | Axcient, Inc. | Protection status determinations for computing devices |
US9998344B2 (en) | 2013-03-07 | 2018-06-12 | Efolder, Inc. | Protection status determinations for computing devices |
US10003646B1 (en) | 2013-03-07 | 2018-06-19 | Efolder, Inc. | Protection status determinations for computing devices |
US9665385B1 (en) * | 2013-03-14 | 2017-05-30 | EMC IP Holding Company LLC | Method and apparatus for simulation storage shelves |
US10599533B2 (en) | 2013-05-07 | 2020-03-24 | Efolder, Inc. | Cloud storage using merkle trees |
US9705730B1 (en) | 2013-05-07 | 2017-07-11 | Axcient, Inc. | Cloud storage using Merkle trees |
US20150370671A1 (en) * | 2014-06-23 | 2015-12-24 | International Business Machines Corporation | Test virtual volumes for test environments |
US9558087B2 (en) * | 2014-06-23 | 2017-01-31 | International Business Machines Corporation | Test virtual volumes for test environments |
US10002039B2 (en) * | 2015-10-29 | 2018-06-19 | At&T Intellectual Property I, L.P. | Predicting the reliability of large scale storage systems |
US20170123883A1 (en) * | 2015-10-29 | 2017-05-04 | At&T Intellectual Property I, L.P. | Predicting the reliability of large scale storage systems |
Also Published As
Publication number | Publication date |
---|---|
US7552044B2 (en) | 2009-06-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7552044B2 (en) | Simulated storage area network | |
US10296423B2 (en) | System and method for live virtual incremental restoring of data from cloud storage | |
CN104407938B (en) | A kind of a variety of granularity restoration methods after virtual machine image level backup | |
US9400611B1 (en) | Data migration in cluster environment using host copy and changed block tracking | |
US8417796B2 (en) | System and method for transferring a computing environment between computers of dissimilar configurations | |
US8532973B1 (en) | Operating a storage server on a virtual machine | |
US8122212B2 (en) | Method and apparatus for logical volume management for virtual machine environment | |
US20120144110A1 (en) | Methods and structure for storage migration using storage array managed server agents | |
US8037026B1 (en) | Protected user-controllable volume snapshots | |
US8327096B2 (en) | Method and system for efficient image customization for mass deployment | |
US20100280998A1 (en) | Metadata for data storage array | |
KR20140018316A (en) | Virtual disk storage techniques | |
KR20140023944A (en) | Virtual disk storage techniques | |
US20170262307A1 (en) | Method and apparatus for conversion of virtual machine formats utilizing deduplication metadata | |
JP2009276969A (en) | Storage system and method for managing storage system using management device | |
CN104216801B (en) | The data copy method and system of a kind of Virtual environment | |
JP2007334878A (en) | Long-term data archiving system and method | |
US7406578B2 (en) | Method, apparatus and program storage device for providing virtual disk service (VDS) hints based storage | |
US5794013A (en) | System and method for testing computer components in development environments | |
US20040254777A1 (en) | Method, apparatus and computer program product for simulating a storage configuration for a computer system | |
US7478026B1 (en) | Application programming interface simulator for a data storage system | |
CN1834912A (en) | ISCSI bootstrap driving system and method for expandable internet engine | |
US8898444B1 (en) | Techniques for providing a first computer system access to storage devices indirectly through a second computer system | |
US20080154574A1 (en) | Application emulation on a non-production computer system | |
Barrett et al. | In press |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PILLAI, AVINASH G.;LI, HUI;TRUNLEY, PAUL;AND OTHERS;REEL/FRAME:018339/0425 Effective date: 20060921 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
REMI | Maintenance fee reminder mailed | ||
LAPS | Lapse for failure to pay maintenance fees | ||
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20130623 |
|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0509 Effective date: 20141014 |