US20070233704A1 - Data migration method - Google Patents
- Publication number: US20070233704A1
- Application number: US 11/759,524
- Authority: US (United States)
- Prior art keywords: storage device, target, name, management, computer
- Prior art date
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04L 67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes (under H04L 67/00, network arrangements or protocols for supporting network services or applications)
- H04L 67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Definitions
- the terminals 6 are computers used by end users or the like for using services provided by the hosts 4 .
- the name management device 5 is a computer for unified management of combinations of an iSCSI name, an IP address, and a TCP port number of each of the hosts 4 and the storage devices.
- the name management device 5 , the hosts 4 , and the storage devices are connected to each other via an IP-SAN 13 , which is an IP network.
- the terminals 6 are connected to the hosts 4 via a LAN 14 , which is an IP network.
- the management terminal 2 is connected to the storage devices and the name management device 5 via a management network 15 .
- the control device 107 comprises a volatile memory (referred to hereinbelow as a “main memory”) 101 ; a communication line 102 , such as a bus; a central processing unit (referred to hereinbelow as a “CPU”) 104 ; an IO interface (referred to hereinbelow as an “IO IF”) 105 , which constitutes an interface for connecting the control device 107 and the communication line 106 ; a network interface (referred to hereinbelow as a “NIF”) 108 for connecting the control device 107 and the communication line 10 ; a management NIF 109 for connecting the control device 107 and the communication line 12 ; and an iSCSI processing device 110 for disassembling and assembling iSCSI packets.
- the NIF 108 and the management NIF 109 have one or more physical ports.
- FIG. 6 ( d ) shows an example of a data structure of the LU table 124 .
- the LU table 124 has the same number of records as the LUs managed by the migration source storage device 1 .
- Each record of the LU table 124 has an entry 1241 in which an iSCSI name of a target is registered and an entry 1242 in which an LUN is registered, which is an identifier for identifying the LU allocated to the target.
- when “null” is registered in an entry 1241 of a record of the LU table 124 , it shows that the LU corresponding to the record is not allocated to any target.
- Each record of the iSCSI node table 521 has an entry 5211 in which an iSCSI name of the iSCSI node corresponding to the record is registered; an entry 5212 in which a node type is registered, which is a character string for discriminating as to whether the iSCSI node corresponding to the record is an initiator or a target; an entry 5213 and an entry 5214 in which, respectively, an IP address and a TCP port number allocated to the iSCSI node corresponding to the record are registered; and an entry 5215 in which a change notification flag showing whether or not the iSCSI node corresponding to the record requested change notification is registered.
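The record layout described above can be modeled compactly. The following is a minimal Python sketch of the iSCSI node table, not the patent's implementation; the iqn-style names and addresses are hypothetical examples.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ISCSINodeRecord:
    """One record of the iSCSI node table 521 (entries 5211-5215), modeled illustratively."""
    iscsi_name: str            # entry 5211: iSCSI name of the node
    node_type: str             # entry 5212: "initiator" or "target"
    ip_address: str            # entry 5213: IP address allocated to the node
    tcp_port: int              # entry 5214: TCP port number allocated to the node
    change_notification: bool  # entry 5215: whether the node requested change notification

def find_node(table: List[ISCSINodeRecord], iscsi_name: str) -> Optional[ISCSINodeRecord]:
    """Look up a node record by its iSCSI name; None when no record matches."""
    for record in table:
        if record.iscsi_name == iscsi_name:
            return record
    return None

# Hypothetical table contents for illustration only.
node_table = [
    ISCSINodeRecord("iqn.2004-09.example:host-initiator", "initiator", "192.0.2.10", 40000, True),
    ISCSINodeRecord("iqn.2004-09.example:first-target", "target", "192.0.2.20", 3260, False),
]

match = find_node(node_table, "iqn.2004-09.example:first-target")
```

A lookup of this kind is what lets a name management device resolve a logical iSCSI name to its current IP address and TCP port.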
- FIG. 7 ( d ) shows an example of the data structure of the domain table 522 .
- the domain table 522 has the same number of records as combinations of a discovery domain registered in the name management device 5 and iSCSI nodes belonging to the discovery domain.
- Each record of the domain table 522 has an entry 5221 in which a domain ID, which is an identifier for identifying a discovery domain, is registered and an entry 5222 in which an iSCSI name of an iSCSI node belonging to the discovery domain is registered.
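Because the domain table holds one record per (domain, member) combination, membership queries in either direction are simple scans. A minimal sketch, with hypothetical domain IDs and iSCSI names:

```python
# Each record pairs a domain ID (entry 5221) with one member iSCSI name (entry 5222).
domain_table = [
    ("DD1", "iqn.2004-09.example:host-initiator"),
    ("DD1", "iqn.2004-09.example:first-target"),
    ("DD2", "iqn.2004-09.example:backup-target"),
]

def members_of(domain_table, domain_id):
    """Return the iSCSI names belonging to one discovery domain."""
    return [name for did, name in domain_table if did == domain_id]

def domains_of(domain_table, iscsi_name):
    """Return the discovery domains an iSCSI node belongs to."""
    return [did for did, name in domain_table if name == iscsi_name]
```

In a discovery-domain scheme, an initiator would typically be allowed to discover only targets that share a domain with it, which is what `members_of` would feed.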
- the system administrator or the like conducts operations according to the following sequence.
- the system administrator or the like designates the table initialization processing to the management terminal 2 .
- the system administrator or the like registers the information relating to the name management device 5 and the migration source storage device 1 in the management terminal 2 by using the name management device management screen 820 and the storage device management screen 800 (the above-mentioned communication sequence will be explained with reference to FIG. 12 ).
- FIG. 12 shows an example of a communication sequence relating to a case where the system administrator or the like designates the table initialization processing to the management terminal 2 and registers the information relating to the name management device 5 and the migration source storage device 1 in the management terminal 2 .
- the CPU 204 of the management terminal 2 sends an initialization request to the name management device 5 via the management NIF 209 ( 1204 ).
- the destination IP address of the initialization request is assumed to be the contents inputted into the area 823 .
- the system administrator or the like uses the pointing device 206 or the character input device 207 and designates the display of the storage device management screen 800 to the management terminal 2 .
- the CPU 204 of the management terminal 2 which has received the designation, executes the GUI control program 211 and carries out the storage device management screen display processing ( 1207 ).
- the CPU 204 of the management terminal 2 displays the storage device management screen 800 on the display 205 , reads all of the records from the storage device table 221 , and displays the contents of each record in the area 812 .
- the system administrator uses the character input device 207 and the pointing device 206 and designates the management terminal 2 to display the migration management screen 1100 .
- the CPU 204 of the management terminal 2 that received this designation executes the GUI control program 211 and conducts the migration management screen display processing ( 1601 ).
- the CPU 204 of the management terminal 2 reads the entries 2211 (device ID) of all of the records of the storage device table 221 , creates a list of device IDs of the storage devices according to the results obtained and makes it possible to display the list of device IDs of the storage devices when the button 1104 or the button 1111 is specified by the system administrator or the like. Duplication of a device ID in the list of device IDs of the storage devices is avoided.
- the iSCSI processing device 310 of the migration destination storage device 3 conducts the login processing ( 1713 ). In this login processing, the iSCSI processing device 310 fetches the iSCSI name of the first initiator and the iSCSI name of the first target from the received iSCSI login request, verifies whether the combination of the iSCSI name of the initiator and the iSCSI name of the target is correct, authenticates the initiator, and conducts the negotiation of various parameters.
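The login steps above (fetch both iSCSI names, verify the combination, authenticate the initiator) can be sketched as follows. This is an illustrative stand-in, not the device's actual logic; the allow-list, secret store, and names are hypothetical, and real iSCSI logins would negotiate parameters and use CHAP-style challenges rather than plain secrets.

```python
# Hypothetical allow-list: which initiator iSCSI names may log in to which targets.
allowed_pairs = {
    ("iqn.2004-09.example:host-initiator", "iqn.2004-09.example:first-target"),
}

# Hypothetical per-initiator authentication secrets.
initiator_secrets = {"iqn.2004-09.example:host-initiator": "s3cret"}

def process_login(initiator_name, target_name, secret):
    """Mimic the login checks: verify the initiator/target pairing, then authenticate."""
    if (initiator_name, target_name) not in allowed_pairs:
        return "reject: unknown initiator/target combination"
    if initiator_secrets.get(initiator_name) != secret:
        return "reject: authentication failed"
    return "accept"
```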
- the LU that is the access object for the host 4 is changed from the first LU to the second LU managed by the storage device.
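Combining the LU table semantics above (a “null” entry 1241 means the LU is allocated to no target) with the access-object change, switching the host from a first LU to a second LU amounts to reallocating the target name between rows. A minimal sketch under assumed names; this is not the patent's implementation:

```python
# Each row: entry 1241 (target iSCSI name, or None for "null") and entry 1242 (LUN).
lu_table = {
    "LU1": {"target": "iqn.2004-09.example:first-target", "lun": 0},
    "LU2": {"target": None, "lun": 0},  # "null": not allocated to any target
}

def switch_access_object(lu_table, target_name, old_lu, new_lu):
    """Reallocate a target from one LU to another, so the host's access
    object changes while the target identifier stays the same."""
    lun = lu_table[old_lu]["lun"]
    lu_table[old_lu]["target"] = None         # the old LU becomes unallocated
    lu_table[new_lu]["target"] = target_name  # the new LU now backs the target
    lu_table[new_lu]["lun"] = lun

switch_access_object(lu_table, "iqn.2004-09.example:first-target", "LU1", "LU2")
```

Because only the table rows change, the host keeps addressing the same target name and LUN throughout.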
Abstract
Data migration arrangements for storage systems are disclosed.
Description
- This is a continuation of U.S. application Ser. No. 10/980,196, filed Nov. 4, 2004. This application relates to and claims priority from Japanese Patent Application No. 2004-274338, filed on Sep. 22, 2004. The entirety of the contents and subject matter of all of the above is incorporated herein by reference.
- The present invention relates to a system comprising a storage device and a computer that are connected via a network.
- A system in which at least one storage device is connected to a plurality of hosts via a network has come into wide use in recent years in place of a system in which the storage device is directly connected to the computer (also referred to hereinbelow as a “host”). The connection of storage devices using a network is called a storage area network (referred to hereinbelow as a “SAN”).
- A SAN has heretofore been constructed by using fibre channel (referred to hereinbelow as “FC”) technology. A SAN constructed by using FC technology will be referred to hereinbelow as an FC-SAN. A host and a storage device connected to an FC-SAN operate to conduct data write/read operations to/from a storage device by sending and receiving a SCSI (Small Computer System Interface) command or data by means of FCP (Fibre Channel Protocol).
- On the other hand, an IP-SAN, which is a SAN constructed by using an IP (Internet Protocol) network has recently attracted much attention. When a host and a storage device communicate via an IP-SAN, the iSCSI protocol, which is a protocol in which SCSI commands or data are capsulated with TCP/IP (Transmission Control Protocol/Internet Protocol), is mainly used.
- Among devices conducting communication via the above-mentioned FC-SAN or IP-SAN, those devices that send commands requesting data write/read operation or write data (either physically or logically) are called initiators. On the other hand, devices that receive a write command or data from an initiator and write the data into a storage device, such as a hard disk drive, or that receive a read command from an initiator, read data from a storage device and send this data to the initiator (either physically or logically) are called targets. Further, an initiator and a target are together referred to as a node. Usually, a host serves as an initiator and a storage device serves as a target. However, when data replication is conducted between storage devices, a storage device that stores original data serves as an initiator, and the other storage device that stores a replica of the data serves as a target.
- In a FCP, an initiator and a target are distinguished by a WWN (World Wide Name), which is an address assigned to a physical port of an HBA (Host Bus Adapter) or a physical port of a storage device. Here, an HBA is a device attached to the host for conducting communication processing using the FCP. A WWN is an address inherent to a physical port and cannot be changed by a system administrator or the like.
- On the other hand, in the iSCSI protocol, an initiator and a target are logically distinguished with an identifier called an iSCSI name. An iSCSI name is a character string that is not tied to any physical port and can be changed by a system administrator or the like. Therefore, for example, an iSCSI name assigned to a certain storage device can be reassigned to another storage device.
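The contrast with a WWN can be made concrete with a toy registry that maps a logical iSCSI name to whichever device and port currently serves it. The device and port names here are hypothetical illustrations:

```python
# Toy registry: iSCSI name -> (device, physical port) currently serving that name.
registry = {"iqn.2004-09.example:first-target": ("storage-A", "port-0")}

def reassign(registry, iscsi_name, device, port):
    """Point a logical iSCSI name at another device's port. A WWN, being
    fixed to its physical port, would not permit this kind of reassignment."""
    registry[iscsi_name] = (device, port)

reassign(registry, "iqn.2004-09.example:first-target", "storage-B", "port-3")
```

It is exactly this reassignability that the migration method exploits: the identifier the host addresses never changes, only what stands behind it.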
- A process of replacing a storage device connected to a SAN such as an aforementioned FC-SAN or IP-SAN with another storage device due to insufficient capacity or functionality is called a migration of storage devices. Migration also includes a change of a storage device storing data according to changes in the data read/write frequency or importance.
- For example, a method is described in Japanese Patent Application No. 2003-108315 as a technology for rapidly conducting migration using an FC-SAN.
- As described hereinabove, in the FCP, a target is distinguished by a WWN assigned to a physical port of a storage device. The WWN cannot be changed by a system administrator or the like. Therefore, when the system administrator conducts migration of a storage device, a configuration change, such as a change of a WWN of an access destination target has to be conducted with respect to a host. Generally, host reboot is necessary to conduct this configuration change effectively. For this reason, applications running in the host have to be temporarily interrupted.
- However, in systems based on uninterruptible operations, such as online shopping systems, interruption of application leads to significant damage. Therefore, a technology is required for implementing migration of storage devices, without interrupting the applications.
- Furthermore, in systems in which data stored in a storage device is replicated into another storage device disposed at the same site or at a remote location, as a measure against large-scale disasters or equipment failure, a configuration change, such as a change of a WWN of a replication destination target, has to be conducted with respect to the storage device where the original data is stored, in order to conduct migration of the storage device where the replica of the data is stored. The operation load on a system administrator or the like conducting this configuration change increases with an increase in the number of targets storing the data, which constitutes the object of replication. Therefore, a technology is required, which allows the migration of the storage device where the replica of the data is stored to be conducted without changing the configuration of the storage device where the original data is stored. The “site” as referred to hereinbelow is a location or building where the devices are disposed.
- Furthermore, in systems in which data stored in a logical unit (referred to hereinbelow as “LU”) of a storage device is replicated to another LU of the same storage device, as a measure against data destruction caused by operation errors of end users, a configuration change, such as a change of a WWN of an access destination target, has to be conducted with respect to a host in order to change the host access destination from the LU where the original data is stored to the LU where the replicated data is stored, when the original data is destroyed. In order to conduct this configuration change effectively, applications running on the host have to be temporarily interrupted, similar to the case of storage device migration. Therefore, a technology is required for changing the LU accessed by the host, without interrupting the applications. Further, the LU is a logical storage area composed of the physical storage areas of a storage device.
- In order to satisfy the above-described requirement, the following embodiment is suggested as an aspect of the present invention. More specifically, in a system comprising an initiator and a target, a device (referred to hereinbelow as “first device”) having the target designates the creation of a target, having an identifier identical to the identifier assigned to its own target, to another device (referred to hereinbelow as “second device”). Then, the initiator establishes a communication path to the target created in the second device by using the identifier identical to that used in the communication path established with the first device. Then, the first device disconnects the communication path used for communication between its own target and the initiator. Then, the initiator maintains the communication with the target with the same identifier by using the communication path established with the second device.
- Here, the initiator may be a computer or a storage device. Furthermore, the first device and second device may be the same or different storage devices.
- Furthermore, a configuration may be also considered in which, prior to designating the creation of the target to the second device, the first device replicates data stored in its own target to the second device, and the second device creates a target so as to correspond to the replicated data.
- Further, a configuration may be also employed in which a name management device is added to the system, and association of the identifier assigned to the initiator or the target with a physical port or a storage area owned by each device is managed by the name management device. In this case, a configuration is assumed in which the second device registers information of the newly created target in the name management device, and the initiator receives the information of the newly created target from the name management device.
- The identifier assigned to the target may be an iSCSI name and the iSCSI name may be associated with the physical port and the storage area of each device. Other configurations will be made clear from the following disclosure of various embodiments.
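The configurations above (second device registers the newly created target in the name management device; the initiator learns of it from there) can be sketched end to end. This is a minimal simulation under assumed names, not the disclosed implementation; a real name management device would also handle deregistration and per-node notification filtering.

```python
class NameManager:
    """Toy stand-in for the name management device: maps an iSCSI name to
    the (device, port) serving it and notifies listeners of changes."""
    def __init__(self):
        self.targets = {}
        self.listeners = []

    def register(self, iscsi_name, device, port):
        self.targets[iscsi_name] = (device, port)
        for notify in self.listeners:   # change notification to initiators
            notify(iscsi_name, device, port)

log = []

def host_on_change(iscsi_name, device, port):
    # The initiator reacts by establishing a communication path to the
    # same identifier at its newly registered location.
    log.append(f"host: path to {iscsi_name} via {device}/{port}")

nm = NameManager()
nm.listeners.append(host_on_change)

# 1. The first device's target is registered; the host connects to it.
nm.register("iqn.2004-09.example:first-target", "migration-source", "port-0")
# 2. The second device creates a target with the identical identifier and registers it.
nm.register("iqn.2004-09.example:first-target", "migration-dest", "port-1")
# 3. The first device then withdraws its path; the host keeps using the same name.
current = nm.targets["iqn.2004-09.example:first-target"]
```

The key property shown is that the host's view (one iSCSI name) never changes across steps 1-3, so no host reconfiguration or reboot is needed.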
- FIG. 1 is a block diagram which shows an example of the system configuration of the first embodiment of the present invention;
- FIG. 2 is a diagram which shows an example of a migration source storage device;
- FIG. 3 is a diagram which shows an example of a migration destination storage device;
- FIG. 4 is a diagram which shows an example of a management terminal;
- FIG. 5(a) is a diagram which shows an example of a host and FIG. 5(b) is a diagram which shows a name management device;
- FIGS. 6(a) to 6(d) are diagrams which show an example of the data structure of a name management device table, a port table, a target table, and an LU table, respectively;
- FIGS. 7(a) to 7(d) are diagrams which show an example of the data structure of a storage device table, a name management device table, an iSCSI node table, and a domain table, respectively;
- FIG. 8(a) is a diagram which shows an example of a display of a storage management screen and FIG. 8(b) is a diagram which shows an example of a name management device management screen;
- FIG. 9(a) is a diagram which shows an example of a display of a domain management screen and FIG. 9(b) is a diagram which shows an example of a port management screen;
- FIG. 10 is a diagram which shows an example of a display of a target management screen;
- FIG. 11 is a diagram which shows an example of a display of a migration management screen;
- FIG. 12 is a sequence diagram which shows a communication sequence relating to table initialization processing of the management terminal and registration processing of information, relating to the name management device and the migration source storage device, as applied to the management terminal in the first embodiment;
- FIG. 13 is a sequence diagram which shows a communication sequence relating to registration processing of information of a discovery domain applied to the name management device and registration processing of information relating to a physical port and a target as applied to the migration source storage device in the first embodiment;
- FIG. 14 is a sequence diagram which shows a communication sequence relating to initiator activation on the host in the first embodiment;
- FIG. 15 is a sequence diagram which shows a communication sequence example relating to registration processing of information relating to the migration destination storage device applied to the management terminal and the registration processing of information relating to a physical port owned by the migration destination storage device to the migration destination storage device in the first embodiment;
- FIG. 16 is a sequence diagram which shows a first communication sequence relating to migration processing in the first embodiment;
- FIG. 17 is a sequence diagram which shows a second communication sequence relating to the migration processing in the first embodiment;
- FIG. 18 is a sequence diagram which shows a third communication sequence relating to the migration processing in the first embodiment;
- FIG. 19 is a block diagram which shows an example of the system configuration of a second embodiment of the present invention;
- FIG. 20 is a diagram which shows an example of a master storage device;
- FIG. 21 is a diagram which shows an example of a display of a target replication management screen;
- FIG. 22 is a diagram which shows an example of the system configuration of a third embodiment of the present invention;
- FIG. 23 is a diagram which shows an example of a remote name management device;
- FIG. 24 is a diagram which shows an example of the data structure of a domain replication table;
- FIG. 25 is a diagram which shows an example of a display of a domain replication management screen;
- FIG. 26 is a diagram which shows an example of the system configuration of a fourth embodiment of the present invention;
- FIG. 27 is a diagram which shows an example of a storage device; and
- FIG. 28(a) is a diagram which shows an example of a display of an LU replication management screen and FIG. 28(b) is a diagram which shows an example of an inside-storage migration management screen.
- Various embodiments will be described below with reference to the appended drawings. In the drawings, identical components are identified by identical reference symbols. However, the present invention is not limited to the disclosed embodiments, and various application examples agreeing with the idea of the present invention correspond to the present invention. Further, each structural element may be used as a single element or as a plurality of elements.
- The first embodiment relates to a system in which first and second storage devices and a host accessing the first storage device are connected to a network. In the present embodiment, migration is executed from the first storage device to the second storage device. The first and second storage devices will be referred to hereinbelow as a migration source storage device and a migration destination storage device, respectively.
- FIG. 1 shows an example of the system configuration of the present embodiment. The system of the present embodiment, as described hereinabove, has a migration source storage device 1, a migration destination storage device 3, a management terminal 2 to be used by a system administrator or the like to control the configuration of the migration source storage device 1 and the migration destination storage device 3, hosts 4 connected to the migration source and migration destination storage devices via a network, terminals 6 connected to the hosts 4, and a name management device 5. In order to facilitate an explanation of this embodiment, the migration source storage device 1 and the migration destination storage device 3 will be collectively called storage devices.
- The storage devices communicate with the hosts 4 by using the iSCSI protocol. Furthermore, the storage devices are storage device systems having at least one storage device (for example, a hard disk drive or the like). The storage devices have a plurality of LUs.
- The terminals 6 are computers used by end users or the like for using services provided by the hosts 4. The name management device 5 is a computer for unified management of combinations of an iSCSI name, an IP address, and a TCP port number of each of the hosts 4 and the storage devices. The name management device 5, the hosts 4, and the storage devices are connected to each other via an IP-SAN 13, which is an IP network. Further, the terminals 6 are connected to the hosts 4 via a LAN 14, which is an IP network. Furthermore, the management terminal 2 is connected to the storage devices and the name management device 5 via a management network 15.
- The storage devices, the hosts 4, and the name management device 5 are connected to the IP-SAN 13 using communication lines 10, such as UTP (Unshielded Twisted Pair) cables or optical fibre cables. Furthermore, the hosts 4 and the terminals 6 are connected to the LAN 14 using communication lines 11. Moreover, the storage devices, the management terminal 2, and the name management device 5 are connected to the management network 15 using communication lines 12.
- When devices, such as the storage devices, and the IP networks, such as the IP-SAN 13 and the LAN 14, are connected by using wireless communication technology, the communication lines 10 and 11 are unnecessary. Furthermore, in the present embodiment, an example is shown in which the IP-SAN 13 and the LAN 14 are separate from each other, but a configuration is possible in which the IP-SAN 13 also serves as the LAN 14. In this case, the system construction cost is reduced, but an inherent problem is that packets used for communication between the storage devices and the hosts 4 and packets used for communication between the hosts 4 and the terminals 6 are mixed in one network and the network becomes congested. The configuration of the present embodiment is preferred from the standpoint of resolving this problem.
- Furthermore, in the present embodiment, an example is shown in which the IP-SAN 13 and the management network 15 are separate from each other, but a configuration is possible in which the IP-SAN 13 also serves as the management network 15. In this case, the system construction cost is reduced, but when a network apparatus constituting the IP-SAN 13 fails, operations for management of the storage devices from the management terminal 2 become impossible. As a result, the range of impact during failure is large. The configuration of the present embodiment is preferred from the standpoint of resolving this problem.
- An overview of the operation of the present embodiment will be described below briefly. In the present embodiment, first, a first storage device (migration source storage device 1) for managing a target (a first target; its allocated identifier (iSCSI name) is assumed to be “first-target”) with an allocated first physical port and first logical volume and a second storage device (migration destination storage device 3) comprising a second physical port and managing a second logical volume are prepared. Then, a computer (host 4) to use the first target establishes a first communication path with the first physical port and accesses the first target by using this communication path.
- In this state, the system administrator conducts a migration of the storage devices, that is, replicates data of the migration source storage device 1 to the migration destination storage device 3 and starts the operation of the migration destination storage device 3. At this time, the migration source storage device 1, after the data replication has been completed, designates, to the migration destination storage device 3, the creation of a target (the second logical volume having the replica of the data stored therein and the second physical port are allocated to this target) to which has been assigned an identifier identical to the identifier “first-target” assigned to the first target.
- Upon completion of the creation of the target, the host 4 establishes a second communication path with the second physical port allocated to the created target (because the iSCSI name is the same as that of the first target, this target is recognized as a target identical to the first target by the host 4). Then, the migration source storage device 1 notifies the host 4 of the deletion of the first physical port.
- The host 4, which has received this notification, stops using the first communication path using the first physical port and then accesses the target (the target configured in the migration destination storage device 3) by using the second communication path using the second physical port.
- As a result, the storage device used in the system can be migrated while the host 4 accesses the same target (because the target identifier is not changed, it is not necessary to reboot the host 4).
- In another embodiment, a case will be explained in which a target that is a migration destination is in another site or in the same storage device.
-
FIG. 2 shows the configuration of the migrationsource storage device 1. The migrationsource storage device 1 is a storage device system having at least one storage device. The storage device includes a device using nonvolatile storage media, such as a hard disk drive or DVD. In a storage device system, a RAID (Redundant Array of Independent Disks) configuration may be employed. The migrationsource storage device 1 comprises a storage device (referred to hereinbelow as a “disk device”) 103, acontrol device 107 for controlling data write or read operation to/from thedisk device 103, and acommunication line 106 connecting thecontrol device 107 anddisk device 103. - The
control device 107 comprises a volatile memory (referred to hereinbelow as a “main memory”) 101; acommunication line 102, such as a bus; a central processing unit (referred to hereinbelow as a “CPU”) 104; an IO interface (referred to hereinbelow as a “IO IF”) 105, which constitutes an interface for connecting thecontrol device 107 and thecommunication line 106; a network interface (referred to hereinbelow as a “NIF”) 108 for connecting thecontrol device 107 and thecommunication line 10; amanagement NIF 109 for connecting thecontrol device 107 and thecommunication line 12; and aniSCSI processing device 110 for disassembling and assembling iSCSI packets. TheNIF 108 and themanagement NIF 109 have one or more physical ports. - The
main memory 101 has acache area 111 for storing data read out from thedisk device 103 or data received from thehost 4 or the like; a migration source storagedevice control program 112 that is executed by theCPU 104 when migration from the migrationsource storage device 1 to the migrationdestination storage device 3 is executed; aname change program 113 that is executed by theCPU 104 when iSCSI names, IP addresses, and TCP port numbers of targets are registered or deregistered in thename management device 5; and asynchronous replication program 114 that is executed by theCPU 104 when synchronous replication is executed. - The
main memory 101 also stores a name management device table 121 for storing information relating to thename management device 5 that is connected to the IP-SAN 13; a port table 122 for storing information relating to the physical ports of the migrationsource storage device 1; a target table 123 for storing information relating to targets managed by the migrationsource storage device 1; and an LU table 124 for storing information relating to the LUs managed by the migrationsource storage device 1. Further, as will be described hereinabove, an LU is a logical storage area composed of physical storage areas of thedisk device 103. The LU may be composed of storage areas of onedisk device 103, or it may be defined as an assembly of individual storage areas of a plurality ofdisk devices 103. -
FIG. 3 shows the configuration of the migrationdestination storage device 3. The migrationdestination storage device 3 is also a storage device system having at least one storage device. The migrationdestination storage device 3, similar to the migrationsource storage device 1, has adisk device 303, acontrol device 307, and acommunication line 306. Further, thecontrol device 307, similar to thecontrol device 107 of the migrationsource storage device 1, has amain memory 301, acommunication line 302, aCPU 304, anIO IF 305, aNIF 308, amanagement NIF 309, and aniSCSI processing device 310. TheNIF 308 and themanagement NIF 309 have one or more physical ports. - The
main memory 301 has acache area 311 for storing data read out from thedisk device 303 or data received from thehost 4 or the like; a migration destination storagedevice control program 312 that is executed by theCPU 304 when migration from the migrationsource storage device 1 to the migrationdestination storage device 3 is executed; and aname change program 313 that is executed by theCPU 304 when iSCSI names, IP addresses, and TCP port numbers of the targets are registered or deregistered in thename management device 5. - Further, the
main memory 301, similar to the main memory 101 of the migration source storage device 1, also stores a name management device table 321, a port table 322, a target table 323, and an LU table 324. - Further, in the present embodiment, each table is assumed to be stored in the main memory of the storage devices; but, in order to prevent the information stored in each table from being lost even in the case of a failure of the storage devices, the information stored in each table may be copied to the disk device. - Further, in the present embodiment, it is assumed that disassembling or assembling of iSCSI packets is conducted by hardware, such as the iSCSI processing device, but it may instead be executed by the CPU 104 or the CPU 304 according to the contents of an iSCSI processing program. However, because the storage devices have to process a large volume of iSCSI packets, the configuration of the present embodiment, which has a higher processing capacity, is preferred. -
FIG. 4 shows an example of the management terminal 2. The management terminal 2 is a computer having a main memory 201; a communication line 202; a disk device 203; a CPU 204; an output device (referred to hereinbelow as a “display”) 205, such as a display device; a pointing device 206, such as a mouse; a character input device 207, such as a keyboard; and a management NIF 209. The main memory 201 stores a GUI control program 211 that is executed by the CPU 204 when a graphical user interface is provided to the system administrator. Further, the main memory 201 also stores a storage device table 221 for storing information of the storage devices connected to the IP-SAN 13 and a name management device table 222 for storing information relating to the name management device 5. -
FIG. 5 (a) shows an example of the host 4. The host 4 is a computer having a main memory 401, a communication line 402, a disk device 403, a CPU 404, a display 405, a pointing device 406, a character input device 407, a NIF 408, and a NIF 409. The main memory 401 stores an iSCSI processing program 411 that is executed by the CPU 404 when disassembling or assembling of iSCSI packets is conducted, and a name operation program 412 that is executed by the CPU 404 when an initiator iSCSI name, an IP address, and a TCP port number are registered in the name management device 5 or deregistered therefrom, and when an inquiry is sent to the name management device 5 or an inquiry response or change notification is received from the name management device 5. Further, the main memory 401 stores a buffer area 421 to which the contents of disk accesses are temporarily saved. - Further, in the present embodiment, the
CPU 404 is assumed to execute disassembling or assembling of iSCSI packets according to the contents of the iSCSI processing program 411; but, in order to increase the processing speed, disassembling or assembling of iSCSI packets may be processed by hardware, similar to the migration source storage device 1 and the like. -
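To make concrete what "assembling or disassembling" a packet means when done in software, the following sketch packs and unpacks a deliberately simplified, hypothetical header (NOT the real iSCSI PDU layout): a 1-byte opcode, a 3-byte data-segment length, and a 4-byte task tag, followed by the data segment.

```python
import struct

def assemble(opcode, tag, data):
    # Build the header in front of the data segment: opcode (1 byte),
    # big-endian data length (3 bytes), task tag (4 bytes).
    header = bytes([opcode]) + len(data).to_bytes(3, "big") + struct.pack(">I", tag)
    return header + data

def disassemble(packet):
    # Reverse the layout above to recover the fields and the data segment.
    opcode = packet[0]
    length = int.from_bytes(packet[1:4], "big")
    (tag,) = struct.unpack(">I", packet[4:8])
    return opcode, tag, packet[8:8 + length]

pkt = assemble(0x01, 0xCAFE, b"SCSI payload")
print(disassemble(pkt))
```

Whether this work is done by a dedicated processing device or by the CPU running a program, as the text discusses, changes only where the byte shuffling happens, not its logic.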
FIG. 5 (b) shows an example of the name management device 5. The name management device 5 is a computer having a main memory 501, a communication line 502, a disk device 503, a CPU 504, a display 505, a pointing device 506, a character input device 507, a NIF 508, and a management NIF 509. The main memory 501 stores a domain management program 511 that is executed by the CPU 504 when a request is received from another device, such as the storage devices, and the domain table 522 is changed; an iSCSI node management program 512 that is executed by the CPU 504 when a request is received from another device, such as the storage devices, and the iSCSI node table 521 is changed or read; and a change notification program 513 for notifying another device, such as the host 4, that the iSCSI node table 521 was changed. Furthermore, the main memory 501 also stores the iSCSI node table 521, which stores the association relationship of iSCSI nodes with IP addresses and TCP port numbers, and the domain table 522, which stores an association relationship of the iSCSI nodes and discovery domains. - Further, the above-described programs are stored in advance in the disk devices or main memory of each device by reading from a portable storage medium or by downloading via a network from another computer. When necessary, those programs are transferred into the main memory and executed by the CPU.
- The role of the
name management device 5 will be explained below. - An initiator has to perform login to a target prior to starting an exchange of SCSI commands or data with the target via the IP-
SAN 13. When the initiator performs login to the target by using iSCSI, information consisting of an iSCSI name, an IP address, and a TCP port number of the target is required. The process by which the initiator acquires this target information is called discovery. However, the operation of configuring iSCSI names, IP addresses, and TCP port numbers for all targets in each host 4 operating as an initiator places a very heavy burden on the system administrator. For this reason, the iSCSI protocol stipulates methods by which an initiator conducts discovery without the target information being configured in advance for each initiator. One such method involves connecting the name management device 5 to the IP-SAN 13. - The
name management device 5 is a device for managing combinations of an iSCSI name, an IP address, and a TCP port number of each node in the iSCSI network (referred to hereinbelow as an “iSCSI node”). Thus, in each node, a logical iSCSI name is associated with an IP address and a TCP port number of a physical port. The iSNSP (Internet Storage Name Service Protocol) or SLP (Service Location Protocol) is used as a communication protocol between the name management device 5 and iSCSI nodes. Further, in the present embodiment, it is assumed that the name management device 5 uses iSNSP for communication with other devices, but a system in which the name management device 5 uses another protocol, such as SLP, is also possible. - Further, the
name management device 5 also manages information called a discovery domain to limit targets that can be objects of discovery by an initiator. A discovery domain is information indicating an association of an initiator and a target to which the initiator can perform login. - Furthermore, the
name management device 5 notifies the iSCSI nodes belonging to the same discovery domain that a change has occurred when information relating to an iSCSI node is registered or deregistered. An SCN (State Change Notification) is used for this notification. - The operation procedure relating to discovery using the
name management device 5 will be explained below. First, one of the storage devices or the host 4, after being activated, transmits the iSCSI name, the IP address, and the TCP port number of a node that it manages to the name management device 5 via the IP-SAN 13, and registers this information in the name management device 5. Then, the host 4 inquires of the name management device 5, via the IP-SAN 13, about the iSCSI names, IP addresses, and TCP port numbers of the targets to which the host 4 itself can perform login, and acquires this information. Thus, the name management device 5 can substantially reduce the configuration operations performed by the system administrator on the host 4 through unified management of the combinations of an iSCSI name, an IP address, and a TCP port number. - Further, the
terminal 6 is a computer for general applications and has a CPU, a main memory, an I/O device, and a network interface, which is an interface for connecting to other devices via a communication line 11. - The data structure of each table stored in the
main memory 101 of the migration source storage device 1 will be described below. The name management device table 121, the port table 122, the target table 123, and the LU table 124 form an array structure and can store at least one record. However, the data structure is not limited to the array structure. -
FIG. 6 (a) shows an example of the data structure of the name management device table 121. The name management device table 121 has the same number of records as the name management devices 5 connected to the IP-SAN 13. Each record of the name management device table 121 has an entry 1211, in which a device ID is registered, which is an identifier for identifying the name management device 5 corresponding to the record, and an entry 1212, in which an IP address allocated to the NIF 508 of the aforementioned name management device 5 is registered. In the present embodiment, one name management device 5 is assumed to be connected to the IP-SAN 13. Therefore, one record is stored in the name management device table 121. However, in the case of a system where a plurality of name management devices 5 are connected to the IP-SAN 13 in order to, for example, provide redundancy of the name management device 5, the name management device table 121 stores a plurality of records. -
FIG. 6 (b) shows an example of the data structure of the port table 122. The port table 122 has the same number of records as the physical ports of the migration source storage device 1. Each record of the port table 122 has an entry 1221, in which a port ID is registered, which is an identifier for identifying the physical port corresponding to the record; an entry 1222, in which an IP address allocated to the physical port corresponding to the record is registered; an entry 1223, in which a subnet mask of the subnet to which the IP address belongs is registered; and an entry 1224, in which an IP address of a default gateway of the subnet is registered. In the present embodiment, when “0.0.0.0” is registered in each of the entry 1222, the entry 1223, and the entry 1224 of a record of the port table 122, it shows that an IP address, a subnet mask, and an IP address of a default gateway have not been registered for the physical port corresponding to the record. -
FIG. 6 (c) shows an example of the data structure of the target table 123. The target table 123 has the same number of records as combinations of a target managed by the migration source storage device 1 and a physical port allocated to the target. Each record of the target table 123 has an entry 1231, in which an iSCSI name of a target is registered; an entry 1232, in which a port ID of a physical port allocated to the target is registered; and an entry 1233, in which a TCP port number used by the target is registered. -
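Together, the port table and the target table let the storage device resolve a target's login address: the target record supplies the port ID and TCP port, and the matching port record supplies the IP address. The sketch below models this join; the table contents and the `login_address` helper are illustrative assumptions, not part of the patent.

```python
# Simplified in-memory counterparts of the port table 122 and target table 123.
port_table = [
    {"port_id": 1, "ip": "172.16.0.1", "netmask": "255.255.0.0", "gateway": "172.16.0.254"},
    {"port_id": 2, "ip": "172.16.0.2", "netmask": "255.255.0.0", "gateway": "172.16.0.254"},
]
target_table = [
    {"iscsi_name": "iqn.2004-06.com.hitachi:tar01", "port_id": 1, "tcp_port": 3260},
    {"iscsi_name": "iqn.2004-06.com.hitachi:tar02", "port_id": 2, "tcp_port": 3260},
]

def login_address(target_name):
    # Join target record -> port record to obtain the (IP, TCP port) pair
    # an initiator would use to log in to this target.
    target = next(t for t in target_table if t["iscsi_name"] == target_name)
    port = next(p for p in port_table if p["port_id"] == target["port_id"])
    return (port["ip"], target["tcp_port"])

print(login_address("iqn.2004-06.com.hitachi:tar02"))
```

This is exactly the triple (iSCSI name, IP address, TCP port number) that the name management device 5 manages per iSCSI node.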
FIG. 6 (d) shows an example of the data structure of the LU table 124. The LU table 124 has the same number of records as the LUs managed by the migration source storage device 1. Each record of the LU table 124 has an entry 1241, in which an iSCSI name of a target is registered, and an entry 1242, in which an LUN is registered, which is an identifier for identifying the LU allocated to the target. In the present embodiment, when “null” is registered in an entry 1241 of a record of the LU table 124, it shows that the LU corresponding to the record is not allocated to any target. - Further, the data structures of the name management device table 321, the port table 322, the target table 323, and the LU table 324, which are stored in the
main memory 301 of the migration destination storage device 3, are identical to the data structures of the name management device table 121, the port table 122, the target table 123, and the LU table 124, respectively. - The data structure of each table stored in the
disk device 203 of the management terminal 2 will be explained below. The storage device table 221 and the name management device table 222 form an array structure and can store at least one record. However, the data structure is not limited to the array structure. -
FIG. 7 (a) shows an example of the data structure of the storage device table 221. The storage device table 221 has the same number of records as the storage devices connected to the IP-SAN 13. Each record of the storage device table 221 has an entry 2211, in which a device ID is registered, which is an identifier for identifying the storage device corresponding to the record, and an entry 2212, in which an IP address allocated to a management NIF of the storage device corresponding to the record is registered. An IP address allocated to a management NIF will be called hereinbelow a management IP address. -
FIG. 7 (b) shows an example of the data structure of the name management device table 222. The name management device table 222 has the same number of records as the name management devices 5 connected to the IP-SAN 13. Each record of the name management device table 222 has an entry 2221, in which a device ID of the name management device 5 corresponding to the record is registered; an entry 2222, in which an IP address allocated to the NIF 508 of the name management device 5 corresponding to the record is registered; and an entry 2223, in which a management IP address allocated to the management NIF 509 of the name management device 5 corresponding to the record is registered. As described hereinabove, in the present embodiment, one name management device 5 is assumed to be connected to the IP-SAN 13. Therefore, one record is stored in the name management device table 222. - The data structure of each table stored in the
disk device 503 of the name management device 5 will be described below. The iSCSI node table 521 and the domain table 522 have an array structure and can store at least one record. However, the data structure is not limited to the array structure. -
FIG. 7 (c) shows an example of the data structure of the iSCSI node table 521. The iSCSI node table 521 has the same number of records as combinations of an iSCSI node managed by the migration source storage device 1, the migration destination storage device 3, or the host 4 and the IP address and TCP port number allocated to the iSCSI node. Each record of the iSCSI node table 521 has an entry 5211, in which an iSCSI name of the iSCSI node corresponding to the record is registered; an entry 5212, in which a node type is registered, which is a character string for discriminating whether the iSCSI node corresponding to the record is an initiator or a target; an entry 5213 and an entry 5214, in which, respectively, an IP address and a TCP port number allocated to the iSCSI node corresponding to the record are registered; and an entry 5215, in which a change notification flag is registered, showing whether or not the iSCSI node corresponding to the record has requested change notification. - In the present embodiment, when the designation “initiator” is registered in an
entry 5212 of a record of the iSCSI node table 521, it shows that the iSCSI node corresponding to the record is an initiator; and, when the designation “target” is registered in the entry 5212, it shows that the iSCSI node corresponding to the record is a target. Furthermore, in the present embodiment, when the designation “null” is registered in an entry 5214 of a record of the iSCSI node table 521, it shows that the TCP port number that will be used by the iSCSI node corresponding to the record has not been determined. Further, in the present embodiment, when “0” is registered in an entry 5215 of a record of the iSCSI node table 521, it shows that the iSCSI node corresponding to the record has not requested a change notification, and, when “1” is registered in the entry 5215, it shows that the iSCSI node has requested a change notification. -
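The record semantics just described can be modeled directly: the node type distinguishes initiators from targets, "null" becomes `None` for an undetermined TCP port, and the change notification flag selects which nodes should receive an SCN when the table changes. Record contents and helper names below are illustrative assumptions.

```python
# Simplified records of the iSCSI node table 521:
# (iSCSI name, node type, IP address, TCP port, change notification flag)
iscsi_node_table = [
    ("iqn.1999-08.com.abc:host01", "initiator", "172.16.0.128", None, 1),
    ("iqn.2004-06.com.hitachi:tar01", "target", "172.16.0.1", 3260, 0),
]

def scn_recipients(table):
    """Nodes whose change notification flag is 1, i.e. SCN recipients."""
    return [name for (name, _type, _ip, _port, flag) in table if flag == 1]

print(scn_recipients(iscsi_node_table))
```

In this sketch only the initiator asked for notification, so a change to the table would be reported to it alone.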
FIG. 7 (d) shows an example of the data structure of the domain table 522. The domain table 522 has the same number of records as combinations of a discovery domain registered in the name management device 5 and the iSCSI nodes belonging to the discovery domain. Each record of the domain table 522 has an entry 5221, in which a domain ID, which is an identifier for identifying a discovery domain, is registered, and an entry 5222, in which an iSCSI name of an iSCSI node belonging to the discovery domain is registered. - The graphical user interfaces (referred to hereinbelow as “GUI”s) of the present embodiment will be explained below. The GUIs are displayed on the
display 205 when the CPU 204 of the management terminal 2 executes the GUI control program 211. The system administrator configures each parameter of the displayed GUIs by using the character input device 207 and the pointing device 206. Further, the management terminal 2 may provide the system administrator with command line interfaces having functions identical to those of the GUIs, instead of the GUIs designated in connection with the present embodiment. -
FIG. 8 (a) shows a display example of a storage management screen 800 used by the system administrator or the like to register in or delete from the management terminal 2 information of the storage device connected to the IP-SAN 13. The storage management screen 800 has an area 801 to which the device ID of the storage device is inputted; an area 802 to which a management IP address of the storage device is inputted; a button 810 that is used when the information inputted into the area 801 and the area 802 is registered in the management terminal 2; a button 811 that is used when the information of the storage device specified by using an area 812 is deleted from the management terminal 2; the area 812 for displaying information of all of the storage devices that have already been registered in the management terminal 2; a button 813 and a button 815 that are used when the display range of the area 812 is moved up and down, respectively, by one line; a button 814 that is used when the display range of the area 812 is moved to any position; and a button 819 that is used when the storage management screen 800 is closed. -
FIG. 8 (b) shows an example of a name management device management screen 820 used by the system administrator or the like to register in the management terminal 2 information of the name management device 5 that is connected to the IP-SAN 13. The name management device management screen 820 has an area 821 to which the device ID of the name management device 5 is inputted; an area 822 to which the IP address allocated to the NIF 508 of the name management device 5 is inputted; an area 823 to which the management IP address of the name management device 5 is inputted; a button 828 that is used when the information inputted into the areas from the area 821 to the area 823 is registered in the management terminal 2; and a button 829 that is used when registration of the information relating to the name management device 5 is canceled. -
FIG. 9 (a) shows an example of a domain management screen 900 used by the system administrator or the like to register in or delete from the name management device 5 information of a discovery domain. The domain management screen 900 has an area 901 to which the domain ID of the discovery domain is inputted; an area 902 to which an iSCSI name of an iSCSI node, which belongs to the discovery domain, is inputted; a button 910 that is used when the information inputted into the area 901 and the area 902 is registered in the name management device 5; a button 911 that is used when the information of the discovery domain specified by using an area 912 is deleted from the name management device 5; the area 912 for displaying the information of all the discovery domains that have already been registered in the name management device 5; a button 913 and a button 915 that are used when the display range of the area 912 is moved up and down, respectively, by one line; a button 914 that is used when the display range of the area 912 is moved to any position; and a button 919 that is used when the domain management screen 900 is closed. -
FIG. 9 (b) shows a display example of a port management screen 920 used by the system administrator or the like to register in or delete from one of the storage devices information of a physical port owned by the storage device. The port management screen 920 has a button 922 that is used when the device ID of the storage device having the physical port that will be registered is selected from a list; an area 921 for displaying the device ID selected by using the button 922; an area 923 to which the port ID of the physical port is inputted; an area 924 to which an IP address allocated to the physical port is inputted; an area 925 to which a subnet mask of the subnet to which the physical port is connected is inputted; an area 926 to which an IP address of a default gateway of the subnet is inputted; a button 930 that is used when the information inputted into the areas from the area 923 to the area 926 is registered in the storage device having the device ID selected by using the button 922; a button 931 that is used when the information of the physical port specified by using the area 932 is deleted from the storage device; an area 932 for displaying the information of all of the physical ports of all of the storage devices connected to the IP-SAN 13; a button 933 and a button 935 that are used when the display range of the area 932 is moved up and down, respectively, by one line; a button 934 that is used when the display range of the area 932 is moved to any position; and a button 939 that is used when the port management screen 920 is closed. -
FIG. 10 shows an example of a target management screen 1000 used by the system administrator or the like for registering in or deleting from one of the storage devices information of a target operating in the storage device. The target management screen 1000 has a button 1002 that is used when the device ID of the storage device managing the target to be registered is selected from a list; an area 1001 for displaying the device ID selected by using the button 1002; an area 1003 to which an iSCSI name of the target is inputted; an area 1004 to which a port ID of a physical port allocated to the target is inputted; an area 1005 to which a TCP port number that is used by the target is inputted; an area 1006 to which an LUN of an LU allocated to the target is inputted; a button 1010 that is used when the information inputted into the areas from the area 1003 to the area 1006 is registered in the storage device having the device ID selected by the button 1002; a button 1011 that is used when the information of the target specified by using the area 1012 is deleted from the storage device; an area 1012 for displaying the information of all of the targets that have already been registered in all of the storage devices connected to the IP-SAN 13; a button 1013 and a button 1015 that are used when the display range of the area 1012 is moved up and down, respectively, by one line; a button 1014 that is used when the display range of the area 1012 is moved to any position; and a button 1019 that is used when the target management screen 1000 is closed. -
FIG. 11 shows an example of a migration management screen 1100 used by the system administrator or the like when migration of storage devices is conducted for each target. The migration management screen 1100 has an area 1101 to which information relating to the migration source storage device 1 is inputted; an area 1102 to which information relating to the migration destination storage device 3 is inputted; a button 1128 that is used when the start of the migration processing is designated to the migration source storage device 1 according to the information inputted into the area 1101 and the area 1102; and a button 1129 that is used when the migration processing is canceled. Further, the area 1101 is composed of a button 1104 that is used when the device ID of the migration source storage device 1 is selected from a list; an area 1103 for displaying the device ID selected by using the button 1104; an area 1105 to which an iSCSI name of a target, which is the migration object, is inputted; an area 1106 to which an iSCSI name of an initiator is inputted, which is used by the migration source storage device 1 when migration of data into the migration destination storage device 3 is conducted by using synchronous replication; and an area 1107 to which a port ID of a physical port is inputted, which is used by the migration source storage device 1 for the data migration using synchronous replication. -
Furthermore, the area 1102 is composed of a button 1111 that is used when the device ID of the migration destination storage device 3 is selected from a list; an area 1110 for displaying the device ID selected by using the button 1111; an area 1112 to which an iSCSI name of a target is inputted, which is used by the migration destination storage device 3 during the data migration using synchronous replication; an area 1113 to which a port ID of a physical port allocated to the target is inputted; an area 1114 to which a TCP port number used by the target is inputted; an area 1115 to which a port ID of a physical port allocated to the target is inputted, which will be migrated from the migration source storage device 1 to the migration destination storage device 3; an area 1116 to which a TCP port number that is used by the target is inputted, which will be migrated from the migration source storage device 1 to the migration destination storage device 3; and an area 1117 to which an LUN of an LU allocated to the target is inputted, which will be migrated from the migration source storage device 1 to the migration destination storage device 3. - The communication sequence and the operation procedure in the present embodiment will be explained hereinbelow.
- In the present embodiment, the system administrator or the like conducts operations according to the following sequence. First, the system administrator or the like designates the table initialization processing to the
management terminal 2. Then, the system administrator or the like registers the information relating to the name management device 5 and the migration source storage device 1 in the management terminal 2 by using the name management device management screen 820 and the storage device management screen 800 (the above-mentioned communication sequence will be explained with reference to FIG. 12 ). - The system administrator or the like then uses the
domain management screen 900 and registers, in the name management device 5, the information of a discovery domain to which an initiator managed by the host 4 and a target managed by the migration source storage device 1 belong. Then, the system administrator or the like uses the port management screen 920 and the target management screen 1000 and registers, in the migration source storage device 1, the information relating to physical ports and targets, respectively, of the migration source storage device 1 (the above-described communication sequence will be explained with reference to FIG. 13 ). Then, the system administrator or the like performs configuration for accessing the target managed by the migration source storage device 1 in the host 4 and activates the initiator (the above-described communication sequence will be explained with reference to FIG. 14 ). - Then, the system administrator or the like conducts operations necessary for migrating the target managed by the migration
source storage device 1 to the migration destination storage device 3. First, the system administrator or the like uses the storage device management screen 800 and registers the information relating to the migration destination storage device 3 in the management terminal 2, and then the system administrator or the like uses the port management screen 920 and registers the information relating to physical ports of the migration destination storage device 3 in the migration destination storage device 3 (the above-described communication sequence will be explained with reference to FIG. 15 ). Further, the system administrator or the like uses the migration management screen 1100 and designates the start of the migration processing to the migration source storage device 1 (the above-described communication sequence will be explained with reference to FIG. 16 to FIG. 18 ). - In the explanation of the communication sequence and the operation procedure provided hereinbelow, the following examples of parameters will be used.
- First, in the present embodiment, the IP-
SAN 13 is assumed to be composed of one subnet, and the network address, the subnet mask, and the IP address of the default gateway of this subnet will be “172.16.0.0”, “255.255.0.0”, and “172.16.0.254”, respectively. On the other hand, the management network is assumed to be composed of one subnet, and the network address and the subnet mask of this subnet will be “192.168.0.0” and “255.255.255.0”, respectively. - Furthermore, in the present embodiment, the device ID and the management IP address of the migration
source storage device 1 are assumed to be “STR01” and “192.168.0.1”, respectively. The migration source storage device 1 is assumed to have two physical ports. The respective physical ports will be referred to hereinbelow as the first physical port and the second physical port. The IP addresses “172.16.0.1” and “172.16.0.2” will be allocated to the first physical port and the second physical port, respectively. Further, the migration source storage device 1 is assumed to manage two LUs. Those LUs will be referred to hereinbelow as the first LU and the second LU. Further, the migration source storage device 1 is assumed to manage two targets. The first target is assumed to have an iSCSI name “iqn.2004-06.com.hitachi:tar01” and to be allocated the first physical port and the first LU. The second target is assumed to have an iSCSI name “iqn.2004-06.com.hitachi:tar02” and to be allocated the second physical port and the second LU. Both targets are assumed to use “3260”, which is the well-known port, as the TCP port number. - On the other hand, the device ID and the management IP address of the migration
destination storage device 3 are assumed to be “STR02” and “192.168.0.2”, respectively. The migration destination storage device 3 is assumed to have two physical ports. The respective physical ports will be referred to hereinbelow as the third physical port and the fourth physical port. The IP addresses “172.16.0.3” and “172.16.0.4” are assumed to be allocated to the third physical port and the fourth physical port, respectively. Further, the migration destination storage device 3 is assumed to manage two LUs. Those LUs will be referred to hereinbelow as the third LU and the fourth LU, respectively. The capacity of the third LU and the fourth LU is assumed to be identical to that of the first LU and the second LU, respectively. - Further, in the present embodiment, the device ID, the IP address of the
NIF 508, and the management IP address of the name management device 5 are assumed to be “NM01”, “172.16.0.253”, and “192.168.0.253”, respectively. - Further, in the present embodiment, the IP address of the
NIF 408 of the host 4 is assumed to be “172.16.0.128”. The initiator managed by the host 4 will be called the first initiator, and it is assumed to have an iSCSI name “iqn.1999-08.com.abc:host01” and to communicate by the iSCSI protocol with the first target managed by the migration source storage device 1. A discovery domain, which has a domain ID “DD01” and to which the first initiator and the first target belong, is assumed to be registered in the name management device 5 so that the first initiator will be capable of discovery of the first target. -
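The example parameters above can be used to sketch how the domain table limits discovery: the first initiator shares discovery domain "DD01" with the first target only, so a discovery inquiry returns the first target's login information but not the second target's. This is an illustrative model of the table lookup, not the iSNSP message exchange itself; the helper name is an assumption.

```python
# Simplified iSCSI node table 521 entries built from the example parameters:
# (iSCSI name, node type, IP address, TCP port)
iscsi_node_table = [
    ("iqn.1999-08.com.abc:host01", "initiator", "172.16.0.128", None),
    ("iqn.2004-06.com.hitachi:tar01", "target", "172.16.0.1", 3260),
    ("iqn.2004-06.com.hitachi:tar02", "target", "172.16.0.2", 3260),
]
# Domain table 522: (domain ID, iSCSI name) pairs; only DD01 is registered.
domain_table = [("DD01", "iqn.1999-08.com.abc:host01"),
                ("DD01", "iqn.2004-06.com.hitachi:tar01")]

def discover(initiator_name):
    # Targets discoverable by the initiator: members of any discovery domain
    # that the initiator itself belongs to.
    domains = {d for d, n in domain_table if n == initiator_name}
    members = {n for d, n in domain_table if d in domains}
    return [(n, ip, port) for n, t, ip, port in iscsi_node_table
            if t == "target" and n in members]

print(discover("iqn.1999-08.com.abc:host01"))
```

As intended by the embodiment, the second target does not appear in the result because it belongs to no discovery domain shared with the first initiator.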
FIG. 12 shows an example of a communication sequence relating to a case where the system administrator or the like designates the table initialization processing to the management terminal 2 and registers the information relating to the name management device 5 and the migration source storage device 1 in the management terminal 2. - First, the system administrator or the like uses the
pointing device 206 or the character input device 207 and designates table initialization to the management terminal 2. The CPU 204 of the management terminal 2, which has received the designation, executes the GUI control program 211 and conducts the table initialization processing (1201). In this table initialization processing, the CPU 204 of the management terminal 2 sets the storage device table 221 and the name management device table 222 into a state in which no records are present. After the table initialization processing has been completed, the CPU 204 of the management terminal 2 displays, on the display 205, a screen showing the completion of the table initialization. - If the system administrator or the like then uses the
pointing device 206 or the character input device 207 and designates display of the name management device management screen 820 to the management terminal 2, the CPU 204 of the management terminal 2 executes the GUI control program 211 and carries out the name management device management screen display processing (1202). In this name management device management screen display processing, the CPU 204 of the management terminal 2 displays the name management device management screen 820 on the display 205. - Then, the system administrator or the like inputs the information relating to the
name management device 5 into the areas from the area 821 to the area 823 of the name management device management screen 820. In the present embodiment, “NM01”, which is the device ID of the name management device 5, “172.16.0.253”, which is the IP address of the NIF 508 of the name management device 5, and “192.168.0.253”, which is the management IP address of the name management device 5, are inputted into the area 821, the area 822, and the area 823, respectively. - If the system administrator or the like then actuates the
button 828, the CPU 204 of the management terminal 2 executes the GUI control program 211 and conducts the name management device addition processing (1203). In the name management device addition processing, first, the CPU 204 of the management terminal 2 adds a record to the name management device table 222. Here, the contents inputted into the area 821, the area 822, and the area 823 are respectively registered in the entry 2221 (device ID), the entry 2222 (IP address), and the entry 2223 (management IP address) of the record which is added. - Then, the
CPU 204 of the management terminal 2 sends an initialization request to the name management device 5 via the management NIF 209 (1204). The destination IP address of the initialization request is assumed to be the contents inputted into the area 823. - If the initialization request is received, the
CPU 504 of the name management device 5 executes the domain management program 511 and the iSCSI node management program 512 and conducts the table initialization processing (1205). In this table initialization processing, first, the CPU 504 of the name management device 5 executes the domain management program 511 and sets the domain table 522 into a state in which no records are present. Then, the CPU 504 of the name management device 5 executes the iSCSI node management program 512 and sets the iSCSI node table 521 into a state in which no records are present. - If the above-described table initialization processing is completed, the
CPU 504 of the name management device 5 composes an initialization response indicating that the initialization was completed successfully and sends the response to the management terminal 2 via the management NIF 509 (1206). If the initialization response is received, the CPU 204 of the management terminal 2 displays, on the display 205, the screen showing that the registration of the name management device has been completed. - Then, the system administrator or the like uses the
pointing device 206 or the character input device 207 and designates the display of the storage device management screen 800 to the management terminal 2. The CPU 204 of the management terminal 2, which has received the designation, executes the GUI control program 211 and carries out the storage device management screen display processing (1207). In the storage device management screen display processing, the CPU 204 of the management terminal 2 displays the storage device management screen 800 on the display 205, reads all of the records from the storage device table 221, and displays the contents of each record in the area 812. - Then, the system administrator or the like inputs the information relating to the migration
source storage device 1 into the area 801 and the area 802 of the storage device management screen 800. In the present embodiment, “STR01”, which is the device ID of the migration source storage device 1, and “192.168.0.1”, which is the management IP address of the migration source storage device 1, are inputted into the area 801 and the area 802, respectively. - If the system administrator or the like then specifies the
button 810, the CPU 204 of the management terminal 2 executes the GUI control program 211 and carries out the storage device addition processing (1208). In the storage device addition processing, first, the CPU 204 of the management terminal 2 adds a record to the storage device table 221. Here, the contents inputted into the area 801 and the area 802 are respectively registered in the entry 2211 (device ID) and the entry 2212 (management IP address) of the record which is added. - Then, the
CPU 204 of the management terminal 2 composes an initialization request, including the contents of the entry 2221 (device ID) and the entry 2222 (IP address) of the first record of the name management device table 222, and sends this request to the migration source storage device 1 via the management NIF 209 (1209). The destination IP address of the initialization request is assumed to be the contents inputted into the area 802. - If the initialization request is received, the
CPU 104 of the migration source storage device 1 executes the migration source storage device control program 112 and conducts the table initialization processing (1210). In this table initialization processing, first, the CPU 104 of the migration source storage device 1 sets the name management device table 121, the port table 122, the target table 123, and the LU table 124 into a state in which no records are present. Then, the CPU 104 of the migration source storage device 1 fetches the contents of the entry 2221 (device ID) and the entry 2222 (IP address) from the initialization request and adds a record to the name management device table 121. The contents of the entry 2221 (device ID) and the entry 2222 (IP address) fetched from the initialization request are registered respectively in the entry 1211 (device ID) and the entry 1212 (IP address) of the record which is added. - Then, the
CPU 104 of the migration source storage device 1 allocates port IDs to all of the physical ports of the migration source storage device 1 and adds, to the port table 122, records in each of which the allocated port ID is registered in the entry 1221 (port ID) and “0.0.0.0” is registered in the entry 1222 (IP address), the entry 1223 (subnet mask), and the entry 1224 (gateway). In the present embodiment, the CPU 104 of the migration source storage device 1 is assumed to sequentially allocate integers, starting from “1”, as a port ID to each physical port. In the present embodiment, port IDs “1” and “2” are allocated to the first physical port and the second physical port, respectively, of the migration source storage device 1. - Furthermore, the
CPU 104 of the migration source storage device 1 allocates LUNs to all the LUs managed by the migration source storage device 1 and adds, to the LU table 124, records in each of which “null” is registered in the entry 1241 (target) and the allocated LUN is registered in the entry 1242 (LUN). In the present embodiment, the CPU 104 of the migration source storage device 1 is assumed to sequentially allocate integers, starting from “0”, as an LUN to each LU. In the present embodiment, the LUNs “0” and “1” are allocated, respectively, to the first and the second LU of the migration source storage device 1. - After the above-described table initialization processing has been completed, the
CPU 104 of the migration source storage device 1 composes an initialization response showing that the initialization was completed successfully and sends the response to the management terminal 2 via the management NIF 109 (1211). If the initialization response is received from the migration source storage device 1, the CPU 204 of the management terminal 2 adds to the area 812 a line composed of the contents inputted into the area 801 and the area 802. -
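The table initialization of operation 1210 can be sketched as follows, assuming a simplified list-of-dicts model of the tables (the field names are illustrative, not from the embodiment). The sketch reproduces the numbering rules stated above: port IDs allocated sequentially from “1”, LUNs from “0”, and all port addresses initialized to “0.0.0.0”.

```python
# Sketch of table initialization on the migration source storage device.
# Table layouts are illustrative assumptions; numbering follows the text.
def initialize_tables(nm_device_id, nm_ip, num_ports, num_lus):
    # Name management device table 121: one record from the request contents.
    name_mgmt_table = [{"device_id": nm_device_id, "ip": nm_ip}]
    # Port table 122: port IDs count up from 1, addresses start as "0.0.0.0".
    port_table = [{"port_id": i, "ip": "0.0.0.0", "netmask": "0.0.0.0",
                   "gateway": "0.0.0.0"} for i in range(1, num_ports + 1)]
    # LU table 124: LUNs count up from 0, targets unbound ("null").
    lu_table = [{"target": None, "lun": i} for i in range(num_lus)]
    return name_mgmt_table, port_table, lu_table

nm, ports, lus = initialize_tables("NM01", "172.16.0.253", num_ports=2, num_lus=2)
print([p["port_id"] for p in ports])  # [1, 2]
print([l["lun"] for l in lus])        # [0, 1]
```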
FIG. 13 shows an example of a communication sequence relating to the case where the system administrator or the like registers, in the name management device 5, the information of the discovery domain to which the first initiator and the first target belong and registers, in the migration source storage device 1, the information relating to the physical ports and the targets. Further, in the present embodiment, the system administrator or the like carries out the discovery domain registration in the order of the first target and the first initiator, but the registration may be conducted in the reverse order. Further, in the present embodiment, the system administrator registers the information relating to physical ports in the order of the first physical port and the second physical port and registers the information relating to targets in the order of the first target and the second target, but the registration also may be conducted in the reverse order. - First, the system administrator or the like uses the
pointing device 206 or the character input device 207 and designates display of the domain management screen 900 to the management terminal 2. The CPU 204 of the management terminal 2, which has received the designation, executes the GUI control program 211 and conducts the domain management screen display processing (1301). In the domain management screen display processing, first, the CPU 204 of the management terminal 2 displays the domain management screen 900 on the display 205. Then, the CPU 204 of the management terminal 2 reads all of the records of the domain table 522 from the name management device 5 corresponding to the first record of the name management device table 222 and displays the contents of the records in the area 912. - Then, the system administrator or the like inputs the information relating to the discovery domain into the
area 901 and the area 902 of the domain management screen 900. In the present embodiment, “DD01” and “iqn.2004-06.com.hitachi:tar01”, which is the iSCSI name of the first target, are inputted into the area 901 and the area 902, respectively. - If the system administrator or the like then specifies the
button 910, the CPU 204 of the management terminal 2 composes a domain change request including the contents of the area 901 and the area 902 and sends the request via the management NIF 209 to the name management device 5 corresponding to the first record of the name management device table 222 (1302). The destination IP address of the domain change request is assumed to be the contents of the entry 2223 (management IP address) of the first record of the name management device table 222. - If the domain change request is received, the
CPU 504 of the name management device 5 executes the domain management program 511 and carries out the domain change processing (1303). In this domain change processing, the CPU 504 of the name management device 5 fetches the contents of the area 901 and the area 902 from the received domain change request and adds a record to the domain table 522. The contents of the area 901 are registered in the entry 5221 (domain ID) of the record which is added, and the contents of the area 902 are registered in the entry 5222 (iSCSI node) of the record. - If the above-described domain change processing is completed, the
CPU 504 of the name management device 5 composes a domain change response indicating that the addition of the iSCSI node to the discovery domain was completed successfully and sends this response to the management terminal 2 via the management NIF 509 (1304). If the domain change response is received, the CPU 204 of the management terminal 2 adds, to the area 912, one line composed of the contents inputted into the area 901 and the area 902. - Then, the system administrator or the like again executes the operations from 1301 to 1304. However, in the
operation 1301, “DD01” and “iqn.1999-08.com.abc:host01”, which is the iSCSI name of the first initiator, are inputted into the area 901 and the area 902, respectively. The discovery domain “DD01” to which the first initiator and the first target belong has thereby been registered in the name management device 5. - Then, the system administrator or the like uses the
pointing device 206 or the character input device 207 and designates display of the port management screen 920 to the management terminal 2. The CPU 204 of the management terminal 2, which has received the designation, executes the GUI control program 211 and conducts the port management screen display processing (1305). In the port management screen display processing, the CPU 204 of the management terminal 2 displays the port management screen 920 on the display 205. Furthermore, the CPU 204 of the management terminal 2 reads the entries 2211 (device ID) of all of the records of the storage device table 221, creates a list of device IDs of the storage devices according to the results, and makes it possible to display the list of device IDs of the storage devices when the button 922 is specified by the system administrator or the like. Duplication of a device ID in the list of the device IDs of the storage devices is avoided. Further, the CPU 204 of the management terminal 2 reads the port table 122 from the storage devices corresponding to all of the records of the storage device table 221, merges the contents thereof, and displays them in the area 932. - Then, the system administrator or the like selects the device ID of the migration
source storage device 1 by using the button 922 of the port management screen 920 and inputs the information relating to the physical ports of the migration source storage device 1 into the areas from the area 923 to the area 926. In the present embodiment, “STR01”, which is the device ID of the migration source storage device 1, is selected by using the button 922, and “1”, which is the port ID of the first physical port, “172.16.0.1”, “255.255.0.0”, and “172.16.0.254” are inputted into the area 923, the area 924, the area 925, and the area 926, respectively. - If the system administrator or the like then specifies the
button 930, the CPU 204 of the management terminal 2 composes a port addition request including the contents of the areas from the area 923 to the area 926 and sends the request via the management NIF 209 to the storage device having the device ID selected by using the button 922 (1306). The destination IP address of the port addition request is obtained by searching the storage device table 221 on condition that the device ID selected by using the button 922 matches the contents of the entry 2211 (device ID), and fetching the contents of the entry 2212 (management IP address) of the record that agrees with this condition. - If the port addition request is received, the
CPU 104 of the migration source storage device 1 executes the migration source storage device control program 112 and conducts the port addition processing (1307). In the port addition processing, the CPU 104 of the migration source storage device 1 fetches the contents of the areas from the area 923 to the area 926 from the port addition request and changes the record of the port table 122. The contents of the area 923 are registered in the entry 1221 (port ID) of the changed record. The contents of the area 924 are registered in the entry 1222 (IP address), the contents of the area 925 are registered in the entry 1223 (subnet mask), and the contents of the area 926 are registered in the entry 1224 (gateway). - If the above-described port addition processing is completed, the
CPU 104 of the migration source storage device 1 composes a port addition response indicating that the registration of information relating to the physical port was completed successfully and sends the response to the management terminal 2 via the management NIF 109 (1308). If the port addition response is received, the CPU 204 of the management terminal 2 adds to the area 932 one line composed of the device ID selected by using the button 922 and the contents inputted into the areas from the area 923 to the area 926. - Furthermore, the system administrator or the like then again executes the operations from 1305 to 1308. However, in the
operation 1305, “STR01”, which is the device ID of the migration source storage device 1, is selected by using the button 922, and “2”, which is the port ID of the second physical port, “172.16.0.2”, “255.255.0.0”, and “172.16.0.254” are inputted into the area 923, the area 924, the area 925, and the area 926, respectively. In other words, the system administrator or the like repeats the above-described processing until all of the port information is registered. - The system administrator or the like then designates display of the
target management screen 1000 to the management terminal 2. Based on this designation, the CPU 204 of the management terminal 2 executes the GUI control program 211 and carries out the target management screen display processing (1309). In the target management screen display processing, the CPU 204 of the management terminal 2 displays the target management screen 1000 on the display 205. Furthermore, the CPU 204 of the management terminal 2 reads the entries 2211 (device ID) of all of the records of the storage device table 221, creates a list of device IDs of the storage devices according to the results, and makes it possible to display the list of device IDs of the storage devices when the button 1002 is specified by the system administrator or the like. Duplication of a device ID in the list of the device IDs of the storage devices is avoided. - Further, the
CPU 204 of the management terminal 2 reads the target table 123 and the LU table 124 from the storage devices corresponding to all of the records of the storage device table 221, merges the contents thereof, and displays them in the area 1012. Then, the system administrator or the like selects the device ID of the migration source storage device 1 by using the button 1002 of the target management screen 1000 and inputs the information relating to one of the targets operating in the migration source storage device 1 into the areas from the area 1003 to the area 1006. In the present embodiment, “STR01”, which is the device ID of the migration source storage device 1, is selected by using the button 1002, and “iqn.2004-06.com.hitachi:tar01”, which is the iSCSI name of the first target, “1”, which is the port ID of the first physical port, “3260”, which is the well-known iSCSI port number, and “0”, which is the LUN of the first LU, are inputted into the area 1003, the area 1004, the area 1005, and the area 1006, respectively. - If the system administrator or the like then specifies the
button 1010, the CPU 204 of the management terminal 2 composes a target addition request including the contents of the areas from the area 1003 to the area 1006 and sends the request via the management NIF 209 to the storage device having the device ID selected by using the button 1002 (1310). The destination IP address of the target addition request is obtained by searching the storage device table 221 on condition that the device ID selected by using the button 1002 matches the contents of the entry 2211 (device ID) and by fetching the contents of the entry 2212 (management IP address) of the record that agrees with this condition. - If the target addition request is received, the
CPU 104 of the migration source storage device 1 executes the migration source storage device control program 112 and carries out the target addition processing (1311). In the target addition processing, the CPU 104 of the migration source storage device 1 fetches the contents of the areas from the area 1003 to the area 1006 from the target addition request and adds a record to the target table 123. The contents of the area 1003 are registered in the entry 1231 (target) of the record added to the target table 123, the contents of the area 1004 are registered in the entry 1232 (port ID), and the contents of the area 1005 are registered in the entry 1233 (port number). Then, the CPU 104 of the migration source storage device 1 searches the LU table 124 on condition that the contents of the entry 1242 (LUN) match the contents of the area 1006 and registers the contents of the area 1003 in the entry 1241 (target) of the record that agrees with this condition. - If the above-described target addition processing is completed, the
CPU 104 of the migration source storage device 1 searches the port table 122 on condition that the contents of the area 1004 match the contents of the entry 1221 (port ID) and reads the contents of the entry 1222 (IP address) of the record that agrees with this condition. Then, the CPU 104 of the migration source storage device 1 composes a name registration request, including the contents of the area 1003, information showing that the node type is a target, the contents of the entry 1222 (IP address), which was read, and the contents of the area 1005, and sends the request to the name management device 5 via the NIF 108 (1312). The destination IP address of the name registration request is assumed to be the contents of the entry 1212 (IP address) of the first record of the name management device table 121. - If the name registration request is received, the
CPU 504 of the name management device 5 executes the iSCSI node management program 512 and carries out the name registration processing (1313). In the name registration processing, the CPU 504 of the name management device 5 fetches the contents of the area 1003, the information showing that the node type is a target, the contents of the entry 1222 (IP address), and the contents of the area 1005 from the received name registration request, and adds a record to the iSCSI node table 521. The contents of the area 1003 are registered in the entry 5211 (iSCSI node) of the record which is added, “target” is registered in the entry 5212 (node type), the contents of the entry 1222 (IP address) are registered in the entry 5213 (IP address), the contents of the area 1005 are registered in the entry 5214 (port number), and “0” is registered in the entry 5215 (change notification flag). - If the above-described name registration processing is completed, the
CPU 504 of the name management device 5 composes a name registration response indicating that the registration of the name was completed successfully and sends the response to the migration source storage device 1 via the NIF 508 (1314). - If the name registration response is received, the
CPU 104 of the migration source storage device 1 composes a target addition response indicating that the registration of the information relating to the target was completed successfully and sends this response to the management terminal 2 via the management NIF 109 (1315). If the target addition response is received, the CPU 204 of the management terminal 2 adds to the area 1012 one line composed of the device ID selected by using the button 1002 and the contents inputted into the areas from the area 1003 to the area 1006. - The system administrator or the like then again executes the operations from 1309 to 1315. However, in the
operation 1309, “STR01”, which is the device ID of the migration source storage device 1, is selected by using the button 1002, and “iqn.2004-06.com.hitachi:tar02”, which is the iSCSI name of the second target, “2”, which is the port ID of the second physical port, “3260”, and “1”, which is the LUN of the second LU, are inputted into the area 1003, the area 1004, the area 1005, and the area 1006, respectively. In other words, the system administrator or the like repeats the above-described processing until all of the targets are configured. -
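The registrations of FIG. 13 can be sketched under the same illustrative table model (field names are editorial assumptions): discovery domain membership is a record appended to domain table 522, and target addition appends a record to target table 123 and binds the LU having the requested LUN in LU table 124 to the new target.

```python
# Sketch of the FIG. 13 registrations; table layouts are illustrative.
def domain_change(domain_table, domain_id, iscsi_node):
    """Add one iSCSI node to a discovery domain (entries 5221/5222)."""
    domain_table.append({"domain_id": domain_id, "iscsi_node": iscsi_node})

def add_target(target_table, lu_table, name, port_id, tcp_port, lun):
    """Register a target (table 123) and bind the LU with that LUN to it."""
    target_table.append({"target": name, "port_id": port_id, "port": tcp_port})
    for rec in lu_table:
        if rec["lun"] == lun:      # entry 1242 matches the requested LUN
            rec["target"] = name   # entry 1241 now names the owning target

domain_table_522, target_table_123 = [], []
lu_table_124 = [{"target": None, "lun": 0}, {"target": None, "lun": 1}]

domain_change(domain_table_522, "DD01", "iqn.2004-06.com.hitachi:tar01")
domain_change(domain_table_522, "DD01", "iqn.1999-08.com.abc:host01")
add_target(target_table_123, lu_table_124, "iqn.2004-06.com.hitachi:tar01", 1, 3260, 0)
add_target(target_table_123, lu_table_124, "iqn.2004-06.com.hitachi:tar02", 2, 3260, 1)

print(lu_table_124[0]["target"])  # iqn.2004-06.com.hitachi:tar01
```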
FIG. 14 shows an example of a communication sequence relating to the case where the initiator is activated on the host 4. In the present embodiment, the system administrator in advance configures “iqn.1999-08.com.abc:host01”, “172.16.0.128”, “255.255.0.0”, “172.16.0.254”, and “172.16.0.253” in the host 4 as the iSCSI name of the first initiator, the IP address of the NIF 408, the subnet mask, the IP address of the default gateway, and the IP address allocated to the NIF 508 of the name management device 5, respectively. - After the above-described configuration has been conducted in the
host 4, the system administrator or the like uses the pointing device 406 or the character input device 407 and designates the activation of the first initiator. The CPU 404 of the host 4, to which the activation was designated, executes the name operation program 412, composes a name registration request including the iSCSI name of the first initiator, information showing that the node type is an initiator, the IP address of the NIF 408, and information indicating that the TCP port number is undetermined, and sends the request to the name management device 5 via the NIF 408 (1401). The destination IP address of the name registration request is assumed to be the IP address allocated to the NIF 508. - If the name registration request from the
host 4 is received, the CPU 504 of the name management device 5 executes the iSCSI node management program 512 and conducts the name registration processing (1402). In the name registration processing, the CPU 504 of the name management device 5 fetches, from the name registration request, the iSCSI name of the first initiator, the information showing that the node type is an initiator, the IP address of the NIF 408, and the information showing that the TCP port number is undetermined, and adds a record to the iSCSI node table 521. Here, the iSCSI name of the first initiator is registered in the entry 5211 (iSCSI node) of the record which is added, “initiator” is registered in the entry 5212 (node type), the IP address of the NIF 408 is registered in the entry 5213 (IP address), “null” is registered in the entry 5214 (port number), and “0” is registered in the entry 5215 (change notification flag). - If the above-described name registration processing is completed, the
CPU 504 of the name management device 5 composes a name registration response indicating that the name registration was completed successfully and sends this response to the host 4 via the NIF 508 (1403). - If the name registration response is received, the
CPU 404 of the host 4 executes the name operation program 412, composes a change notification registration request including the iSCSI name of the first initiator, and sends this request to the name management device 5 via the NIF 408 (1404). The destination IP address of the change notification registration request is the same as the destination IP address of the name registration request of 1401. - If the change notification registration request is received, the
CPU 504 of the name management device 5 executes the change notification program 513 and conducts the change notification registration processing (1405). In the change notification registration processing, the CPU 504 of the name management device 5 fetches the iSCSI name of the first initiator from the change notification registration request, searches the iSCSI node table 521 on condition that the iSCSI name matches the contents of the entry 5211 (iSCSI node), and registers “1” in the entry 5215 (change notification flag) of the record that agrees with this condition. - If the above-described change notification registration processing is completed, the
CPU 504 of the name management device 5 composes a change notification registration response indicating that the change notification registration was completed successfully and sends this response to the host 4 via the NIF 508 (1406). - If the change notification registration response is received, the
CPU 404 of the host 4 executes the name operation program 412, composes a discovery request including the iSCSI name of the first initiator, and sends the request to the name management device 5 via the NIF 408 (1407). The destination IP address of the discovery request is identical to the destination IP address of the name registration request of 1401. - If the discovery request is received, the
CPU 504 of the name management device 5 executes the iSCSI node management program 512 and conducts the target search processing (1408). In the target search processing, the CPU 504 of the name management device 5 sends, to the initiator, information on the iSCSI names, IP addresses, and TCP port numbers of all of the targets belonging to the same discovery domain as the initiator which sent the discovery request. - Initially, the
CPU 504 of the name management device 5 fetches the iSCSI name of the first initiator from the received discovery request. Then, the CPU 504 of the name management device 5 searches the domain table 522 on condition that the iSCSI name matches the contents of the entry 5222 (iSCSI node) and fetches the contents of the entry 5221 (domain ID) of the record that agrees with this condition. Then, the CPU 504 of the name management device 5 again searches the domain table 522 on condition that the contents of the fetched entry 5221 (domain ID) match the contents of the entry 5221 (domain ID) and fetches the contents of the entry 5222 (iSCSI node) of all of the records that agree with this condition. - Then, the
CPU 504 of the name management device 5 searches the iSCSI node table 521, for the contents of all of the fetched entries 5222 (iSCSI node), on condition that the contents of the entry 5222 (iSCSI node) match the contents of the entry 5211 (iSCSI node) and that the contents of the entry 5212 (node type) are “target”, and fetches the contents of the entry 5211 (iSCSI node), the contents of the entry 5213 (IP address), and the contents of the entry 5214 (port number) of each record that agrees with these conditions. - Finally, the
CPU 504 of the name management device 5 composes a discovery response, including all of the combinations of the contents of the fetched entry 5211 (iSCSI node), the contents of the entry 5213 (IP address), and the contents of the entry 5214 (port number), and sends the response to the host 4 via the NIF 508 (1409). In the present embodiment, the first initiator belongs to the same discovery domain as the first target. Therefore, the discovery response comprises “iqn.2004-06.com.hitachi:tar01”, which is the iSCSI name of the first target, “172.16.0.1”, which is the IP address used by the first target, and “3260”, which is the TCP port number used by the first target. - If the discovery response is received, the
CPU 404 of the host 4 fetches a combination of the contents of the entry 5211 (iSCSI node), the contents of the entry 5213 (IP address), and the contents of the entry 5214 (port number) from the received discovery response. Then, the CPU 404 of the host 4 executes the iSCSI processing program 411 and establishes a TCP connection with an end point whose IP address and TCP port number are the contents of the entry 5213 (IP address) and the contents of the entry 5214 (port number), respectively, which are fetched from the received discovery response. - Then, the
CPU 404 of the host 4 composes an iSCSI login request, including the iSCSI name of the first initiator as the iSCSI name of the initiator performing login and the contents of the entry 5211 (iSCSI node) as the iSCSI name of the target serving as the login object, and sends this request by using the TCP connection established heretofore (1410). - If the iSCSI login request is received, the
iSCSI processing device 110 of the migration source storage device 1 conducts the login processing (1411). In the login processing, the iSCSI processing device 110 fetches the iSCSI name of the first initiator and the contents of the entry 5211 (iSCSI node) from the received iSCSI login request, confirms that the combination of the iSCSI name of the initiator and the iSCSI name of the target is correct, authenticates the initiator, and conducts the negotiation of various parameters. - If the login processing is completed successfully, the
iSCSI processing device 110 composes an iSCSI login response showing that the login was completed successfully and sends the response to the host 4 via the NIF 108 (1412). - If the
host 4 receives the iSCSI login response, a new iSCSI session is established between the first initiator managed by thehost 4 and the first target managed by the migrationsource storage device 1. Then, thehost 4 uses this iSCSI session and conducts the read/write of data from/to the first LU of the migration source storage device 1 (1413). -
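The discovery-then-login exchange of operations 1409 to 1413 may be sketched as follows. This is an illustrative model only; the class and function names are not taken from the present embodiment, and only the iSCSI names, addresses, and the discovery-domain behavior quoted above are drawn from the text.

```python
# Illustrative sketch of operations 1409-1413: the name management device
# answers a discovery request with every target in the initiator's discovery
# domain, and the host then logs in to each returned (IP, port) end point.
# All class and function names are assumptions made for this sketch.

class NameManagementDevice:
    def __init__(self):
        self.nodes = []    # records: (iscsi_name, node_type, ip, tcp_port)
        self.domains = {}  # domain ID -> set of member iSCSI names

    def register(self, iscsi_name, node_type, ip, tcp_port, domain):
        self.nodes.append((iscsi_name, node_type, ip, tcp_port))
        self.domains.setdefault(domain, set()).add(iscsi_name)

    def discover(self, initiator_name):
        # Collect the targets that share a discovery domain with the initiator.
        members = set()
        for names in self.domains.values():
            if initiator_name in names:
                members |= names
        return [(name, ip, port)
                for (name, node_type, ip, port) in self.nodes
                if node_type == "target" and name in members]

def login_all(initiator_name, discovery_response):
    # For each discovered target, establish a TCP connection to the returned
    # end point and send an iSCSI login request (modeled as a session record).
    return [{"initiator": initiator_name, "target": name,
             "endpoint": (ip, port), "logged_in": True}
            for (name, ip, port) in discovery_response]

nmd = NameManagementDevice()
nmd.register("iqn.2004-06.com.hitachi:tar01", "target", "172.16.0.1", 3260, "dd01")
nmd.register("iqn.2004-06.com.hitachi:ini01", "initiator", "172.16.0.128", 0, "dd01")

response = nmd.discover("iqn.2004-06.com.hitachi:ini01")
sessions = login_all("iqn.2004-06.com.hitachi:ini01", response)
```

Because the first initiator and the first target share one discovery domain, the discovery response here carries exactly one end point, mirroring the single combination returned in operation 1409.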
FIG. 15 shows an example of a communication sequence relating to the case where the system administrator or the like registers the information relating to the migration destination storage device 3 in the management terminal 2, as processing prior to migration, and then registers the information relating to the physical ports of the migration destination storage device 3 in the migration destination storage device 3. In the present embodiment, the system administrator will register the information relating to the physical ports in the order of the third physical port and the fourth physical port, but the registration may be conducted in the reverse order.
- Initially, the system administrator or the like designates display of the storage device management screen 800 to the management terminal 2. The CPU 204 of the management terminal 2, which has received this designation, executes the GUI control program 211 and, similarly to operation 1207, carries out the storage device management screen display processing (1501). After the storage device management screen display processing has been completed, the system administrator or the like inputs the information relating to the migration destination storage device 3 into the area 801 and the area 802 of the storage device management screen 800. In the present embodiment, "STR02", which is the device ID of the migration destination storage device 3, and "192.168.0.2", which is the management IP address of the migration destination storage device 3, are inputted into the area 801 and the area 802, respectively.
- If the system administrator or the like then specifies the button 810, the CPU 204 of the management terminal 2 executes the GUI control program 211 and, similar to operation 1208, conducts the storage device addition processing (1502).
- Then, the CPU 204 of the management terminal 2 composes an initialization request similar to operation 1209 and sends the request to the migration destination storage device 3 via the management NIF 209 (1503). The destination IP address of this initialization request is assumed to be the contents inputted into the area 802.
- If the initialization request is received, the CPU 304 of the migration destination storage device 3 executes the migration destination storage device control program 312 and conducts the table initialization processing (1504). In the table initialization processing, first, the CPU 304 of the migration destination storage device 3 sets the name management device table 321, the port table 322, the target table 323, and the LU table 324 in a state where no records are present. Then, the CPU 304 of the migration destination storage device 3 fetches the contents of the entry 2221 (device ID) and the entry 2222 (IP address) from the initialization request and adds a record to the name management device table 321. Here, the contents of the entry 2221 (device ID) and the entry 2222 (IP address), which were fetched from the received initialization request, are respectively registered in the entry 3211 (device ID) and the entry 3212 (IP address) of the record that will be added.
- Then, the CPU 304 of the migration destination storage device 3 allocates port IDs to all of the physical ports of the migration destination storage device 3 and adds, to the port table 322, records, in each of which the allocated port ID is registered in the entry 3221 (port ID) and "0.0.0.0" is registered in the entry 3222 (IP address), the entry 3223 (subnet mask), and the entry 3224 (gateway). Then, the CPU 304 of the migration destination storage device 3 allocates LUNs to all of the LUs managed by the migration destination storage device 3 and adds, to the LU table 324, records in each of which "null" is registered in the entry 3241 (target) and the allocated LUN is registered in the entry 3242 (LUN).
- In the present embodiment, the port ID allocation method and the LUN allocation method of the migration destination storage device 3 are identical to those of the migration source storage device 1. In the present embodiment, the port IDs of "1" and "2" are allocated to the third physical port and the fourth physical port, respectively. Furthermore, the LUNs of "0" and "1" are allocated to the third LU and the fourth LU, respectively.
- After the above-described table initialization processing has been completed, the CPU 304 of the migration destination storage device 3, similar to operation 1211, composes an initialization response showing that the initialization was completed successfully and sends the response to the management terminal 2 via the management NIF 309 (1505). If the initialization response is received, then, similar to operation 1211, the CPU 204 of the management terminal 2 adds, to the area 812, a line composed of the contents inputted into the area 801 and the area 802.
- Then, the system administrator or the like uses the
pointing device 206 or the character input device 207 and designates display of the port management screen 920 to the management terminal 2. The CPU 204 of the management terminal 2, which has received this designation, executes the GUI control program 211 and, similar to operation 1305, conducts the port management screen display processing (1506). After completion of the port management screen display processing, the system administrator or the like selects the device ID of the migration destination storage device 3 by using the button 922 of the port management screen 920 and inputs the information relating to one of the physical ports of the migration destination storage device 3 into the areas from the area 923 to the area 926.
- In the present embodiment, "STR02", which is the device ID of the migration destination storage device 3, is selected by using the button 922, and "1", which is the port ID of the third physical port, "172.16.0.3", "255.255.0.0", and "172.16.0.254" are inputted into the area 923, the area 924, the area 925, and the area 926, respectively.
- If the system administrator or the like then specifies the button 930, the CPU 204 of the management terminal 2, similar to the operation 1306, composes a port addition request and sends it to the migration destination storage device 3 via the management NIF 209 (1507).
- If the port addition request is received, the CPU 304 of the migration destination storage device 3 executes the migration destination storage device control program 312 and, similar to the operation 1307, conducts the port addition processing (1508).
- If the port addition processing is completed, the CPU 304 of the migration destination storage device 3, similar to the operation 1308, composes a port addition response showing that the registration of information relating to the physical port was completed successfully and sends this response to the management terminal 2 via the management NIF 309 (1509). If the port addition response is received, the CPU 204 of the management terminal 2, similar to the operation 1308, adds to the area 932 a line composed of the device ID selected by using the button 922 and the contents inputted into the areas from the area 923 to the area 926.
- Then, the system administrator or the like again executes the operations from 1506 to 1509. However, in the operation 1506, "STR02", which is the device ID of the migration destination storage device 3, is selected by using the button 922 and "2", which is the port ID of the fourth physical port, "172.16.0.4", "255.255.0.0", and "172.16.0.254" are inputted into the area 923, the area 924, the area 925, and the area 926, respectively. In this manner, the system administrator or the like repeats the above-described processing as many times as there are physical ports of the migration destination storage device 3.
- The communication sequence and the operation procedure relating to the case where the first target managed by the migration
source storage device 1 is migrated to the migration destination storage device 3 will be explained below.
- In the present embodiment, the system administrator or the like uses the migration management screen 1100 that is displayed by the management terminal 2 and designates the start of the migration processing of the first target to the migration source storage device 1. The migration source storage device 1, which has received the designation, creates a third target in the migration destination storage device 3 and allocates the third physical port and the third LU of the migration destination storage device 3 to the third target.
- Then, the migration source storage device 1 executes an initial copy by which data stored in the first LU is copied into the third LU of the migration destination storage device 3. After the initial copy of the data has been completed, the migration source storage device 1 executes synchronous replication (in the case where changes have occurred in the data that is stored in one LU, the identical data stored in the other LU is also changed), and maintains the consistency of the data stored in the first LU and the data stored in the third LU (the above-described communication sequence will be explained with reference to FIG. 16).
- Then, the migration source storage device 1 creates, in the migration destination storage device 3, a target (referred to hereinbelow as the "virtual first target") having an iSCSI name identical to that of the first target and allocates the third physical port and the third LU of the migration destination storage device 3 to the created target. The migration destination storage device 3 notifies the host 4 via the name management device 5 that the third physical port has been allocated to the first target (that is, the virtual first target). If the notification that the third physical port has been allocated to the first target is received, the host 4 establishes a TCP connection with the third physical port of the migration destination storage device 3. Then, the TCP connection is added to the iSCSI session between the first initiator and the first target when the initiator managed by the host 4 performs login to the target having the iSCSI name identical to that of the first target managed by the migration destination storage device 3.
- However, at this point of time, the host 4 does not carry out the disk access using the TCP connection with the migration destination storage device 3 (the above-described communication sequence will be described with reference to FIG. 17).
- Then, the migration source storage device 1 notifies the host 4 via the name management device 5 that the allocation of the first physical port to the first target is deleted. If this notification is received, the initiator managed by the host 4 performs logout from the first target managed by the migration source storage device 1 and the host 4 disconnects the TCP connection with the first physical port of the migration source storage device 1. Further, the host 4 temporarily saves, to the buffer area 421, the disk access that was generated after the deletion notification was received and before the TCP connection was disconnected.
- After the TCP connection disconnection, the host 4 uses the TCP connection with the migration destination storage device 3 and starts the disk access with the virtual first target. On the other hand, the migration source storage device 1, after the TCP connection with the host 4 has been disconnected, stops the synchronous replication with the migration destination storage device 3 and deletes the first target (the above-described communication sequence will be explained with reference to FIG. 18). -
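The three phases just described (initial copy with synchronous replication, creation of the virtual first target with a reserved connection, and cutover with deletion of the source target) can be outlined as follows. This is only a sketch of the sequence of state changes; every class, function, and field name here is hypothetical and not part of the present embodiment.

```python
# Hypothetical outline of the three migration phases described above:
#   1) initial copy + synchronous replication (FIG. 16),
#   2) virtual first target + reserved TCP connection (FIG. 17),
#   3) cutover: switch I/O to the destination, delete the source target (FIG. 18).
# All names below are illustrative.

def migrate_target(source, destination, host):
    # Phase 1: copy the first LU, then mirror every subsequent write.
    destination.lu[:] = source.lu
    source.sync_replica = destination          # synchronous replication on

    # Phase 2: the destination exposes a target with the same iSCSI name;
    # the host adds a reserved (not yet used) connection to its session.
    destination.targets.add(source.target_name)
    host.connections.append({"device": destination, "reserved": True})

    # Phase 3: the host drops the source connection and switches I/O to the
    # destination; the source stops replication and deletes its target.
    host.connections = [c for c in host.connections if c["device"] is destination]
    host.connections[0]["reserved"] = False
    source.sync_replica = None
    source.targets.discard(source.target_name)

class Device:
    def __init__(self, target_name=None, lu=None):
        self.target_name = target_name
        self.lu = lu if lu is not None else []
        self.targets = {target_name} if target_name else set()
        self.sync_replica = None

class Host:
    def __init__(self):
        self.connections = []

source = Device("iqn.2004-06.com.hitachi:tar01", ["blk0", "blk1"])
destination = Device()
host = Host()
migrate_target(source, destination, host)
```

After the call, the destination holds a byte-identical copy of the LU and serves a target under the original iSCSI name, so the host's session survives the migration without the initiator's configuration changing.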
- First, in the present embodiment, the initial copy and the synchronous replication are assumed to be conducted by using the iSCSI protocol. For the initial copy and the synchronous replication, the migration
source storage device 1 will use a second initiator having an iSCSI name of “iqn. 2004-06. com. hitachi : replication-ini02”. The second physical port will be allocated to the second initiator. On the other hand, for the initial copy and the synchronous replication, the migrationdestination storage device 3 will use a third target having an iSCSI name of “iqn. 2004-06. com. hitachi : replication-tar03”. The third physical port is assumed to be allocated to the third target. Further, the target is assumed to use “3260”, which is the well-known port, as the TCP port number. -
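For reference, the replication parameters assumed in the present embodiment can be gathered into a single structure. The field names below are ours, not the patent's; the values are the ones quoted in the text above.

```python
# The replication pairing assumed in the present embodiment, collected into
# one structure. Field names are illustrative; values are quoted from the text.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReplicationPair:
    initiator_name: str   # second initiator on the migration source storage device 1
    initiator_port_id: int
    target_name: str      # third target on the migration destination storage device 3
    target_port_id: int
    target_tcp_port: int  # well-known iSCSI port

PAIR = ReplicationPair(
    initiator_name="iqn.2004-06.com.hitachi:replication-ini02",
    initiator_port_id=1,   # second physical port (port ID "1" on the source)
    target_name="iqn.2004-06.com.hitachi:replication-tar03",
    target_port_id=1,      # third physical port (port ID "1" on the destination)
    target_tcp_port=3260,
)
```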
FIG. 16 shows an example of the first communication sequence relating to the case where the migration source storage device 1 conducts migration of a target to the migration destination storage device 3.
- First, the system administrator uses the character input device 207 and the pointing device 206 and designates the management terminal 2 to display the migration management screen 1100. The CPU 204 of the management terminal 2 that received this designation executes the GUI control program 211 and conducts the migration management screen display processing (1601). In this migration management screen display processing, first, the CPU 204 of the management terminal 2 reads the entries 2211 (device ID) of all of the records of the storage device table 221, creates a list of device IDs of the storage devices according to the results obtained, and makes it possible to display the list of device IDs of the storage devices when the button 1104 or the button 1111 is specified by the system administrator or the like. Duplication of a device ID in the list of device IDs of the storage devices is avoided.
- Then, the system administrator or the like selects the device ID of the migration source storage device 1 by using the button 1104 of the migration management screen 1100 and inputs the information relating to the migration source storage device 1 into the areas from the area 1105 to the area 1107. In addition to that, the system administrator or the like selects the device ID of the migration destination storage device 3 by using the button 1111 and inputs the information relating to the migration destination storage device 3 into the areas from the area 1112 to the area 1117. In the present embodiment, "STR01", which is the device ID of the migration source storage device 1, is selected by using the button 1104, and "iqn.2004-06.com.hitachi:tar01", which is the iSCSI name of the first target, "iqn.2004-06.com.hitachi:replication-ini02", which is the iSCSI name of the second initiator, and "1", which is the port ID of the second physical port, are inputted into the area 1105, the area 1106, and the area 1107, respectively.
- Further, "STR02", which is the device ID of the migration destination storage device 3, is selected by using the button 1111, and "iqn.2004-06.com.hitachi:replication-tar03", which is the iSCSI name of the third target, "1", which is the port ID of the third physical port, "3260", which is the well-known port, "1", which is the port ID of the third physical port, "3260", which is the well-known port, and "0", which is the LUN of the third LU, are inputted into the area 1112, the area 1113, the area 1114, the area 1115, the area 1116, and the area 1117, respectively.
- If the system administrator or the like then specifies the button 1128, the CPU 204 of the management terminal 2 searches the storage device table 221 on condition that the device ID selected by using the button 1111 matches the contents of the entry 2211 (device ID) and fetches the contents of the entry 2212 (management IP address) of the record that agrees with this condition. Then, the CPU 204 of the management terminal 2 composes a migration start request including the fetched contents of the entry 2212 (management IP address), the contents of the areas from the area 1105 to the area 1107, and the contents of the areas from the area 1112 to the area 1117, and sends the request via the management NIF 209 to the migration source storage device 1 having the device ID selected by using the button 1104 (1602). The destination IP address of this migration start request is obtained by searching the storage device table 221 on condition that the device ID selected by using the button 1104 matches the contents of the entry 2211 (device ID) and fetching the contents of the entry 2212 (management IP address) of the record that agrees with this condition.
- If the migration start request is received, first, the
CPU 104 of the migration source storage device 1 executes the migration source storage device control program 112 and fetches the contents of the entry 2212 (management IP address), the contents of the areas from the area 1112 to the area 1114, and the contents of the area 1117 from the received migration start request. Then, the CPU 104 of the migration source storage device 1 composes a target addition request, including the contents of the areas from the area 1112 to the area 1114 and the contents of the area 1117, and sends the request to the migration destination storage device 3 via the management NIF 109 (1603). The destination IP address of the target addition request is the contents of the entry 2212 (management IP address).
- If the target addition request is received, the CPU 304 of the migration destination storage device 3 executes the migration destination storage device control program 312 and carries out the target addition processing in which the migration destination storage device 3 creates a third target and allocates the third physical port and the third LU to the third target (1604). In the target addition processing, the CPU 304 of the migration destination storage device 3 fetches the contents of the areas from the area 1112 to the area 1114 and the contents of the area 1117 from the received target addition request and adds a record to the target table 323. Here, the contents of the area 1112 are registered in the entry 3231 (target) of the record added to the target table 323, the contents of the area 1113 are registered in the entry 3232 (port ID), and the contents of the area 1114 are registered in the entry 3233 (port number).
- Then, the CPU 304 of the migration destination storage device 3 searches the LU table 324 on the condition that the contents of the entry 3242 (LUN) match the contents of the area 1117 and registers the contents of the area 1112 in the entry 3241 (target) of the record that agrees with this condition.
- Then, the CPU 304 of the migration destination storage device 3 searches the port table 322 on the condition that the contents of the area 1113 match the contents of the entry 3221 (port ID) and fetches the entry 3222 (IP address) of the record that agrees with this condition. Then, the CPU 304 of the migration destination storage device 3 composes a target addition response, including the contents of the entry 3222 (IP address) that was fetched, and sends the response to the migration source storage device 1 via the management NIF 309 (1605).
- If the target addition response is received, the CPU 104 of the migration source storage device 1 executes the synchronous replication program 114 and conducts the synchronous replication initialization processing (1606). In this synchronous replication initialization processing, the CPU 104 of the migration source storage device 1 fetches the contents of the areas from the area 1105 to the area 1107 from the received migration start request. Then, the CPU 104 of the migration source storage device 1 conducts the destage processing of writing, to the first LU, the wait data for writing to the first LU, out of all the wait data for writing stored in the cache area 111.
- It is assumed that after this point of time, the migration source storage device 1 does not temporarily store the write data, from the host 4 to the first LU, in the cache area 111. Then, the CPU 104 of the migration source storage device 1 fetches the contents of the entry 3222 (IP address) from the received target addition response. The CPU 104 of the migration source storage device 1 then searches the port table 122 on condition that the contents of the area 1107 match the contents of the entry 1221 (port ID) and fetches the contents of the entry 1222 (IP address) of the record that agrees with this condition.
- Then, the iSCSI processing device 110 of the migration source storage device 1 establishes a TCP connection in which the contents of the entry 1222 (IP address) are the sending source IP address and the contents of the entry 3222 (IP address) and the contents of the area 1114 are the destination IP address and TCP port number, respectively. The iSCSI processing device 110 of the migration source storage device 1 then composes an iSCSI login request, including the contents of the area 1106 and the area 1112, and sends it by using the established TCP connection (1607).
- If the iSCSI login request is received, the
iSCSI processing device 310 of the migration destination storage device 3 conducts a login processing (1608). In the login processing, the iSCSI processing device 310 fetches the contents of the area 1106 and the area 1112 from the received iSCSI login request, confirms that the combination of the iSCSI name of the initiator and the iSCSI name of the target is correct, authenticates the initiator, and conducts the negotiation of various parameters.
- If the login processing is completed successfully, the iSCSI processing device 310 composes an iSCSI login response showing that login was completed successfully and sends it to the migration source storage device 1 via the NIF 308 (1609).
- If the migration source storage device 1 receives the iSCSI login response, an iSCSI session is established between the second initiator and the third target. Then, the migration source storage device 1 uses this iSCSI session and executes the initial copy operation by copying the data stored in the first LU into the third LU (1610, 1611).
- After the initial copy of data has been completed, when the migration source storage device 1 receives the data write from the host 4 to the first LU (1612), it executes the synchronous replication into the third LU and maintains the consistency of data stored in the first LU and the data stored in the third LU (1613). -
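The initial copy (1610, 1611) followed by synchronous replication of later writes (1612, 1613) can be modeled in a few lines. This is a sketch under the assumption that an LU is a flat list of blocks; the class and method names are illustrative, not from the present embodiment.

```python
# Sketch of the initial copy (1610, 1611) followed by synchronous replication
# (1612, 1613). LUs are modeled as block lists; all names are illustrative.

class ReplicatedLU:
    def __init__(self, blocks):
        self.blocks = list(blocks)   # source LU (the first LU)
        self.replica = None          # destination LU (the third LU)

    def initial_copy(self, replica_blocks):
        # Copy every block of the source LU into the destination LU.
        replica_blocks[:] = self.blocks
        self.replica = replica_blocks

    def write(self, lba, data):
        # After the initial copy, every write is applied to both LUs before
        # being acknowledged, keeping them consistent (synchronous replication).
        self.blocks[lba] = data
        if self.replica is not None:
            self.replica[lba] = data

first_lu = ReplicatedLU(["a", "b", "c"])
third_lu = []
first_lu.initial_copy(third_lu)   # operations 1610, 1611
first_lu.write(1, "B")            # operation 1612, mirrored by 1613
```

The key property, as in the text, is that once `initial_copy` completes, no write can make the two LUs diverge.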
FIG. 17 illustrates an example of the second communication sequence in which the migration source storage device 1 conducts the migration of a target to the migration destination storage device 3.
- After the initial copy of 1610 and 1611 has been completed, the CPU 104 of the migration source storage device 1 executes the migration source storage device control program 112, composes a target addition request including the contents of the area 1105, the contents of the areas from the area 1115 to the area 1117, and the iSCSI name of the initiator (in the present embodiment, the first initiator) that has performed login into the target which is to be added, and sends the request to the migration destination storage device 3 via the management NIF 109 (1701). The destination IP address of the target addition request is the contents of the entry 2212 (management IP address) fetched from the migration start request in 1603.
- If the target addition request is received, the CPU 304 of the migration destination storage device 3 executes the migration destination storage device control program 312 and creates a target (the virtual first target) having an iSCSI name identical to that of the first target. Then, the CPU 304 of the migration destination storage device 3 conducts the target addition processing in which the third physical port and the third LU are allocated to the created target (1702).
- In the target addition processing, the CPU 304 of the migration destination storage device 3, first, fetches the contents of the area 1105, the contents of the areas from the area 1115 to the area 1117, and the iSCSI name of the initiator that has performed login into the target which is to be added from the received target addition request and adds a record to the target table 323. Here, the contents of the area 1105 are registered in the entry 3231 (target) of the record which is added to the target table 323, the contents of the area 1115 are registered in the entry 3232 (port ID), and the contents of the area 1116 are registered in the entry 3233 (port number).
- Then, the CPU 304 of the migration destination storage device 3 searches the LU table 324 on condition that the contents of the entry 3242 (LUN) match the contents of the area 1117 and registers the contents of the area 1105 in the entry 3241 (target) of the record that agrees with this condition.
- Then, the
CPU 304 of the migration destination storage device 3 executes the name change program 313, first, searches the port table 322 on condition that the contents of the area 1115 match the contents of the entry 3221 (port ID), and fetches the entry 3222 (IP address) of the record that agrees with this condition. Then, the CPU 304 of the migration destination storage device 3 composes a name registration request including the contents of the area 1105, information showing that the node type is a target, the fetched contents of the entry 3222 (IP address), and the contents of the area 1116, and sends the request to the name management device 5 via the NIF 308 (1703). The destination IP address of the name registration request is the contents of the entry 3212 (IP address) of the first record of the name management device table 321.
- If the name registration request is received, the CPU 504 of the name management device 5 executes the iSCSI node management program 512 and conducts the name registration processing, in the same manner as in the operation 1313 (1704).
- If the name registration processing is completed, the CPU 504 of the name management device 5 composes a name registration response showing that the name registration was completed successfully and sends the response to the migration destination storage device 3 via the NIF 508, similar to the operation 1304 (1705).
- If the name registration response is received, the CPU 304 of the migration destination storage device 3 composes a target addition response showing that the addition of the information relating to the target was completed successfully and sends the response to the migration source storage device 1 via the management NIF 309 (1706).
- On the other hand, because a record was added to the iSCSI node table 521 in the operation 1704, the CPU 504 of the name management device 5 executes the change notification program 513 and conducts the change notification destination search processing (1707). In the change notification destination search processing, the CPU 504 of the name management device 5, first, searches the domain table 522 on condition that the iSCSI name of the iSCSI node (that is, the first target) corresponding to the record that was added matches the contents of the entry 5222 (iSCSI node) and fetches the contents of the entry 5221 (domain ID) of the record that agrees with this condition.
- Then, the CPU 504 of the name management device 5 searches the domain table 522 again on condition that the fetched contents of the entry 5221 (domain ID) match the contents of the entry 5221 (domain ID) and fetches the contents of the entry 5222 (iSCSI node) of all of the records that agree with this condition. Then, the CPU 504 of the name management device 5 searches the iSCSI node table 521 with respect to the fetched contents of each entry 5222 (iSCSI node) on condition that the contents of this entry 5222 (iSCSI node) match the contents of the entry 5211 (iSCSI node) and the contents of the entry 5215 (change notification flag) are "1", and fetches the contents of the entry 5213 (IP address) of the record that agrees with this condition. In the present embodiment, the record corresponding to the first initiator managed by the host 4 matches this condition. As a result, "172.16.0.128" is fetched as the contents of the entry 5213 (IP address).
- After the above-described change notification destination search processing has been completed, the CPU 504 of the name management device 5 composes a change notification, including the iSCSI name of the first target and information showing that a new physical port (that is, the third physical port) has been allocated to this target, and sends this notification to the host 4 (1708). The destination IP address of this change notification is the contents of the entry 5213 (IP address) fetched in the operation 1707. Further, in the present embodiment, the use of the SCN of iSNSP was assumed for this change notification, but a method other than SCN may be used.
- If the change notification is received, the
CPU 404 of thehost 4 executes thename operation program 412, fetches from the change notification the iSCSI name of the first target and the information showing that a new physical port has been allocated to this target, composes a target read request including the iSCSI name of the first target and sends this request to thename management device 5 via the NIF 408 (1709). - If the target read request is received, the
CPU 504 of thename management device 5 executes the iSCSInode management program 512 and executes the target read processing (1710). In the target read processing, theCPU 504 of thename management device 5, first, fetches the iSCSI name of the first target from the received target read request. Then, theCPU 504 of thename management device 5 searches the iSCSI node table 521 on condition that the iSCSI name of the first target matches the contents of the entry 5211 (iSCSI node) and fetches the contents of the entry 5213 (IP address) and the contents of the entry 5214 (port number) of all of the records that agree with this condition. - Then, the
CPU 504 of thename management device 5 composes a target read response including the contents of all of the entries 5213 (IP address) and 5214 (port ID) that were fetched in theoperation 1710, and sends the response to thehost 4 via the NIF 508 (1711). - If the target read response is received, the
CPU 404 of thehost 4 fetches the contents of all of the entries 5213 (IP address) and 5214 (port number) from the received target read response. Then, theCPU 404 of thehost 4 checks as to determine whether a TCP connection, with the end point whose IP address and TCP port number each correspond to the fetched contents of the entry 5213 (ID address) and the fetched contents of the entry 5214 (port number), respectively, has already been established. - If the TCP connection has not been established yet, the
CPU 404 of thehost 4 executes theiSCSI processing program 411, establishes the TCP connection with this end point, composes an iSCSI login request including the iSCSI name of the first initiator and the iSCSI name of the first target, and sends the request by using the established TCP connection (1712). Thehost 4 thus establishes the TCP connection with the migrationdestination storage device 3 via the third physical port. - If the iSCSI login request is received by the migration
destination storage device 3, theiSCSI processing device 310 of the migrationdestination storage device 3 conducts the login processing (1713). In this login processing, theiSCSI processing device 310 fetches the iSCSI name of the first initiator and the iSCSI name of the first target from the received iSCSI login request, verifies whether the combination of the iSCSI name of the initiator and the iSCSI name of the target is correct, authenticates the initiator, and conducts the negotiation of various parameters. - If the login processing is successfully completed, the
iSCSI processing device 310 of the migrationdestination storage device 3 composes an iSCSI login response including information indicating that the login was completed successfully and the TCP connection use reservation information and sends the response to thehost 4 via the NIF 308 (1714). This TCP connection use reservation information is information indicating that thehost 4 reserves the use of the newly established TCP connection until the present TCP connection is disconnected. The migrationdestination storage device 3 includes the TCP connection use reservation information in the iSCSI login response only when the iSCSI name of the initiator, that is a transmission source of the login request received in theoperation 1712, matches the iSCSI name of the initiator fetched from the target addition request in theoperation 1702. In the present embodiment, it is assumed that vendor-specific login parameters contained in the iSCSI login response are used as the TCP connection use reservation information. If the iSCSI login response is received by thehost 4, the TCP connection established via the third physical port between thehost 4 and the migrationdestination storage device 3 is added to the iSCSI session between the first initiator and the first target. However, because the TCP connection use reservation information is contained in the iSCSI login response, at this point of time, theCPU 404 of thehost 4 does not carry out the disk access using the TCP connection with the migrationdestination storage device 3. The control, such as reserving the use of the TCP connection, is conducted by theCPU 404 of thehost 4 executing theiSCSI processing program 411. - After the
iSCSI processing device 310 has sent the iSCSI login response, the CPU 304 of the migration destination storage device 3 executes the migration destination storage device control program 312, composes a login completion notification showing that the login from the host 4 has been completed, and sends this notification to the migration source storage device 1 via the management NIF 309 (1715). -
FIG. 18 shows an example of the third communication sequence in which the migration source storage device 1 conducts the migration of a target to the migration destination storage device 3. - If the login completion notification is received, the
CPU 104 of the migration source storage device 1 executes the name change program 113 and notifies the name management device 5 that the allocation of the first physical port to the first target has been deleted. - First, the
CPU 104 of the migration source storage device 1 searches the target table 123 on condition that the contents of the area 1105 match the contents of the entry 1231 (target) and fetches the contents of the entry 1232 (port ID) and the entry 1233 (port number) of the record that agrees with the condition. Then, the CPU 104 of the migration source storage device 1 searches the port table 122 on condition that the fetched contents of the entry 1232 (port ID) match the contents of the entry 1221 (port ID) and fetches the contents of the entry 1222 (IP address) of the record that agrees with the condition. - Further, the
CPU 104 of the migration source storage device 1 composes a name deregistration request, including the contents of the area 1105, the fetched contents of the entry 1222 (IP address), and the fetched contents of the entry 1233 (port number), and sends the request to the name management device 5 via the NIF 108 (1801). The destination IP address of the name deregistration request is assumed to be the contents of the entry 1212 (IP address) of the first record of the name management device table 121. - If the name deregistration request is received, the
CPU 504 of the name management device 5 executes the iSCSI node management program 512 and conducts the name deregistration processing (1802). In this name deregistration processing, the CPU 504 of the name management device 5 first fetches the contents of the area 1105 and the contents of the entry 1222 (IP address) and the entry 1233 (port number) from the received name deregistration request. Then, the CPU 504 of the name management device 5 searches the iSCSI node table 521 on condition that the contents of the area 1105, the contents of the entry 1222 (IP address), and the contents of the entry 1233 (port number) match the contents of the entry 5211 (iSCSI node), the contents of the entry 5213 (IP address), and the contents of the entry 5214 (port number), respectively, and deletes the record that agrees with this condition. - If the above-described name deregistration processing is completed, the
CPU 504 of the name management device 5 composes a name deregistration response indicating that the name deregistration was completed successfully and sends the response to the migration source storage device 1 via the NIF 508 (1803). - On the other hand, because the record has been deleted from the iSCSI node table 521 in the
operation 1802, the CPU 504 of the name management device 5 executes the change notification program 513 and carries out the change notification destination search processing similar to the operation 1707 (1804). - After the change notification destination search processing has been completed, the
CPU 504 of the name management device 5 composes a change notification similar to the operation 1708 and sends it to the host 4 (1805). However, this change notification comprises information showing that the allocation of the iSCSI name of the iSCSI node (that is, the first target) corresponding to the deleted record and the physical port (that is, the first physical port) corresponding to this iSCSI node was deleted. - If the change notification containing the information of the cancellation of the first physical port allocation is received, the
CPU 404 of the host 4 executes the name operation program 412, fetches from the received change notification the information showing that the allocation of the iSCSI name of the first target and the physical port corresponding to this target has been deleted, composes a target read request containing the iSCSI name of the first target, and sends this request to the name management device 5 via the NIF 408 (1806). - If the target read request is received, the
CPU 504 of the name management device 5 executes the iSCSI node management program 512 and carries out the target read processing similar to the operation 1710 (1807). Then, the CPU 504 of the name management device 5 composes a target read response, similar to the operation 1711, and sends this response to the host 4 via the NIF 508 (1808). - If the target read response is received, then, in the case where there is an executed disk access with respect to the first LU of the migration
source storage device 1, the host 4 waits until this access is completed. When a disk access request is newly generated in the host 4 while the completion of the disk access is being waited for, the host 4 saves the contents of this request in the buffer area 421. After the executed disk access has been completed, the CPU 404 of the host 4 fetches the contents of all of the entries 5213 (IP address) and the contents of all of the entries 5214 (port number) from the received target read response. - Then, the
CPU 404 of the host 4 investigates whether a TCP connection has been established with any end point other than the end points whose IP addresses and TCP port numbers correspond to the contents of the entry 5213 (IP address) and the entry 5214 (port number), respectively. When such a TCP connection is present, the CPU 404 of the host 4 executes the iSCSI processing program 411, composes an iSCSI logout request, including the iSCSI name of the first initiator and the iSCSI name of the first target, and sends this request by using the above-described discovered TCP connection (1809). - If the iSCSI logout request is received, the
iSCSI processing device 110 of the migration source storage device 1 carries out the logout processing (1810). In this logout processing, the iSCSI processing device 110 fetches the iSCSI name of the first initiator and the iSCSI name of the first target from the received iSCSI logout request and releases the resources relating to the iSCSI session between the initiator managed by the host 4 and the first target. - If the logout processing is successfully completed, the
iSCSI processing device 110 composes an iSCSI logout response indicating that the logout was completed successfully and sends the response to the host 4 via the NIF 108 (1811). - If the
host 4 receives the iSCSI logout response, the TCP connection established via the first physical port between the host 4 and the migration source storage device 1 is deleted from the iSCSI session established between the first initiator and the first target. Then, the first initiator conducts the iSCSI communication with the first target by using the TCP connection with the migration destination storage device 3. The switching control of this TCP connection is conducted by the CPU 404 of the host 4 executing the iSCSI processing program 411 (1812). In the case where a disk access request has been saved in the buffer area 421, the host 4 executes this disk access by using the TCP connection with the migration destination storage device 3. - On the other hand, the
CPU 104 of the migration source storage device 1, after sending the iSCSI logout response, disconnects the iSCSI session with the third target used for the synchronous replication and deletes the third target from the migration destination storage device 3. First, the iSCSI processing device 110 of the migration source storage device 1 composes an iSCSI logout request, including the contents of the area 1106 and the contents of the area 1112, and sends this request to the migration destination storage device 3 via the NIF 108 (1813). - The
iSCSI processing device 310 of the migration destination storage device 3, which received the above-described iSCSI logout request, conducts the logout processing (1814). In this logout processing, the iSCSI processing device 310 fetches the contents of the area 1106 and the contents of the area 1112 from the received iSCSI logout request and releases the resources relating to the iSCSI session between the initiator corresponding to the contents of the area 1106 and the target (that is, the third target) corresponding to the contents of the area 1112. - If the logout processing is completed successfully, the
iSCSI processing device 310 composes an iSCSI logout response showing that the logout was completed successfully and sends this response to the migration source storage device 1 via the NIF 308 (1815). - The
iSCSI processing device 110 of the migration source storage device 1, which has received the iSCSI logout response, disconnects the TCP connection with the migration destination storage device 3. Then, the CPU 104 of the migration source storage device 1 executes the migration source storage device control program 112 and deletes the third target from the migration destination storage device 3. For this purpose, the CPU 104 of the migration source storage device 1 composes a target deletion request including the contents of the area 1112 and sends it to the migration destination storage device 3 via the NIF 108 (1816). - The
CPU 304 of the migration destination storage device 3, which has received the target deletion request, executes the migration destination storage device control program 312 and carries out the target deletion processing (1817). In this target deletion processing, the CPU 304 of the migration destination storage device 3 first fetches the contents of the area 1112 from the received target deletion request. Then, the CPU 304 of the migration destination storage device 3 searches the target table 323 on condition that the contents of the entry 3231 (target) match the contents of the area 1112 and deletes the record that agrees with this condition. - The
CPU 304 of the migration destination storage device 3 then searches the LU table 324 on condition that the contents of the entry 3241 (target) match the contents of the area 1112 and registers “null” in the entry 3241 (target) of the record that agrees with this condition. - After the above-described target deletion processing has been completed, the
CPU 304 of the migration destination storage device 3 composes a target deletion response indicating that the target deletion was completed successfully and sends it to the migration source storage device 1 via the NIF 308 (1818). - If the target deletion response is received, the
CPU 104 of the migration source storage device 1 executes the migration source storage device control program 112 and carries out the target deletion processing for deleting the first target (1819). In this target deletion processing, the CPU 104 of the migration source storage device 1 first searches the target table 123 on condition that the contents of the entry 1231 (target) match the contents of the area 1105 and deletes the record that agrees with this condition. - Then, the
CPU 104 of the migration source storage device 1 searches the LU table 124 on condition that the contents of the entry 1241 (target) match the contents of the area 1105 and registers “null” in the entry 1241 (target) of the record that agrees with this condition. - After the above-described target deletion processing has been completed, the
CPU 104 of the migration source storage device 1 composes a migration start response indicating that the target migration has been completed and sends this response to the management terminal 2 via the management NIF 109 (1820). If the migration start response is received, the CPU 204 of the management terminal 2 displays, on the display 205, a screen showing the completion of the migration. - Then, the system administrator or the like repeats the operations from 1601 to 1820 with respect to the remaining targets that need to be migrated from the migration
source storage device 1 to the migration destination storage device 3. - The first embodiment has been explained hereinabove. According to the first embodiment, the target for which an initiator conducts iSCSI communication can be migrated from the migration
source storage device 1 to the migration destination storage device 3, without disconnecting the iSCSI session of this initiator managed by the host 4. As a result, the migration of storage devices is possible without stopping applications operating in the host 4. - As for the second embodiment, only the portion thereof which differs from the first embodiment will be explained. The second embodiment relates to a system in which the migration source storage device and migration destination storage device of the first embodiment and a third storage device for replicating the data stored therein to the migration source storage device are connected to a network. The third storage device will be referred to hereinbelow as a master storage device. In the present embodiment, the migration source storage device, the migration destination storage device, and the master storage device will be assumed to be disposed at the same site.
-
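Before turning to the second embodiment's configuration, the host-side endpoint switchover at the heart of the first embodiment (operations 1806 to 1812) can be sketched as follows. This is an illustrative model only, not the patent's implementation: the function names and the modeling of connections as (IP address, TCP port) pairs are assumptions.

```python
# Illustrative sketch of operations 1806-1812: the end points listed in the
# target read response are compared against the host's open TCP connections;
# a connection to any other end point belongs to the migration source and is
# logged out, after which buffered disk access requests are replayed over a
# remaining (migration destination) connection.

def stale_connections(open_conns, response_endpoints):
    """Connections whose remote (IP, port) is no longer registered for the target."""
    valid = set(response_endpoints)
    return [c for c in open_conns if c not in valid]

def switch_over(open_conns, response_endpoints, buffered_requests):
    """Drop stale connections, then replay buffered requests over a live one."""
    stale = stale_connections(open_conns, response_endpoints)
    live = [c for c in open_conns if c not in stale]
    return stale, [(live[0], r) for r in buffered_requests]

# The source's port (172.16.0.1) was deregistered; only the destination remains.
open_conns = [("172.16.0.1", 3260), ("172.16.0.3", 3260)]
stale, replayed = switch_over(open_conns, [("172.16.0.3", 3260)], ["write LBA 100"])
assert stale == [("172.16.0.1", 3260)]
assert replayed == [(("172.16.0.3", 3260), "write LBA 100")]
```

The buffered requests stand in for the contents of the buffer area 421, which the host drains only after the old connection has been removed from the session.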
FIG. 19 illustrates an example of the system configuration of the present embodiment. In the system of the present embodiment, in addition to the configuration of the first embodiment, a master storage device 7 is connected to the IP-SAN 13 and the management network 15 by a communication line 10 and a communication line 12, respectively. Further, in the present embodiment, the master storage device 7, the migration source storage device 1, and the migration destination storage device 3 are collectively called storage devices. The master storage device 7 carries out data transmission and reception by using the iSCSI protocol between the host 4, the migration source storage device 1, and the migration destination storage device 3. Further, the name management device 5 also carries out management of the iSCSI names of the master storage device 7. -
FIG. 20 shows an example of the configuration of the master storage device 7. The master storage device 7 is a storage device system having at least one storage device. The master storage device 7, similar to the migration source storage device 1, has a disk device 703, a control device 707, and a communication line 706. Further, the control device 707, similar to the control device 107 of the migration source storage device 1, has a main memory 701, a communication line 702, a CPU 704, an IO IF 705, a NIF 708, a management NIF 709, and an iSCSI processing device 710. The NIF 708 and the management NIF 709 have one or more physical ports. - The
main memory 701 stores a cache area 711 for storing data read out from the disk device 703 or data received from the host 4; a name operation program 715 that is executed by the CPU 704 when conducting the registration or deregistration of an iSCSI name of an initiator, an IP address, and a TCP port number in the name management device 5, sending an inquiry to the name management device 5, and receiving an inquiry response or change notification from the name management device 5; and a replication program 716 that is executed by the CPU 704 when data stored in the disk device 703 is replicated into other storage devices. - Further, similar to the
main memory 101 of the migration source storage device 1, the main memory 701 stores a name management device table 721, a port table 722, a target table 723, and an LU table 724. - The configuration of the migration
source storage device 1, the migration destination storage device 3, the management terminal 2, the host 4, the terminal 6, and the name management device 5 is identical to that of the first embodiment. - The data structure of each table in the present embodiment is identical to that of the first embodiment.
- The GUI in the present embodiment will be explained below. In the present embodiment, in addition to the GUIs of the first embodiment, a target
replication management screen 2100 is provided by the management terminal 2. -
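The inputs gathered on this screen ultimately become a target replication start request. The sketch below illustrates, under assumed field names (the patent does not specify the request's wire format), how the management terminal might validate the screen's inputs and assemble the payload: the device ID selected with the button 2102, the target and initiator iSCSI names from the areas 2103 and 2104, and the port ID from the area 2105.

```python
# Hypothetical composition of the target replication start request from the
# target replication management screen's inputs. Field names are assumptions.

def compose_replication_start_request(device_id, target_iqn, initiator_iqn, port_id):
    """Validate the GUI inputs and return the request payload."""
    for iqn in (target_iqn, initiator_iqn):
        if not iqn.startswith("iqn."):
            raise ValueError("iSCSI qualified names are expected here")
    return {
        "device_id": device_id,      # selected with button 2102
        "target": target_iqn,        # area 2103: target that is the replication object
        "initiator": initiator_iqn,  # area 2104: initiator used for replication
        "port_id": port_id,          # area 2105: physical port used for replication
    }

req = compose_replication_start_request(
    "STR03",
    "iqn.2004-06.com.hitachi:tar04",
    "iqn.2004-06.com.hitachi:replication-ini03",
    2)
assert req["device_id"] == "STR03" and req["port_id"] == 2
```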
FIG. 21 shows an example of the target replication management screen 2100 used when the system administrator or the like designates, to one of the storage devices, replication of data stored in an LU allocated to a target of the storage device into another storage device. The target replication management screen 2100 has a button 2102 that is used when the device ID of the storage device managing the target, which is the object of the replication, is selected from a list; an area 2101 for displaying the device ID selected by using the button 2102; an area 2103 to which an iSCSI name of the target is inputted, which is the object of the replication; an area 2104 to which an iSCSI name of an initiator is inputted, which is used by the storage device when the replication is conducted; an area 2105 to which a port ID of a physical port is inputted, which is used by the storage device when the replication is conducted; a button 2128 that is used when the storage device having the device ID selected by using the button 2102 is designated, so as to start the replication processing according to the information inputted into the areas from the area 2103 to the area 2105; and a button 2129 that is used when the replication processing is canceled. - The communication sequence and the operation procedure of the present embodiment will be described below. In the present embodiment, the
master storage device 7 replicates the data stored therein into the first target of the migration source storage device 1. Then, the first target of the migration source storage device 1 is migrated into the migration destination storage device 3, and the master storage device 7 then continues the replication of the data into the target (which has the same iSCSI name as the first target) created in the migration destination storage device 3. - Initially, the system administrator or the like, as was described with reference to
FIG. 12, designates the table initialization processing to the management terminal 2 and then registers the device ID, the IP address, and the management IP address of the name management device 5 and the device ID and the management IP address of the migration source storage device 1 in the management terminal 2. - The system administrator or the like then registers the device ID and the management IP address of the
master storage device 7 in the management terminal 2. The communication sequence and the operation procedure at the time the registration work is conducted are identical to those of the steps from 1208 to 1211 in FIG. 12. In the present embodiment, the device ID and the management IP address of the master storage device 7 are assumed to be “STR03” and “192.168.0.3”, respectively. Furthermore, the master storage device 7 is assumed to have two physical ports. Those physical ports will be called the fifth physical port and the sixth physical port. In this registration processing, the port IDs of “1” and “2” will be allocated to the fifth physical port and the sixth physical port, respectively. Furthermore, the master storage device 7 is assumed to have two LUs. The respective LUs will be called the fifth LU and the sixth LU. In this registration work, the LUNs of “0” and “1” will be allocated to the fifth LU and the sixth LU, respectively. - Then, the system administrator or the like, as was described with reference to
FIG. 13, registers the discovery domain information in the name management device 5 and registers the information relating to the physical ports and targets of the migration source storage device 1 in the migration source storage device 1. However, in the present embodiment, the system administrator or the like conducts the registration work of the discovery domain so that the first target “iqn.2004-06.com.hitachi:tar01” and the third initiator used by the master storage device 7 for data replication into the migration source storage device 1 belong to the discovery domain “DD01”. In the present embodiment, the iSCSI name of the third initiator is assumed to be “iqn.2004-06.com.hitachi:replication-ini03”. - Then, the system administrator or the like registers, in the
name management device 5, the information of the discovery domain to which the first initiator managed by the host 4 and the fourth target managed by the master storage device 7 belong, this target being the target with which the first initiator conducts iSCSI communication. The communication sequence and the operation procedure at the time the registration work is conducted are identical to those of the operations 1301 to 1304 of FIG. 13. In the present embodiment, the domain ID of the discovery domain registered herein is assumed to be “DD02” and the iSCSI name of the fourth target is assumed to be “iqn.2004-06.com.hitachi:tar04”. - Then, the system administrator or the like registers, in the
master storage device 7, the information relating to the physical ports and the targets of the master storage device 7. The communication sequence and the operation procedure at the time the registration work is conducted are identical to those of the steps from 1305 to 1315 in FIG. 13. In the present embodiment, the IP addresses of “172.16.0.5” and “172.16.0.6” are assumed to be allocated to the fifth physical port and the sixth physical port, respectively, in this registration work. Furthermore, the fourth target is assumed to be registered in the master storage device 7, and the fifth physical port and the fifth LU are assumed to be respectively allocated to it in this registration work. - Then, the system administrator or the like uses the
character input device 207 and the pointing device 206 and designates display of the target replication management screen 2100 to the management terminal 2. The CPU 204 of the management terminal 2, which has received the designation, executes the GUI control program 211 and conducts the target replication management screen display processing. In the target replication management screen display processing, first, the CPU 204 of the management terminal 2 displays the target replication management screen 2100 on the display 205. - Further, the
CPU 204 of the management terminal 2 reads the entries 2211 (device ID) of all of the records of the storage device table 221, creates a list of device IDs of the storage devices according to the results obtained, and makes it possible to display the list of device IDs of the storage devices when the button 2102 is specified by the system administrator or the like. Duplication of a device ID in the list of device IDs of the storage devices is avoided. Then, the system administrator or the like selects the device ID of the master storage device 7 by using the button 2102 of the target replication management screen 2100 and inputs the iSCSI name of the target which is the object of replication, the iSCSI name of the initiator used by the master storage device 7, and the port ID of the physical port used by this initiator in the area 2103, the area 2104, and the area 2105, respectively. - In the present embodiment, “STR03”, which is the device ID of the
master storage device 7, is selected by using the button 2102, and “iqn.2004-06.com.hitachi:tar04”, which is the iSCSI name of the fourth target, “iqn.2004-06.com.hitachi:replication-ini03”, which is the iSCSI name of the third initiator, and “2”, which is the port ID of the sixth physical port, are inputted into the area 2103, the area 2104, and the area 2105, respectively. - If the system administrator or the like then specifies the button 2128, the
CPU 204 of the management terminal 2 searches the storage device table 221 on condition that the device ID selected by using the button 2102 matches the contents of the entry 2211 (device ID) and fetches the contents of the entry 2212 (management IP address) of the record that agrees with this condition. Then, the CPU 204 of the management terminal 2 composes a target replication start request including the contents of the areas from the area 2103 to the area 2105 and sends the request via the management NIF 209 to the master storage device 7 having the device ID selected by using the button 2102. The destination IP address of the target replication start request is the contents of the aforementioned entry 2212 (management IP address). - If the target replication start request is received, the
CPU 704 of the master storage device 7 executes the replication program 716 and starts the target replication. First, the CPU 704 of the master storage device 7 fetches the contents of the areas from the area 2103 to the area 2105 from the received target replication start request. Then, the CPU 704 of the master storage device 7 searches the port table 722 on condition that the contents of the entry 7221 (port ID) match the contents of the area 2105 and fetches the contents of the entry 7222 (IP address) of the record that agrees with this condition. - Then, the
CPU 704 of the master storage device 7 registers the iSCSI name of the third initiator and the IP address of the physical port used by the third initiator in the name management device 5 according to the contents of the area 2104 and the contents of the entry 7222 (IP address). The communication sequence and the operation procedure at the time the name registration work is conducted are identical to those of the operations 1401 to 1403 shown in FIG. 14, except that the host 4 is replaced with the master storage device 7 and the first initiator is replaced with the third initiator. - Then, the
CPU 704 of the master storage device 7 sends a request for the change notification registration to the name management device 5. The communication sequence and the operation procedure at the time the change notification registration work is conducted are identical to those of the operations 1404 to 1406 shown in FIG. 14, except that the host 4 is replaced with the master storage device 7 and the first initiator is replaced with the third initiator. - Then, the
CPU 704 of the master storage device 7 sends the discovery request to the name management device 5. The communication sequence and the operation procedure at the time the discovery operation is conducted are identical to those of the operations 1407 to 1409 shown in FIG. 14, except that the host 4 is replaced with the master storage device 7 and the first initiator is replaced with the third initiator. In the present embodiment, the iSCSI name, the IP address, and the TCP port number of the target contained in the discovery response received by the master storage device 7 are “iqn.2004-06.com.hitachi:tar01”, which is the iSCSI name of the first target that belongs to the same discovery domain as the third initiator, “172.16.0.1”, which is the IP address allocated to the first physical port, and “3260”, which is the well-known port. - Then, the
CPU 704 of the master storage device 7 performs login to the first target by using the third initiator and establishes an iSCSI session between the third initiator and the first target. The communication sequence and the operation procedure at the time the login operation is conducted are identical to those of the operations 1410 to 1412 shown in FIG. 14, except that the host 4 is replaced with the master storage device 7 and the first initiator is replaced with the third initiator. - Each time the data of the LU (that is, the fifth LU) allocated to the target (that is, the fourth target), whose iSCSI name is identical to the contents of the
area 2103, is changed by the host 4, the CPU 704 of the master storage device 7 uses the established iSCSI session and carries out an identical data change with respect to the LU allocated to the first target, that is, the first LU. As a result, consistency is maintained between the data of the fifth LU and the data of the first LU. - Then, the system administrator or the like carries out the configuration relating to the fourth target as the target of the access destination in the
host 4 and then activates the first initiator. The communication sequence and the operation procedure at the time the initiator activation is conducted are identical to those shown in FIG. 14, except that the migration source storage device 1 is replaced with the master storage device 7, the first target is replaced with the fourth target, the first LU is replaced with the fifth LU, and the processing of the steps from 1404 to 1406 is not carried out. - Then, the system administrator or the like, as was described with reference to
FIG. 15, registers the device ID and the management IP address of the migration destination storage device 3 in the management terminal 2 and then registers the information relating to the physical ports of the migration destination storage device 3 in the migration destination storage device 3. - Further, the system administrator or the like, as was described with reference to
FIG. 16, designates the migration source storage device 1 to start the migration processing of the first target. The communication sequence and the operation procedure relating to the subsequent migration processing are identical to those shown in FIG. 16 to FIG. 18, except that the host 4 is replaced with the master storage device 7 and the first initiator is replaced with the third initiator. - The second embodiment has been explained hereinabove. According to the second embodiment, the target for which an initiator conducts iSCSI communication can be migrated from the migration
source storage device 1 to the migration destination storage device 3, without disconnecting the iSCSI session of this initiator used by the master storage device 7 for replication. As a result, the migration of storage devices where a replica of data is stored is possible without changing the configuration of the master storage device 7 where the original data is stored. - As for the third embodiment, only the portion thereof which differs from the second embodiment will be explained. The third embodiment has a configuration similar to that of the second embodiment. However, in the second embodiment, all of the storage devices were assumed to be disposed at the same site, whereas in the present embodiment, the migration source storage device and the migration destination storage device are disposed in one site and the master storage device is disposed in another site.
-
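The synchronous replication that both the second and third embodiments rely on (every change the host 4 makes to the fifth LU is applied identically to the first LU over the replication session) can be sketched as follows. Modeling each LU as a mapping from block address to data is an assumption made purely for illustration.

```python
# Sketch of synchronous replication: the write is applied locally and
# mirrored to the replica before it is considered complete, so the fifth LU
# (master copy) and the first LU (replica) stay consistent.

fifth_lu = {}   # master copy in the master storage device, written by the host
first_lu = {}   # replica in the migration source storage device

def replicated_write(lba, data):
    """Apply the write locally, then mirror it before acknowledging."""
    fifth_lu[lba] = data
    first_lu[lba] = data    # carried over the iSCSI session to the first target

replicated_write(0, b"boot block")
replicated_write(7, b"payload")
assert fifth_lu == first_lu
```

After the first target is migrated, the same mirroring continues against the identically named target in the migration destination storage device; only the session's end point changes.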
FIG. 22 illustrates an example of the system configuration of the present embodiment. The system of the present embodiment has a master site 20 where a master storage device 7 is disposed, a remote site 21 where a migration source storage device 1 and a migration destination storage device 3 are disposed, and a WAN (Wide Area Network) 16, which is a network connecting the master site 20 with the remote site 21. In the present embodiment, the master site 20 and the remote site 21 are assumed to be at a certain distance from each other (for example, one in Tokyo, Japan, and the other in Osaka, Japan). - Further, the
master storage device 7, a management terminal 2, a host 4, a terminal 6, a name management device 5, an IP-SAN 13, a LAN 14, and a management network 15 are disposed in the master site 20. - On the other hand, the migration
source storage device 1, the migration destination storage device 3, a host 4, a terminal 6, a remote name management device 8 for unified management of combinations of an iSCSI name, an IP address, and a TCP port number of the migration source storage device 1 and the migration destination storage device 3 and for replication of parts of those combinations to the name management device 5 via the WAN 16, an IP-SAN 13, a LAN 14, and a management network 15 are disposed in the remote site 21. - The remote
name management device 8 is connected to the IP-SAN 13 and the management network 15 with a communication line 10 and a communication line 12, respectively. -
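The remote name management device's two roles described above can be sketched as follows: it manages (iSCSI name, IP address, TCP port number) combinations for the remote-site storage devices, and only a selected part of those combinations is copied to the name management device 5 at the master site. The record tuples and function names are illustrative assumptions.

```python
# Sketch of the remote name management device 8's record management:
# combinations are (iSCSI name, IP address, TCP port number) triples, and
# only the names subject to replication are copied to the master site.

remote_records = [
    ("iqn.2004-06.com.hitachi:tar01", "172.16.0.1", 3260),
    ("iqn.2004-06.com.hitachi:local-only", "172.16.0.9", 3260),
]
replicated_names = {"iqn.2004-06.com.hitachi:tar01"}   # part subject to replication

def records_to_replicate(records, names):
    """Select only the combinations that must be copied over the WAN."""
    return [r for r in records if r[0] in names]

assert records_to_replicate(remote_records, replicated_names) == [
    ("iqn.2004-06.com.hitachi:tar01", "172.16.0.1", 3260)]
```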
FIG. 23 shows an example of the configuration of the remote name management device 8. The remote name management device 8 is a computer comprising a main memory 2301, a communication line 2302, a disk device 2303, a CPU 2304, a display 2305, a pointing device 2306, a character input device 2307, a NIF 2308, and a management NIF 2309. The main memory 2301 stores a domain management program 2311, an iSCSI node management program 2312, a change notification program 2313, and a domain replication program 2314 that is executed by the CPU 2304 when part of the contents of an iSCSI node table 2321 is replicated to the name management device 5. Furthermore, the main memory 2301 also stores the iSCSI node table 2321, a domain table 2322, and a domain replication table 2323 storing associations of a domain ID of a discovery domain, which is the replication object, and the IP address of the name management device 5, which is the replication destination. - The configuration of the
master storage device 7, the migration source storage device 1, the migration destination storage device 3, the management terminal 2, the host 4, the terminal 6, and the name management device 5 is identical to that of the second embodiment. - The data structure of the domain replication table 2323 stored in the
disk device 2303 of the remote name management device 8 will be described below. The domain replication table 2323 has an array structure and can store at least one record. However, the data structure is not limited to an array structure. The data structures of the iSCSI node table 2321 and the domain table 2322 are identical to those of the iSCSI node table 521 and the domain table 522 of the second embodiment. -
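The domain replication table can be pictured as a simple array of records, each associating a replicated discovery domain with the replication-destination name management device. The sketch below is a hypothetical illustration; the class and field names are assumptions, not the patent's reference numerals.

```python
from dataclasses import dataclass

@dataclass
class DomainReplicationRecord:
    """One record of the domain replication table (field names are assumptions)."""
    domain_id: str       # domain ID of the discovery domain that is the replication object
    destination_ip: str  # IP address of the name management device that is the replication destination

# The table is an array of such records, one per replicated discovery domain.
domain_replication_table: list = []

def add_replication(domain_id: str, destination_ip: str) -> None:
    # Mirrors the record addition performed when a domain replication start
    # request is received.
    domain_replication_table.append(DomainReplicationRecord(domain_id, destination_ip))

# Example from this embodiment: replicate "DD01" to the name management device.
add_replication("DD01", "172.16.0.253")
```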
FIG. 24 illustrates an example of the data structure of the domain replication table 2323. The domain replication table 2323 has the same number of records as the discovery domains which constitute the objects of replication. Each record of the domain replication table 2323 has an entry 8231, in which a domain ID of the discovery domain which is the replication object is registered, and an entry 8232, in which the IP address of the name management device 5 which is the replication destination is registered. - The data structure of each table stored in the main memory of the
master storage device 7, the migration source storage device 1, and the migration destination storage device 3 and the data structure of each table stored in the disk device 203 of the management terminal 2 are identical to those of the second embodiment. - The GUI of the present embodiment will be explained below. In the present embodiment, in addition to the GUIs explained in connection with the first and second embodiments, the
management terminal 2 provides a domain replication management screen 2500. -
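Once a discovery domain has been registered through this screen, the remote name management device 8 re-applies every matching data change on the name management device 5, as described later in this section. A minimal sketch of that filtering, with hypothetical function names and illustrative iSCSI names:

```python
# Hypothetical sketch: a change to an iSCSI node is forwarded to the
# replication destination only when the node's discovery domain is registered
# as a replication object in the domain replication table.
def propagate_change(domain_id, change, replication_table, send):
    for rec in replication_table:
        if rec["domain_id"] == domain_id:
            # Conduct the identical data change on the replication destination.
            send(rec["destination_ip"], change)

sent = []
table = [{"domain_id": "DD01", "destination_ip": "172.16.0.253"}]
# A change in replicated domain "DD01" is forwarded ...
propagate_change("DD01", {"op": "register", "node": "iqn.example:tar01"},
                 table, lambda ip, ch: sent.append(ip))
# ... while a change in unreplicated domain "DD02" is not.
propagate_change("DD02", {"op": "register", "node": "iqn.example:ini01"},
                 table, lambda ip, ch: sent.append(ip))
```

This selective forwarding is why the discovery domain "DD02" of the master site never needs to be registered in the remote name management device 8.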
FIG. 25 shows an example of the domain replication management screen 2500 used by the system administrator or the like for registering a discovery domain, which is the replication object, in the remote name management device 8 and deleting it therefrom. The domain replication management screen 2500 has an area 2501 to which a domain ID of the discovery domain which is the replication object is inputted; an area 2502 to which an IP address allocated to the NIF 508 of the name management device 5, which is the replication destination, is inputted; a button 2510 that is used when the information inputted into the area 2501 and the area 2502 is registered in the remote name management device 8; a button 2511 that is used when the information of replication specified by using an area 2512 is deleted from the remote name management device 8; the area 2512 for displaying the entire information of replication that has already been registered in the remote name management device 8; a button 2513 and a button 2515 that are used when the display range of the area 2512 is moved up and down, respectively, by one line; a button 2514 that is used when the display range of the area 2512 is moved to any position; and a button 2519 that is used when the domain replication management screen 2500 is closed. - The communication sequence and the operation procedure of the present embodiment will be described below. In the present embodiment, the
master storage device 7 replicates the data stored in itself into the first target of the migration source storage device 1 via the WAN 16. Then, the first target of the migration source storage device 1 is migrated to the migration destination storage device 3, and the master storage device 7 continues the replication of the data into the migration destination storage device 3. - First, the system administrator or the like, similar to the second embodiment, designates the
management terminal 2 to conduct the table initialization processing and then registers the device ID, the IP address, and the management IP address of the remote name management device 8 and the device ID and the management IP address of the migration source storage device 1 in the management terminal 2. The communication sequence and the operation procedure of this registration work are identical to those of operations 1201 through 1211 shown in FIG. 12, except that the name management device 5 is replaced with the remote name management device 8 and the management IP address of the name management device 5 contained in the initialization request is replaced with the management IP address of the remote name management device 8. - Then, the system administrator or the like registers the device ID, the IP address, and the management IP address of the
name management device 5 and the device ID and the management IP address of the master storage device 7 in the management terminal 2. The communication sequence and the operation procedure of this registration work are identical to those of the operations 1202 through 1211 shown in FIG. 12. - Then, the system administrator or the like uses the
character input device 207 and the pointing device 206 and designates display of the domain replication management screen 2500 to the management terminal 2. The CPU 204 of the management terminal 2, which has received the designation, executes the GUI control program 211 and conducts the domain replication management screen display processing. In the domain replication management screen display processing, first, the CPU 204 of the management terminal 2 displays the domain replication management screen 2500 on the display 205. Then, the system administrator or the like inputs the domain ID of the discovery domain, which is the object of replication, and the IP address of the name management device 5, which is the replication destination, into the area 2501 and the area 2502, respectively, of the domain replication management screen 2500. - In the present embodiment, “DD01”, to which the third initiator and the first target belong, and “172.16.0.253”, which is the IP address allocated to the
NIF 508 of the name management device 5, are inputted into the area 2501 and the area 2502, respectively. - If the system administrator or the like then specifies the
button 2510, the CPU 204 of the management terminal 2 composes a domain replication start request, including the contents of the area 2501 and the contents of the area 2502, and sends the request to the remote name management device 8 via the management NIF 209. If the domain replication start request is received, the CPU 2304 of the remote name management device 8 executes the domain replication program 2314 and conducts the following processing. - First, the
CPU 2304 of the remote name management device 8 fetches the contents of the area 2501 and the contents of the area 2502 from the received domain replication start request. Then, the CPU 2304 of the remote name management device 8 adds a record to the domain replication table 2323. Here, the contents of the area 2501 and the contents of the area 2502 are respectively registered in the entry 8231 (domain ID) and the entry 8232 (IP address) of the record which is being added. - After the above-described record addition operation has been completed, the
CPU 2304 of the remote name management device 8 composes a domain replication start response showing that the domain replication start processing was completed successfully and sends the response to the management terminal 2 via the management NIF 2309. Then, each time a data change is generated, for example, when an iSCSI node is added to the discovery domain which is the replication object or is deleted therefrom, or when the information of an iSCSI node belonging to this discovery domain is changed, the CPU 2304 of the remote name management device 8 executes the domain replication program 2314, conducts the identical data change in the name management device 5, which is the replication destination, and thereby maintains the consistency of the data managed by the remote name management device 8 and the data managed by the name management device 5. - If the
management terminal 2 receives the aforementioned domain replication start response, the system administrator or the like, similar to the second embodiment, registers the information of the discovery domain “DD01”, to which the third initiator and the first target belong, in the remote name management device 8 and registers the information relating to the physical ports and targets of the migration source storage device 1 in the migration source storage device 1. The communication sequence and the operation procedure of this registration work are identical to those shown in FIG. 13, except that the name management device 5 is replaced with the remote name management device 8 and that the identical data change is conducted in the name management device 5 after the remote name management device 8 has executed the domain change processing or the name registration processing. - Then, the system administrator or the like, similar to the second embodiment, registers in the
name management device 5 the information of the discovery domain “DD02”, to which the first initiator and the fourth target belong, and registers in the master storage device 7 the information relating to the physical ports and targets of the master storage device 7. The communication sequence and the operation procedure of this registration work are identical to those shown in FIG. 13, except that the migration source storage device 1 is replaced with the master storage device 7. Further, the first initiator and the fourth target do not conduct iSCSI communication with the initiators and the targets in the remote site 21. Therefore, the information of the discovery domain “DD02” is not required to be registered in the remote name management device 8. - Then, the system administrator or the like, similar to the second embodiment, uses the target
replication management screen 2100 and designates to the master storage device 7 the start of replication from the fourth target to the first target. The communication sequence and the operation procedure of this replication start operation are identical to those of operations 1401 through 1412 shown in FIG. 14, except that the host 4 is replaced with the master storage device 7 and the first initiator is replaced with the third initiator. - Then, the system administrator or the like, similar to the second embodiment, activates the first initiator in the
host 4 of the master site 20. The communication sequence and the operation procedure at the time of this initiator start are identical to those of operations 1401 through 1412 shown in FIG. 14, except that the migration source storage device 1 is replaced with the master storage device 7, the first target is replaced with the fourth target, the first LU is replaced with the fifth LU, and the processing of the steps 1404 through 1406 is not carried out. - Then, the system administrator or the like registers the device ID and the management IP address of the migration
destination storage device 3 in the management terminal 2 and then registers the information relating to the physical ports of the migration destination storage device 3 in the migration destination storage device 3. The communication sequence and the operation procedure of this registration procedure are identical to those shown in FIG. 15, except that the management IP address of the name management device 5 contained in the initialization request is replaced with the management IP address of the remote name management device 8. - Then, the system administrator or the like, as described with reference to
FIG. 16, designates the start of the migration processing of the first target to the migration source storage device 1. The communication sequence and the operation procedure of the subsequent migration processing are identical to those shown in FIG. 16 through FIG. 18, except for the following points: the host 4 is replaced with the master storage device 7; the first initiator is replaced with the third initiator; the device with which the migration destination storage device 3 exchanges the name registration request and the name registration response is the remote name management device 8; the device with which the migration source storage device 1 exchanges the name deregistration request and the name deregistration response is the remote name management device 8; and the remote name management device 8, after executing the name registration processing or the name deregistration processing, conducts the identical data change in the name management device 5, whereupon the name management device 5, which has received the change, conducts the change notification destination search processing and sends the change notification. - The third embodiment has been explained hereinabove. According to the third embodiment, even when the
master storage device 7 where the original data is stored and the storage device where a replica of the data is stored are disposed in separate sites, the migration of the storage device where the replica of the data is stored is possible without changing the configuration of the master storage device 7. - As for the fourth embodiment, only the portion thereof which differs from the first embodiment will be explained. In the fourth embodiment, the LU that is the access object for the
host 4 is changed from the first LU to the second LU among the LUs managed by the storage device. -
FIG. 26 shows an example of the system configuration of the present embodiment. The system of the present embodiment has a configuration obtained by removing the migration destination storage device 3 from the configuration of the first embodiment. -
FIG. 27 shows an example of the configuration of a storage device 9. The storage device 9, similar to the migration source storage device 1, has a disk device 2703, a control device 2707, and a communication line 2706. Furthermore, the control device 2707, similar to the control device 107 of the migration source storage device 1, has a main memory 2701, a communication line 2702, a CPU 2704, an IO IF 2705, a NIF 2708, a management NIF 2709, and an iSCSI processing device 2710. The NIF 2708 and the management NIF 2709 have one or more physical ports. - The
main memory 2701 has a cache area 2711 for storing data read out from the disk device 2703 or data received from the host 4 or the like; a migration control program 2712 that is executed by the CPU 2704 when an LU accessed by the host 4 is changed; a name change program 2713 that is executed by the CPU 2704 when an iSCSI name, an IP address, and a TCP port number of a target are registered in the name management device 5 or deregistered therefrom; and an LU replication program 2714 that is executed by the CPU 2704 when the data stored in an LU are replicated to another LU. - Further, the
main memory 2701, similar to the main memory 101 of the migration source storage device 1, stores a name management device table 2721, a port table 2722, a target table 2723, and an LU table 2724. - The configuration of the
management terminal 2, the host 4, the terminal 6, and the name management device 5 is identical to that of the first embodiment. - The data structures of the name management device table 2721, the port table 2722, the target table 2723, and the LU table 2724 stored in the
main memory 2701 of the storage device 9 are identical to those of the name management device table 121, the port table 122, the target table 123, and the LU table 124, respectively, of the first embodiment. Likewise, the data structure of each table stored in the disk device 203 of the management terminal 2 is identical to that of the first embodiment. - The GUI in the present embodiment will be described below.
-
FIG. 28(a) shows an example of an LU replication management screen 2800 used by the system administrator or the like to designate the start of LU replication to the storage device 9. The LU replication management screen 2800 has a button 2802 that is used when the device ID of the storage device 9 is selected from a list; an area 2801 for displaying the device ID selected by using the button 2802; an area 2803 to which the LUN of the LU where the original data is stored is inputted; an area 2804 to which the LUN of the LU where a replica of the original data is stored is inputted; a button 2818 that is used when the storage device 9 having the device ID selected by using the button 2802 is designated to start the replication processing according to the information inputted into the area 2803 and the area 2804; and a button 2819 that is used when the replication processing is canceled. -
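The LU replication whose start is designated through this screen proceeds as an initial copy of the source LU followed by mirroring of subsequent writes, as this section describes. The sketch below models LUs as simple block lists; it is an illustration under that assumption, not the storage device's actual implementation.

```python
class LUReplicator:
    """Sketch of the LU replication behavior: initial copy, then write mirroring.
    LUs are modeled as plain block lists (a simplifying assumption)."""

    def __init__(self, source_lu, replica_lu):
        self.source, self.replica = source_lu, replica_lu
        # Initial copy: bring the replica up to date with the original data.
        self.replica[:] = self.source

    def write(self, block, data):
        # After the initial copy, every change to the source LU is executed
        # identically against the replica LU, keeping the two consistent.
        self.source[block] = data
        self.replica[block] = data

first_lu = ["a", "b", "c"]      # e.g. LUN 0: original data
second_lu = [None, None, None]  # e.g. LUN 1: replica
rep = LUReplicator(first_lu, second_lu)
rep.write(1, "B")  # a post-copy change is applied to both LUs
```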
FIG. 28(b) shows a display example of an inside-storage migration management screen 2820 used by the system administrator or the like to designate, to the storage device 9, a change of the LU accessed by the host 4. The inside-storage migration management screen 2820 has a button 2822 that is used when the device ID of the storage device 9 is selected from a list; an area 2821 for displaying the device ID selected by using the button 2822; an area 2823 to which the iSCSI name of the target to which the LU prior to the change was allocated is inputted; an area 2824 to which the port ID of the physical port allocated to the target after the LU accessed by the host 4 is changed is inputted; an area 2825 to which the TCP port number used by the target is inputted; an area 2826 to which the LUN of the LU after the change is inputted; a button 2838 that is used when the storage device 9 having the device ID selected by using the button 2822 is instructed to start the migration processing according to the information inputted into the areas from the area 2823 to the area 2826; and a button 2839 that is used when the migration processing is canceled. Other GUIs in the present embodiment are identical to those of the first embodiment. - The communication sequence and the operation procedure in the present embodiment will be described below. In the present embodiment, the
storage device 9 copies the data stored in the first LU to the second LU. In this case, the LU allocated to the first target of the storage device 9 is assumed to be changed from the first LU to the second LU. Further, in the present embodiment, the storage device 9 is assumed, similar to the migration source storage device 1 of the first embodiment, to have the device ID “STR01” and the management IP address “192.168.0.1”, to have the first and second physical ports and the first and second LUs, and to manage the first target having the first physical port and the first LU allocated thereto. - First, the system administrator or the like designates the table initialization processing to the
management terminal 2 and then registers the device ID, the IP address, and the management IP address of the name management device 5 and the device ID and the management IP address of the storage device 9 in the management terminal 2. The communication sequence and the operation procedure of this registration work are identical to those shown in FIG. 12, except that the migration source storage device 1 is replaced with the storage device 9. - Then, the system administrator or the like registers in the
name management device 5 the information of the discovery domain “DD01”, to which the first initiator and the first target belong, and registers in the storage device 9 the information relating to the physical ports and the targets of the storage device 9. The communication sequence and the operation procedure of this registration work are identical to those shown in FIG. 13, except that the migration source storage device 1 is replaced with the storage device 9 and the system administrator or the like does not register the information relating to the second target. - Then, the system administrator or the like uses the
pointing device 206 or the character input device 207 and designates display of the LU replication management screen 2800 to the management terminal 2. The CPU 204 of the management terminal 2, which has received the designation, executes the GUI control program 211 and conducts the LU replication management screen display processing. In the LU replication management screen display processing, first, the CPU 204 of the management terminal 2 displays the LU replication management screen 2800 on the display 205. Further, the CPU 204 of the management terminal 2 reads the entries 2211 (device ID) of all of the records of the storage device table 221, creates a list of device IDs of the storage devices according to the results, and makes it possible to display the list of device IDs of the storage devices when the button 2802 is specified by the system administrator or the like. Duplication of a device ID in the list of device IDs of the storage devices is avoided. - Then, the system administrator or the like selects the device ID of the
storage device 9 by activating the button 2802 of the LU replication management screen 2800 and inputs the LUN of the LU where the original data is stored and the LUN of the LU where a replica of the original data is stored into the area 2803 and the area 2804, respectively. In the present embodiment, “STR01”, which is the device ID of the storage device 9, is selected by using the button 2802, and “0”, which is the LUN of the first LU, and “1”, which is the LUN of the second LU, are inputted into the area 2803 and the area 2804, respectively. - If the system administrator or the like then specifies the button 2818, the
CPU 204 of the management terminal 2 searches the storage device table 221 on condition that the device ID selected by using the button 2802 matches the contents of the entry 2211 (device ID) and fetches the contents of the entry 2212 (management IP address) of the record that agrees with this condition. - Then, the
CPU 204 of the management terminal 2 composes an LU replication start request, including the contents of the area 2803 and the contents of the area 2804, and sends this request via the management NIF 209 to the storage device 9 holding the device ID selected by using the button 2802. The destination IP address of this LU replication start request is assumed to be the contents of the aforementioned entry 2212 (management IP address). - If the LU replication start request is received, the CPU 2704 of the
storage device 9 executes the LU replication program 2714 and starts the replication of the LU. First, the CPU 2704 of the storage device 9 fetches the contents of the area 2803 and the contents of the area 2804 from the received LU replication start request. Then, the CPU 2704 of the storage device 9 conducts an initial copy of the data stored in the LU whose LUN is identical to the contents of the area 2803 into the LU whose LUN is identical to the contents of the area 2804. After the initial copy has been completed, the CPU 2704 of the storage device 9, for each change in the data of the LU whose LUN is identical to the contents of the area 2803, executes the change identical thereto with respect to the LU whose LUN is identical to the contents of the area 2804. As a result, the consistency of the data of the two LUs is maintained. In the present embodiment, after the data stored in the first LU is initially copied into the second LU, the consistency between the data of the first LU and the data of the second LU is maintained. - Then, the system administrator or the like carries out the configuration relating to the first target as the access destination target in the
host 4 and then activates the first initiator. The communication sequence and the operation procedure of this initiator activation are identical to those shown in FIG. 14, except that the migration source storage device 1 is replaced with the storage device 9. - Then, the system administrator or the like designates, to the
storage device 9, the start of the processing changing the LU accessed by the host 4. In the present embodiment, the system administrator or the like is assumed to change the access destination of the host 4 from the first LU to the second LU. First, the system administrator or the like uses the pointing device 206 or the character input device 207 and designates display of the inside-storage migration management screen 2820 to the management terminal 2. - The
CPU 204 of the management terminal 2, which has received the designation, executes the GUI control program 211 and conducts the inside-storage migration management screen display processing. In the inside-storage migration management screen display processing, first, the CPU 204 of the management terminal 2 displays the inside-storage migration management screen 2820 on the display 205. Then, the CPU 204 of the management terminal 2 reads the entries 2211 (device ID) of all of the records of the storage device table 221, creates a list of device IDs of the storage devices according to the results, and makes it possible to display the list of device IDs of the storage devices when the button 2822 is specified by the system administrator or the like. Duplication of a device ID in the list of device IDs of the storage devices is avoided. - Then, the system administrator or the like selects the device ID of the
storage device 9 by using the button 2822 of the inside-storage migration management screen 2820 and inputs the iSCSI name of the target to which the LU prior to the change was allocated, the port ID of the physical port that will be allocated to the target after the LU change, the TCP port number that will be used by the target after the LU change, and the LUN of the LU after the change. In the present embodiment, “STR01”, which is the device ID of the storage device 9, is selected by using the button 2822, and “iqn.2004-06.com.hitachi:tar01”, which is the iSCSI name of the first target, “2”, which is the port ID of the second physical port, “3260”, which is the well-known port, and “1”, which is the LUN of the second LU, are inputted into the area 2823, the area 2824, the area 2825, and the area 2826, respectively. - If the system administrator or the like then specifies the
button 2838, the CPU 204 of the management terminal 2 searches the storage device table 221 on condition that the device ID selected by using the button 2822 matches the contents of the entry 2211 (device ID) and fetches the contents of the entry 2212 (management IP address) of the record that agrees with this condition. Then, the CPU 204 of the management terminal 2 composes a migration start request, including the contents of the areas from the area 2823 to the area 2826, and sends the request via the management NIF 209 to the storage device 9 holding the device ID selected by using the button 2822. The destination IP address of the migration start request is the contents of the entry 2212 (management IP address). - The communication sequence and the operation procedure relating to the subsequent migration processing are identical to those shown in
FIG. 16 through FIG. 18, except for the following points: the migration source storage device 1 and the migration destination storage device 3 are replaced with the storage device 9; the contents of the area 1105, the area 1115, the area 1116, and the area 1117 are replaced with the contents of the area 2823, the area 2824, the area 2825, and the area 2826, respectively; the migration source storage device control program 112 and the migration destination storage device control program 312 are replaced with the migration control program 2712; the communication processing between the migration source storage device 1 and the migration destination storage device 3 is not required; the processing from 1606 to 1611 in FIG. 16 is not required; the third physical port is replaced with the second physical port; and the third LU is replaced with the second LU. - Further, in the present embodiment, a case was explained where the physical port used by the first target was changed following the change in the LU accessed by the
host 4. However, it is also possible to change the LU accessed by the host 4 without changing the physical port used by the first target. In this case, the system administrator or the like inputs the port ID of the physical port used by the first target and a TCP port number other than the TCP port number used by the first target into the area 2824 and the area 2825 of the inside-storage migration management screen 2820. For example, the system administrator or the like inputs “1”, which is the port ID of the first physical port, and “10000”, which is a TCP port number other than the well-known port, into the area 2824 and the area 2825, respectively. - The fourth embodiment has been explained hereinabove. With the fourth embodiment, the LU accessed by the
host 4 can be changed without terminating applications running in the host 4. - In accordance with the present invention, the migration of the storage device accessed by a host or a change of the LU accessed by the host is possible without terminating applications running in the host. Further, the migration of the storage device where a replica of data is stored is possible without changing the configuration of the storage device where the original data is stored.
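The inside-storage migration of the fourth embodiment can be summarized in a short sketch: the target's physical port, TCP port, and allocated LUN change, but its iSCSI name does not, so the host reconnects to the same name and transparently reaches the new LU. This is a hypothetical illustration; the dictionary keys and the function are assumptions, not the storage device's actual interface.

```python
# Hypothetical sketch of the inside-storage migration: the target is
# reassigned a new physical port, TCP port, and LUN while its iSCSI name
# is preserved, which is what keeps the change transparent to the host.
def migrate_target(target, new_port_id, new_tcp_port, new_lun):
    name_before = target["iscsi_name"]
    target.update(port_id=new_port_id, tcp_port=new_tcp_port, lun=new_lun)
    # The access name must not change, or the host would lose its access path.
    assert target["iscsi_name"] == name_before
    return target

# Values from this embodiment: the first target moves from the first physical
# port and the first LU (LUN 0) to the second physical port and the second LU.
first_target = {"iscsi_name": "iqn.2004-06.com.hitachi:tar01",
                "port_id": 1, "tcp_port": 3260, "lun": 0}
migrate_target(first_target, new_port_id=2, new_tcp_port=3260, new_lun=1)
```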
Claims (9)
1. A system comprising:
a first storage system having a first port and a first storage area assigned to a first identifier which is a kind of an i-SCSI name;
a second storage system having a second port;
a computer;
a network, configured to adapt to an i-SCSI protocol, coupling the first storage system, the second storage system and the computer; and
a name management computer coupled to the first storage system, the second storage system and the computer via the network,
wherein
the computer establishes a first communication path from the computer itself to the first port of the first storage system for accessing from the computer to the first storage area via the first port by using the first identifier;
after creation of the first communication path, the first storage system designates the second storage system to create a second storage area to be assigned to the first identifier which is already assigned to the first storage area in the first storage system,
the second storage system creates the second storage area assigned to the first identifier based on the designation of the first storage system,
the second storage system sends information of the second port and the second storage area together with information of the first identifier to the name management computer after the creation of the second storage area assigned the first identifier,
the name management computer notifies the computer that the second storage area is assigned to the first identifier based on the information of the second port and the second storage area sent from the second storage system,
the computer establishes a second communication path from the computer itself to the second port of the second storage system after the second storage area assigned the first identifier has been created in the second storage system and in response to the notification from the name management computer,
the computer disconnects the first communication path after establishment of the second communication path and then conducts communication with the second storage system by using the second communication path and the first identifier so that the computer can access the second storage area in the second storage system instead of the first storage area by using the first identifier which is not changed.
2. The system according to claim 1, wherein, before the creation of the second storage area assigned to the first identifier is designated to the second storage system by the first storage system, the first storage system sends data stored in the first storage area in the first storage system to a third storage area in the second storage system via the network,
wherein, when the creation of the second storage area assigned to the first identifier is designated by the first storage system, the second storage system creates the second storage area assigned to the first identifier by assigning the first identifier to the third storage area where the data are stored.
3. The system according to claim 2, wherein the first storage system requests the name management computer to delete information of association of the first storage area and the first identifier after the second storage area has been created in the second storage system,
the name management computer notifies the computer of the deletion of the association information of the first storage area and the first identifier in response to the request from the first storage system, and
the computer disconnects the first communication path in response to the notification of the deletion from the name management computer.
4. The system according to claim 3, wherein, when the second communication path is created, the second storage system sends a command, which requests the computer not to use the second communication path until the first communication path is disconnected, to the computer via the second communication path.
5. The system according to claim 4, wherein the name management computer communicates with the first storage system, the second storage system and the computer by using iSNSP.
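Outside the claims, the bookkeeping role of the name management computer in claims 1-5 can be modeled by a toy in-memory name server. This is only an illustrative stand-in: real iSNSP (the iSNS protocol) is a binary wire protocol, and all names and portal addresses here are hypothetical.

```python
# Toy stand-in for the name management computer: it keeps
# (iSCSI name -> portals) associations and pushes change
# notifications to subscribed hosts.

class NameServer:
    def __init__(self):
        self.assoc = {}        # iSCSI name -> set of reachable portals
        self.subscribers = []  # callbacks invoked on every change

    def register(self, name, portal):
        self.assoc.setdefault(name, set()).add(portal)
        for notify in self.subscribers:
            notify("add", name, portal)

    def deregister(self, name, portal):
        self.assoc[name].discard(portal)
        for notify in self.subscribers:
            notify("delete", name, portal)

events = []
ns = NameServer()
ns.subscribers.append(lambda op, name, portal: events.append((op, name, portal)))

name = "iqn.2004-09.example:target0"
ns.register(name, "10.0.0.1:3260")    # source portal, pre-migration
ns.register(name, "10.0.0.2:3260")    # destination portal added
ns.deregister(name, "10.0.0.1:3260")  # source portal removed after handover
```

After the three updates the association holds only the destination portal, and the host has received one "add" and one "delete" notification for the migration, matching the add/delete flow of claims 1 and 3.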
6. A data migration method in a system comprising a first storage device, a second storage device, a computer and a name management device, the first storage device including a first physical port, a first logical volume, and a target to which the first physical port and the first logical volume are allocated, the target having a first identifier of iSCSI name, the second storage device including a second physical port, the computer establishing a first communication path from the computer to the first physical port and accessing the target by using the first communication path and the first identifier, the name management device storing association information of the first identifier of the target and a first network address of the first physical port, and the first storage device, the second storage device, the computer and the name management device being coupled by an iSCSI network, the method comprising:
a step in which the first storage device copies data stored in the first logical volume to a second logical volume in the second storage device;
a step in which, after completion of the copying, the first storage device designates the second storage device to assign the target, which has the first identifier and to which the first physical port and the first logical volume are already allocated, to the second logical volume and the second physical port;
a step in which the second storage device allocates the second logical volume and the second physical port to the target;
a step in which the second storage device notifies the name management device that the second physical port and the second logical volume are allocated to the target;
a step in which the name management device adds a relationship between the target and a second network address of the second physical port to the association information and notifies the computer that the second physical port and the second logical volume are added to the target and the second network address of the second physical port;
a step in which the computer receives the notification of addition from the name management device and establishes a second communication path from the computer itself to the second physical port by using the second network address of the second physical port;
a step in which said first storage device notifies the name management device that the first physical port is deleted from the target;
a step in which the name management device deletes a relationship between the target and the first network address of the first physical port from the association information and notifies the computer that the first physical port has been deleted from the target; and
a step in which the computer receives the notification of deletion from the name management device, disconnects the first communication path, and maintains an access to the target by using the second communication path.
7. The data migration method according to claim 6, further comprising:
a step in which, when the second communication path is created, the second storage device sends a command, which requests the computer not to use the second communication path until the first communication path is disconnected, to the computer via the second communication path.
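The hold-off command of claim 7 (and claim 4) can be sketched, again outside the claim language and with hypothetical names and addresses, as a host that establishes the second path early but keeps it unused until the first path is torn down:

```python
# Sketch of the claim 7 hold-off: the second path exists but is not used
# for I/O until the first path has been disconnected.

class Path:
    def __init__(self, portal):
        self.portal = portal
        self.usable = True

class MigratingHost:
    def __init__(self):
        self.first = None
        self.second = None

    def open_first(self, portal):
        self.first = Path(portal)

    def open_second(self, portal):
        # Per the storage device's command, the new path is established
        # but must not be used until the first path is disconnected.
        self.second = Path(portal)
        self.second.usable = False

    def active_portal(self):
        # I/O is directed to the first path while it still exists.
        return self.first.portal if self.first else self.second.portal

    def disconnect_first(self):
        self.first = None
        self.second.usable = True  # hold-off lifted

h = MigratingHost()
h.open_first("10.0.0.1:3260")
h.open_second("10.0.0.2:3260")
assert h.active_portal() == "10.0.0.1:3260"  # second path up but unused
h.disconnect_first()
assert h.active_portal() == "10.0.0.2:3260"  # access continues unchanged
```

The hold-off prevents the host from issuing I/O over both paths at once while the source and destination devices may still be out of sync.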
8. A system comprising:
a first storage device including a first physical port and a first logical volume which are allocated to a target, the target being assigned to a first identifier of an iSCSI name;
a second storage device including a second physical port;
a computer coupled to the first storage device and the second storage device; and
a name management device coupled to the first storage device, the second storage device, and the computer via a network, wherein the computer establishes a first communication path from the computer to the first physical port and conducts access to the target by using the first communication path and the first identifier;
the first storage device replicates data stored in the first logical volume to a second logical volume in the second storage device;
after completion of said replication, the first storage device designates the second storage device to create the target having the first identifier assigned to the target in the first storage device;
the second storage device allocates the second logical volume and the second physical port to the target after receiving the designation from the first storage device;
the second storage device notifies the name management device that the second physical port and the second logical volume are allocated to the target;
the name management device notifies the computer that the second physical port and the second logical volume are added to the target;
the computer receives the notification of addition from the name management device and establishes a second communication path from the computer to the second physical port;
the first storage device notifies the name management device that the first physical port and the first logical volume are deleted from the target;
the name management device notifies the computer that the first physical port and the first logical volume are deleted from the target; and
the computer receives the notification of deletion from the name management device, disconnects the first communication path, and maintains the access to the target by using the second communication path and the first identifier which is not changed.
9. The system according to claim 8, wherein, when the second communication path is created, the second storage device sends a command, which requests the computer not to use the second communication path until the first communication path is disconnected, to the computer via the second communication path.
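The replication step of claim 8 (the first storage device replicating the first logical volume to the second) is, at its simplest, a block-for-block copy. The following sketch is illustrative only; block size and volume contents are arbitrary choices, not taken from the patent.

```python
# Minimal block-for-block replication of a source volume to a destination
# volume, modeling logical volumes as byte buffers.

BLOCK = 512  # illustrative block size

def replicate(src: bytes, dst: bytearray) -> None:
    # Copy the source volume into the destination, one block at a time.
    assert len(dst) >= len(src)
    for off in range(0, len(src), BLOCK):
        dst[off:off + BLOCK] = src[off:off + BLOCK]

src_vol = bytes(range(256)) * 8        # 2 KiB source logical volume
dst_vol = bytearray(len(src_vol))      # destination logical volume
replicate(src_vol, dst_vol)
print(bytes(dst_vol) == src_vol)  # True
```

Only after this copy completes does the destination device take over the target, so the host never sees a volume with partially migrated data.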
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/759,524 US20070233704A1 (en) | 2004-09-22 | 2007-06-07 | Data migration method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-274338 | 2004-09-22 | ||
JP2004274338A JP4438582B2 (en) | 2004-09-22 | 2004-09-22 | Data migration method |
US10/980,196 US7334029B2 (en) | 2004-09-22 | 2004-11-04 | Data migration method |
US11/759,524 US20070233704A1 (en) | 2004-09-22 | 2007-06-07 | Data migration method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/980,196 Continuation US7334029B2 (en) | 2004-09-22 | 2004-11-04 | Data migration method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070233704A1 (en) | 2007-10-04 |
Family
ID=34941434
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/980,196 Expired - Fee Related US7334029B2 (en) | 2004-09-22 | 2004-11-04 | Data migration method |
US11/759,524 Abandoned US20070233704A1 (en) | 2004-09-22 | 2007-06-07 | Data migration method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/980,196 Expired - Fee Related US7334029B2 (en) | 2004-09-22 | 2004-11-04 | Data migration method |
Country Status (3)
Country | Link |
---|---|
US (2) | US7334029B2 (en) |
EP (1) | EP1641220A1 (en) |
JP (1) | JP4438582B2 (en) |
Families Citing this family (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004355503A (en) * | 2003-05-30 | 2004-12-16 | Canon Inc | Device management apparatus and method therefor |
US7681105B1 (en) * | 2004-08-09 | 2010-03-16 | Bakbone Software, Inc. | Method for lock-free clustered erasure coding and recovery of data across a plurality of data stores in a network |
US7681104B1 (en) * | 2004-08-09 | 2010-03-16 | Bakbone Software, Inc. | Method for erasure coding data across a plurality of data stores in a network |
US7366846B2 (en) * | 2005-01-14 | 2008-04-29 | International Business Machines Corporation | Redirection of storage access requests |
US7337350B2 (en) * | 2005-02-09 | 2008-02-26 | Hitachi, Ltd. | Clustered storage system with external storage systems |
JP4836533B2 (en) * | 2005-09-27 | 2011-12-14 | 株式会社日立製作所 | File system migration method in storage system, storage system, and management computer |
US8107467B1 (en) * | 2005-09-30 | 2012-01-31 | Emc Corporation | Full array non-disruptive failover |
US8072987B1 (en) | 2005-09-30 | 2011-12-06 | Emc Corporation | Full array non-disruptive data migration |
JP2007140699A (en) * | 2005-11-15 | 2007-06-07 | Hitachi Ltd | Computer system and storage device and management server and communication control method |
US7697515B2 (en) * | 2005-12-27 | 2010-04-13 | Emc Corporation | On-line data migration of a logical/virtual storage array |
US9348530B2 (en) * | 2005-12-27 | 2016-05-24 | Emc Corporation | Presentation of virtual arrays using n-port ID virtualization |
US7685395B1 (en) | 2005-12-27 | 2010-03-23 | Emc Corporation | Spanning virtual arrays across multiple physical storage arrays |
US7697554B1 (en) | 2005-12-27 | 2010-04-13 | Emc Corporation | On-line data migration of a logical/virtual storage array by replacing virtual names |
JP4767773B2 (en) * | 2006-06-29 | 2011-09-07 | 株式会社日立製作所 | Computer system and method for changing authentication information of computer system |
US8539177B1 (en) * | 2006-06-29 | 2013-09-17 | Emc Corporation | Partitioning of a storage array into N-storage arrays using virtual array non-disruptive data migration |
US8533408B1 (en) * | 2006-06-29 | 2013-09-10 | Emc Corporation | Consolidating N-storage arrays into one storage array using virtual array non-disruptive data migration |
US8452928B1 (en) * | 2006-06-29 | 2013-05-28 | Emc Corporation | Virtual array non-disruptive migration of extended storage functionality |
US7757059B1 (en) | 2006-06-29 | 2010-07-13 | Emc Corporation | Virtual array non-disruptive management data migration |
US8583861B1 (en) * | 2006-06-29 | 2013-11-12 | Emc Corporation | Presentation of management functionality of virtual arrays |
US8589504B1 (en) | 2006-06-29 | 2013-11-19 | Emc Corporation | Full array non-disruptive management data migration |
JP5034495B2 (en) * | 2006-12-27 | 2012-09-26 | 日本電気株式会社 | Storage system, program and method |
US8627418B2 (en) * | 2007-03-23 | 2014-01-07 | Pmc-Sierra, Inc. | Controlled discovery of san-attached SCSI devices and access control via login authentication |
JP2008269469A (en) | 2007-04-24 | 2008-11-06 | Hitachi Ltd | Storage system and management method therefor |
US8825870B1 (en) * | 2007-06-29 | 2014-09-02 | Symantec Corporation | Techniques for non-disruptive transitioning of CDP/R services |
US9098211B1 (en) * | 2007-06-29 | 2015-08-04 | Emc Corporation | System and method of non-disruptive data migration between a full storage array and one or more virtual arrays |
US9063896B1 (en) | 2007-06-29 | 2015-06-23 | Emc Corporation | System and method of non-disruptive data migration between virtual arrays of heterogeneous storage arrays |
US9063895B1 (en) | 2007-06-29 | 2015-06-23 | Emc Corporation | System and method of non-disruptive data migration between heterogeneous storage arrays |
JP5040629B2 (en) * | 2007-12-10 | 2012-10-03 | 富士通株式会社 | Data migration program, data migration method, and data migration apparatus |
JP2009199406A (en) * | 2008-02-22 | 2009-09-03 | Fujitsu Ltd | Apparatus management system |
US20100070722A1 (en) * | 2008-09-16 | 2010-03-18 | Toshio Otani | Method and apparatus for storage migration |
US8055736B2 (en) * | 2008-11-03 | 2011-11-08 | International Business Machines Corporation | Maintaining storage area network (‘SAN’) access rights during migration of operating systems |
US20100153612A1 (en) | 2008-12-15 | 2010-06-17 | Lsi Corporation | Transport agnostic scsi i/o referrals |
US9258391B2 (en) * | 2009-05-29 | 2016-02-09 | Canon Kabushiki Kaisha | Processing method and apparatus |
US20110153905A1 (en) * | 2009-12-23 | 2011-06-23 | Hitachi, Ltd. | Method and apparatus for i/o path switching |
US8824471B2 (en) | 2011-06-01 | 2014-09-02 | Cisco Technology, Inc. | Maintained message delivery during routing domain migration |
TW201327160A (en) * | 2011-12-21 | 2013-07-01 | Ind Tech Res Inst | Method for hibernation mechanism and computer system therefor |
CN104657396B (en) * | 2013-11-25 | 2020-04-24 | 腾讯科技(深圳)有限公司 | Data migration method and device |
KR20150130039A (en) * | 2014-05-13 | 2015-11-23 | 한다시스템 주식회사 | CRM based data migration system and method |
WO2016046943A1 (en) * | 2014-09-25 | 2016-03-31 | 株式会社日立製作所 | Storage device and storage device control method |
JP6565248B2 (en) * | 2015-03-20 | 2019-08-28 | 日本電気株式会社 | Storage device, management device, storage system, data migration method and program |
US10063376B2 (en) * | 2015-10-01 | 2018-08-28 | International Business Machines Corporation | Access control and security for synchronous input/output links |
US10120818B2 (en) | 2015-10-01 | 2018-11-06 | International Business Machines Corporation | Synchronous input/output command |
US10114616B2 (en) * | 2016-08-04 | 2018-10-30 | International Business Machines Corporation | Discovery for pattern utilization for application transformation and migration into the cloud pattern |
JP6740911B2 (en) * | 2017-01-16 | 2020-08-19 | 富士通株式会社 | Port switching program, port switching method, and information processing device |
CN113329057B (en) * | 2021-04-30 | 2022-05-27 | 新华三技术有限公司成都分公司 | Equipment access method and network equipment |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000187608A (en) | 1998-12-24 | 2000-07-04 | Hitachi Ltd | Storage device sub-system |
US7610348B2 (en) * | 2003-05-07 | 2009-10-27 | International Business Machines | Distributed file serving architecture system with metadata storage virtualization and data access at the data server connection speed |
- 2004-09-22 JP JP2004274338A patent/JP4438582B2/en not_active Expired - Fee Related
- 2004-11-04 US US10/980,196 patent/US7334029B2/en not_active Expired - Fee Related
- 2005-05-24 EP EP05253183A patent/EP1641220A1/en not_active Withdrawn
- 2007-06-07 US US11/759,524 patent/US20070233704A1/en not_active Abandoned
Patent Citations (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5731764A (en) * | 1992-10-13 | 1998-03-24 | Sony Corporation | Power control system for connecting and verifying slave connections |
US5734859A (en) * | 1993-10-14 | 1998-03-31 | Fujitsu Limited | Disk cache apparatus having selectable performance modes |
US5860137A (en) * | 1995-07-21 | 1999-01-12 | Emc Corporation | Dynamic load balancing |
US6108748A (en) * | 1995-09-01 | 2000-08-22 | Emc Corporation | System and method for on-line, real time, data migration |
US5708812A (en) * | 1996-01-18 | 1998-01-13 | Microsoft Corporation | Method and apparatus for Migrating from a source domain network controller to a target domain network controller |
US5832274A (en) * | 1996-10-09 | 1998-11-03 | Novell, Inc. | Method and system for migrating files from a first environment to a second environment |
US6230239B1 (en) * | 1996-12-11 | 2001-05-08 | Hitachi, Ltd. | Method of data migration |
US20010000818A1 (en) * | 1997-01-08 | 2001-05-03 | Teruo Nagasawa | Subsystem replacement method |
US6240494B1 (en) * | 1997-12-24 | 2001-05-29 | Hitachi, Ltd. | Subsystem replacement method |
US6772306B2 (en) * | 1998-03-24 | 2004-08-03 | Hitachi, Ltd. | Data saving method and external storage device |
US6421711B1 (en) * | 1998-06-29 | 2002-07-16 | Emc Corporation | Virtual ports for data transferring of a data storage system |
US6654830B1 (en) * | 1999-03-25 | 2003-11-25 | Dell Products L.P. | Method and system for managing data migration for a storage system |
US6336172B1 (en) * | 1999-04-01 | 2002-01-01 | International Business Machines Corporation | Storing and tracking multiple copies of data in a data storage library system |
US20020112008A1 (en) * | 2000-02-22 | 2002-08-15 | Christenson Nikolai Paul | Electronic mail system with methodology providing distributed message store |
US20010047460A1 (en) * | 2000-04-25 | 2001-11-29 | Naotaka Kobayashi | Remote copy system of storage systems connected to fibre network |
US20020174307A1 (en) * | 2001-03-15 | 2002-11-21 | Stuart Yoshida | Security-enhanced network attached storage device |
US6950833B2 (en) * | 2001-06-05 | 2005-09-27 | Silicon Graphics, Inc. | Clustered filesystem |
US20040068629A1 (en) * | 2001-08-10 | 2004-04-08 | Hitachi, Ltd. | Apparatus and method for online data migration with remote copy |
US20030074523A1 (en) * | 2001-10-11 | 2003-04-17 | International Business Machines Corporation | System and method for migrating data |
US20030110237A1 (en) * | 2001-12-06 | 2003-06-12 | Hitachi, Ltd. | Methods of migrating data between storage apparatuses |
US20030115447A1 (en) * | 2001-12-18 | 2003-06-19 | Duc Pham | Network media access architecture and methods for secure storage |
US6715031B2 (en) * | 2001-12-28 | 2004-03-30 | Hewlett-Packard Development Company, L.P. | System and method for partitioning a storage area network associated data library |
US20030135511A1 (en) * | 2002-01-11 | 2003-07-17 | International Business Machines Corporation | Method, apparatus, and program for separate representations of file system locations from referring file systems |
US20050262102A1 (en) * | 2002-01-11 | 2005-11-24 | Anderson Owen T | Method, apparatus, and program for separate representations of file system locations from referring file systems |
US6931410B2 (en) * | 2002-01-11 | 2005-08-16 | International Business Machines Corporation | Method, apparatus, and program for separate representations of file system locations from referring file systems |
US20030140193A1 (en) * | 2002-01-18 | 2003-07-24 | International Business Machines Corporation | Virtualization of iSCSI storage |
US20030182330A1 (en) * | 2002-03-19 | 2003-09-25 | Manley Stephen L. | Format for transmission file system information between a source and a destination |
US20030182257A1 (en) * | 2002-03-25 | 2003-09-25 | Emc Corporation | Method and system for migrating data while maintaining hard links |
US20050033878A1 (en) * | 2002-06-28 | 2005-02-10 | Gururaj Pangal | Apparatus and method for data virtualization in a storage processing device |
US20040139237A1 (en) * | 2002-06-28 | 2004-07-15 | Venkat Rangan | Apparatus and method for data migration in a storage processing device |
US20040143642A1 (en) * | 2002-06-28 | 2004-07-22 | Beckmann Curt E. | Apparatus and method for fibre channel data processing in a storage process device |
US20040049553A1 (en) * | 2002-09-05 | 2004-03-11 | Takashige Iwamura | Information processing system having data migration device |
US20040083285A1 (en) * | 2002-10-25 | 2004-04-29 | Alex Nicolson | Abstracted node discovery |
US20040088483A1 (en) * | 2002-11-04 | 2004-05-06 | Paresh Chatterjee | Online RAID migration without non-volatile memory |
US20040117546A1 (en) * | 2002-12-11 | 2004-06-17 | Makio Mizuno | iSCSI storage management method and management system |
US20040172512A1 (en) * | 2003-02-28 | 2004-09-02 | Masashi Nakanishi | Method, apparatus, and computer readable medium for managing back-up |
US20050010688A1 (en) * | 2003-06-17 | 2005-01-13 | Hitachi, Ltd. | Management device for name of virtual port |
US20050005062A1 (en) * | 2003-07-02 | 2005-01-06 | Ling-Yi Liu | Redundant external storage virtualization computer system |
US20060031651A1 (en) * | 2004-08-03 | 2006-02-09 | Yusuke Nonaka | Data migration with worm guarantee |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090234982A1 (en) * | 2008-03-13 | 2009-09-17 | Inventec Corporation | Method of identifying and dynamically updating storage device status at target |
US20100100678A1 (en) * | 2008-10-16 | 2010-04-22 | Hitachi, Ltd. | Volume management system |
US8103826B2 (en) | 2008-10-16 | 2012-01-24 | Hitachi, Ltd. | Volume management for network-type storage devices |
US8402239B2 (en) | 2008-10-16 | 2013-03-19 | Hitachi, Ltd. | Volume management for network-type storage devices |
US20120150945A1 (en) * | 2010-12-08 | 2012-06-14 | Kt Corporation | System and method for providing content-centric services using ultra-peer |
US9451021B2 (en) * | 2010-12-08 | 2016-09-20 | Kt Corporation | System and method for providing content-centric services using ultra-peer |
US9934302B1 (en) * | 2014-09-30 | 2018-04-03 | EMC IP Holding Company LLC | Method and system for performing replication to a device while allowing application access |
US10613755B1 (en) * | 2014-09-30 | 2020-04-07 | EMC IP Holding Company LLC | Efficient repurposing of application data in storage environments |
US10628379B1 (en) * | 2014-09-30 | 2020-04-21 | EMC IP Holding Company LLC | Efficient local data protection of application data in storage environments |
US20160191299A1 (en) * | 2014-12-24 | 2016-06-30 | Fujitsu Limited | Information processing system and control method for information processing system |
US10382250B2 (en) * | 2014-12-24 | 2019-08-13 | Fujitsu Limited | Information processing system and control method for information processing system |
CN106446111A (en) * | 2016-09-14 | 2017-02-22 | 广东欧珀移动通信有限公司 | Data migration method and terminal |
Also Published As
Publication number | Publication date |
---|---|
US7334029B2 (en) | 2008-02-19 |
US20060064466A1 (en) | 2006-03-23 |
JP4438582B2 (en) | 2010-03-24 |
JP2006092054A (en) | 2006-04-06 |
EP1641220A1 (en) | 2006-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7334029B2 (en) | Data migration method | |
US7519769B1 (en) | Scalable storage network virtualization | |
US7617318B2 (en) | Storage system and a storage management system | |
US7603507B2 (en) | Storage system and method of storage system path control | |
US7711979B2 (en) | Method and apparatus for flexible access to storage facilities | |
EP2247076B1 (en) | Method and apparatus for logical volume management | |
JP4500057B2 (en) | Data migration method | |
US7680953B2 (en) | Computer system, storage device, management server and communication control method | |
JP4859471B2 (en) | Storage system and storage controller | |
US7971089B2 (en) | Switching connection of a boot disk to a substitute server and moving the failed server to a server domain pool | |
JP4852298B2 (en) | Method for taking over information for identifying virtual volume and storage system using the method | |
JP4488807B2 (en) | Volume providing system and method | |
JP4568574B2 (en) | Storage device introduction method, program, and management computer | |
JP4217273B2 (en) | Storage system | |
JP5091833B2 (en) | Monitored device management system, management server, and monitored device management method | |
US20100036896A1 (en) | Computer System and Method of Managing Backup of Data | |
JP2004192305A (en) | METHOD AND SYSTEM FOR MANAGING iSCSI STORAGE | |
US20090070579A1 (en) | Information processing system and login method | |
JP2009199584A (en) | Method and apparatus for managing hdd's spin-down and spin-up in tiered storage system | |
US20070192553A1 (en) | Backup apparatus and backup method | |
US8949562B2 (en) | Storage system and method of controlling storage system | |
JP5272185B2 (en) | Computer system and storage system | |
JP4485875B2 (en) | Storage connection changing method, storage management system and program | |
JP4326819B2 (en) | Storage system control method, storage system, program, and recording medium | |
JP2004252934A (en) | Method and system for managing replication volume |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |