FDR/Upstream for zLinux disaster recovery using the SLES10 installer
Assumptions:
1. You have created a Linux guest with all the right disk drives attached at all the right addresses. In this case we have a mod 9 boot volume at 100, swap volumes at 102, 103, 104, 105, 106, and 107, and a data volume at 200.
2. You have booted the Linux guest to an SSH command prompt.
3. You have placed the usdr.tar file on an accessible NFS server.
4. You know your LVM configuration: your physical volumes, your volume groups, your logical volumes, and which filesystems are mounted on which logical volumes.
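Assumption 4 is easiest to satisfy if you record the layout while the source system is still healthy. A minimal sketch, run on the original system with the output saved somewhere off-box:
pvs             # physical volumes and the volume groups they belong to
vgs             # volume groups
lvs             # logical volumes, sizes, and owning volume groups
df -hT          # which filesystems are mounted on which logical volumes
cat /etc/fstab  # the authoritative mount map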
Notes:
1. We have some reiserfs filesystems, so we have to load that module.
2. This procedure has its roots in a procedure that was developed for SLES9. It has not been fully updated for SLES10.
3. This version of the procedure uses LVM for all filesystems except /boot. Our new clone base uses a different LVM layout; I have not updated the examples in this document.
This procedure works with version 3.5.0.f of the FDR/Upstream client.
Steps:
1. Load the DASD device driver.
modprobe dasd_eckd_mod
2. Make all the dasd devices available
dasd_configure 0.0.0100 1 0
dasd_configure 0.0.0102 1 0
dasd_configure 0.0.0103 1 0
dasd_configure 0.0.0104 1 0
dasd_configure 0.0.0105 1 0
dasd_configure 0.0.0106 1 0
dasd_configure 0.0.0107 1 0
dasd_configure 0.0.0200 1 0
3. Verify that all devices came online
cat /proc/dasd/devices
0.0.0100(ECKD) at ( 94: 0) is dasda : active at blocksize 4096, 1802880 blocks, 7042 MB
0.0.0102(ECKD) at ( 94: 4) is dasdb : active at blocksize 4096, 600840 blocks, 2347 MB
0.0.0103(ECKD) at ( 94: 8) is dasdc : active at blocksize 4096, 600840 blocks, 2347 MB
0.0.0104(ECKD) at ( 94: 12) is dasdd : active at blocksize 4096, 600840 blocks, 2347 MB
0.0.0105(ECKD) at ( 94: 16) is dasde : active at blocksize 4096, 600840 blocks, 2347 MB
0.0.0106(ECKD) at ( 94: 20) is dasdf : active at blocksize 4096, 600840 blocks, 2347 MB
0.0.0107(ECKD) at ( 94: 24) is dasdg : active at blocksize 4096, 600840 blocks, 2347 MB
0.0.0200(ECKD) at ( 94: 28) is dasdh : active at blocksize 4096, 1802880 blocks, 7042 MB
4. Make a cross-reference of the device addresses and device names for later use. For example, 0.0.0100 is dasda.
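The cross-reference can be pulled straight out of /proc/dasd/devices (a sketch, keyed to the output format shown in step 3):
awk '/active/ { print $1, $7 }' /proc/dasd/devices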
5. Load the kernel modules that allow the use of lvm.
modprobe dm-mod
modprobe reiserfs
/sbin/devmap_mknod.sh
6. Format the drives for Linux use.
dasdfmt -b 4096 -d cdl -p -f /dev/dasda
Repeat for each dasd device found above.
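Since the same command runs against every volume, a shell loop saves typing. A sketch, assuming the eight devices map to dasda through dasdh as in step 3 (dasdfmt still asks for confirmation on each volume unless you add -y):
# Format every DASD volume with 4 KB blocks and the compatible disk layout
for d in a b c d e f g h; do
    dasdfmt -b 4096 -d cdl -p -f /dev/dasd$d
done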
7. Partition the boot volume.
fdasd /dev/dasda
Add a partition:
n
First track:
2
Last track:
2114
Add a second partition:
n
First track:
2115
Last track (for a mod 9, use 150239):
150239
Write the partition table and exit:
w
8. Partition the remaining volumes.
fdasd -k -a /dev/dasdb
(Repeat for all the remaining dasd devices.)
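The remaining volumes can also be handled in one loop. A sketch, assuming dasdb through dasdh are the swap and data volumes from the step 3 listing (it covers dasdb as well, so skip the single command above if you use it):
# -k keeps the existing volume serial; -a creates one partition spanning the disk
for d in b c d e f g h; do
    fdasd -k -a /dev/dasd$d
done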
9. Create the physical volumes
pvcreate /dev/dasda2
pvcreate /dev/dasdh1
10. Create the volume groups.
vgcreate SapVG /dev/dasdh1
vgcreate System /dev/dasda2
11. Activate the volume groups.
vgscan
vgchange -ay
12. Create the logical volumes.
lvcreate -L 100.00M -n CTD SapVG
lvcreate -L 152.00M -n CTM SapVG
lvcreate -L 4.00M -n SapLV SapVG
lvcreate -L 4.00M -n SapLV1 SapVG
lvcreate -L 4.00M -n SapLV2 SapVG
lvcreate -L 4.00M -n SapLV3 SapVG
lvcreate -L 4.00M -n SapLV4 SapVG
lvcreate -L 700.00M -n SapLV5 SapVG
lvcreate -L 1.00G -n SapLV6 SapVG
lvcreate -L 4.00G -n SapLV7 SapVG
lvcreate -L 128.00M -n home System
lvcreate -L 1.00G -n opt System
lvcreate -L 512.00M -n root System
lvcreate -L 92.00M -n srv System
lvcreate -L 164.00M -n tmp System
lvcreate -L 2.00G -n usr System
lvcreate -L 1.00G -n var System
13. Make the swap spaces (one per swap volume at 102 through 107).
mkswap /dev/dasdb1
mkswap /dev/dasdc1
mkswap /dev/dasdd1
mkswap /dev/dasde1
mkswap /dev/dasdf1
mkswap /dev/dasdg1
14. Make the filesystems.
mkreiserfs -b 4096 -q /dev/System/root
mke2fs -b 4096 -q /dev/dasda1
mkreiserfs -b 4096 -q /dev/System/home
mkreiserfs -b 4096 -q /dev/System/opt
mkreiserfs -b 4096 -q /dev/System/srv
mkreiserfs -b 4096 -q /dev/System/tmp
mkreiserfs -b 4096 -q /dev/System/usr
mkreiserfs -b 4096 -q /dev/System/var
mkreiserfs -b 4096 -q /dev/SapVG/CTD
mkreiserfs -b 4096 -q /dev/SapVG/CTM
mke2fs -b 4096 -q /dev/SapVG/SapLV1
mke2fs -b 4096 -q /dev/SapVG/SapLV4
mkreiserfs -b 4096 -q /dev/SapVG/SapLV6
mke2fs -b 4096 -q /dev/SapVG/SapLV2
mkreiserfs -b 4096 -q /dev/SapVG/SapLV5
mkreiserfs -b 4096 -q /dev/SapVG/SapLV7
mke2fs -b 4096 -q /dev/SapVG/SapLV3
15. Make the mount points and mount the filesystems.
mkdir -p /mnt/
mount -t reiserfs /dev/System/root /mnt/
mkdir -p /mnt/boot
mount -t ext2 /dev/dasda1 /mnt/boot
mkdir -p /mnt/home
mount -t reiserfs /dev/System/home /mnt/home
mkdir -p /mnt/opt
mount -t reiserfs /dev/System/opt /mnt/opt
mkdir -p /mnt/srv
mount -t reiserfs /dev/System/srv /mnt/srv
mkdir -p /mnt/tmp
mount -t reiserfs /dev/System/tmp /mnt/tmp
mkdir -p /mnt/usr
mount -t reiserfs /dev/System/usr /mnt/usr
mkdir -p /mnt/var
mount -t reiserfs /dev/System/var /mnt/var
mkdir -p /mnt/controld
mount -t reiserfs /dev/SapVG/CTD /mnt/controld
mkdir -p /mnt/controlm
mount -t reiserfs /dev/SapVG/CTM /mnt/controlm
mkdir -p /mnt/idoc
mount -t ext2 /dev/SapVG/SapLV1 /mnt/idoc
mkdir -p /mnt/sapmnt
mount -t ext2 /dev/SapVG/SapLV4 /mnt/sapmnt
mkdir -p /mnt/sapmnt/PRD
mount -t reiserfs /dev/SapVG/SapLV6 /mnt/sapmnt/PRD
mkdir -p /mnt/usr/sap
mount -t ext2 /dev/SapVG/SapLV2 /mnt/usr/sap
mkdir -p /mnt/usr/sap/PRD
mount -t reiserfs /dev/SapVG/SapLV5 /mnt/usr/sap/PRD
mkdir -p /mnt/usr/sap/sort
mount -t reiserfs /dev/SapVG/SapLV7 /mnt/usr/sap/sort
mkdir -p /mnt/usr/workdata
mount -t ext2 /dev/SapVG/SapLV3 /mnt/usr/workdata
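Before moving on, it is worth confirming that every filesystem landed on its intended mount point (a quick check):
mount | grep /mnt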
16. Create and mount the FDR/Upstream client directory.
mke2fs -vm0 /dev/ram3
mount -t ext2 /dev/ram3 /tmp
mkdir /tmp/fdrupstream
17. Mount the directory containing the usdr.tar file.
mount -t nfs 10.10.10.10:/zlinux /mnt
(This temporarily hides the restored tree mounted at /mnt in step 15; it reappears when the NFS share is unmounted in step 18.)
18. Extract the FDR/Upstream Disaster Recovery image.
cd /tmp/fdrupstream
tar -xvf /mnt/usdr.tar
umount /mnt
19. Configure Upstream to communicate with the Storage Server.
./uscfg
(Contact your FDR/Upstream administrator for the IP addresses and ports that need to be entered in the configuration tool.)
20. Start the recovery utility.
Obtain the following information: the name of the FDR/Upstream profile you are going to restore, and the RACF userid and password.
./usdr
Enter the requested information:
Profile name
Userid
Password
Hit Enter.
Select the appropriate remote version.
Select "Highlighted back to full".
Select "Allow any file system type".
Select "Begin Restore".
Select "Yes" to "Are you ready to begin the system restore."
21. The FDR/Upstream restore does not run zipl, so the restored system is not yet bootable. Issue the following commands.
chroot /mnt
22. Using your favorite editor, verify that /etc/zipl.conf is set to appropriate values. Then run the zipl command. (A sample zipl.conf for reference follows the command.)
zipl -V
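For reference, a minimal SLES10 zipl.conf looks roughly like the sketch below. The image, ramdisk, and parameters values are illustrative assumptions; they must match what the restore actually placed in /boot and your real root device.
[defaultboot]
default = ipl
[ipl]
target = /boot/zipl
image = /boot/image
ramdisk = /boot/initrd,0x1000000
parameters = "root=/dev/System/root TERM=dumb"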
23. Use the following commands to back out gracefully.
exit
cd /
umount -a
vgchange -a n
24. The following steps gracefully back out of the installer.
yast
Tab to the Abort option and press Enter.
Tab to the Abort Installation option and press Enter.
(Your SSH connection should go away.)
25. The following shuts down the installer kernel.
Select option 8 to power off.
Select option 1, yes.
26. IPL the restored system. Be aware that unless you have defined your logical volumes in the same sequence as they were originally defined, some of your file systems may not contain what you think they ought to. The following commands should remedy the situation.
If the root file system is not mounted read-write, try this command:
mount -n -o remount,rw /
The following command will rebuild the LVM device nodes.
vgscan --mknodes
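If the rebuild worked, each volume group directory should now hold one node per logical volume (a quick check):
ls -l /dev/System /dev/SapVG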
While you are at it, it may be helpful to rewrite the boot area:
mount -t ext2 /dev/dasda1 /boot
zipl
Reboot the system and look for errors.
If you are having a really bad day, it may be necessary to run the vgscan --mknodes command under the SLES10 installer. (I have had to do this under SLES9.)
Boot the installer.
Load the device driver (step 1).
Make the drives available (step 2).
Load the kernel modules that allow the use of lvm (step 5).
Activate the volume groups (step 11).
Make the mount points and mount the filesystems (step 15).
chroot /mnt
vgscan --mknodes
Using your favorite editor, verify that /etc/zipl.conf is set to appropriate values, then run the zipl command (step 22).
Back out gracefully (step 23).
Back gracefully out of the installer (step 24).
Boot the restored system and check for errors.
27. This procedure was developed from an email from the FDR/Upstream folks way back in the early SLES9 days. Information for updating it to work with SLES10 came from the SHARE presentation "Linux on z/VM System Programmer Survival Guide" by Jay Brenneman.