Server SAN storage replacement

1. Notify the DBA to stop the databases so the storage is no longer in use;

Log in to the servers one at a time (serially).

2. Back up system information

mkdir -p /bakinfo

df -h > /bakinfo/df.txt_`date +%Y%m%d%H%M%S`

ps -ef > /bakinfo/ps.txt_`date +%Y%m%d%H%M%S`

ip a > /bakinfo/ip.txt_`date +%Y%m%d%H%M%S`

netstat -rn > /bakinfo/netstat.txt_`date +%Y%m%d%H%M%S`

free -g > /bakinfo/free.txt_`date +%Y%m%d%H%M%S`

route -n > /bakinfo/route_`date +%Y%m%d%H%M%S`

The following apply mainly to a Grid Infrastructure (GI) environment:

multipath -ll > /bakinfo/multipath_`date +%Y%m%d%H%M%S`

sysauto_SF lunuseinfo > /bakinfo/lun_`date +%Y%m%d%H%M%S`

cat /etc/multipath.conf > /bakinfo/multipath.conf_`date +%Y%m%d%H%M%S`

oracleasm listdisks > /bakinfo/disk_`date +%Y%m%d%H%M%S`

If configuration changes are involved, also back up basic hardware information:

free -g > /bakinfo/free_`date +%Y%m%d%H%M%S`

cat /proc/cpuinfo | grep physical | uniq -c > /bakinfo/cpucore_`date +%Y%m%d%H%M%S`

If disk changes are involved, also back up basic disk information:

fdisk -l > /bakinfo/fdisk_`date +%Y%m%d%H%M%S`

mount -v > /bakinfo/mount_`date +%Y%m%d%H%M%S`

cat /proc/mounts > /bakinfo/mounts_`date +%Y%m%d%H%M%S`
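As an optional supplement (not part of the original checklist), the block-device layout can also be captured; lsblk is available on most modern distributions:

lsblk > /bakinfo/lsblk_`date +%Y%m%d%H%M%S`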

3. Check CRS and restart it:

crsctl check crs

crsctl stop crs

crsctl start crs
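To confirm the stack came back cleanly, the standard Grid Infrastructure status commands can be used (the resource names in the output depend on your cluster configuration):

crsctl check cluster -all

crsctl stat res -t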

After all hosts are rebooted, perform the following steps:

4. Log in to the master node and delete the ASM disks:

oracleasm deletedisk DATA_DISK001
oracleasm deletedisk DATA_DISK002
oracleasm deletedisk DATA_DISK003
oracleasm deletedisk DATA_DISK004
oracleasm deletedisk DATA_DISK005
oracleasm deletedisk DATA_DISK006
oracleasm deletedisk FRA_DISK001
oracleasm deletedisk FRA_DISK002
oracleasm deletedisk OCRVD_DISK001
oracleasm deletedisk OCRVD_DISK002
oracleasm deletedisk OCRVD_DISK003

oracleasm scandisks

oracleasm listdisks
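If you want to double-check that a label is really gone (or still present) before proceeding, oracleasm can query individual labels; the label name below is just one of those listed above:

oracleasm querydisk DATA_DISK001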

5. Log in to each server in turn and rescan:

oracleasm scandisks

oracleasm listdisks

6. Create PVs on each host

pvcreate /dev/mapper/data_grid0001
pvcreate /dev/mapper/data_grid0002
pvcreate /dev/mapper/data_grid0003
pvcreate /dev/mapper/data_grid0004
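The new PVs can be verified before extending the volume groups; pvs is a standard LVM reporting command:

pvs /dev/mapper/data_grid0001 /dev/mapper/data_grid0002 /dev/mapper/data_grid0003 /dev/mapper/data_grid0004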

7. Extend the VGs on each host

vgextend VolGroup01 /dev/mapper/data_grid0001
vgextend VolGroup02 /dev/mapper/data_grid0002
vgextend VolGroup03 /dev/mapper/data_grid0003
vgextend VolGroup04 /dev/mapper/data_grid0004
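vgs (or vgdisplay) can be used to confirm that each VG now contains two PVs and has additional free extents:

vgs -o vg_name,pv_count,vg_size,vg_free VolGroup01 VolGroup02 VolGroup03 VolGroup04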

8. Perform PV migration on each host

pvmove /dev/mapper/data_grid001 /dev/mapper/data_grid0001
pvmove /dev/mapper/data_grid002 /dev/mapper/data_grid0002
pvmove /dev/mapper/data_grid003 /dev/mapper/data_grid0003
pvmove /dev/mapper/data_grid004 /dev/mapper/data_grid0004
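pvmove reports progress while it runs. If needed, an in-progress move can be stopped with the standard --abort option, and the resulting extent layout can be checked with lvs:

pvmove --abort

lvs -a -o +devices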

9. Remove the old PVs from the VGs on each host

vgreduce VolGroup01 /dev/mapper/data_grid001
vgreduce VolGroup02 /dev/mapper/data_grid002
vgreduce VolGroup03 /dev/mapper/data_grid003
vgreduce VolGroup04 /dev/mapper/data_grid004
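After vgreduce, pvs should show the old devices with an empty VG column, confirming they are no longer part of any volume group:

pvs /dev/mapper/data_grid001 /dev/mapper/data_grid002 /dev/mapper/data_grid003 /dev/mapper/data_grid004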

10. Remove the old PVs on each host

pvremove /dev/mapper/data_grid001
pvremove /dev/mapper/data_grid002
pvremove /dev/mapper/data_grid003
pvremove /dev/mapper/data_grid004
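Optionally (not in the original checklist), once the old LUNs are unused and unmapped from the array, their multipath maps can be flushed before editing the configuration:

multipath -f data_grid001
multipath -f data_grid002
multipath -f data_grid003
multipath -f data_grid004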

11. Modify multipath.conf

vim /etc/multipath.conf
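For reference, a typical change here is to map the WWIDs of the new LUNs to the aliases used above; the WWID below is a placeholder, not a real value:

multipaths {
    multipath {
        wwid   3600xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        alias  data_grid0001
    }
}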

12. Reload the multipath service

/etc/init.d/multipathd reload
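On systemd-based distributions the equivalent is systemctl reload multipathd. Either way, the new maps can be verified afterwards:

multipath -ll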

13. Notify the DBA.
