Wednesday, October 24, 2007

Perfect Storm Disk Replacement

I recently had a drive go bad in a Sun StorEdge 3510 FC JBOD array connected to a V490 running Solaris 10 with Solaris Volume Manager. The disk was part of a five-disk stripeset that was mirrored with another stripeset.

It was *not* easy finding documentation for getting this done. Tools I'd used on other systems with SCSI-attached arrays, and on systems with FC-attached RAID arrays, did not work. The combination of a JBOD 3510 on FC, managed with Solaris Volume Manager, with an active hot spare, made it interesting. So without further ado...

How To Replace a Failed Drive on a JBOD Sun StorEdge 3510 FC Array That Has Been Failed Over to a Hot Spare Managed by Volume Manager in Solaris 10 (whew!)

Here's the device with the bad disk c1t10d0s0, which had been failed over to the hot spare c1t11d0s0:

# metastat d15
d15: Mirror
Submirror 0: d16
State: Okay
Submirror 1: d17
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 716634624 blocks (341 GB)

d16: Submirror of d15
State: Okay
Hot spare pool: hsp000
Size: 716634624 blocks (341 GB)
Stripe 0: (interlace: 256 blocks)
Device Start Block Dbase State Reloc Hot Spare
c1t4d0s0 20352 Yes Okay Yes
c1t3d0s0 20352 Yes Okay Yes
c1t2d0s0 20352 Yes Okay Yes
c1t1d0s0 20352 Yes Okay Yes
c1t0d0s0 20352 Yes Okay Yes

d17: Submirror of d15
State: Okay
Hot spare pool: hsp000
Size: 716634624 blocks (341 GB)
Stripe 0: (interlace: 256 blocks)
Device Start Block Dbase State Reloc Hot Spare
c1t9d0s0 20352 Yes Okay Yes
c1t8d0s0 20352 Yes Okay Yes
c1t7d0s0 20352 Yes Okay Yes
c1t6d0s0 20352 Yes Okay Yes
c1t10d0s0 20352 No Okay Yes c1t11d0s0

Device Relocation Information:
Device Reloc Device ID
c1t4d0 Yes id1,ssd@n20000011c6968cf9
c1t3d0 Yes id1,ssd@n20000011c6967f16
c1t2d0 Yes id1,ssd@n20000011c6968c7c
c1t1d0 Yes id1,ssd@n20000011c68baaed
c1t0d0 Yes id1,ssd@n20000011c6968ca1
c1t9d0 Yes id1,ssd@n20000011c6967e6e
c1t8d0 Yes id1,ssd@n20000011c68b0388
c1t7d0 Yes id1,ssd@n20000011c68deaaf
c1t6d0 Yes id1,ssd@n20000011c6969259
c1t11d0 Yes id1,ssd@n20000011c68bbb2d

I removed the metadevice state database replicas that were on c1t10d0, though I'm not convinced I had to do that before continuing.
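For reference, deleting (and later re-adding) replicas looks roughly like this. The slice number here is an assumption; use whatever slice metadb actually reports holding replicas on the failed disk:

# List current replicas to see which slices on c1t10d0 hold them
metadb -i
# Delete the replicas on the failed disk (s7 is an assumption; use the
# slice metadb -i actually reports)
metadb -d c1t10d0s7
# After the disk is replaced and repartitioned, the same command with -a
# re-adds them:
# metadb -a c1t10d0s7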

The cfgadm command can show the attachment point for the disk.

# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 CD-ROM connected configured unknown
c1 fc-private connected configured unknown
c1::22000011c68b0388 disk connected configured unknown
c1::22000011c68b5cb3 disk connected configured unknown
c1::22000011c68baaed disk connected configured unknown
c1::22000011c68bbb2d disk connected configured unknown
c1::22000011c68deaaf disk connected configured unknown
c1::22000011c6967e6e disk connected configured unknown
c1::22000011c6967f16 disk connected configured unknown
c1::22000011c6968c7c disk connected configured unknown
c1::22000011c6968ca1 disk connected configured unknown
c1::22000011c6968cf9 disk connected configured unknown
c1::22000011c6969259 disk connected configured unknown
c1::22000011c696a895 disk connected configured unknown
c1::225000c0ff086290 ESI connected configured unknown
c2 fc-private connected configured unknown
c2::500000e01127c191 disk connected configured unknown
c2::500000e01127c8a1 disk connected configured unknown
usb0/1 unknown empty unconfigured ok
usb0/2 unknown empty unconfigured ok
usb0/3 unknown empty unconfigured ok
usb0/4 unknown empty unconfigured ok

However, both cfgadm and luxadm are unable to remove the drive, since it sits on a Fibre Channel loop in a JBOD array:

# cfgadm -x replace_device c1::22000011c68b5cb3
cfgadm: Configuration operation not supported

# luxadm remove_device 22000011c68b5cb3

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
All data on these devices should have been backed up.

Error: Invalid path. Device is not a SENA subsystem. - 22000011c68b5cb3.

Instead, use luxadm to offline the bad disk:

# luxadm -e offline /dev/rdsk/c1t10d0s2

Then run devfsadm to clean up the stale device entries:

# devfsadm -Cv
devfsadm[3915]: verbose: removing file: /dev/dsk/c1t10d0s0
devfsadm[3915]: verbose: removing file: /dev/dsk/c1t10d0s1
devfsadm[3915]: verbose: removing file: /dev/dsk/c1t10d0s2
devfsadm[3915]: verbose: removing file: /dev/dsk/c1t10d0s3
devfsadm[3915]: verbose: removing file: /dev/dsk/c1t10d0s4
devfsadm[3915]: verbose: removing file: /dev/dsk/c1t10d0s5
devfsadm[3915]: verbose: removing file: /dev/dsk/c1t10d0s6
devfsadm[3915]: verbose: removing file: /dev/dsk/c1t10d0s7
devfsadm[3915]: verbose: removing file: /dev/rdsk/c1t10d0s0
devfsadm[3915]: verbose: removing file: /dev/rdsk/c1t10d0s1
devfsadm[3915]: verbose: removing file: /dev/rdsk/c1t10d0s2
devfsadm[3915]: verbose: removing file: /dev/rdsk/c1t10d0s3
devfsadm[3915]: verbose: removing file: /dev/rdsk/c1t10d0s4
devfsadm[3915]: verbose: removing file: /dev/rdsk/c1t10d0s5
devfsadm[3915]: verbose: removing file: /dev/rdsk/c1t10d0s6
devfsadm[3915]: verbose: removing file: /dev/rdsk/c1t10d0s7

The output from cfgadm now shows the device as unusable:

# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 CD-ROM connected configured unknown
c1 fc-private connected configured unknown
c1::22000011c68b0388 disk connected configured unknown
c1::22000011c68b5cb3 disk connected configured unusable
c1::22000011c68baaed disk connected configured unknown
c1::22000011c68bbb2d disk connected configured unknown
[...snip...]

Physically replace the device. In the 3510 JBOD array with the default boxid of zero (check the button hidden under the left plastic ear tab), the disk layout looks like this:

0 3 6 9
1 4 7 10
2 5 8 11

(disk IDs 0 through 11, counting down each column and then across the rows)

When the disk is replaced, the devfsadm daemon should pick up the disk immediately and configure the dev entries. If not, try this to see what the problem is:

# luxadm -e port
/devices/pci@9,600000/SUNW,qlc@2/fp@0,0:devctl CONNECTED
/devices/pci@8,600000/SUNW,qlc@1/fp@0,0:devctl CONNECTED

Note: If you get a "NOT CONNECTED" error on the 3510 path, check cfgadm to see whether the fibre link shows as connected.

# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 CD-ROM connected configured unknown
c1 fc-private connected configured unknown
c1::22000011c68b0388 disk connected configured unknown
c1::22000011c68baaed disk connected configured unknown
c1::22000011c68bbb2d disk connected configured unknown
c1::22000011c68deaaf disk connected configured unknown
c1::22000011c6967e6e disk connected configured unknown
c1::22000011c6967f16 disk connected configured unknown
c1::22000011c6968c7c disk connected configured unknown
c1::22000011c6968ca1 disk connected configured unknown
c1::22000011c6968cf9 disk connected configured unknown
c1::22000011c6969259 disk connected configured unknown
c1::22000011c696a895 disk connected configured unknown
c1::225000c0ff086290 ESI connected configured unknown
c1::500000e014cb0282 disk connected configured unknown
c2 fc-private connected configured unknown
c2::500000e01127c191 disk connected configured unknown
c2::500000e01127c8a1 disk connected configured unknown
usb0/1 unknown empty unconfigured ok
usb0/2 unknown empty unconfigured ok
usb0/3 unknown empty unconfigured ok
usb0/4 unknown empty unconfigured ok

If the controller isn't there or is unconfigured, try the following (substituting the controller in question, e.g. c1, for cX):

# cfgadm -c configure cX

If the drives appear with a condition of "unusable", force a loop initialization using the pathname from the luxadm -e port command above:

# luxadm -e forcelip /devices/pci@9,600000/SUNW,qlc@2/fp@0,0:devctl

Once the device entries for the replaced drive are back, use format to partition the new drive the same way the old one was partitioned. The partition map from the hot spare makes a good template.
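If the replacement disk has the same geometry as the others (worth verifying in format first), you can skip hand-partitioning and copy the hot spare's label directly. This is a sketch, not something I did in the original procedure:

# Copy the VTOC from the hot spare (c1t11d0) to the new disk (c1t10d0).
# Assumes identical disk geometry; fmthard will complain if the label
# doesn't fit.
prtvtoc /dev/rdsk/c1t11d0s2 | fmthard -s - /dev/rdsk/c1t10d0s2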

Once the drive is partitioned, add back any state database replicas that were on the original device (I should mention that I forgot to do that, so I'm not 100% sure it works), then run metareplace to send the hot spare back to the available pool and start the replaced drive resyncing:

# metareplace -e d17 c1t10d0s0

Show progress with:

# metastat | grep %

Resync in progress: 73 % done

and see that the hot spare is available again with:

# metahs -i
hsp000: 2 hot spares
Device Status Length Reloc
c1t11d0s0 Available 143349312 blocks Yes
c1t5d0s0 Available 143349312 blocks Yes

Device Relocation Information:
Device Reloc Device ID
c1t11d0 Yes id1,ssd@n20000011c68bbb2d
c1t5d0 Yes id1,ssd@n20000011c696a895

keywords: 3510 storedge storagetek solaris volume manager hot spare fc fiber channel jbod

1 comment:

Unknown said...

Can I assume the process is pretty much the same for an A1000 and zfs?

I mean instead of the metatool stuff I will be using the zpool commands to deal with that side of things.

As for the hardware it looks to me like everything you did will be what I need to do with the A1000.