Our SAN storage admin added a LUN from our second CLARiiON to a dual-node cluster of T5140 Solaris servers, but it didn't come up multipathed. Instead, it appeared as separate devices on c2 and c3. Here is the procedure I followed to correct it:
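Before going through the steps below, it's worth a quick sanity check that MPxIO is enabled at all - otherwise no amount of reconfiguring will produce a c4 path. These are just generic checks (the fp.conf location assumes Solaris 10 with the Sun FC stack):
# grep mpxio-disable /kernel/drv/fp.conf (should show mpxio-disable="no")
# stmsboot -L (lists non-STMS to STMS device name mappings)
# mpathadm list lu (the existing multipathed LUNs should all appear here)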
# format
Searching for disks...
The current rpm value 0 is invalid, adjusting it to 3600
The current rpm value 0 is invalid, adjusting it to 3600
done
c2t5006016941E01B4Ed0: configured with capacity of 25.00GB
c2t5006016141E01B4Ed0: configured with capacity of 24.98GB
c3t5006016041E01B4Ed0: configured with capacity of 24.98GB
c3t5006016841E01B4Ed0: configured with capacity of 25.00GB
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@400/pci@0/pci@8/scsi@0/sd@0,0
1. c2t5006016941E01B4Ed0
/pci@400/pci@0/pci@c/SUNW,emlxs@0/fp@0,0/ssd@w5006016941e01b4e,0
2. c2t5006016141E01B4Ed0
/pci@400/pci@0/pci@c/SUNW,emlxs@0/fp@0,0/ssd@w5006016141e01b4e,0
3. c3t5006016041E01B4Ed0
/pci@500/pci@0/pci@9/SUNW,emlxs@0/fp@0,0/ssd@w5006016041e01b4e,0
4. c3t5006016841E01B4Ed0
/pci@500/pci@0/pci@9/SUNW,emlxs@0/fp@0,0/ssd@w5006016841e01b4e,0
5. c4t600601603F301D00B8615DAABC4BDE11d0
/scsi_vhci/ssd@g600601603f301d00b8615daabc4bde11
[...snip...]
11. c4t6006016041301D0022C173CB42BFDE11d0
/scsi_vhci/ssd@g6006016041301d0022c173cb42bfde11
# ls /dev/rdsk/*s2
/dev/rdsk/c0t0d0s2 /dev/rdsk/c4t600601603F301D00502E7B0875DBDD11d0s2
/dev/rdsk/c1t0d0s2 /dev/rdsk/c4t600601603F301D007283D06C76DBDD11d0s2
/dev/rdsk/c2t5006016141E01B4Ed0s2 <-here /dev/rdsk/c4t600601603F301D00B8615DAABC4BDE11d0s2
/dev/rdsk/c2t5006016941E01B4Ed0s2 <-here /dev/rdsk/c4t600601603F301D00EEEBA37BBC4BDE11d0s2
/dev/rdsk/c3t5006016041E01B4Ed0s2 <-here /dev/rdsk/c4t6006016041301D0022C173CB42BFDE11d0s2
/dev/rdsk/c3t5006016841E01B4Ed0s2 <-here /dev/rdsk/c4t6006016041301D002C8AD75BC443DF11d0s2
/dev/rdsk/c4t600601603F301D002C28D638BC4BDE11d0s2 /dev/rdsk/c4t6006016041301D00BEF6F73CC443DF11d0s2
/dev/rdsk/c4t600601603F301D0040A246697BDBDD11d0s2 /dev/rdsk/c4t6006016041301D00C21EE519C443DF11d0s2
/dev/rdsk/c4t600601603F301D00428B785176DBDD11d0s2
# cfgadm -al -o show_SCSI_LUN
Ap_Id Type Receptacle Occupant Condition
c2 fc-fabric connected configured unknown
c2::5006016141e01b4e,0 <-here disk connected configured unknown
c2::5006016141e05590,0 disk connected configured unknown
c2::5006016141e05590,1 disk connected configured unknown
c2::5006016141e05590,2 disk connected configured unknown
c2::5006016141e05590,3 disk connected configured unknown
c2::5006016141e05590,4 disk connected configured unknown
c2::5006016141e05590,5 disk connected configured unknown
c2::5006016141e05590,6 disk connected configured unknown
c2::5006016141e05590,7 disk connected configured unknown
c2::5006016141e05590,8 disk connected configured unknown
c2::5006016141e05590,9 disk connected configured unknown
c2::5006016141e05590,10 disk connected configured unknown
c2::5006016941e01b4e,0 <-here disk connected configured unknown
c2::5006016941e05590,0 disk connected configured unknown
c2::5006016941e05590,1 disk connected configured unknown
c2::5006016941e05590,2 disk connected configured unknown
c2::5006016941e05590,3 disk connected configured unknown
c2::5006016941e05590,4 disk connected configured unknown
c2::5006016941e05590,5 disk connected configured unknown
c2::5006016941e05590,6 disk connected configured unknown
c2::5006016941e05590,7 disk connected configured unknown
c2::5006016941e05590,8 disk connected configured unknown
c2::5006016941e05590,9 disk connected configured unknown
c2::5006016941e05590,10 disk connected configured unknown
c3 fc-fabric connected configured unknown
c3::5006016041e01b4e,0 <-here disk connected configured unknown
c3::5006016041e05590,0 disk connected configured unknown
c3::5006016041e05590,1 disk connected configured unknown
c3::5006016041e05590,2 disk connected configured unknown
c3::5006016041e05590,3 disk connected configured unknown
c3::5006016041e05590,4 disk connected configured unknown
c3::5006016041e05590,5 disk connected configured unknown
c3::5006016041e05590,6 disk connected configured unknown
c3::5006016041e05590,7 disk connected configured unknown
c3::5006016041e05590,8 disk connected configured unknown
c3::5006016041e05590,9 disk connected configured unknown
c3::5006016041e05590,10 disk connected configured unknown
c3::5006016841e01b4e,0 <-here disk connected configured unknown
c3::5006016841e05590,0 disk connected configured unknown
c3::5006016841e05590,1 disk connected configured unknown
c3::5006016841e05590,2 disk connected configured unknown
c3::5006016841e05590,3 disk connected configured unknown
c3::5006016841e05590,4 disk connected configured unknown
c3::5006016841e05590,5 disk connected configured unknown
c3::5006016841e05590,6 disk connected configured unknown
c3::5006016841e05590,7 disk connected configured unknown
c3::5006016841e05590,8 disk connected configured unknown
c3::5006016841e05590,9 disk connected configured unknown
c3::5006016841e05590,10 disk connected configured unknown
I checked the multipath status of the existing drives to make sure none of them referenced those target ports (the loop below is run from /dev/rdsk):
# for i in `ls *s2`; do
> mpathadm show lu $i
> done | grep -i 01b4e
Error: Logical-unit c0t0d0s2 is not found.
Error: Logical-unit c1t0d0s2 is not found.
Error: Logical-unit c2t5006016141E01B4Ed0s2 is not found.
Error: Logical-unit c2t5006016941E01B4Ed0s2 is not found.
Error: Logical-unit c3t5006016041E01B4Ed0s2 is not found.
Error: Logical-unit c3t5006016841E01B4Ed0s2 is not found.
(Those errors are expected: mpathadm only knows about devices under scsi_vhci, so it can't find the non-multipathed ones.)
Looks good.
# cfgadm -c unconfigure c2::5006016141e01b4e
# cfgadm -c unconfigure c2::5006016941e01b4e
# cfgadm -c unconfigure c3::5006016041e01b4e
# cfgadm -c unconfigure c3::5006016841e01b4e
# devfsadm -Cvc disk
devfsadm[13115]: verbose: removing file: /dev/dsk/c4t6006016041301D0022C173CB42BFDE11d0s7
devfsadm[13115]: verbose: symlink /dev/dsk/c4t6006016041301D0022C173CB42BFDE11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d0022c173cb42bfde11:wd
devfsadm[13115]: verbose: removing file: /dev/rdsk/c4t6006016041301D0022C173CB42BFDE11d0s7
devfsadm[13115]: verbose: symlink /dev/rdsk/c4t6006016041301D0022C173CB42BFDE11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d0022c173cb42bfde11:wd,raw
devfsadm[13115]: verbose: removing file: /dev/dsk/c4t600601603F301D00EEEBA37BBC4BDE11d0
devfsadm[13115]: verbose: symlink /dev/dsk/c4t600601603F301D00EEEBA37BBC4BDE11d0s7 -> ../../devices/scsi_vhci/ssd@g600601603f301d00eeeba37bbc4bde11:h
devfsadm[13115]: verbose: removing file: /dev/rdsk/c4t600601603F301D00EEEBA37BBC4BDE11d0
devfsadm[13115]: verbose: symlink /dev/rdsk/c4t600601603F301D00EEEBA37BBC4BDE11d0s7 -> ../../devices/scsi_vhci/ssd@g600601603f301d00eeeba37bbc4bde11:h,raw
devfsadm[13115]: verbose: removing file: /dev/dsk/c4t600601603F301D00428B785176DBDD11d0s7
devfsadm[13115]: verbose: symlink /dev/dsk/c4t600601603F301D00428B785176DBDD11d0 -> ../../devices/scsi_vhci/ssd@g600601603f301d00428b785176dbdd11:wd
devfsadm[13115]: verbose: removing file: /dev/rdsk/c4t600601603F301D00428B785176DBDD11d0s7
devfsadm[13115]: verbose: symlink /dev/rdsk/c4t600601603F301D00428B785176DBDD11d0 -> ../../devices/scsi_vhci/ssd@g600601603f301d00428b785176dbdd11:wd,raw
devfsadm[13115]: verbose: removing file: /dev/dsk/c4t600601603F301D00502E7B0875DBDD11d0
devfsadm[13115]: verbose: symlink /dev/dsk/c4t600601603F301D00502E7B0875DBDD11d0s7 -> ../../devices/scsi_vhci/ssd@g600601603f301d00502e7b0875dbdd11:h
devfsadm[13115]: verbose: removing file: /dev/rdsk/c4t600601603F301D00502E7B0875DBDD11d0
devfsadm[13115]: verbose: symlink /dev/rdsk/c4t600601603F301D00502E7B0875DBDD11d0s7 -> ../../devices/scsi_vhci/ssd@g600601603f301d00502e7b0875dbdd11:h,raw
devfsadm[13115]: verbose: removing file: /dev/dsk/c4t6006016041301D00C21EE519C443DF11d0
devfsadm[13115]: verbose: symlink /dev/dsk/c4t6006016041301D00C21EE519C443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d00c21ee519c443df11:h
devfsadm[13115]: verbose: removing file: /dev/rdsk/c4t6006016041301D00C21EE519C443DF11d0
devfsadm[13115]: verbose: symlink /dev/rdsk/c4t6006016041301D00C21EE519C443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d00c21ee519c443df11:h,raw
devfsadm[13115]: verbose: removing file: /dev/dsk/c4t6006016041301D00C21EE519C443DF11d0s7
devfsadm[13115]: verbose: symlink /dev/dsk/c4t6006016041301D00C21EE519C443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d00c21ee519c443df11:wd
devfsadm[13115]: verbose: removing file: /dev/rdsk/c4t6006016041301D00C21EE519C443DF11d0s7
devfsadm[13115]: verbose: symlink /dev/rdsk/c4t6006016041301D00C21EE519C443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d00c21ee519c443df11:wd,raw
devfsadm[13115]: verbose: removing file: /dev/dsk/c4t6006016041301D00BEF6F73CC443DF11d0
devfsadm[13115]: verbose: symlink /dev/dsk/c4t6006016041301D00BEF6F73CC443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d00bef6f73cc443df11:h
devfsadm[13115]: verbose: removing file: /dev/rdsk/c4t6006016041301D00BEF6F73CC443DF11d0
devfsadm[13115]: verbose: symlink /dev/rdsk/c4t6006016041301D00BEF6F73CC443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d00bef6f73cc443df11:h,raw
devfsadm[13115]: verbose: removing file: /dev/dsk/c4t6006016041301D00BEF6F73CC443DF11d0s7
devfsadm[13115]: verbose: symlink /dev/dsk/c4t6006016041301D00BEF6F73CC443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d00bef6f73cc443df11:wd
devfsadm[13115]: verbose: removing file: /dev/rdsk/c4t6006016041301D00BEF6F73CC443DF11d0s7
devfsadm[13115]: verbose: symlink /dev/rdsk/c4t6006016041301D00BEF6F73CC443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d00bef6f73cc443df11:wd,raw
devfsadm[13115]: verbose: removing file: /dev/dsk/c4t6006016041301D002C8AD75BC443DF11d0
devfsadm[13115]: verbose: symlink /dev/dsk/c4t6006016041301D002C8AD75BC443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d002c8ad75bc443df11:h
devfsadm[13115]: verbose: removing file: /dev/rdsk/c4t6006016041301D002C8AD75BC443DF11d0
devfsadm[13115]: verbose: symlink /dev/rdsk/c4t6006016041301D002C8AD75BC443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d002c8ad75bc443df11:h,raw
devfsadm[13115]: verbose: removing file: /dev/dsk/c4t6006016041301D002C8AD75BC443DF11d0s7
devfsadm[13115]: verbose: symlink /dev/dsk/c4t6006016041301D002C8AD75BC443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d002c8ad75bc443df11:wd
devfsadm[13115]: verbose: removing file: /dev/rdsk/c4t6006016041301D002C8AD75BC443DF11d0s7
devfsadm[13115]: verbose: symlink /dev/rdsk/c4t6006016041301D002C8AD75BC443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d002c8ad75bc443df11:wd,raw
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016041E01B4Ed0s0
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016041E01B4Ed0s1
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016041E01B4Ed0s2
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016041E01B4Ed0s3
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016041E01B4Ed0s4
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016041E01B4Ed0s5
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016041E01B4Ed0s6
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016041E01B4Ed0s7
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016841E01B4Ed0s0
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016841E01B4Ed0s1
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016841E01B4Ed0s2
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016841E01B4Ed0s3
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016841E01B4Ed0s4
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016841E01B4Ed0s5
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016841E01B4Ed0s6
devfsadm[13115]: verbose: removing file: /dev/dsk/c3t5006016841E01B4Ed0s7
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016941E01B4Ed0s0
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016941E01B4Ed0s1
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016941E01B4Ed0s2
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016941E01B4Ed0s3
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016941E01B4Ed0s4
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016941E01B4Ed0s5
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016941E01B4Ed0s6
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016941E01B4Ed0s7
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016141E01B4Ed0s0
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016141E01B4Ed0s1
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016141E01B4Ed0s2
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016141E01B4Ed0s3
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016141E01B4Ed0s4
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016141E01B4Ed0s5
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016141E01B4Ed0s6
devfsadm[13115]: verbose: removing file: /dev/dsk/c2t5006016141E01B4Ed0s7
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016041E01B4Ed0s0
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016041E01B4Ed0s1
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016041E01B4Ed0s2
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016041E01B4Ed0s3
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016041E01B4Ed0s4
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016041E01B4Ed0s5
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016041E01B4Ed0s6
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016041E01B4Ed0s7
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016841E01B4Ed0s0
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016841E01B4Ed0s1
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016841E01B4Ed0s2
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016841E01B4Ed0s3
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016841E01B4Ed0s4
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016841E01B4Ed0s5
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016841E01B4Ed0s6
devfsadm[13115]: verbose: removing file: /dev/rdsk/c3t5006016841E01B4Ed0s7
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016941E01B4Ed0s0
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016941E01B4Ed0s1
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016941E01B4Ed0s2
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016941E01B4Ed0s3
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016941E01B4Ed0s4
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016941E01B4Ed0s5
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016941E01B4Ed0s6
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016941E01B4Ed0s7
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016141E01B4Ed0s0
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016141E01B4Ed0s1
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016141E01B4Ed0s2
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016141E01B4Ed0s3
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016141E01B4Ed0s4
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016141E01B4Ed0s5
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016141E01B4Ed0s6
devfsadm[13115]: verbose: removing file: /dev/rdsk/c2t5006016141E01B4Ed0s7
# cfgadm -c configure c2::5006016141e01b4e
# cfgadm -c configure c2::5006016941e01b4e
# cfgadm -c configure c3::5006016041e01b4e
# cfgadm -c configure c3::5006016841e01b4e
# devfsadm -Cvc disk
devfsadm[13122]: verbose: removing file: /dev/dsk/c4t6006016041301D00C21EE519C443DF11d0
devfsadm[13122]: verbose: symlink /dev/dsk/c4t6006016041301D00C21EE519C443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d00c21ee519c443df11:h
devfsadm[13122]: verbose: removing file: /dev/rdsk/c4t6006016041301D00C21EE519C443DF11d0
devfsadm[13122]: verbose: symlink /dev/rdsk/c4t6006016041301D00C21EE519C443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d00c21ee519c443df11:h,raw
devfsadm[13122]: verbose: removing file: /dev/dsk/c4t6006016041301D00C21EE519C443DF11d0s7
devfsadm[13122]: verbose: symlink /dev/dsk/c4t6006016041301D00C21EE519C443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d00c21ee519c443df11:wd
devfsadm[13122]: verbose: removing file: /dev/rdsk/c4t6006016041301D00C21EE519C443DF11d0s7
devfsadm[13122]: verbose: symlink /dev/rdsk/c4t6006016041301D00C21EE519C443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d00c21ee519c443df11:wd,raw
devfsadm[13122]: verbose: removing file: /dev/dsk/c4t6006016041301D00BEF6F73CC443DF11d0
devfsadm[13122]: verbose: symlink /dev/dsk/c4t6006016041301D00BEF6F73CC443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d00bef6f73cc443df11:h
devfsadm[13122]: verbose: removing file: /dev/rdsk/c4t6006016041301D00BEF6F73CC443DF11d0
devfsadm[13122]: verbose: symlink /dev/rdsk/c4t6006016041301D00BEF6F73CC443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d00bef6f73cc443df11:h,raw
devfsadm[13122]: verbose: removing file: /dev/dsk/c4t6006016041301D00BEF6F73CC443DF11d0s7
devfsadm[13122]: verbose: symlink /dev/dsk/c4t6006016041301D00BEF6F73CC443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d00bef6f73cc443df11:wd
devfsadm[13122]: verbose: removing file: /dev/rdsk/c4t6006016041301D00BEF6F73CC443DF11d0s7
devfsadm[13122]: verbose: symlink /dev/rdsk/c4t6006016041301D00BEF6F73CC443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d00bef6f73cc443df11:wd,raw
devfsadm[13122]: verbose: removing file: /dev/dsk/c4t6006016041301D002C8AD75BC443DF11d0
devfsadm[13122]: verbose: symlink /dev/dsk/c4t6006016041301D002C8AD75BC443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d002c8ad75bc443df11:h
devfsadm[13122]: verbose: removing file: /dev/rdsk/c4t6006016041301D002C8AD75BC443DF11d0
devfsadm[13122]: verbose: symlink /dev/rdsk/c4t6006016041301D002C8AD75BC443DF11d0s7 -> ../../devices/scsi_vhci/ssd@g6006016041301d002c8ad75bc443df11:h,raw
devfsadm[13122]: verbose: removing file: /dev/dsk/c4t6006016041301D002C8AD75BC443DF11d0s7
devfsadm[13122]: verbose: symlink /dev/dsk/c4t6006016041301D002C8AD75BC443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d002c8ad75bc443df11:wd
devfsadm[13122]: verbose: removing file: /dev/rdsk/c4t6006016041301D002C8AD75BC443DF11d0s7
devfsadm[13122]: verbose: symlink /dev/rdsk/c4t6006016041301D002C8AD75BC443DF11d0 -> ../../devices/scsi_vhci/ssd@g6006016041301d002c8ad75bc443df11:wd,raw
# format
Searching for disks...done
c4t6006016008501E0016DD8AD9436ADF11d0: configured with capacity of 25.00GB
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@400/pci@0/pci@8/scsi@0/sd@0,0
1. c4t600601603F301D00B8615DAABC4BDE11d0
/scsi_vhci/ssd@g600601603f301d00b8615daabc4bde11
[...snip...]
11. c4t6006016041301D0022C173CB42BFDE11d0
/scsi_vhci/ssd@g6006016041301d0022c173cb42bfde11
12. c4t6006016008501E0016DD8AD9436ADF11d0
/scsi_vhci/ssd@g6006016008501e0016dd8ad9436adf11
Looking better. Though, what was all that removing and re-creating of c4... device links both times with the devfsadm command? Some device link hanging out there that doesn't want to go away? Hmmm...
Friday, May 28, 2010
Monday, May 24, 2010
Handy Solaris Hardware Troubleshooting Commands
ipmitool - utility for controlling IPMI-enabled devices
ipmitool chassis status
ipmitool fru
ipmitool pef status
ipmitool pef list
ipmitool sel info
ipmitool sel elist
ipmitool sdr list all info
ipmitool sunoem led get
ipmitool sunoem sbled get
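The same commands can also be run remotely against the service processor over its LAN interface; the SP hostname and user below are just placeholders:
ipmitool -I lanplus -H sp-hostname -U root chassis status
ipmitool -I lanplus -H sp-hostname -U root sel elist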
Friday, May 21, 2010
NFS client on OpenVMS limited to 32-bit NFSv2
I recently had a chance to dig into OpenVMS, getting back to days gone by. A project required transferring data from one OpenVMS version 8 host to another, and I decided to take a crack at setting up NFS between the two nodes.
It didn't take too long once I found this document from HP.
But then I got a nasty surprise when I tried to copy a 4GB file from client to server:
%COPY-E-WRITEERR, error writing DNFS2:[000000]FILENAME.DAT;1
-RMS-F-FUL, device full (insufficient space for allocation)
%COPY-W-NOTCMPLT, DRA0:[000000.TEST]FILENAME.DAT;1 not completely copied
The target drive was on a brand new server and had 140GB free, but only about half of the file was there. So what gives?
A search turned up the OpenVMS UCX TCPIP Services v5.6 release notes and this section:
3.7.2 NFS Client Problems and Restrictions
[...snip...]
* The NFS client included with TCP/IP Services uses the NFS Version 2 protocol only.
* With the NFS Version 2 protocol, the value of the file size is limited to 32 bits.
[...snip...]
With file sizes stored as signed 32-bit values, that leaves 31 bits for the size of the file - 2,147,483,648 bytes, or about 2.1GB.
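A quick sanity check of that limit from a Unix shell:
% echo "2^31" | bc
2147483648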
I expect better of OpenVMS.
Thursday, May 6, 2010
Solaris 10 FC disk/LUN Commands
Here are some commands related to diagnosing problems with FC disks/LUNs and showing multipath status when using Sun's stmsboot/MPxIO multipathing software:
# luxadm display /dev/rdsk/c0t0d0s2
# mpathadm show lu /dev/rdsk/c4t600...dd11d0
# mpathadm list mpath-support
# mpathadm list lu /dev/rdsk/c4t600...dd11d0
# mpathadm show mpath-support libmpscsi-vhci.so
# mpathadm show initiator-port 2101...4f93
# fcinfo hba-port
# fcinfo remote-port -l -s -p 2101...4f93
I found many of these useful when I had to check the status of the paths and compare them.
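A quick way to eyeball path state across all of the MPxIO devices at once (adjust the grep to match whatever state fields your Solaris release prints) is a simple loop:
# for lun in /dev/rdsk/c4t*d0s2; do
> echo "== $lun"
> mpathadm show lu $lun | grep -i state
> done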
Solaris 10 SMTP Server Not Running After Patch
About a month ago, I installed over one hundred patches on a Solaris 10 box, including patch 142436-03. Just today I discovered that the server was refusing incoming SMTP connections. (It gets incoming mail only infrequently, obviously.)
I tried to connect to port 25 from another server. No dice. I tried this from the local server:
# mconnect localhost (worked!)
# mconnect actual_hostname (didn't work)
From a Solaris 10 discussion forum post at http://72.5.124.102/thread.jspa?threadID=5233087, I discovered that the patch must have turned on sendmail's local_only property by default in an effort to improve security. I think it would be nice if a patch left things the way they were and maybe prompted you to change them. Oh well. Here's the fix:
# svccfg -s sendmail listprop | grep local_only (to verify it's set to true)
# svccfg -s sendmail setprop config/local_only = false
# svcadm refresh sendmail
# svcadm restart sendmail
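To double-check afterward, verify the property and retest (svcprop should accept the abbreviated FMRI; the full name is svc:/network/smtp:sendmail):
# svcprop -p config/local_only sendmail
# mconnect actual_hostname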
Voila!
Thursday, April 29, 2010
Cleaning up Solaris 10 Device Tree when LUNs Removed
The following was copied from Symantec (http://sfdoccentral.symantec.com/sf/5.0MP3/solaris/html/vxvm_admin/ch02s24s03.htm), but I added some notes because their instructions were not correct (at least on my system) in some spots.
To clean up the device tree after you remove LUNs:
1. The removed devices show up as "drive not available" (or "drive type unknown") in the output of the format command:
413. c3t5006048ACAFE4A7Cd252
/pci@1d,700000/SUNW,qlc@1,1/fp@0,0/ssd@w5006048acafe4a7c,fc
2. After the LUNs are unmapped using array management software or the command line, Solaris also displays the devices as either unusable or failing (or maybe unknown just like all the other devices - make sure you have the right ones!):
bash-3.00# cfgadm -al -o show_SCSI_LUN
[...]
c2::5006048acafe4a73,256 disk connected configured unusable
c3::5006048acafe4a7c,255 disk connected configured unusable
[...]
3. If the removed LUNs show up as failing, you need to force a LIP on the HBA. This operation probes the targets again so that the device shows up as unusable; unless the device shows up as unusable, it cannot be removed from the device tree. Do a long listing of the rdsk directory to see which device path to specify:
# luxadm -e forcelip /devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl
4. To remove the device from the cfgadm database, run one of the following commands against the HBA:
# cfgadm -c unconfigure -o unusable_SCSI_LUN c2::5006048acafe4a73
or this one if it is not marked unusable:
# cfgadm -c unconfigure c3::5006048acafe4a7c
5. Repeat step 2 to verify that the LUNs have been removed.
6. Clean up the device tree. The following command removes the /dev/rdsk... links to /devices:
# devfsadm -Cv
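Putting it together for a single stale target (re-using the example WWN from above), the cleanup boils down to something like:
# cfgadm -al -o show_SCSI_LUN | grep unusable
# cfgadm -c unconfigure -o unusable_SCSI_LUN c2::5006048acafe4a73
# devfsadm -Cv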
Friday, February 29, 2008
Working With and Cleaning Out wtmpx
% last
# This example keeps only the last 500 records; you might want more on a busy system
% /usr/lib/acct/fwtmp < /var/adm/wtmpx | tail -500 | /usr/lib/acct/fwtmp -ic > /tmp/wtmpx
# Test it
% last -f /tmp/wtmpx
% cat /tmp/wtmpx > /var/adm/wtmpx
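One extra step worth taking before overwriting the live file: keep a dated backup of the original (the destination name is just a suggestion):
% cp -p /var/adm/wtmpx /var/adm/wtmpx.`date +%Y%m%d`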