Configuring a Solaris 10 Host with SAN Storage (EMC CLARiiON)

Configuring SAN storage is a routine system administration task in an enterprise network environment. And of course, there is a lot of engineering activity involved in certifying a process that defines guidelines on which hardware/software matrix should be used for a stable operating environment.

In this post we assume the server is running the Solaris OS and is being configured with an EMC CLARiiON array. Please note that the instructions given here may vary depending on many different variables and may or may not work for your specific environment, but you can always use them as guidelines to define a solution specific to your environment.

Steps involved in Configuring a Solaris Host with SAN Storage:

  1. HBA Installation & EMC PowerPath Software Installation
  2. Identifying the HBA WWPN numbers to give to the Storage Team
  3. Creating and allocating Storage for the Host (Storage Administrator's job)
  4. Discovering Storage from the Host
  5. Configuring PowerPath on the Host
  6. Creating Volumes / Partitions / File Systems on the Storage Disks

1. HBA Installation & EMC PowerPath Installation

This step involves physically installing the HBA cards in the server and installing the HBA drivers (provided by the vendor) on the OS.

PowerPath is software that manages the multiple paths from the host to the storage and takes care of automatic failover and load balancing.

We need to install the packages that are tested and provided by the vendor.

Please refer to the post “Solaris 10: SAN/SAS/MPxIO/STMS Config Files”.
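
For reference, a typical install of the vendor-supplied PowerPath package looks roughly like this (a sketch: the staging directory is hypothetical, and the exact EMCpower package name varies with the PowerPath version you download):

# pkgadd -d /var/tmp/EMCpower EMCpower
# pkginfo -l EMCpower (verify that the package installed cleanly)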

2. Identifying the WWPNs for the HBAs

A few points about WWN / WWPN / WWNN (for more information, please refer to the post “SAN for System Administrators”):

  • A World Wide Name (WWN) is a unique 8-byte (64-bit) identifier used in SCSI or Fibre Channel, similar to the MAC address of a Network Interface Card (NIC).
  • A World Wide Port Name (WWPN) is a WWN assigned to a port on a fabric; this is what you will be looking for most of the time.
  • A World Wide Node Name (WWNN) is a WWN assigned to a node/device on a Fibre Channel fabric.

Once you install the HBA, perform a reconfiguration boot using one of the methods below.

Method 1: If you are at the OK prompt, just run: ok> boot -r

Method 2: If you are at the operating system prompt, run: # reboot -- -r
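
Method 3: You can also flag the next boot as a reconfiguration boot by creating the /reconfigure file and rebooting normally:

# touch /reconfigure
# reboot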

 

Dynamic reconfiguration of SAN disks on Solaris 10, without a reconfiguration reboot:

# cfgadm -al (verify the FC controller connections)
# devfsadm -c disk -v (rescan for disks)
# echo | format (verify the disks from the OS)
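
If cfgadm -al shows a fabric-connected controller whose devices are not yet configured, configure it explicitly before rescanning (c2 here is only an illustrative Ap_Id; use the one from your own cfgadm output):

# cfgadm -c configure c2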

Once the server reboots in reconfiguration mode, it will recognize the HBAs and their corresponding World Wide Port Names (WWPNs). You can find the WWPNs using any of the commands below.

1. Using the fcinfo command

# fcinfo hba-port
HBA Port WWN: 22ooo11b32xxxxxx
OS Device Name: /dev/cfg/c2
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 4.04.01
FCode/BIOS Version:  BIOS: 1.24; fcode: 1.24; EFI: 1.8;
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 22ooo11b32xxxxxx
HBA Port WWN: 2111201b32yyyyyy
OS Device Name: /dev/cfg/c3
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 4.04.01
FCode/BIOS Version:  BIOS: 1.24; fcode: 1.24; EFI: 1.8;
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 2111201b32yyyyyy

2. Using the prtconf command

# prtconf -vp | grep -i wwn
port-wwn:  22ooo11b32xxxxxx
node-wwn:  22ooo11b32xxxxxx
port-wwn:  2111201b32yyyyyy
node-wwn:  2111201b32yyyyyy

3. Using the luxadm command

  • First, find the available physical paths:

$ luxadm -e port

/devices/pci@400/pci@0/pci@9/SUNW,qlc@0/fp@0,0:devctl              CONNECTED
/devices/pci@400/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0:devctl            NOT CONNECTED

  • Second, find the WWNs of the HBA connected to a specific physical path:

$ luxadm -e dump_map /devices/pci@400/pci@0/pci@9/SUNW,qlc@0/fp@0,0:devctl

Pos  Port_ID  Hard_Addr  Port WWN          Node WWN          Type
0    123456   0          1111111111111111  2222222222222222  0x0  (Disk device)
1    789123   0          1111111111111111  2222222222222222  0x0  (Disk device)
2    453789   0          22ooo11b32xxxxxx  22ooo11b32xxxxxx  0x1f (Unknown Type, Host Bus Adapter)

 


3. Creating and allocating Storage for the host

This is purely the storage administrator's task; the storage administrator will follow the steps below to complete the storage-side configuration. I am just outlining the steps here, without going into detail, to avoid deviating from the topic:

  • Retrieve the HBA WWPNs
  • Determine fabric information
  • Verify host connectivity
  • Create nicknames for the HBAs
  • Create zones
  • Add the zones to a zone set
  • Activate the zone set

Once the above steps are complete, the system administrator will be able to see the allocated storage from the server side.

4. Discovering Storage from the Host

Solaris 10 hosts can dynamically add new storage without a reboot (unlike Solaris 8/9, we don't need to edit the sd.conf / lpfc.conf files to list new disk target and LUN information). Once the storage team has zoned and masked the new LUNs to the host, rebuild the device tree with:

# devfsadm
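
As a quick sanity check after devfsadm, the sequence below (output varies by environment) confirms that the host actually sees the new LUNs:

# cfgadm -al (the FC controllers should now list the new disk Ap_Ids as configured)
# luxadm probe (lists the Fibre Channel disks found, with their node WWNs and /dev/rdsk paths)
# echo | format (the new LUNs should appear as additional disk entries)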

 


5. Configuring PowerPath on the Host

If your server is connected to the storage over more than one fibre path, you should manage those paths with some kind of multipathing software; for EMC storage, most of the time that is “PowerPath”. Configuring PowerPath involves the following steps:

a. PowerPath Discovery
b. Activating New Devices
c. Setting the Failover Policy

a. PowerPath Discovery:
In this step the system administrator instructs PowerPath to discover any NEW EMC LUNs connected through these paths, and to create a new pseudo PowerPath device for each new LUN found, using the command:
# /etc/powercf -q
The '-q' option (quiet mode) tells PowerPath to automatically create pseudo devices for every new LUN it detects.

Running powercf interactively instead causes PowerPath to ask the user whether it should create a new pseudo device for each new LUN found.

The above PowerPath command will also destroy pseudo devices (emcpowerX devices only) for LUNs that were previously managed by PowerPath but have since disappeared from the system.

b. Activating New Devices
Once the new pseudo devices have been created via /etc/powercf, they should be activated using the command below, so that PowerPath starts using them.
# /etc/powermt config
You can check the devices that are currently configured using:
# /etc/powermt display dev=all
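
For each LUN, the output looks roughly like the sketch below (illustrative only; the array and device IDs here are borrowed from the vxdisk listing later in this post, and the path states will differ in your environment):

Pseudo name=emcpower0a
CLARiiON ID=APM00074503678 [storage group]
Logical device ID=600601606C9B1F00007FE7E32501DD11
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
(one line per native path follows, e.g. c2t5006016B39A03710d0s0, showing its mode and state)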

 


c. Setting the Failover Policy
If the host has more than one HBA and a PowerPath license, the failover policy should be set to the optimized policy for CLARiiON arrays (as opposed to basic failover). Use the following command to set the failover policy to co (CLAROpt):
# /etc/powermt set policy=co dev=all
Once you see that all the paths are active, save the PowerPath configuration using the command below to make it persistent across reboots (by default it is written to /etc/powermt.custom):
# /etc/powermt save

Refer to the post “POWERMT Commands” for more information.

6. Creating Volumes / Partitions / File Systems on SAN Storage Disks
The steps below assume that Veritas Volume Manager is configured on the server and that we want to create a new Veritas disk group with new volumes.
Steps involved:

  • A. Verify LUNs
  • B. Label Disks
  • C. Initialize each disk & verify its size
  • D. Create Disk Groups, Volumes and File Systems
  • E. Create Mount Points and mount the new Veritas File Systems

A. Verify LUNs
Once you have confirmed that all the storage paths are active, using:
# /etc/powermt display dev=all
just run “# echo | format” to see the new disks visible through the paths identified above. Please note that you will see one entry in the format output for every path to each LUN device.

 

 


For example, if you see four paths for each LUN in the output of “powermt display dev=all”, then you will see four disk entries for each LUN in the format output.

# echo |format
Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@0/pci@0/pci@2/scsi@0/sd@0,0
1. c1t1d0
/pci@0/pci@0/pci@2/scsi@0/sd@1,0
2. c2t5006016B39A03710d0
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016b39a03710,0
3. c2t5006016B39A03710d1
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016b39a03710,1
4. c2t5006016B39A03710d2
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016b39a03710,2
5. c2t5006016B39A03710d3
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016b39a03710,3
6. c2t5006016B39A03710d4
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016b39a03710,4
7. c2t5006016B39A03710d5
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016b39a03710,5
8. c2t5006016B39A03710d6
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016b39a03710,6
9. c2t5006016B39A03710d7
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016b39a03710,7
10. c2t5006016339A03710d0
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016339a03710,0
11. c2t5006016339A03710d1
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016339a03710,1
12. c2t5006016339A03710d2
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016339a03710,2
13. c2t5006016339A03710d3
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016339a03710,3
14. c2t5006016339A03710d4
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016339a03710,4
15. c2t5006016339A03710d5
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016339a03710,5
16. c2t5006016339A03710d6
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016339a03710,6
17. c2t5006016339A03710d7
/pci@0/pci@0/pci@8/pci@0/pci@8/lpfc@0/fp@0,0/ssd@w5006016339a03710,7
18. c4t5006016A39A03710d0
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016a39a03710,0
19. c4t5006016A39A03710d1
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016a39a03710,1
20. c4t5006016A39A03710d2
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016a39a03710,2
21. c4t5006016A39A03710d3
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016a39a03710,3
22. c4t5006016A39A03710d4
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016a39a03710,4
23. c4t5006016A39A03710d5
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016a39a03710,5
24. c4t5006016A39A03710d6
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016a39a03710,6
25. c4t5006016A39A03710d7
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016a39a03710,7
26. c4t5006016239A03710d0
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016239a03710,0
27. c4t5006016239A03710d1
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016239a03710,1
28. c4t5006016239A03710d2
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016239a03710,2
29. c4t5006016239A03710d3
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016239a03710,3
30. c4t5006016239A03710d4
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016239a03710,4
31. c4t5006016239A03710d5
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016239a03710,5
32. c4t5006016239A03710d6
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016239a03710,6
33. c4t5006016239A03710d7
/pci@0/pci@0/pci@9/lpfc@0/fp@0,0/ssd@w5006016239a03710,7
Specify disk (enter its number): Specify disk (enter its number):

B. Label Each Disk
Remember, we don't need to label all the path entries of each LUN; we have to label just one entry per LUN device. For example, the following are four entries for one disk:

2. c2t5006016B39A03710d0
10. c2t5006016339A03710d0
18. c4t5006016A39A03710d0
26. c4t5006016239A03710d0

Disk c2t5006016B39A03710d0 is reachable via multiple paths through c2 and c4; you need to label each physical disk only once, through any one of its entries, for example:

10. c2t5006016339A03710d0
Or
26. c4t5006016239A03710d0

 

 

 

Output from “format” when you label the disk:

Specify disk (enter its number)[10]: 10
selecting c2t5006016339A03710d0
[disk formatted]
Disk not labeled. Label it now? y

C. Initialize Each Disk in VxVM

# vxdctl enable

The above command makes VxVM recognize the new disks connected to the OS.

# vxdisk list
DEVICE TYPE DISK GROUP STATUS
EMC_CLARiiON0_0 auto - - error
EMC_CLARiiON0_1 auto - - error
EMC_CLARiiON0_2 auto - - error
EMC_CLARiiON0_3 auto - - error
EMC_CLARiiON0_4 auto - - error
EMC_CLARiiON0_5 auto - - error
EMC_CLARiiON0_6 auto - - error
EMC_CLARiiON0_7 auto - - error
c1t0d0s2 auto:none - - online invalid
c1t1d0s2 auto:none - - online invalid

Initialize the LUNs/disks.

(Note: please make sure you are not initializing the root disks, which here are c1t0d0s2 and c1t1d0s2.)

# /etc/vx/bin/vxdisksetup -i EMC_CLARiiON0_0
# /etc/vx/bin/vxdisksetup -i EMC_CLARiiON0_1
# /etc/vx/bin/vxdisksetup -i EMC_CLARiiON0_2
# /etc/vx/bin/vxdisksetup -i EMC_CLARiiON0_3
# /etc/vx/bin/vxdisksetup -i EMC_CLARiiON0_4
# /etc/vx/bin/vxdisksetup -i EMC_CLARiiON0_5
# /etc/vx/bin/vxdisksetup -i EMC_CLARiiON0_6
# /etc/vx/bin/vxdisksetup -i EMC_CLARiiON0_7
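
Equivalently, a small shell loop saves typing (assuming the eight device names really are EMC_CLARiiON0_0 through EMC_CLARiiON0_7, as in the vxdisk list output above):

# for d in 0 1 2 3 4 5 6 7; do /etc/vx/bin/vxdisksetup -i EMC_CLARiiON0_$d; done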

You can find information about each disk (size, serial number, paths, etc.) using the command below:

# vxdisk list EMC_CLARiiON0_0

Device: EMC_CLARiiON0_0
devicetag: EMC_CLARiiON0_0
type: auto
hostid: gis1.gurkulindia.com
disk: name=gis1_os_lun1 id=1207238009.13.gis1.gurkulindia.com
group: name=gis1_dg id=1207238612.33.gis1.gurkulindia.com
info: format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags: online ready private autoconfig autoimport imported
pubpaths: block=/dev/vx/dmp/EMC_CLARiiON0_0s2 char=/dev/vx/rdmp/EMC_CLARiiON0_0s2
guid: {1a5de644-1dd2-11b2-abf7-00144f97d77c}
udid: DGC%5FCLARiiON%5FAPM00074503678%5F600601606C9B1F00007FE7E32501DD11
site: -
version: 3.1
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=2 offset=65792 len=33486592 disk_offset=0
private: slice=2 offset=256 len=65536 disk_offset=0
update: time=1207344899 seqno=0.11
ssb: actual_seqno=0.0
headers: 0 240
configs: count=1 len=48144
logs: count=1 len=7296
Defined regions:
config priv 000048-000239[000192]: copy=01 offset=000000 enabled
config priv 000256-048207[047952]: copy=01 offset=000192 enabled

log priv 048208-055503[007296]: copy=01 offset=000000 enabled

lockrgn priv 055504-055647[000144]: part=00 offset=000000

Multipathing information:
numpaths: 4
c2t5006016B39A03710d2s2 state=enabled type=secondary
c2t5006016339A03710d2s2 state=enabled type=primary
c4t5006016A39A03710d2s2 state=enabled type=secondary
c4t5006016239A03710d2s2 state=enabled type=primary

D. Create Disk Groups, Volumes and File Systems on the SAN Storage Disks

Create a new disk group in Veritas (this step is required only if the disk group is not already present).

Create the disk group for the zone:
# vxdg init gis1_dg gis1_os_lun1=EMC_CLARiiON0_0

Find the free space (in blocks) in the newly created disk group:
# vxdg -g gis1_dg free

DISK DEVICE TAG OFFSET LENGTH FLAGS
gis1_os_lun1 EMC_CLARiiON0_0 EMC_CLARiiON0_0 0 100591360 -
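
Alternatively, rather than reading the LENGTH column yourself, you can ask vxassist for the largest volume that would fit (a convenience check, mirroring the alloc= usage below; the size is reported in sectors):

# vxassist -g gis1_dg maxsize alloc="gis1_os_lun1"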

Create the volume for the OS:
# vxassist -g gis1_dg make gis1_os 100591360 alloc="gis1_os_lun1"

Create the file system:
Note: the file system on the OS LUN should be UFS only.
# newfs /dev/vx/rdsk/gis1_dg/gis1_os

Add disk to create Data volume:
# vxdg -g gis1_dg adddisk gis1_data_lun1=EMC_CLARiiON0_1

Free space in the disk group:
# vxdg -g gis1_dg free

Create the data volume:
# vxassist -g gis1_dg make gis1_app1 104786688 alloc="gis1_data_lun1"

Create File System:
# mkfs -F vxfs -o largefiles /dev/vx/rdsk/gis1_dg/gis1_app1
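
You can confirm the result before mounting (fstyp reports the file system type written to the device):

# fstyp /dev/vx/rdsk/gis1_dg/gis1_app1
vxfs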

E. Creating Mount Points and Mounting File Systems

# mkdir /gis1_app1
# mount -F vxfs /dev/vx/dsk/gis1_dg/gis1_app1 /gis1_app1

To make this file system mount automatically at server boot, edit /etc/vfstab and add the entry below:

/dev/vx/dsk/gis1_dg/gis1_app1 /dev/vx/rdsk/gis1_dg/gis1_app1 /gis1_app1 vxfs 2 yes -
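
You can test the new entry without a reboot: when mount is given only the mount point, it looks up the rest in /etc/vfstab, so the following should succeed (assuming the file system is not currently mounted):

# mount /gis1_app1
# df -k /gis1_app1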



  44 comments for “Configuring Solaris 10 host with SAN storage (EMC CLARiiON)”

  1. Sourav Chatterjee
    July 27, 2011 at 10:46 am

    thanks ramkumar great post ……
    please post on migration:- host based and storage based

    cheers

  2. July 31, 2011 at 7:45 pm

    very good site, helping me and my colleagues a lot. thanks a lot… please, I need some basics of SAN storage… (EMC)

  3. seema
    August 23, 2011 at 3:20 pm

    Hello Ram

    I was trying to label SAN disks, but I am left with errors and cannot label them when I want to add them to a data pool… I have a SAN disk; can you please tell me whether initialising and formatting a disk on SVM or ZFS is the same, since on Veritas we can use the vxdiskadm utility to initialise
    ……

    • Gurkulindia
      August 23, 2011 at 6:27 pm

      Can you give me the command and error message? and also “echo|format” output.

  4. Yogesh Raheja
    August 24, 2011 at 4:09 am

    Also let us know the OS version..

  5. seema
    August 24, 2011 at 12:52 pm

    OS version: Oracle Solaris 10 9/10 SPARC
    Error: # echo | format
    Searching for disks…done

    c1t5006048ACAFDF44Cd0: configured with capacity of 44.06MB
    c2t5006048ACAFDF443d0: configured with capacity of 44.06MB

    AVAILABLE DISK SELECTIONS:
    0. c0t0d0
    /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0
    1. c0t1d0
    /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0
    2. c1t5006048ACAFDF44Cd0
    /pci@1,700000/SUNW,emlxs@0/fp@0,0/ssd@w5006048acafdf44c,0
    3. c1t5006048ACAFDF44Cd258
    /pci@1,700000/SUNW,emlxs@0/fp@0,0/ssd@w5006048acafdf44c,102
    4. c2t5006048ACAFDF443d0
    /pci@1,700000/SUNW,emlxs@0,1/fp@0,0/ssd@w5006048acafdf443,0
    5. c2t5006048ACAFDF443d258
    /pci@1,700000/SUNW,emlxs@0,1/fp@0,0/ssd@w5006048acafdf443,102
    6. emcpower0a
    /pseudo/emcp@0
    Specify disk (enter its number): Specify disk (enter its number):

    partition> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[0]: 1
    Warning: This disk has an SMI label. Changing to EFI label will erase all
    current partitions.
    Continue? y
    Warning: error writing EFI. and

  6. seema
    August 24, 2011 at 12:55 pm

    Ram, can you please delete the above comment since it has the echo|format output

    • Gurkulindia
      August 24, 2011 at 5:19 pm

      no problem.. we need the “echo|format” output anyway. But I couldn't see which disk you are trying to label here. I can see PowerPath installed on this machine. Please be clear about which disks you plan to use.

      And just log in to the chat and see if you can find either me or Yogesh there; we can probably talk you through resolving the issue.

  7. seema
    August 25, 2011 at 1:11 pm

    Sure, will discuss in chat… let me know what timezone you guys will be available in….

    Once again, thanks…

  8. Yogesh Raheja
    August 25, 2011 at 3:11 pm

    @seema, as Ram stated, you can try logging into the chat. If you find either of us, we will discuss it over there.
    Also, from the outputs (I suppose they are truncated) it is showing only one powerpath device, which means there is only one path coming from the HBA. Also, first of all label the two disks below and then proceed with labeling the emcpower#c. Note: you have to label the letter “c”, not “a”, i.e. emcpower0c. Try it and let us know the result please.

  9. Yogesh Raheja
    August 25, 2011 at 3:15 pm

    I am available on chat now. If you are there.

  10. Yogesh Raheja
    August 25, 2011 at 3:18 pm

    Also, I have checked your echo | format output; it seems all devices are already labeled. Error: # echo | format
    Searching for disks…done

    c1t5006048ACAFDF44Cd0: configured with capacity of 44.06MB
    c2t5006048ACAFDF443d0: configured with capacity of 44.06MB

    “NO UNLABELLED DEVICE PRESENT” AVAILABLE DISK SELECTIONS:

  11. seema
    August 25, 2011 at 8:39 pm

    they are all raw devices; they were reclaimed from AIX OS… @yogesh please specify your available timings… thanks

  12. Yogesh Raheja
    August 26, 2011 at 6:27 am

    @seema, I will try my best to log in tonight at 00:00 hrs (as I will only be coming home at midnight).

  13. Yogesh Raheja
    August 26, 2011 at 3:08 pm

    @Seema, try this. One problem faced when issuing the ‘label’ command was:

    > label
    Cannot label disk when partitions are in use as described.

    And the solution for this:

    The environment variable NOINUSE_CHECK (see PSARC/2005/461), when set, can be used to turn off this new functionality:

    NOINUSE_CHECK=1
    export NOINUSE_CHECK

  14. sonu
    February 24, 2012 at 3:46 pm

    is it possible to rename the multipathed disk, say “c2tXXXXXXXXXXXXXXd2s2″, to c3tXXXXXXXXXXXXXXd2s2 (changing the c2 to c3), or is it fixed and cannot be changed?

  15. Ramdev
    February 24, 2012 at 4:07 pm

    you can play a little bit with controller numbers if you know exactly how /etc/path_to_inst works, but I really don't prefer to do this. Solaris identifies the controller numbers during boot, as per the rules mentioned in the post.

    http://gurkulindia.com/main/2011/04/solaris-how-solaris-assigns-controller-numbers/

    And if we mess them up, we can't recover the server during a crash.

  16. sonu
    March 4, 2012 at 4:47 am

    Thanks Ram. Found a way out for this, which worked… in case you want to change the controller number: go to /dev/cfg/ and rename the controller, say (# mv c2 c3)..
    then remove all the old device links from /dev/dsk and /dev/rdsk (rm -r c3*).. and then run devfsadm -C…

    • Ramdev
      March 5, 2012 at 4:13 am

      @sonu, good tweak. I am just curious why you needed to change this? And are you doing this on your work machine or a personal machine?
      And I am also curious to know how your system will behave if you add more devices like EMC/external storage (which usually takes c3, c4, etc.).

      In this case you actually removed the device links (with specific major and minor numbers), which means the server is no longer able to recognize the disks that had those major and minor numbers. And you won't realize this abnormal behavior until you add new external disks to the machine.

      Thanks for posting your experiments :)

  17. sonu
    March 12, 2012 at 4:20 pm

    sorry, was stuck with something… yes, that was a requirement from one of the teams, to see the same disk name on all the similar servers..

  18. March 17, 2012 at 7:56 am

    By running this command, /etc/powercf -q, does it create pseudo devices for every new LUN only, or will it destroy the existing pseudo names? Please advise.

    • Ramdev
      March 17, 2012 at 5:12 pm

      Hi Sekhar, powercf -q updates the EMC configuration by removing PowerPath devices that were not found in the host adapter scan and by adding new PowerPath devices that were found. It saves a primary and an alternate primary path to each PowerPath device.

  19. March 18, 2012 at 9:59 pm

    Thanks for your update, Ram. The main reason I asked: I want to increase file system space. The SAN team has added 20 GB,
    so by running cfgadm -al and devfsadm -C I have detected cx7ty4000…..d2 and cx8ty4000…..d2 (20 GB). So if I run
    1. powercf -q (in my case, say it detects the 2 newly created pseudo names)
    or can I skip this step and continue with 2? Because

    the blog above says “The above PowerPath command will also DESTROY pseudo devices (emcpowerX devices only) for LUNs that were previously managed by PowerPath but have disappeared from the system”; will this destroy my existing powerpath device which is running with less space? Is that what it means?

    2. powermt config
    If I run powermt display dev=all, I should see, as per the example in step 1, pseudo devices, e.g. /emcpower5a /emcpower6a

    3. powermt save
    4. I believe no reboot is required on Solaris 10 with a ZFS FS.

    On Solaris 10 (ZFS), to increase space:

    5. format
    select the newly created device

    12 cx7ty4000…..d2
    14 cx8ty4000…..d2

    Say I have labeled it by selecting 14; then

    can I run zpool add “current poolname” cx8ty4000…..d2?

    Will this increase my FS on ZFS on the fly, or are there any steps missed out? Please advise.

  20. Yogesh Raheja
    March 19, 2012 at 7:18 am

    @Shekar, firstly, powercf -q removes/flushes the storage (LUNs) which is no longer in use and has been physically removed by the storage team. So there is no need to worry about the existing FS/volumes as long as your storage is attached to the server.
    Secondly, in simple layman's language, powermt config configures the newly presented SAN on the server so that EMC internally recognises the new LUNs (consider it a scanning step).
    Thirdly, powermt save saves the configuration so that the next time your server reboots your configuration won't disappear. Suppose you configured the SAN and forgot to save the configuration: you would lose the current configuration if your system got rebooted, and you would have to clean/configure/save it all over again, so powermt save is a must and a recommended step every time.
    4th: ZFS doesn't require a reboot; the task can be done on the fly. 5th: zpool add will add the LUN to your pool (i.e. disk group),
    and yes, you are absolutely correct that adding a disk to the pool will give you the space in the FS, as ZFS takes space from the zpool instead of from individual FS space.

  21. March 19, 2012 at 2:57 pm

    Thanks Yogesh. In the case of Solaris 8 and 9: 1. Do I have to add any SAN / EMC PowerPath details in /kernel/drv/ (sd.conf and lpfc.conf)? 2. Do I need to reboot?

  22. Yogesh Raheja
    March 19, 2012 at 4:02 pm

    @Shekar, while expanding your LUNs nothing is required, but if you are migrating any of the EMC versions (like PowerPath/Symmetrix/HBA firmware or driver versions, etc.) then yes, you have to keep a copy of these files and put in the WWN numbers which the storage team will provide.

  23. Yogesh Raheja
    March 19, 2012 at 4:21 pm

    @Shekar, the gurkul posts below will help you to understand storage concepts:  http://gurkulindia.com/main/2011/12/pre-planningpre-work-required-for-storage-migrations-in-unix/

    http://gurkulindia.com/main/2011/12/emcpowerpath-emcsymmetrix-upgrades-on-solaris-servers/

    http://gurkulindia.com/main/2012/01/hba-firmware-upgrades-on-solaris-servers/

  24. Santosh
    March 19, 2012 at 4:39 pm

    Hi Ram, can you let me know the limits of a Solaris administrator who is at level 2 (L2 support)?
    i.e. which tickets/issues will he take care of, and which tickets/issues will he escalate to the next level?

    please i need this information …

  25. Ramdev
    March 19, 2012 at 5:04 pm

    @santhosh – for a startup company, or a company that does not have many teams, level 2 is everything related to the environment. But if you are talking about MNC organizations, there you will see a clear differentiation between level 2 and level 3 responsibilities based on the servers/services they manage. Normally level 3 manages the core infrastructure services like NIS, DNS, LDAP, Jumpstart services, automation scripts, DR/BCP setup, etc. L2 admins are dedicated to maintaining and configuring the existing servers related to application services, and also handle requests that require access to core infra servers (but are not related to the configuration of the core infra servers). And in some companies L2 admins are also responsible for regular server installations and configurations.

  26. pitmod
    May 3, 2012 at 6:12 pm

    Thanks. really helpful post.
    cheers

  27. Ramdev
    May 4, 2012 at 4:21 am

    @pitmod – you're welcome

  28. Murali
    May 14, 2012 at 5:41 pm

    HI,
    can someone answer the following questions for me:

    1.) Do we really need PowerPath to be installed on a Solaris 10 server to connect to an EMC CLARiiON?
    2.) Can a higher version of PowerPath on a Solaris 10 server connect to an EMC CLARiiON that is set up with a lower version of PowerPath?

  29. May 17, 2012 at 10:14 am

    @murali – Solaris 10 supports MPxIO for multipathing, and you can use it instead of PowerPath.

    And basically, PowerPath is server-side software used to configure multiple access paths to the storage device (CLARiiON) to provide connection redundancy. The EMC website has a compatibility matrix (commonly called the SAN stack) which describes the compatibility between the EMC device and PowerPath versions.

  30. Ramachandra
    August 3, 2012 at 5:26 am

    Hi Ram,

    I would like to clarify one doubt regarding EMC PowerPath. Currently we are using IBM HS22 blades and EMC VNX storage, connected via Brocade SAN with the Emulex LPe HBAnywhere driver.

    Currently we are running in EMC PowerPath mode and we want to migrate to normal multipath; the OS is SuSE.

    Here are the questions:

    1. Is it required to recreate the HBA zoning and uninstall PowerPath first before going for a server scratch? (We are going for a server scratch as we also need to reconfigure most of the IP part and other stuff.)
    2. What happens when we only re-scratch the server without touching the SAN and HBA part? Will it automatically install PowerPath again, or will it install the multipath software?

    Thanks in advance..

  31. Ramdev
    August 3, 2012 at 7:01 am

    @ramachandra – I am assuming your term ‘server scratch’ means “server reinstall”. My understanding is that as long as the HBAs are not changing on the server, there will be no need for zoning and allocation again. As for PowerPath, that will go away anyway once you reinstall the OS. You can select the native multipath products during the install, and can configure them after the installation.

  32. Ramachandra
    August 6, 2012 at 12:00 pm

    @Ram..

    But we faced a situation where, after server reinstallation, the system came up with EMC PowerPath already.
    1. Actually it is a very big network, so we are using almost 100 servers and 3 EMC storage arrays of 5000 GB capacity, and we are using an install server to reinstall the other servers here… meanwhile we also updated firmware for Brocade and Emulex, with Fabric OS firmware 6.4.2b for Brocade and lpfc 0:8.2.0.106.1p… but in one location we reconfigured the HBA first and then reinstalled the servers, and there it took the native multipath.

    We are trying to debug why the other set of servers, where we did not reconfigure the zone, HBA and SAN switch and just reinstalled the servers, came up with PowerPath instead of native multipath.

    And please let us know, if possible, how the server decides whether it has to go for PowerPath or native multipath on reinstallation…

    Thanks…

    • Ramdev
      August 13, 2012 at 2:37 pm

      @ramachandra – sorry for the late response; I am on a short vacation and away from my desk. About your issue, I would recommend checking the installation logs of both sets of servers to see when exactly the PowerPath packages get installed and activated. And also please check whether you can disable the SAN storage/HBAs (from the BIOS) before starting the installation, and then enable them again after the installation.

  33. solaris351
    September 10, 2012 at 2:22 pm

    How do I find the SAN type on a Solaris server? Please help me on this…

    • Ramdev
      September 11, 2012 at 1:16 am

      Hello, can you please elaborate on what information you want to see about the SAN?

  34. Yogesh Raheja
    September 11, 2012 at 4:09 am

    Hi, in the simplest way, you can get an idea from the format command alone.

  35. Mani Kishore
    December 26, 2012 at 5:17 am

    Hi Ram, thanks for your great support.

  36. Ramdev
    December 26, 2012 at 1:17 pm

    Hi Mani, It’s my pleasure
