Category Archives: SVC

Match a v7000-presented volume to an MDisk on the SVC

It's really hard to play with storage when there is production data on it.

For anyone who needs to remove or unmap volumes presented to the SVC from a v7000: the UID numbers of the MDisks on the SVC and of the volumes on the v7000 are the same, but with a small modification.

These are the volume UIDs on the v7000:

These are the MDisks on the SVC:

You can see that the SVC appends extra zeros ("00000") after the value that comes from the v7000 UID.
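If the screenshots do not load, you can pull the same UIDs from the CLI. A minimal sketch, assuming a volume named VOL01 on the v7000 and an MDisk named mdisk0 on the SVC (both names are hypothetical):

On the v7000, the detailed volume view includes the vdisk_UID field:

svcinfo lsvdisk VOL01

On the SVC, the detailed MDisk view includes the UID field:

svcinfo lsmdisk mdisk0

Strip the trailing zeros from the SVC value and the two UIDs should match.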

Once you have matched them, you can easily and safely unmap the volume on the v7000 if you need to take the space back.
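A minimal sketch of the unmapping on the v7000, assuming the host object for the SVC is named SVC_CLUSTER and the volume is VOL01 (hypothetical names), and assuming you have already removed the matching MDisk from its MDisk group on the SVC side:

svctask rmvdiskhostmap -host SVC_CLUSTER VOL01

svctask rmvdisk VOL01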

VM

Put a Storwize (v7000) behind the SVC and don't be confused by what you see

When you configure your SAN and zone the SVC and the v7000 together, you will see that two v7000 controllers appear. After checking on the internet I confirmed this is expected; maybe I should have checked this link first:

https://www.ibm.com/developerworks/mydeveloperworks/blogs/anthonyv/entry/virtualizing_a_storwize_v7000_with_an_ibm_svc?lang=en

One extra note: all volumes presented from the v7000, no matter which preferred owner is set, appear under the v7000 configuration node controller on the SVC; please see below.
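If the screenshot is missing, the concise MDisk view on the SVC shows the same thing; the controller_name column will point at the v7000 configuration node controller for every presented volume:

svcinfo lsmdisk -delim :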

Don't worry, load balancing is still working; check the SAN traffic to see for yourself.

Also, before configuring MDisks and MDisk groups, please read the best-practice papers first. But here is a shortcut for you all.

On the v7000 side:

Create one MDisk group:

svctask mkmdiskgrp -ext 256 -name name_of_mdiskgroup -warning 80%

Create one array and assign it to that MDisk group:

svctask mkarray -drive 12:36:34:19:20:18:16:15 -level raid5 -sparegoal 1 name_of_mdiskgroup

The numbers are drive IDs, which you can list on the v7000.
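For example, you can list the drives and their IDs on the v7000 like this:

svcinfo lsdrive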

On the SVC side:

You will see the MDisks under the v7000 controller; create an MDisk group from these MDisks and start creating volumes, as sketched below.
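A minimal sketch of the SVC side (the group, volume, and MDisk names are hypothetical; check lsmdisk for the real IDs):

svctask detectmdisk

svcinfo lsmdisk

svctask mkmdiskgrp -name v7000_grp -ext 256 -mdisk mdisk0:mdisk1

svctask mkvdisk -mdiskgrp v7000_grp -iogrp io_grp0 -size 100 -unit gb -name vol_test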

 

My full commands:

 

IBM_2076:EMPIRE2:superuser>svctask mkmdiskgrp -name mdiskgroup1 -ext 32 -warning 80%

MDisk Group, id [0], successfully created

IBM_2076:EMPIRE2:superuser>svctask mkmdiskgrp -name mdiskgroup2 -ext 32 -warning 80%

MDisk Group, id [1], successfully created

IBM_2076:EMPIRE2:superuser>svctask mkmdiskgrp -name mdiskgroup3 -ext 32 -warning 80%

MDisk Group, id [2], successfully created

IBM_2076:EMPIRE2:superuser>svctask mkmdiskgrp -name mdiskgroup4 -ext 32 -warning 80%

MDisk Group, id [3], successfully created

IBM_2076:EMPIRE2:superuser>svctask mkmdiskgrp -name mdiskgroup5 -ext 32 -warning 80%

MDisk Group, id [4], successfully created

 

svctask mkarray -drive 12:36:34:19:20:18:16:15 -level raid5 -strip 128 mdiskgroup1

svctask mkarray -drive 14:17:35:46:40:39:42:45 -level raid5 -strip 128 mdiskgroup2

svctask mkarray -drive 43:44:37:38:47:0:41:24 -level raid5 -strip 128 mdiskgroup3

svctask mkarray -drive 11:10:8:7:31:30:29:33 -level raid5 -strip 128 mdiskgroup4

svctask mkarray -drive 32:28:27:26:6:5:3:2 -level raid5 -strip 128 mdiskgroup5
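To verify afterwards, you can list the arrays and the MDisk groups:

svcinfo lsarray

svcinfo lsmdiskgrp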

 

VM

lsfabric output active/inactive issue with the EMC VNX series

Hi,
We found that the SVC does not correctly report EMC VNX fabric logins as active or inactive. After a long time, the IBM developers found the issue, and they will fix it.
VM
APAR IC80749:
ERROR DESCRIPTION:
On controllers (for example EMC Clariion) that have fibre
channel logins as both target and initiator the active/inactive
state may be incorrectly displayed.
The reason is that both the target logins (which are being used)
and the initiator logins (which are not) may update the status.
Hence the status may be shown as “inactive” because the
initiator logins are not active, while the target ones are in
fact being used.
CMVC 142997 is tracking this.
LOCAL FIX:
No local fix or workaround.
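To see the state the APAR describes, a minimal sketch (controller0 is a hypothetical controller name):

svcinfo lsfabric -controller controller0 -delim :

The state column may show inactive for the VNX initiator logins even while the target logins are actually in use.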