Category Archives: IBM

IBM Storage

Match a v7000-presented volume to its MDisk on the SVC

It's really hard to play with storage when there is production data on it.

If you need to remove or unmap volumes that are presented from the v7000 to the SVC, you should know that the UID of the MDisk on the SVC is the same as the volume UID on the v7000, with a small modification.

These are the volume UIDs on the v7000:

These are the MDisk UIDs on the SVC:

You can see that the SVC appends an extra 00000 to the end of the UID value that comes from the v7000.

You can easily and safely unmap the volume from the v7000 if you need to take the space back.
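For example (a minimal sketch; the volume and MDisk names here are hypothetical), you can compare the two values from the CLI:

svcinfo lsvdisk volume_for_svc   (on the v7000, check the vdisk_UID field)

svcinfo lsmdisk mdisk7   (on the SVC, check the UID field; it should be the v7000 vdisk_UID with the extra zeros appended)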

VM

Put a Storwize (v7000) behind the SVC and don't be confused by what you see

When you set up your SAN configuration and zone the SVC and the v7000 together, you will see that two v7000 controllers appear. After checking on the internet I found that this is expected; maybe I should have checked this link first:

https://www.ibm.com/developerworks/mydeveloperworks/blogs/anthonyv/entry/virtualizing_a_storwize_v7000_with_an_ibm_svc?lang=en

I have a few extra words: all volumes presented from the v7000, regardless of which preferred owner is set, appear under the v7000 configuration node controller on the SVC, please see below.

Don't worry, load balancing is still working; check the SAN traffic to see for yourself.

Also, before configuring MDisks and MDisk groups, please read the best-practice paper before implementing, but here is a shortcut for you all.

On the v7000 side:

Create one MDisk group:

svctask mkmdiskgrp -ext 256 -name name_of_mdiskgroup -warning 80%

Create one array and assign it to that MDisk group:

svctask mkarray -drive 12:36:34:19:20:18:16:15 -level raid5 -sparegoal 1 name_of_mdiskgroup

The numbers are drive IDs; you can list them on the v7000.
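If you are not sure which drive IDs to use, listing them on the v7000 is a one-liner (a sketch of my usual check):

svcinfo lsdrive

The id column is what goes into the -drive parameter.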

On the SVC side:

You will see the MDisks under the v7000 controller; create an MDisk group from these MDisks and start creating volumes.
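As a rough sketch of that SVC side (the names, extent size and volume size below are just examples, not taken from my systems):

svctask detectmdisk

svcinfo lsmdisk

svctask mkmdiskgrp -name v7000_group -ext 256 -mdisk mdisk0:mdisk1

svctask mkvdisk -mdiskgrp v7000_group -iogrp 0 -size 100 -unit gb -name test_volume

detectmdisk rescans the fabric so the new v7000 MDisks show up, lsmdisk shows them as unmanaged, and the last two commands group them and carve a volume out of that group.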

 

My full commands:

 

IBM_2076:EMPIRE2:superuser>svctask mkmdiskgrp -name mdiskgroup1 -ext 32 -warning 80%

MDisk Group, id [0], successfully created

IBM_2076:EMPIRE2:superuser>svctask mkmdiskgrp -name mdiskgroup2 -ext 32 -warning 80%

MDisk Group, id [1], successfully created

IBM_2076:EMPIRE2:superuser>svctask mkmdiskgrp -name mdiskgroup3 -ext 32 -warning 80%

MDisk Group, id [2], successfully created

IBM_2076:EMPIRE2:superuser>svctask mkmdiskgrp -name mdiskgroup4 -ext 32 -warning 80%

MDisk Group, id [3], successfully created

IBM_2076:EMPIRE2:superuser>svctask mkmdiskgrp -name mdiskgroup5 -ext 32 -warning 80%

MDisk Group, id [4], successfully created

 

svctask mkarray -drive 12:36:34:19:20:18:16:15 -level raid5 -strip 128 mdiskgroup1

svctask mkarray -drive 14:17:35:46:40:39:42:45 -level raid5 -strip 128 mdiskgroup2

svctask mkarray -drive 43:44:37:38:47:0:41:24 -level raid5 -strip 128 mdiskgroup3

svctask mkarray -drive 11:10:8:7:31:30:29:33 -level raid5 -strip 128 mdiskgroup4

svctask mkarray -drive 32:28:27:26:6:5:3:2 -level raid5 -strip 128 mdiskgroup5
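To double-check the result after creating the arrays, I usually run something like the following (not part of the original session):

svcinfo lsarray

svcinfo lsmdiskgrp

The first lists the RAID arrays and their state, the second shows the capacity that each MDisk group received.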

 

VM

RDAC, ADT, AVT, HostType: what are these?

I decided to write this article after starting to introduce a DS3400 to the IBM SVC.

I already had DS5000 series storage before, had no problem introducing it to the SVC, and had no knowledge about ADT/AVT/HostType options 🙂 what a pity!

After searching tens of articles and pages I finally found a perfect document written by IBM China; I was really surprised, but congratulations! You can find the link at the end of the article.

Logical drives: you can call them LUNs; hosts see them as raw space.

Controller ownership: if you are using a dual-controller system, the I/O of each created LUN/logical drive is assigned to one of the controllers so the load is balanced. The controller that does the I/O for a LUN is the owner of that LUN.

AVT (Auto Volume Transfer), also referred to as ADT (Auto Disk Transfer): it is driver-level failover; it does not depend on the controller level.

RDAC (Redundant Disk Array Controller): it is multipathing software that provides controller redundancy; if one of the controllers goes down, all I/O requests are forwarded directly to the other controller. Some operating systems do not use RDAC, for example Windows, which has its own multipathing software, generally called native MPIO.

AVT with RDAC: AVT helps make sure that the logical drive/LUN really switches, so that the secondary controller is ready to handle the LUN's I/O if the primary controller goes down.

HostType (Linux, IBM TS SAN VCE, AIX, HP-UX)

If you want to enable or disable AVT, please use commands like the ones below from your DS Storage Manager: choose the storage subsystem and, from the Tools menu, select Execute Script.

Execute the line below if your selected host type is Linux:

show controller [a] HostNVSRAMBYTE [5,0x24];

You can change the first digit (which is 5 in this example); please check the PDF, section "4.3. How to enable and disable AVT", to find the related index numbers.
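For completeness, the matching set command should look roughly like the line below; this is my assumption based on the same document, so please verify the exact index and value against the PDF before running it:

set controller [a] HostNVSRAMBYTE [5,0x24]=0x00;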

You can see the ADT option with the script below:

show storageSubsystem hostTypeTable;

Please be careful at upgrade time or on first installation: different DS storage series and different firmware versions can have different AVT defaults for the same host type; this is what I have seen on my systems.

Related Doc

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/5cb5ed706d254a8186256c71006d2e0a/ed9dc8b295aa3a80862577a80060240b/$FILE/ATTCUP21/IBM%20DS4000%20Auto%20Volume%20Transfer%20Usage%20with%20RDAC.pdf

I hope this article summarizes many things.

VM

lsfabric output active/inactive issue with EMC VNX series

Hi,
We found that the SVC does not correctly report EMC VNX fabric logins as active or inactive. After a long time IBM developers found the cause, and they will fix it.
VM
APAR IC80749:
ERROR DESCRIPTION:
On controllers (for example EMC Clariion) that have fibre
channel logins as both target and initiator the active/inactive
state may be incorrectly displayed.
The reason is that both the target logins (which are being used)
and the initiator logins (which are not) may update the status.
Hence the status may be shown as “inactive” because the
initiator logins are not active, while the target ones are in
fact being used.
CMVC 142997 is tracking this.
LOCAL FIX:
No local fix or workaround.
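If you want to see the (possibly wrong) state yourself, the login table can be dumped with a command like the one below and then filtered for the VNX controller WWPNs (a sketch, not taken from the APAR):

svcinfo lsfabric -delim :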

VNX Disk Failure Causes IBM SVC Cluster Error

Yesterday we saw that disk failures on the VNX side are affecting the IBM SVC.

Yes, very interesting, but it is true. You can find the related explanation below (please check step 4, because until that step you believe it is a controller issue rather than a disk issue). I believe this is a really hard issue for storage admins, who have to fix both sides: first fix the VNX, then fix the IBM SVC. The same issue does not occur with the DS series storages.

Error ID = 10011 : Remote Port excluded for a specific Managed Disk
Error Code = 1220 : Remote FC port excluded

THIS IS AN EXTERNAL ISSUE REPORTED BY THE SVC. NOT AN SVC FAULT

Possible Cause: A remote fibre-channel port has been excluded.

SAN Volume Controller 2145-8G4: N/A

SAN Volume Controller 2145-8F4: N/A

SAN Volume Controller 2145-8F2: N/A

SAN Volume Controller 2145-4F2: N/A

Other: Enclosure/controller fault (50%); Fibre-channel network fabric (50%)
Action:
1. View the error log. Note the MDisk ID associated with the error code.
2. From the MDisk, determine the failing disk controller ID.
3. Refer to the service documentation for the disk controller and the fibre-channel network to resolve the reported problem.
4. After the disk drive is repaired, start a cluster discovery operation to recover the excluded fibre-channel port by rescanning the fibre-channel network.
5. To restore MDisk online status, include the managed disk that you noted in step 1 (see the CLI sketch after this list).
6. Check the status of the disk controller. If all disk controllers show a "good" status, mark the error that you have just repaired as "fixed".
7. If all disk controllers do not show a good status, contact your support center to resolve the problem with the disk controller.
8. Go to repair verification MAP.
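For steps 4 and 5 above, the SVC CLI commands are typically something like the following (the MDisk ID is an example):

svctask detectmdisk

svctask includemdisk mdisk12

detectmdisk rescans the fibre-channel network and recovers the excluded port, and includemdisk puts the excluded MDisk back online.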

VM

Consider this before resizing a LUN when you are using DS5000 series storage

Last week I disrupted my customer's service while trying to extend a LUN on an IBM DS5000 series storage.

Really, I did not want to do it, but I did not know the design of the storage or how it handles the related process, because I just assumed that every storage does the same thing when extending a LUN, with zero impact on performance, like NetApp or the IBM SVC product.

Actually, some of the DS series, like the DS3000 and DS5000, are not IBM products; they are LSI ones, but LSI was acquired by NetApp, so they are NetApp products now!

The issue is that when you extend a LUN, IBM DS3000/5000 series storage has to move the next LUN before it can extend the space. Why? Please see below: LUN X1 was created and its space sits between offset 0x0 and 0x16800000. The second LUN, Y1, starts at offset 0x16800000, so if I want to extend X1 the storage has to move Y1, and this impacts performance; please make sure it is not the peak time of your application or service. If you ask why it works this way, it is about the performance and design of the storage; the IBM support line did not want to explain it until I complained about the performance degradation.

LUN X1

Offset: 0x0                    Length:     0x16800000 (377487360 dec)

LUN Y1

Offset: 0x16800000             Length:     0xb1cb000 (186429440 dec)
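To spell out the arithmetic: X1 ends at 0x0 + 0x16800000 = 0x16800000, which is exactly the offset where Y1 starts, so there is no free gap behind X1. To grow X1, the storage first has to relocate the whole of Y1 (length 0xb1cb000, i.e. 186429440), and that background move is what causes the performance impact.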

VM