Monthly Archives: November 2011

VNX Does Not Support MCS (Multiple Connections per Session)

Sometimes it's hard to adapt to VNX if you are coming from NetApp.
VNX does not support MCS-type iSCSI connections; only MPIO is allowed. The reasoning below is from EMC Global Services.

To answer your question, MCS is not supported on EMC.

You cannot aggregate the iSCSI ports, and they should be on separate subnets for redundancy.

MCS was designed to help with failover at the iSCSI level.

Failover is instead handled by failover software on the OS, either MPIO or PowerPath.

MC/S was designed at a time when most OSes didn't have a standard OS-level multipath. Instead, each vendor had its own implementation, which created huge interoperability problems. So one of the goals of MC/S was to address this issue and standardize multipathing in a single standard. But nowadays almost all OSes have OS-level multipath implemented using standard SCSI facilities, so this purpose of MC/S isn't valid anymore.
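In practice, this means that on a Windows host you point the native MPIO stack at the iSCSI-attached disks rather than opening multiple connections per session. A minimal sketch, assuming the MPIO feature is already installed (the device string below is the standard identifier Windows uses for iSCSI-attached devices):

    rem Claim all iSCSI-attached disks for Microsoft MPIO (-r allows the reboot it may need)
    mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

    rem Afterwards, list the disks MPIO has claimed
    mpclaim -s -d

From there, the load-balancing policy (fail-over only, round robin, etc.) is set per disk, which is exactly the MPIO behavior EMC expects instead of MCS.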

You can find the differences between MCS and MPIO here:

Article on Windows IT Pro

Microsoft's testing found no performance difference between the two.

This looks like a good explanation.

VM

Consider Carefully Before You Resize a LUN on DS5000 Series Storage

Last week I hurt my customer's service while trying to extend a LUN on an IBM DS5000 series storage array.

I really didn't want that, but I didn't know this storage's design or how it handles the process; I just assumed every storage array does the same thing when extending a LUN, with zero performance impact, like NetApp or the IBM SVC product.

Actually, some of the DS series, like the DS3000 and DS5000, are not IBM products but LSI ones, and since LSI's storage business was acquired by NetApp, they are NetApp products now!

The issue: when you extend a LUN, the IBM DS3000/5000 series storage has to move the next LUN before it can extend the space. Why? See below. LUN X1 was created with its space between offset 0x0 and 0x16800000. The second LUN, Y1, starts at offset 0x16800000. So if I want to extend X1, the storage has to move Y1 first, and this impacts performance; make sure you do it outside the peak hours of your application or service. If you ask why it works this way, it comes down to the performance-oriented design of the storage, which the IBM support line did not want to explain until I complained about the performance degradation.

LUN X1

Offset: 0x0                    Length:     0x16800000 (377487360 dec)

LUN Y1

Offset: 0x16800000             Length:     0xb1cb000 (186429440 dec)
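To spell out the arithmetic: X1 ends at offset 0x0 + 0x16800000 = 0x16800000, which is exactly where Y1 begins. The LUNs are laid out back to back with no free space between them, so any in-place extension of X1 would overlap Y1, and the array has to migrate all of Y1 elsewhere first; that migration is the performance hit.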

VM

Is VNX Really a Windows-Based Storage? Is It Windows?

Please watch the video and you will discover:
there are so many Windows Storage Server boxes running all around the world.
Wow…

Sorry for the video quality; I was in shock…

VM

Spanning-Tree or No Spanning-Tree, that is the question

Today's big issue in networking is Spanning-Tree, and vendors are focused on handling it with TRILL.

ToR (Top of Rack) installations are very popular, and there are still ways to tame STP without buying Nexus- or VDX-like solutions, saving money.

Personally, I had never thought of connecting ToR switches to the core or distribution switch with link aggregation; I had always used aggregation only toward my servers, for bandwidth or network fault tolerance. But a friend of mine, Levent OGUT, who is in London and works on Juniper, said: aggregate your ToR switch uplinks to the core/distribution switch to eliminate a huge part of the STP process.
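A minimal Cisco IOS-style sketch of the idea, with hypothetical interface numbers (the same concept applies on other vendors):

    ! On the ToR switch: bundle both uplinks into one logical port-channel,
    ! so STP sees a single link instead of blocking one of two parallel paths
    interface range GigabitEthernet0/49 - 50
     channel-group 1 mode active
    !
    interface Port-channel1
     switchport mode trunk

With LACP ("mode active") the two physical uplinks behave as one STP port, so nothing is blocked and both links carry traffic.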


IBM SVC Really Supports VNX, and VNX Works Together with IBM SVC

A month later, the VNX 5500 and IBM SVC are working together the way I want, apart from a little status issue that causes no real difficulty, though I will investigate it.

The story started when I created a ticket with IBM to be sure about support, and the L2 answer was that IBM does not support the VNX 5500 😦
The L2 reason was that the CX5 and the related FLARE version are not listed on http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003797
But the web site says they support "EMC CLARiiON CX-series models", and the VNX is not such a different thing from the CX-5 and Celerra; it's only a bundled solution: http://www-03.ibm.com/systems/storage/software/virtualization/svc/specifications.html
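If you want to double-check what the SVC itself has detected on the back end, the SVC CLI will show it (a minimal sketch; the cluster address is hypothetical):

    ssh admin@svc-cluster
    svcinfo lscontroller        # back-end storage controllers SVC has discovered
    svcinfo lscontroller 0      # detail view of one controller
    svcinfo lsmdisk             # MDisks presented by the back-end arrays

A CLARiiON/VNX back end typically shows up with the vendor ID DGC in the controller details.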


Why Hot-Plug Modules but Not Plug-and-Play I/O Modules

Right now Gökhan and I are working on introducing an iSCSI I/O module into a VNX storage array. We expected that we could plug in the iSCSI card and go, without stopping I/O, but no! We have to reboot both storage processors together to initialize the cards. Somebody may ask what we expected, or what happens on IBM or NetApp storage; please see below.

Check the link http://www.emc.com/storage/vnx/vnx-series.htm; it says AUTOMATED. What is automated? I'm still up, waiting for the steps to finish; it's taking close to an hour.

Why do recognized I/O modules need an SP restart to be initialized? This is the cloud world, and cloud storage needs to be more flexible; I believe the programmers and the logic could handle it.

Is it really that hard to power the bus manually, recognize the I/O module, and initialize it, the way kudzu does in Linux?
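For comparison, on an ordinary Linux box the kernel can pick up new PCI hardware without a reboot, through sysfs (a minimal sketch; the device address is just an example):

    # Ask the kernel to rescan the PCI bus and initialize anything new it finds
    echo 1 > /sys/bus/pci/rescan

    # Or remove a single device and re-probe it (address is hypothetical)
    echo 1 > /sys/bus/pci/devices/0000:03:00.0/remove
    echo 1 > /sys/bus/pci/rescan

Of course an SP's I/O module involves firmware and failover state, not just a PCI probe, but this is the kind of flexibility I'm asking for.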

 

Hello World !

This is really a hello-world post. This blog was created mostly to share tested and running production configurations (so people don't feel alone), interoperability crap, ideas, criticism, and some HOWTOs.

Hope you like it all!