Last login: Thu Mar 8 00:29:20 2012
-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8)
Check /etc/environment ([root@Centos6x64-001 ~]# cat /etc/environment); it should be empty.
[root@Centos6x64-001 ~]# vi /etc/environment –> add the related line below, that's it.
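The "related line" itself is not shown in the original post. A commonly used fix for this warning on CentOS (my assumption; replace en_US.UTF-8 with whatever locale you actually use) is to define the locale variables explicitly in /etc/environment:

```shell
# Assumed content for /etc/environment (the exact line is missing from
# the original post; en_US.UTF-8 is only an example locale):
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
```

After adding it, log out and back in; pam_env exports these variables for every login session and the setlocale warning should disappear.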
After using more than 3 or 4 different storage vendors, you can really understand what's happening 😀
EMC is the leading storage vendor and sells more storage than the other vendors, but this does not mean it is perfect; if you are thinking of purchasing, think twice …
(I'm writing this article on 10 March 2012, with a first update on 28 March 2012; if they improve something, please do not hold my words against me 🙂 AND PLEASE DO NOT FORGET THAT I WORK AT AN ISP, AND INTERNET SERVICE PROVIDERS ARE DIFFERENT THAN ENTERPRISE CUSTOMERS)
Update 3 Sep 2012: new subject: Think a third time before purchasing a VNX
- Block device software and File Server software should be upgraded together: if you upgrade the Block software you MUST upgrade FLARE too! This means that if the Block device has an issue, you have a double upgrade and 5-6 minutes of downtime for File Services. (The good thing is that the EMC guys perform the upgrade, maybe you will like this, but it takes too much time; I can upgrade my IBM SVC cluster in 30 minutes, that's it.)
- Think again, again and again when you set the RAID level, because you cannot change it later. Why?! Because… I don't know, ask the EMC experts, they always have something to tell you.
- If you want performance, then forget about using Disk Pools, because when you have a problem and send NAR files (a kind of debug log) to EMC Support, they will say 'The pool configuration is just used for the effective storage management, it wouldn't benefit the storage performance.'
- If you will not use Disk Pools, then be ready to categorize your workload and purchase many more disks than you planned.
- Try to do everything with professional services, because in the end they will say 'Suggest to engage the EMC Professional Service to reconfigure the storage.'
- Try to get bigger controllers than you expect to need, and purchase the maximum cache, because eventually you will start dealing with cache watermarks: if the cache is low and fills up quickly, host I/O will stop for a while, and this will cause delays.
- Do not believe in SSDs as Fast Cache, and do not think that adding more and more SSD drives as Fast Cache will help. You will expect the system to use close to 90% of the Fast Cache SSD layer, but in our environment it stays around 40-50%, which for us means we did not need it. You will get an answer from the EMC guys like 'SSD drives are more efficient especially if your applications have small, random and read weighted I/Os.' SORRY 😦 but those words are wrong; please check your analyzer carefully to determine how many cache misses you have. If there are a lot, then you should increase the number of SSD drives in Fast Cache. Also, EMC's recommendation is that the Fast Cache size be 5% of the total storage space.
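As a back-of-the-envelope check of that 5% sizing guideline, a quick calculation (the 60 TB array size is an invented example, not a figure from our environment):

```shell
# Hypothetical example: apply the "Fast Cache = 5% of total capacity"
# guideline to an imaginary 60 TB (61440 GB) array.
TOTAL_GB=61440
FAST_CACHE_GB=$((TOTAL_GB * 5 / 100))
echo "Suggested Fast Cache size: ${FAST_CACHE_GB} GB"   # 3072 GB
```

So on a 60 TB box the rule of thumb already asks for ~3 TB of SSD, which is why it is worth checking the analyzer's cache-miss numbers before buying.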
- DeDup – NO, they do not have it
- Please do not confuse compression with dedup: you should only compress something if it will not be used any more, or maybe will be needed 10 years later, because at every access EMC has to decompress it before use. This hits the CPU; EMC does not have an ASIC like 3PAR, please do not tire it 🙂
- DO NOT, please DO NOT create a thin LUN in production. EMC says there could be 40-50% performance degradation and that they are preparing a firmware update for June or so to improve thin performance, but our contact at EMC says: do not use thin in production; it also eats memory from the SP caches 🙂
- If you have the money, try to purchase more disks, or if you still want to create a pool, purchase SSDs and put them in the pool with the FC and SATA tiers, because EMC tiering is not working well. The rule is very simple: if the performance layer is empty, use it. A very easy decision, but that is not what I want! Also, chunks must be 1 GB to move between tiers; how much of that data really needs to stay on tier zero, maybe only 20 MB of it?!
- Update 3 Sep 2012 – Be careful when you upgrade the system; if you have a heavily loaded system, think more than twice. Last week we did an upgrade to the latest version and SPA did not come back because of the LCC upgrade, and for more than half a business day we ran on a single SP. It is back and working well right now, very nice 🙂
The world is going another way; please keep following my posts, I will write about that.
I decided to write this article after starting to introduce a DS3400 to IBM SVC.
I already had DS5000 series storage before, had no problem introducing it to SVC, and had no knowledge about ADT/AVT/HostTypeOptions 🙂 what a pity!
After searching through tens of articles and pages, I finally found a perfect document written by a Chinese team at IBM; I was really surprised, but congratulations! You can find the link at the end of the article.
Logical Drives: you can call them LUNs; hosts see them as raw space.
Controller Ownership: if you are using a dual-controller system, the I/O of each created LUN/logical drive is assigned to or shared across different controllers for balance. The controller that handles a LUN's I/O is the owner of that LUN.
AVT (Auto Volume Transfer), also referred to as Auto Disk Transfer (ADT), is driver-level failover; it does not depend on controller-level failover.
RDAC (Redundant Disk Array Controller) is multipathing software that provides controller redundancy: if one controller goes down, all I/O requests are forwarded directly to the other controller. Some operating systems do not use RDAC, for example Windows, which has its own multipathing software, generally called native MPIO.
AVT with RDAC: AVT helps ensure that the logical drive/LUN really switches, so that the secondary controller is ready to handle the LUN's I/O if the primary controller goes down.
HostType (Linux, IBM TS SAN VCE, AIX, HP-UX)
If you want to enable or disable AVT, please use commands like the following from your DS Storage Manager: choose the storage subsystem and, from the Tools menu, select Execute Script.
Execute the command below if your selected host type is Linux:
show controller [a] HostNVSRAMBYTE [5,0x24];
You can change the first digit (which is 5 in this example); please check the PDF, section "4.3. How to enable and disable AVT", to find the related index numbers.
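The show command above only displays the current value. Based on the referenced IBM document (section 4.3), the corresponding set commands look roughly like the sketch below; this is my reconstruction, so verify it against the PDF for your firmware level. Index 5 (Linux) follows the example above, 0x00 disables AVT and 0x01 enables it, and NVSRAM changes normally require a controller reset to take effect:

```
set controller [a] HostNVSRAMBYTE [5,0x24]=0x00;
set controller [b] HostNVSRAMBYTE [5,0x24]=0x00;
reset controller [a];
reset controller [b];
```

Reset the controllers one at a time so that host I/O can fail over to the surviving controller.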
You can see the ADT options with the script below:
show storageSubsystem hostTypeTable;
Please be careful at upgrade time and at first installation: different DS storage series and different firmware versions can have different AVT options for the same host type; this is what I have seen on my systems.
Hope this article summarizes many things.
We had a problem with an isolation-address ping issue. Somebody might say "you stupid," but I would like to write this article for anyone who hits the same issue and, like us, forgot about or could not imagine the problem.
Short explanation: when you create a VMware vSphere 4/5 cluster, by default the system checks the isolation address, which by default is the ESX/ESXi node/host gateway, which is usually your firewall 🙂
When you install a FortiGate and configure the "Administrator" settings like below (the black line indicates the allowed IP addresses that can access the FortiGate box for management).
The related picture indicates that the inside interface is allowed to ping.
You might think that the inside interface is open for PING, but don't forget: because you activated Administrative access, which was 0.0.0.0/0 (any) by default, and restricted it to a few IP addresses that are the only ones allowed to access the Forti management, now no ESX/ESXi node/host can PING its own gateway, and you will get isolation error messages.
Please add the IP network of your ESX/ESXi nodes/hosts into the Administrative access section, and then you are done!
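For reference, the same fix from the FortiGate CLI looks roughly like this sketch (the admin name "admin", the trusthost2 slot, and the 10.10.10.0/24 ESX management network are my assumptions; adapt them to your own setup and check the CLI reference for your FortiOS version):

```
config system admin
    edit "admin"
        set trusthost2 10.10.10.0 255.255.255.0
    next
end
```

Each trusthostN entry widens the set of source networks allowed to reach the management interfaces, so adding the ESX management network here lets the hosts ping their gateway again.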