Think twice before purchasing a VNX
After using three or four different storage vendors, you really start to understand what's happening 😀
EMC is the leading storage vendor and sells more storage than the others, but that does not mean it's perfect. If you are thinking about purchasing one, think twice…
(I'm writing this article on 10 March 2012, with a first update on 28 March 2012; if they improve something later, please don't hold my words against me 🙂 AND PLEASE DO NOT FORGET THAT I WORK AT AN ISP, AND INTERNET SERVICE PROVIDERS ARE DIFFERENT FROM ENTERPRISE CUSTOMERS)
Update 3 Sep 2012: a new update… the new subject is: Think a third time before purchasing a VNX
- The Block software and the File Server software must be upgraded together: if you upgrade one, you MUST upgrade FLARE too! That means if the Block side has an issue, you end up with a double upgrade and 5-6 minutes of downtime for File Services. (The good part is that the EMC guys perform the upgrade, which you may like, but it takes too long; I can upgrade my IBM SVC cluster in 30 minutes, and that's it.)
- Think again, again, and again when you set the RAID level, because you cannot change it later. Why?! Because… I don't know; ask the EMC experts, they always have something to tell you.
- If you want performance, then forget about using Disk Pools, because when you have a problem and send NAR files (a kind of debug log) to EMC Support, they will say: 'The pool configuration is just used for the effective storage management, it wouldn't benefit the storage performance.'
- If you will not use Disk Pools, that means you should be ready to categorize your workloads and purchase many more disks than you thought you needed.
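To show why sizing traditional RAID groups by IOPS quickly demands more spindles than you expect, here is a rough back-of-the-envelope sketch. The per-disk IOPS figure, the workload mix, and the RAID 5 write penalty of 4 are my own illustrative assumptions, not numbers from EMC:

```python
import math

# Rough disk-count estimate when sizing a RAID group by IOPS, not capacity.
# All figures below are illustrative assumptions, not vendor-published numbers.

def disks_needed(host_iops, read_fraction, raid_write_penalty, disk_iops):
    """Back-end IOPS = reads + writes * RAID write penalty; round up to spindles."""
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    backend_iops = reads + writes * raid_write_penalty
    return math.ceil(backend_iops / disk_iops)

# Example: 5000 host IOPS, 70% reads, RAID 5 (assumed write penalty of 4),
# 15k RPM FC disks at an assumed ~180 IOPS each.
print(disks_needed(5000, 0.70, 4, 180))  # 53
```

Notice how the write penalty inflates the back-end load: 5000 host IOPS becomes 9500 back-end IOPS, so you need 53 spindles for performance even if far fewer would satisfy the capacity requirement.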
- Try to do everything through Professional Services, because in the end they will say: 'Suggest to engage the EMC Professional Service to reconfigure the storage.'
- Try to get bigger controllers than you expect to need, and try to purchase the maximum cache, because in the end you will start dealing with cache watermarks: if the cache is small and fills up quickly, host I/O will stop for a while, and this will cause delays.
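The watermark behaviour described above can be sketched as a toy state machine. The percentages here are illustrative, not EMC's actual defaults: once the write cache fills past a high watermark, the array must flush aggressively (forced flushing), and host writes queue behind the destage until the cache drains back below a low watermark:

```python
# Toy model of write-cache watermark behaviour. The thresholds are
# illustrative assumptions, not actual VNX defaults.

HIGH_WATERMARK = 80  # % full: start forced flushing
LOW_WATERMARK = 60   # % full: stop forced flushing

def cache_state(fill_pct, forced_flush_active):
    """Return (new_forced_flush_state, host_io_delayed)."""
    if fill_pct >= HIGH_WATERMARK:
        forced_flush_active = True
    elif fill_pct <= LOW_WATERMARK:
        forced_flush_active = False
    # While forced flushing is active, incoming host writes are delayed.
    return forced_flush_active, forced_flush_active

print(cache_state(85, False))  # (True, True)   -> host I/O delayed
print(cache_state(70, True))   # (True, True)   -> still draining
print(cache_state(55, True))   # (False, False) -> back to normal
```

A small cache crosses the high watermark more often under the same write burst, which is exactly why a bigger cache buys you fewer of these stalls.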
- Do not put too much faith in SSDs as FAST Cache, and do not assume that adding more and more SSD drives to FAST Cache will help. You will expect the system to use close to 90% of the FAST Cache SSD layer, but in our environment it stays around 40-50%, which for us means we did not need it. And you will get an answer like this from the EMC guys: 'SSD drives are more efficient especially if your applications have small, random and read weighted I/Os.' SORRY 😦 but those words are wrong. Please check your Analyzer data carefully to determine how many cache misses you have; if there are a lot, then you should increase the number of SSD drives in FAST Cache. Also, EMC's recommendation is to size FAST Cache at 5% of total storage capacity.
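The two numbers in that bullet, the 5% sizing rule from EMC and the 40-50% utilization we actually observed, can be put side by side in a quick sketch. The example array size is my own assumption for illustration:

```python
# Apply the 5% FAST Cache sizing rule of thumb, then see how much of that
# SSD capacity sits idle at the utilization we observed. The 100 TB array
# is an assumed example, not a real configuration.

def fast_cache_size_gb(total_capacity_tb, pct=0.05):
    """EMC's rule of thumb: FAST Cache ~= 5% of total capacity."""
    return total_capacity_tb * 1024 * pct

def idle_fast_cache_gb(configured_gb, observed_utilization):
    """SSD capacity you paid for that the cache never fills."""
    return configured_gb * (1 - observed_utilization)

size = fast_cache_size_gb(100)        # 100 TB array -> 5120 GB of FAST Cache
print(size)                           # 5120.0
print(idle_fast_cache_gb(size, 0.45)) # 2816.0 -> over half the SSDs idle at 45% use
```

In other words, if utilization plateaus around 45% the way ours did, roughly half of the FAST Cache spend is wasted, which is why checking the Analyzer cache-miss data first matters.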
- Dedup? NO, they do not have it.
- Please do not confuse compression with dedup. You should only compress data that will not be used any more, or that you might need ten years from now, because on every access EMC has to decompress it before use, and that loads the CPU. EMC does not have an ASIC like 3PAR does, so please don't tire it out 🙂
- DO NOT, please DO NOT create a thin LUN in production. EMC says there could be 40-50% performance degradation, and that they are preparing a firmware update around June to improve thin performance, but our EMC contact says: do not use thin in production. It also eats memory from the SP caches 🙂
- If you have the money, try to purchase more disks, or if you still want to create a pool, purchase SSDs and put them in the pool together with FC and SATA, because EMC's tiering is not working well. The rule is very simple: if the performance tier has free space, use it. A very easy decision, but it is not what I want! Also, data has to move between tiers in 1 GB chunks; what if only 20 MB of that chunk actually needs to stay on tier zero?!
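The 1 GB relocation granularity complaint above is easy to quantify: a hot region smaller than a slice still drags the whole slice up to the performance tier. A minimal sketch, using the 20 MB hot-spot case from the text:

```python
import math

# FAST VP relocates data in 1 GB slices. If only a small hot region lives
# inside a slice, the entire slice gets promoted to the performance tier.

SLICE_MB = 1024  # 1 GB relocation granularity

def promoted_vs_hot(hot_mb):
    """Return (MB promoted, ratio of promoted capacity to truly hot data)."""
    slices = math.ceil(hot_mb / SLICE_MB)
    promoted_mb = slices * SLICE_MB
    return promoted_mb, promoted_mb / hot_mb

print(promoted_vs_hot(20))  # (1024, 51.2)
```

So for 20 MB of genuinely hot data, a full 1024 MB is promoted: roughly 51x more tier-zero capacity consumed than the workload actually needs, which is why small hot spots make the SSD tier fill with cold data.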
- Update 3 Sep 2012: Be careful when you upgrade the system; if you have a heavily loaded system, think more than twice. Last week we did an upgrade to the latest version, and SPA did not come back because of an LCC upgrade; for more than half a business day we ran on a single SP. It is back and working well now, very nice 🙂
The world is going in another direction; please keep following my posts, and I will tell you about it.