Centera Update – 12TB nodes now available

December 24, 2012 1 comment

Here’s one that slipped in under the radar. With EMC’s focus for archiving being on Isilon and Atmos these days, it wasn’t well publicized that a new Gen4LP (LP = Low Power = 5400RPM drives) node is now available. I didn’t realize it myself until I was filling out a Centera change control form and noticed a new option. The 12TB G4LP nodes use 3TB drives internally; other than that, I doubt there is much change to the hardware. One thing to note: you cannot add these to an existing cube that is built on previous G4LP nodes. 12TB nodes can only be in a cube by themselves.

I’ve commented before that I believe Centera is still a very legitimate platform: https://hoosierstorage.wordpress.com/2011/05/04/does-archiving-to-centera-or-cas-still-matter-2/. Many Centera customers are struggling with the decision of whether to move to Isilon or Atmos. While either one may make sense in some cases, in other cases there’s nothing wrong with sticking with Centera. It’s a pretty painless migration to do something like a CICM or C2C, and certainly less effort and cost will be involved than migrating to an alternative technology. Yes, Centera is somewhat “proprietary”, especially now that EMC has ended XAM development, but if EMC is a preferred manufacturer then there isn’t much to worry about. You can rest assured that EMC is going to support this platform for a minimum of 5 years once the node hardware goes end of sale, as they do with all hardware (except Symm, which is longer). Even in the unfathomable scenario where EMC went out of business, there are multiple 3rd parties that can migrate data off Centera now. The 12TB node will offer a pretty attractive refresh TCO based on the hardware refreshes I’ve seen going from G3/G4 to Gen4LP 8TB nodes. Then, in 5-7 years when it’s time for another tech refresh, hopefully the debate between Isilon and Atmos as the preferred archiving platform will be over 🙂

Release announcement:

Product: Centera CentraStar v4.2 SP2 (4.2.2) and 12TB Node Hardware

General Availability Date: Nov 19, 2012

Product Overview

Centera is a networked storage system specifically designed to store and provide fast, easy access to fixed content (information in its final form). It is the first solution to offer online availability with long-term retention and assured integrity for this fastest-growing category of information. CentraStar is the operating environment that runs the Centera cluster.

New Feature Summary

The following new features and functionality are available within CentraStar v4.2.2:

  • The introduction and support for GEN 4LP 12TB nodes utilizing 3TB drives.
  • Improved security through an updated SUSE Linux platform (SLES 11 SP2) and Java updates.
  • Consolidation of the CentraStar and Centera Virtual Archive software into one package for improved installation and maintenance.

 

Categories: Archive, EMC

VMware vDP and Avamar – Blown out of Proportion

October 8, 2012 2 comments

The dust has settled a bit since the announcement of vSphere 5.1, including the new vSphere Data Protection (vDP) functionality based on EMC Avamar code.   Immediately following the announcement there were:

  • EMC folks reporting this proves Avamar is the greatest thing since sliced bread because VMware chose Avamar Virtual Edition (AVE) as the basis for vDP
  • VMware folks stating vDP only leverages Avamar technology – it is a new product co-developed by VMware and EMC rather than AVE with a new GUI.
  • Critics/competitors saying either that the two are completely different products and this announcement doesn’t mean anything, or that it means the world will be running Hyper-V in 12-18 months as EMC takes over VMware and fails miserably.

What’s my opinion?  Being a middle-of-the-road guy, naturally I think both the far left and right are blowing things out of proportion and VMware employees were generally the most accurate in their assessments.    

We can hold these things to be self-evident:

  • vDP is a virtual appliance.  AVE is a virtual appliance.   One would find it highly unlikely that VMware would completely re-write the virtual appliance used for vDP, but we don’t know for sure.
  • The vDP GUI is a heck of a lot simpler to manage for the average SMB shop than AVE.  EMC needs to learn a lesson here and quickly – not just for SMB customers but also Enterprise customers running full-blown Avamar. 
  • vDR (VMware Data Recovery, the predecessor) was getting a little bit better, but a scan of the VMware Community Forums quickly showed it was a poor product.  Even the smallest of SMB shops did not like it and usually ended up going the Veeam route after struggling to get vDR working.
  • Avamar does have best-in-class de-duplication algorithms, so it’s not hard to accept the argument that VMware evaluated different de-dupe technologies and picked Avamar’s to be the nuts and bolts under vDP.
  • I wouldn’t try to outsmart Joe Tucci.  We might see some pushing of the envelope with regards to the EMC-VMware relationship, but he’s not going to screw this thing up. 

 

Questions in my mind…

  • AVE was very performance hungry.  In fact, before install it required a benchmark test be run for 24-48 hours that was very disk intensive.  If certain specs were not met, EMC would not support the AVE configuration.    This is why EMC almost always sells Avamar as a HW/SW appliance.   In my mind, the typical vDP user is probably going to use some very low-cost storage as the backup repository.  I wonder how this product is going to perform unless some significant performance enhancements were made to the vDP product relative to AVE. 
  • Even the smallest of SMBs typically want their backups stored off-site, and vDP doesn’t offer any replication capability, nor does it offer any sort of tape-out mechanism.  Is this really a practical solution for anybody nowadays?
  • Is there an upgrade path from vDP to full Avamar?  I’ve seen EMC employees post in their blogs that there is a clear upgrade path if you outgrow vDP, while every other post I’ve seen says there is no upgrade path.  I’ve not been able to find any official documentation about the upgrade path.  Which is it, and is there an expensive PS engagement involved?

 

All in all, the providers of SMB-oriented VMware backup solutions such as Veeam don’t have much to be worried about yet.    It’s a strange world of “coopetition” that we live in today.   EMC and VMware cooperating on vDP.  VMware partnering with all storage vendors, yet being majority owned by EMC.    EMC partnering closely with Microsoft and beefing up Hyper-V support in all their products.   All storage vendors partnering closely with Oracle, but Oracle getting into the storage business.   Cisco partnering with NetApp on FlexPod and also with VCE on vBlock.  EMC pushing Cisco servers to their clients but also working with Lenovo for some server OEM business.      The list goes on and all indications are this is the new reality we will be living with for some time.  

What would I do if I were Veeam or another provider of SMB backup for virtual machines?  Keep innovating like crazy, as Veeam has done.  It’s no different than what VMware needs to keep doing to ensure they stay ahead of Microsoft.  Might I suggest, for Veeam specifically: amp up the “coopetition” and build DD BOOST support into your product.  Data Domain is the best-in-class target de-dupe appliance with the most market share.  Unfortunately, the way Veeam and DD work together today is kludgey at best.  Although Veeam can write to NFS storage, it does not work well with an NFS connection directly to the DD appliance.  Rather, it is recommended to set up an intermediary Linux server to re-export the NFS export from the DD box.  A combination of Veeam with DD BOOST and something like a DD160 for the average SMB shop would be a home run and crush vDP as a solution any day of the week.  I have heard that Quest vRanger recently built support for DD BOOST into their product, and it will be interesting to see if that remains now that Quest has been purchased by Dell.

 

 

A look at how Inyo (VNXOE 32) optimizes a VNX configuration

October 2, 2012 Leave a comment

I previously discussed EMC’s new release of code for VNX here.  As I catch up on writing blog articles following my wedding last month, I thought I’d highlight one real customer situation I was working on in August, where the improvements saved a considerable amount of money, increased the usable capacity percentage, and offered greater IOPS capability.  This is a modified version of an email I sent to the customer explaining the changes I made in the configuration.

The previous config consisted of a VNX5300 with 37 SAS drives and 31 NL-SAS drives.   The new config consisted of 37 SAS drives and 33 NL-SAS drives.

In short:

  • The previous config required the storage pool to use RAID6 for all drives, which is needed for the high-capacity 7200RPM drives but is a bit overkill for 15K drives and reduces usable capacity on those drives. The new config is able to support mixed RAID types in the same FAST storage pool.
  • EMC has now blessed new RAID layouts as best practice. Underneath the storage pool covers, the 15K drives use an 8+1 RAID5 protection scheme instead of 4+1, and the 7200RPM drives use 14+2 RAID6 instead of 8+2 (Note: the default previously was 6+2, but there was a workaround documented by penguinpunk here to use 8+2, which I’ve confirmed with EMC still works). It is still presented as a single storage pool.
  • The previous config offered 110TB raw and 76TB usable, or 69% usable. The new config is 100TB raw and 76TB usable, or 76% usable. The parity math behind that difference is sketched after this list.
    • The previous config offered 12TB usable on the 15K tier and 64TB usable on the 7200RPM tier. The new config is 13TB usable on the 15K tier and 63TB usable on the 7200RPM tier.
  • The new config offers 6,300 IOPS from the 15K drives and 2,880 IOPS from the 7200RPM drives, for a total of 9,180 disk IOPS, plus 7,000 IOPS from FAST Cache. The previous config offered 6,480 IOPS from 15K and 2,700 IOPS from 7200RPM, also 9,180 disk IOPS.
    • Though this is the same number of front-end IOPS, the new config will deliver more effective IOPS because fewer back-end IOPS are consumed by RAID6 parity writes on the 15K drives.
  • The storage pools will now automatically load-balance within a tier if there is a hot spot within an 8+1 or 14+2 RAID group. In other words, if one of the 8+1 R5 groups in the storage pool is running hot, it will move slices of data as needed to a less busy 8+1 group of 15K disks.
  • If you add drives, the VNX will now automatically re-balance existing data across the new drives, increasing performance for existing data in addition to adding new capacity.
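
To make the capacity and parity math above concrete, here is a quick back-of-the-envelope sketch in Python. The parity fractions come straight from the RAID layouts named above; the front-end IOPS number, the read/write mix, and the write penalties of 4 (RAID5) and 6 (RAID6) are rule-of-thumb assumptions on my part, not figures from EMC’s sizing tools.

```python
# Back-of-the-envelope math for the private RAID groups inside a VNX pool.
# Illustrative only -- the write penalties and workload mix are rules of thumb.

def usable_fraction(data_drives, parity_drives):
    """Fraction of raw capacity left after parity, e.g. 8+1 RAID5 -> ~88.9%."""
    return data_drives / (data_drives + parity_drives)

def backend_iops(frontend_iops, write_ratio, write_penalty):
    """Convert front-end IOPS to back-end disk IOPS for a given RAID penalty."""
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio
    return reads + writes * write_penalty

# Parity overhead: why the new layouts squeeze out a higher usable percentage.
layouts = {
    "4+1 RAID5 (old 15K layout)":     usable_fraction(4, 1),
    "8+1 RAID5 (new 15K layout)":     usable_fraction(8, 1),
    "8+2 RAID6 (old NL-SAS layout)":  usable_fraction(8, 2),
    "14+2 RAID6 (new NL-SAS layout)": usable_fraction(14, 2),
}
for name, frac in layouts.items():
    print(f"{name}: {frac:.1%} of raw capacity is usable")

# Write penalty: why RAID5 on the 15K tier leaves more effective IOPS than
# RAID6 did, even though the raw spindle IOPS are roughly the same.
frontend = 1000      # hypothetical front-end IOPS hitting the 15K tier
write_ratio = 0.5    # assumed read/write mix
print("RAID6 back-end IOPS:", backend_iops(frontend, write_ratio, 6))
print("RAID5 back-end IOPS:", backend_iops(frontend, write_ratio, 4))
```

Run against the actual drive counts and capacities, the same formulas land in the neighborhood of the 69% vs. 76% usable figures above; most of the remaining gap comes from hot spares, the vault drives, and pool metadata overhead.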

 

There are still some things I’d like to see improved in terms of how pools work on VNX, which I have a feeling won’t come around until VNX “2”. Regardless, there’s lots of great stuff in this code release that gives the VNX a considerable boost in its market relevance.

Categories: EMC

Wedding planning and blogging do not mix

September 19, 2012 2 comments

Just a quick note to update my readers – the Hoosier Storage Guy is dark until early Oct when I return from my honeymoon. Wedding planning has consumed my life for the past several months, hence no posts since July, and I’m on a much needed break in Hawaii right now.

Categories: Uncategorized

EMC VNX OE 32 (i.e. Flare 32) is finally here!

July 17, 2012 2 comments

It sure would’ve been nice to see this sooner, but better late than never.  Finally, we get to see the really good stuff that has been in the works for some time and takes a great product and makes it even better.  This is a key update for any existing EMC VNX customers (though I recommend waiting 1-2 quarters before upgrading) and any new VNX customers.

The key updates include:

  • Support for mixed RAID types in a storage pool
  • A new Flash 1st auto-tiering policy
  • New RAID templates to support better efficiency – such as changing RAID6 protection scheme from 6+2 to 14+2.
  • In-family data-in-place upgrades – bringing back the capability that existed within Clariion to essentially do a head-swap and grow to the next model. 
  • Windows Branch Cache support for CIFS/SMB file shares
  • Load-balancing and re-balancing within a storage tier
  • VNX Snapshots now provides write-in-place pointer-based snapshots that in their initial release will support Block LUNs and require pool-based LUNs. 

 

You can read more here:  https://community.emc.com/message/646744

 

Categories: Uncategorized

A look at Atlantis ILIO

July 2, 2012 2 comments

I first mentioned Atlantis back in March 2012 (http://bit.ly/wMl1cc) as one of the hot start-ups I’ve been tracking with a really strong value proposition.

http://www.atlantiscomputing.com/technology

Atlantis ILIO Storage Optimization technology works at the Windows NTFS protocol layer to offload virtual desktop IO traffic before it impacts storage. When the Microsoft Windows operating system and applications send IO traffic to storage, Atlantis ILIO intercepts and intelligently deduplicates all traffic before it reaches storage, locally processing up to 90% of all Windows IO requests in RAM, delivering a VDI solution with storage characteristics similar to a local desktop PC using solid state storage (SSD). The result is a VDI environment that requires up to 90% less storage to deliver better desktop performance than a physical PC.

 

One thing that’s not mentioned here is important to note: for the I/O that does end up landing on physical disk, Atlantis aggregates it into 64KB sequential I/O, which is a huge benefit compared to a bunch of 4-8KB random I/O thrashing your spindles.
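
To picture what that means in practice, here is a tiny conceptual sketch (my own illustration, not Atlantis’s implementation): fingerprint each incoming write block, absorb duplicates in RAM, and coalesce the surviving small random writes into larger sequential segments before they ever touch disk.

```python
import hashlib

BLOCK_SIZE = 4 * 1024       # typical small random Windows/VDI write
SEGMENT_SIZE = 64 * 1024    # flush unique data in large sequential chunks

class InlineDedupeCoalescer:
    """Toy model of inline dedupe plus write coalescing (illustrative only)."""

    def __init__(self, backing_path):
        self.backing = open(backing_path, "ab")
        self.seen = set()           # fingerprints of blocks already stored
        self.buffer = bytearray()   # unique blocks waiting to be flushed

    def write(self, block):
        fingerprint = hashlib.sha1(block).digest()
        if fingerprint in self.seen:
            return                  # duplicate: handled in RAM, no disk I/O
        self.seen.add(fingerprint)
        self.buffer.extend(block)
        if len(self.buffer) >= SEGMENT_SIZE:
            self.flush()

    def flush(self):
        # One large sequential write instead of many small random ones.
        if self.buffer:
            self.backing.write(self.buffer)
            self.buffer.clear()

# Example: 32 desktops writing the same OS block results in one stored copy.
cache = InlineDedupeCoalescer("ilio_demo.bin")
for _ in range(32):
    cache.write(b"\x00" * BLOCK_SIZE)
cache.flush()
```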

I’ve been speaking about Atlantis to a handful of customers now and I thought it’d be beneficial to give a sample of how it can greatly reduce the cost of virtual desktop storage.

Customer Environment

The customer currently has a VDI POC environment set up with 1-2 dozen machines running on about 10-15 spindles on a mid-tier storage array provided by a major manufacturer.   Most users fall into the medium workload category, which generates 10-15 IOPS per desktop in steady state (I used 12 in all my calculations).

The customer is now ready to roll out to 150 users (persistent desktops), with long-term scaling to 300.  They are evaluating a new SAN to help support the project.   Assuming an 80% write ratio, 12 IOPS per desktop will generate 41 back-end RAID5 IOPS per desktop, for a total of 6,120 (the arithmetic is sketched below).   If we assume 25GB per persistent desktop, approximately 3.6TB of storage will be required.
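
For anyone who wants to check the math, here it is in Python. The only number not stated above is the RAID5 write penalty of 4 back-end I/Os per front-end write, which is the standard planning assumption.

```python
# Front-end to back-end IOPS conversion for the 150-desktop rollout (RAID5).
desktops = 150
iops_per_desktop = 12       # steady-state front-end IOPS, medium workload
write_ratio = 0.80          # assumed write percentage
raid5_write_penalty = 4     # each front-end write costs ~4 back-end I/Os
gb_per_desktop = 25

backend_per_desktop = (iops_per_desktop * (1 - write_ratio)
                       + iops_per_desktop * write_ratio * raid5_write_penalty)
print(backend_per_desktop)               # ~40.8, i.e. roughly 41 per desktop
print(backend_per_desktop * desktops)    # 6,120 back-end IOPS in total
print(desktops * gb_per_desktop / 1024)  # ~3.66TB of capacity
```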

The Atlantis Effect

Taking these same IOPS numbers into account, Atlantis will process the 6,120 IOPS and reduce them (conservatively) down to 1,224 IOPS on the back-end.   Additionally, it will reduce the space requirements for the persistent desktops from 3.6TB to about 720GB.  That’s a tremendous value for both IOPS and capacity savings.   If we assume the standard 180 IOPS per 15K drive, the production rollout of 150 desktops can live on the same number of spindles that the POC runs on today (see the quick check below)!
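
Continuing the sketch above, the reduction works out like this; the 80% offload is the conservative figure used here, and 180 IOPS per 15K spindle is the usual rule of thumb.

```python
import math

backend_iops = 6120          # from the calculation above
offload = 0.80               # conservative share of I/O absorbed in RAM
iops_per_15k_spindle = 180   # rule-of-thumb per-drive figure

remaining_iops = backend_iops * (1 - offload)
print(remaining_iops)                                    # 1,224 IOPS left on disk
print(3.6 * (1 - offload))                               # ~0.72TB (~720GB) on disk
print(math.ceil(remaining_iops / iops_per_15k_spindle))  # ~7 spindles needed
```

Seven spindles (before RAID and hot-spare considerations) fits comfortably within the 10-15 spindles already backing the POC, which is the whole point.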

The Bottom Line

Although I can’t divulge street pricing for various vendors in this blog, I can provide some general details on the savings seen with Atlantis.   In this case, since Atlantis offered the ability to run the production VDI deployment on the same number of spindles that supported the POC environment, the customer could choose to delay the new SAN purchase until it is time to do the normal storage technology refresh.   Factoring in the cost of the Atlantis ILIO appliance, the customer sees an 80% savings relative to the cost of a new SAN designed to meet the 6,120 IOPS workload.

Categories: Atlantis, Virtualization

Strategies for SRM with a VNXe

June 18, 2012 1 comment

Give credit where credit is due: EMC does a lot of things well.   VMware Site Recovery Manager (SRM) support for the VNXe is definitely not one of them.   EMC has done such a great job turning the ship around on VMware integration with their products, thanks to guys like Chad Sakac (@sakacc), that it is beyond mind-boggling to me why it is taking such a long time to get this straightened out on the VNXe.

Originally, it was stated that VNXe would support SRM when SRM 5.0 came out (Q3 2011), at least with NFS, with iSCSI to come later down the road.  Then the date slipped to Q4 2011, then to Q1 2012, then to Q3 2012, and I just saw an update on the EMC community forums where it’s now stated as Q4 2012 (https://community.emc.com/thread/127434).   Let me be clear to EMC and their engineering group: this is not acceptable.    Customers who have bought this product with the intent to move to fully replicated vSphere environments have a right to be pissed.   Partners who are responsible for designing best-in-class high-availability solutions for their SMB customers have a right to be pissed.   We don’t have unreasonable expectations or unrealistically high demands.   EMC just screwed this one up badly.

What I find most incomprehensible of all is the fact that the VNXe software is largely based on the underpinnings of the previous Celerra (NAS) product.   Celerra had SRM support for both NFS and iSCSI previously!  For Pete’s sake, how hard can it be to modify this?!?!    In a recent explanation, it was stated that the APIs were changing between SRM 4.x and 5.x.   Well, somehow every other major storage array from EMC and other manufacturers didn’t seem to hiccup over this in their support of SRM.   Obviously EMC is going to focus on the high-dollar VMAX and VNX platforms first, but that’s no excuse to let your SMB product lag this far behind.

OK, now that the rant is out of the way, what options do you have to achieve a fully replicated solution for your vSphere environment?    It really boils down to two market-proven options, though you may come across some other fringe players:

 

1) SRM w/ vSphere Replication

  • Seamless disaster recovery failover and testing
  • Tightly integrated into vSphere and vCenter
  • Easy per-VM replication management within vCenter
  • Storage agnostic – no vendor lock-in with array replication

2) Veeam

  • Leverages backup snapshot functionality to also replicate to a remote Veeam server
  • Storage agnostic
  • Offers the ability to do a file-level restore from remote replicas
  • Included as part of the Veeam Backup and Replication product

 

Here’s a table I put together showing a comparison between the two options:

| Feature | Veeam Replication | SRM w/ vSphere Replication |
| --- | --- | --- |
| vSphere version required | 4.0 and higher | 5.0 (HW Version 7 or higher required on VMs) |
| Replication methodology | VM snapshots | vSCSI block tracking |
| Realistic best-case RPO | 15 min | 15 min |
| Includes backup | Yes | No |
| Licensing | Per socket | Per VM |
| VSS quiescing | Yes (custom VSS driver) | Yes (VM Tools VSS) |
| Replicate powered-off VMs | Yes | No |
| File-level restore from replica | Yes | No |
| Orchestrated failover based on defined DR plan | No | Yes |
| Easy non-disruptive DR testing capabilities | No | Yes |
| Multiple restore points from replica | Yes | No |
| Re-IP VM during failover | Yes | Yes |

 

So, how do you choose between the two?   Well, that’s where the proverbial “it depends” answer comes in.   When I’m speaking with SMB market customers, I’ll ask questions about their backup to get a sense of whether or not they could benefit from Veeam.   If so, then it’s certainly advantageous to try and knock out backup and replication with one product.   However, that’s not to say that there can’t be advantages to running Veeam for backup but using SRM with vSphere Replication as well, if you truly need that extra level of automation that SRM offers.

 

UPDATE 10/2/2012

I recently got notified about an update to the original post on the EMC community forums: https://community.emc.com/thread/127434.   An EMC representative has just confirmed that the target GA date is now Q1 2013, which marks another slip.

Also, with the announcement of vSphere 5.1 came a few improvements to vSphere Replication with SRM.   Most notably, SRM now supports auto-failback with vSphere Replication, which previously was a function only supported with array-based replication.

Categories: EMC, Veeam, VMware