Archive for the ‘Virtualization’ Category

VMware vDP and Avamar – Blown out of Proportion

October 8, 2012

The dust has settled a bit since the announcement of vSphere 5.1, including the new vSphere Data Protection (vDP) functionality based on EMC Avamar code. Immediately following the announcement there were:

  • EMC folks reporting that this proves Avamar is the greatest thing since sliced bread, because VMware chose Avamar Virtual Edition (AVE) as the basis for vDP.
  • VMware folks stating vDP only leverages Avamar technology – it is a new product co-developed by VMware and EMC rather than AVE with a new GUI.
  • Critics/competitors saying they are two completely different products and this announcement doesn’t mean anything – or that it means the world will be running Hyper-V in 12-18 months as EMC takes over VMware and fails miserably.

What’s my opinion? Being a middle-of-the-road guy, naturally I think both the far left and the far right are blowing things out of proportion, and that VMware employees were generally the most accurate in their assessments.

We can hold these things to be self-evident:

  • vDP is a virtual appliance. AVE is a virtual appliance. It seems highly unlikely that VMware completely re-wrote the virtual appliance used for vDP, but we don’t know for sure.
  • The vDP GUI is a heck of a lot simpler to manage for the average SMB shop than AVE.  EMC needs to learn a lesson here and quickly – not just for SMB customers but also Enterprise customers running full-blown Avamar. 
  • vDR (VMware Data Recovery, vDP’s predecessor) was getting a little bit better, but a scan of the VMware Community Forums quickly showed it was a poor product. Even the smallest of SMB shops did not like it and usually ended up going the Veeam route after struggling to get vDR working.
  • Avamar does have best-in-class de-duplication algorithms, so it’s not hard to accept the argument that VMware evaluated different de-dupe technologies and picked Avamar’s to be the nuts and bolts under vDP.
  • I wouldn’t try to outsmart Joe Tucci.  We might see some pushing of the envelope with regards to the EMC-VMware relationship, but he’s not going to screw this thing up. 

 

Questions in my mind…

  • AVE was very performance-hungry. In fact, before install it required a disk-intensive benchmark test to be run for 24-48 hours, and if certain specs were not met, EMC would not support the AVE configuration. This is why EMC almost always sells Avamar as a HW/SW appliance. In my mind, the typical vDP user is probably going to use some very low-cost storage as the backup repository. I wonder how this product is going to perform unless some significant performance enhancements were made to vDP relative to AVE.
  • Even the smallest of SMBs typically want their backups stored off-site, and vDP doesn’t offer any replication capability, nor does it offer any sort of tape-out mechanism. Is this really a practical solution for anybody nowadays?
  • Is there an upgrade path from vDP to full Avamar? I’ve seen EMC employees post in their blogs that there is a clear upgrade path if you outgrow vDP; every other post I’ve seen says there is no upgrade path. I’ve not been able to find any official documentation about the upgrade path. Which is it, and is there an expensive PS engagement involved?

 

All in all, the providers of SMB-oriented VMware backup solutions such as Veeam don’t have much to be worried about yet.    It’s a strange world of “coopetition” that we live in today.   EMC and VMware cooperating on vDP.  VMware partnering with all storage vendors, yet being majority owned by EMC.    EMC partnering closely with Microsoft and beefing up Hyper-V support in all their products.   All storage vendors partnering closely with Oracle, but Oracle getting into the storage business.   Cisco partnering with NetApp on FlexPod and also with VCE on vBlock.  EMC pushing Cisco servers to their clients but also working with Lenovo for some server OEM business.      The list goes on and all indications are this is the new reality we will be living with for some time.  

What would I do if I were Veeam or another provider of SMB backup for virtual machines? Keep innovating like crazy, as Veeam has done. It’s no different than what VMware needs to keep doing to ensure they stay ahead of Microsoft. Might I suggest, for Veeam specifically: amp up the “coopetition” and build DD BOOST support into your product. DataDomain is the best-in-class target de-dupe appliance with the most market share. Unfortunately, the way Veeam and DD work together today is kludgey at best. Although Veeam can write to NFS storage, it does not work well with an NFS connection directly to the DD appliance. Rather, it is recommended to set up an intermediary Linux server to re-export the NFS export from the DD box. A combination of Veeam with DD BOOST and something like a DD160 for the average SMB shop would be a home run and crush vDP as a solution any day of the week. I have heard that Quest vRanger recently built support for DD BOOST into its product, and it will be interesting to see whether that remains now that Quest has been purchased by Dell.

 

 

A look at Atlantis ILIO

July 2, 2012

I first mentioned Atlantis back in March 2012 (http://bit.ly/wMl1cc) as one of the hot start-ups I’ve been tracking with a really strong value proposition.

http://www.atlantiscomputing.com/technology

Atlantis ILIO Storage Optimization technology works at the Windows NTFS protocol layer to offload virtual desktop IO traffic before it impacts storage. When the Microsoft Windows operating system and applications send IO traffic to storage, Atlantis ILIO intercepts and intelligently deduplicates all traffic before it reaches storage, locally processing up to 90% of all Windows IO requests in RAM, delivering a VDI solution with storage characteristics similar to a local desktop PC using solid state storage (SSD). The result is a VDI environment that requires up to 90% less storage to deliver better desktop performance than a physical PC.

 

One important thing that’s not mentioned here: for the I/O that does end up landing on physical disk, Atlantis aggregates it into 64KB sequential I/O, which is a huge benefit compared to a bunch of 4-8KB random I/O thrashing your spindles.
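
For readers who like to see the mechanics, here is a deliberately simplified sketch of the two ideas above: inline dedupe of incoming blocks plus coalescing the unique data into 64KB sequential segments. This is purely illustrative (the class and names are my own invention); Atlantis’s actual implementation at the NTFS layer is far more sophisticated.

```python
import hashlib

SEGMENT_SIZE = 64 * 1024            # flush unique data in 64KB sequential chunks


class ToyIOOptimizer:
    """Toy model of an inline-dedupe + write-coalescing layer (not Atlantis code)."""

    def __init__(self, backing_store):
        self.seen = set()            # fingerprints of blocks already stored, kept in RAM
        self.buffer = bytearray()    # unique data waiting to be flushed sequentially
        self.store = backing_store   # anything with a .write(bytes) method

    def write_block(self, block: bytes):
        fingerprint = hashlib.sha1(block).digest()
        if fingerprint in self.seen:  # duplicate block: absorbed in RAM, never hits disk
            return
        self.seen.add(fingerprint)
        self.buffer.extend(block)
        if len(self.buffer) >= SEGMENT_SIZE:
            self.flush()

    def flush(self):
        if self.buffer:               # one large sequential write instead of many small random ones
            self.store.write(bytes(self.buffer))
            self.buffer.clear()
```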

I’ve been speaking about Atlantis to a handful of customers now and I thought it’d be beneficial to give a sample of how it can greatly reduce the cost of virtual desktop storage.

Customer Environment

The customer currently has a VDI POC environment set up with 1-2 dozen machines running on about 10-15 spindles on a mid-tier storage array from a major manufacturer. Most users fall into the medium workload category, which generates 10-15 IOPS per desktop in steady state (I used 12 in all my calculations).

The customer is now ready to roll out to 150 users (persistent desktops), with long-term scaling to 300. They are evaluating a new SAN to help support the project. Assuming an 80% write ratio and the standard RAID5 write penalty of 4, 12 IOPS per desktop generates roughly 41 back-end RAID5 IOPS per desktop, for a total of about 6,120. If we assume 25GB per persistent desktop, approximately 3.6TB of storage will be required.
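
Spelling that back-of-the-envelope math out as a quick calculation (the only assumption added here is the standard RAID5 write penalty of roughly 4 back-end IOs per front-end write):

```python
DESKTOPS            = 150
IOPS_PER_DESKTOP    = 12      # steady-state, medium workload
WRITE_RATIO         = 0.80
RAID5_WRITE_PENALTY = 4       # each front-end write costs ~4 back-end IOs
GB_PER_DESKTOP      = 25

reads  = IOPS_PER_DESKTOP * (1 - WRITE_RATIO)                 # 2.4 read IOPS/desktop
writes = IOPS_PER_DESKTOP * WRITE_RATIO                       # 9.6 write IOPS/desktop
backend_per_desktop = reads + writes * RAID5_WRITE_PENALTY    # 40.8, call it 41

total_backend_iops = backend_per_desktop * DESKTOPS           # 6,120 back-end IOPS
total_capacity_gb  = DESKTOPS * GB_PER_DESKTOP                # 3,750 GB, roughly 3.6TB

print(f"{backend_per_desktop:.1f} back-end IOPS per desktop")
print(f"{total_backend_iops:,.0f} total back-end IOPS, {total_capacity_gb:,} GB (~3.6TB) of capacity")
```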

The Atlantis Effect

Taking these same IOPS numbers into account, Atlantis will process the 6,120 IOPS and reduce them (conservatively) down to 1,224 IOPS on the back-end.    Additionally, it will reduce the space requirements for the persistent desktops from 3.6TB to about 720GB.  That’s a tremendous value for both IOPS and capacity savings.   If we assume the standard 180 IOPS per 15K drive, the production rollout of 150 desktops can live on the same number of spindles that the POC runs on today!
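
Continuing the same sketch, with the conservative 80% IOPS-offload and 80% capacity-reduction figures above treated as assumptions:

```python
import math

baseline_backend_iops = 6120   # from the RAID5 calculation above
baseline_capacity_tb  = 3.6    # ~25GB x 150 persistent desktops

IOPS_OFFLOAD_RATIO = 0.80      # conservative share of IO absorbed in RAM
CAPACITY_SAVINGS   = 0.80      # conservative dedupe/optimization savings

remaining_iops = baseline_backend_iops * (1 - IOPS_OFFLOAD_RATIO)   # 1,224 IOPS
remaining_tb   = baseline_capacity_tb * (1 - CAPACITY_SAVINGS)      # ~0.72TB (720GB)

IOPS_PER_15K_SPINDLE = 180
spindles_needed = math.ceil(remaining_iops / IOPS_PER_15K_SPINDLE)  # 7 spindles

print(f"{remaining_iops:,.0f} back-end IOPS, {remaining_tb * 1000:.0f} GB, "
      f"{spindles_needed} x 15K spindles")
```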

The Bottom Line

Although I can’t divulge street pricing for various vendors in this blog, I can provide some general details on the savings seen with Atlantis. In this case, since Atlantis offered the ability to run the production VDI deployment on the same number of spindles that supported the POC environment, the customer could choose to delay the new SAN purchase until it is time for the normal storage technology refresh. Factoring in the cost of the Atlantis ILIO appliance, the customer sees an 80% savings relative to the cost of a new SAN designed to meet the 6,120 IOPS workload.

Categories: Atlantis, Virtualization

Tintri – What’s the big deal?

April 19, 2011

You may have seen several news articles a couple weeks back about the hottest new thing in VMware storage – Tintri. Their marketing department created quite a buzz with most major IT news outlets picking up the story and proclaiming that the Tintri appliance was the future of VMware storage.

Instead of re-hashing what’s been said already, here’s a brief description from CNET:

Tintri VMstore is a hardware appliance that is purpose-built for VMs. It uses virtual machine abstractions–VMs and virtual disks–in place of conventional storage abstractions such as volumes, LUNs, or files. By operating at the virtual machine and disk level, administrators get the same level of insight, control, and automation of CPU, memory, and networking resources as general-purpose shared-storage solutions.

A few more technical details from The Register:

The VMstore T440 is a 4U, rackmount, multi-cored, multi-processor, X86 server with gigabit or 10gigE ports to a VMware server host. It appears as a single datastore instance in the VMware vSphere Client – connecting to vCenter Server. Multiple appliances – nodes – can be connected to one vCenter Server to enable sharing by ESX hosts. Virtual machines (VMs) can be copied or moved between nodes using storage vMotion.

The T440 is a hybrid storage facility with 15 directly-attached 3.5-inch, 7,200rpm, 1TB, SATA disk drives, and 9 x 160GB SATA, 2-bit, multi-level cell (MLC) solid state drives (SSD), delivering 8.5TB of usable capacity across the two storage tiers. There is a RAID6 redundancy scheme with hot spares for both the flash and disk drives.

 

I was a bit skeptical as to how this could be much different from other storage options on the market today. Tintri claims that you don’t manage the storage; everything is managed by VM. The only logical way I could see this happening is if you’re managing files (with every VM being a file) instead of LUNs. How do you accomplish this? Use a native file system as your datastore instead of creating a VMFS file system on top of a block-based datastore. In other words, NFS.

So, after doing a little research, it appears this box isn’t much more than a simple NAS with a slick GUI, doing some neat things under the covers with auto-tiering (akin to Compellent’s Data Progression or EMC sub-LUN FAST) and de-duplication. Instead of adding drives to a tray to expand, you expand by adding nodes. This makes for a nice story in that you scale performance as you scale capacity, but in the SMB market this product targets, I typically find the performance offered in a multi-core base unit is 10X more than the typical SMB customer needs. In that scenario, scaling by nodes starts to become expensive: you are re-buying the processors each time instead of just buying disks, it takes up more space in the rack, and it increases power/cooling costs compared to simply adding drives.

Today, it appears the box does not offer dual controllers, replication, or iSCSI. iSCSI is something most SMB folks can probably go without, relying solely on NFS, which performs very similarly to iSCSI at comparable Ethernet speeds and can offer additional functionality. Replication is probably something most SMBs can also go without. I don’t see too many SMBs going down the VMware SRM path; most either don’t need that level of DR, or a solution like Veeam Backup and Replication fits their needs well (host-based SRM is also rumored to be coming later this year from VMware). The dual-controller issue is one I believe no customer should ever compromise on for production data, even SMB customers. I’ve seen enough situations over the years where storage processors, switches, or HBAs just die or go into a spontaneous reboot, and that’s with products that have been established in the marketplace for some time and are known to be reliable. In this scenario, with a single-controller system on Gen1 equipment, you’re risking too much. With consolidated storage you’re putting all your eggs in one basket, and when you do that, it better be a pretty darn good basket. The Register reported that a future release of the product will support dual controllers, which I would make a priority if I were running Tintri.

Tintri managed to create quite a splash, but of course only time will tell how successful this box is going to be. Evostor launched a similar VMware-centric storage product at VMworld a couple of years ago, but its official domain name has since expired. Tintri will certainly have an uphill battle to fight: when I look at the competition Tintri is going to face, many of its claimed advantages have already been released in recent product refreshes by the competition.

The VNXe is probably the box that competes the best. The VNXe GUI is incredibly easy to use and makes no mention of LUNs or RAID groups, just like Tintri. It’s extremely cheap, and EMC has deep pockets, which will be tough for Tintri to compete with. VNXe is built on proven technology that’s very mature, while Tintri is Gen 1. It supports NFS with advanced functionality like de-dupe. Tintri has a small advantage here in that EMC’s de-dupe for VMs is post-process, while Tintri claims to support inline de-dupe (but only for the portion of VM data that resides on SSD drives). This is probably using some of the intellectual property that the ex-Data Domain employees at Tintri provided. The VNXe also supports iSCSI and will support FCoE. The NetApp FAS2020 is also a competitor in this space, supporting many of the same things the VNXe has, although the GUI is nowhere near as simple.

Tintri’s big advantages are that it supports SSD today and does sub-LUN auto-tiering. These are two things that EMC put in the VNX but left out of the VNXe. It’s been stated that the VNXe was supposed to get Flash drive support later this year, but there’s been no mention of auto-tiering support. Competition is good for end users, and my hope is that with competitors putting sub-LUN tiering in their products at the low end, it will force EMC’s hand to include FAST in the VNXe, because I think it will ultimately need it within 12-18 months to remain competitive in the market. Whether or not the typical SMB even needs auto-tiering with Flash drives is another story, but once the feature is there and customers start getting hyped about it, it’ll need to be there.

Further reading:

http://www.theregister.co.uk/2011/03/24/tintri_vmware_storage_appliance/

http://www.tintri.com/

http://news.cnet.com/8301-13846_3-20045989-62.html

http://www.yellow-bricks.com/2011/03/24/tintri-virtual-machine-aware-storage/

Categories: NAS, SAN, Virtualization, VMware