Wedding planning and blogging do not mix

September 19, 2012 2 comments

Just a quick note to update my readers – the Hoosier Storage Guy is dark until early October, when I return from my honeymoon. Wedding planning has consumed my life for the past several months (hence no posts since July), and I'm on a much-needed break in Hawaii right now.

Categories: Uncategorized

EMC VNX OE 32 (i.e. Flare 32) is finally here!

July 17, 2012 2 comments

It sure would've been nice to see this sooner, but better late than never. We finally get to see the really good stuff that has been in the works for some time, and it takes a great product and makes it even better. This is a key update for existing EMC VNX customers (though I recommend waiting 1-2 quarters before upgrading) and for any new VNX customers.

The key updates include:

  • Support for mixed RAID types in a storage pool
  • A new "Flash 1st" auto-tiering policy
  • New RAID templates to support better efficiency – such as changing the RAID6 protection scheme from 6+2 to 14+2
  • In-family data-in-place upgrades – bringing back the CLARiiON capability to essentially do a head-swap and grow to the next model
  • Windows BranchCache support for CIFS/SMB file shares
  • Load balancing and rebalancing within a storage tier
  • VNX Snapshots – write-in-place, pointer-based snapshots that in their initial release support block LUNs and require pool-based LUNs

 

You can read more here:  https://community.emc.com/message/646744

 

Categories: Uncategorized

A look at Atlantis ILIO

July 2, 2012 2 comments

I first mentioned Atlantis back in March 2012 (http://bit.ly/wMl1cc) as one of the hot start-ups I've been tracking with a really strong value proposition.

http://www.atlantiscomputing.com/technology

Atlantis ILIO Storage Optimization technology works at the Windows NTFS protocol layer to offload virtual desktop IO traffic before it impacts storage. When the Microsoft Windows operating system and applications send IO to storage, Atlantis ILIO intercepts and intelligently deduplicates the traffic before it reaches storage, locally processing up to 90% of all Windows IO requests in RAM. This delivers a VDI solution with storage characteristics similar to a local desktop PC using solid-state storage (SSD). The result is a VDI environment that requires up to 90% less storage while delivering better desktop performance than a physical PC.

 

One thing that's not mentioned here is important to note: for the I/O that does end up landing on physical disk, Atlantis aggregates it into 64KB sequential I/O, which is a huge benefit compared to a bunch of 4-8KB random I/Os thrashing your spindles.
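To make that concept concrete, here's a minimal sketch (in Python, purely illustrative – this is not Atlantis's actual implementation, and the 4KB block size and hash-index approach are my own assumptions) of how inline block-level deduplication can absorb redundant writes before they ever hit disk:

    import hashlib

    BLOCK_SIZE = 4096  # assume typical small Windows IOs, per the post

    class InlineDedup:
        """Toy model: only blocks with unseen content reach 'disk'."""
        def __init__(self):
            self.seen = set()      # fingerprints of content already stored
            self.disk_writes = 0   # IOs that actually land on storage
            self.guest_writes = 0  # IOs issued by the desktop OS

        def write(self, block: bytes):
            self.guest_writes += 1
            fp = hashlib.sha256(block).digest()
            if fp not in self.seen:   # only unique data must be persisted
                self.seen.add(fp)
                self.disk_writes += 1

    dedup = InlineDedup()
    # 100 desktops booting the same OS image generate mostly identical blocks
    for desktop in range(100):
        for i in range(50):
            dedup.write(("windows-system-block-%d" % i).encode().ljust(BLOCK_SIZE))

    print(dedup.guest_writes)  # 5000 IOs issued by the guests...
    print(dedup.disk_writes)   # ...but only 50 unique blocks hit storage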

I’ve been speaking about Atlantis to a handful of customers now and I thought it’d be beneficial to give a sample of how it can greatly reduce the cost of virtual desktop storage.

Customer Environment

The customer currently has a VDI POC environment set up with 1-2 dozen machines running on about 10-15 spindles on a mid-tier storage array from a major manufacturer. Most users fall into the medium workload category, which generates 10-15 IOPS per desktop at steady state (I used 12 in all my calculations).

The customer is now ready to roll out the deployment to 150 users (persistent desktops), with long-term scaling to 300, and is evaluating a new SAN to help support the project. Assuming an 80% write ratio, 12 IOPS per desktop generates 41 back-end RAID5 IOPS per desktop, for a total of 6,120. If we assume 25GB per persistent desktop, approximately 3.6TB of storage will be required.
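The back-end math is easy to reproduce. Here's a quick sketch of the calculation (the write penalty of 4 is the standard RAID5 rule of thumb – each host write turns into a read of data, a read of parity, a write of data, and a write of parity):

    def backend_iops(frontend_iops, write_ratio, write_penalty=4):
        # RAID5 multiplies each host write by the penalty; reads pass 1:1
        writes = frontend_iops * write_ratio
        reads = frontend_iops * (1 - write_ratio)
        return writes * write_penalty + reads

    per_desktop = backend_iops(12, 0.80)  # 40.8, round up to 41
    print(per_desktop * 150)              # 6,120 back-end IOPS total
    print(150 * 25, "GB")                 # 3,750GB, roughly 3.6TB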

The Atlantis Effect

Taking these same numbers into account, Atlantis will process the 6,120 IOPS and reduce them (conservatively) to 1,224 IOPS on the back-end. Additionally, it will reduce the space requirement for the persistent desktops from 3.6TB to about 720GB. That's tremendous value in both IOPS and capacity savings. If we assume the standard 180 IOPS per 15K drive, the production rollout of 150 desktops can live on the same number of spindles the POC runs on today!
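The spindle math follows directly (180 IOPS per 15K drive is the usual planning number; 1,224 is the conservative 80% reduction from above):

    import math

    IOPS_PER_15K_DRIVE = 180

    for load in (6120, 1224):  # before and after the Atlantis reduction
        drives = math.ceil(load / IOPS_PER_15K_DRIVE)
        print(load, "IOPS needs", drives, "x 15K spindles")
    # 6120 IOPS needs 34 x 15K spindles
    # 1224 IOPS needs 7 x 15K spindles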

The Bottom Line

Although I can't divulge street pricing for various vendors in this blog, I can provide some general details on the savings seen with Atlantis. In this case, since Atlantis made it possible to run the production VDI deployment on the same number of spindles that supported the POC environment, the customer could choose to delay the new SAN purchase until it is time for the normal storage technology refresh. Factoring in the cost of the Atlantis ILIO appliance, the customer sees an 80% savings relative to the cost of a new SAN designed to meet the 6,120 IOPS workload.

Categories: Atlantis, Virtualization

Strategies for SRM with a VNXe

June 18, 2012 1 comment

Give credit where credit is due: EMC does a lot of things well. VMware Site Recovery Manager (SRM) support for the VNXe is definitely not one of them. EMC has done such a great job turning the ship around when it comes to VMware integration with its products, thanks to guys like Chad Sakac (@sakacc), that it is beyond mind-boggling to me why it is taking so long to get this straightened out on the VNXe.

Originally, it was stated that the VNXe would support SRM when SRM 5.0 came out (Q3 2011), at least with NFS; iSCSI would come later down the road. Then the date slipped to Q4 2011, then Q1 2012, then Q3 2012, and I just saw an update on the EMC community forums where it's now stated as Q4 2012 (https://community.emc.com/thread/127434). Let me be clear to EMC and their engineering group: this is not acceptable. Customers who bought this product with the intent to move to fully replicated vSphere environments have a right to be pissed. Partners who are responsible for designing best-in-class high-availability solutions for their SMB customers have a right to be pissed. We don't have unreasonable expectations or unrealistically high demands. EMC just screwed this one up badly.

What I find most incomprehensible of all is that the VNXe software is largely based on the underpinnings of the previous Celerra (NAS) product, and Celerra already had SRM support for both NFS and iSCSI! For Pete's sake, how hard can it be to carry this forward? In a recent explanation, it was stated that the APIs were changing between SRM 4.x and 5.x. Well, somehow every other major storage array from EMC and other manufacturers didn't seem to hiccup in its support of SRM. Obviously EMC is going to focus on the high-dollar VMAX and VNX platforms first, but that's no excuse to let your SMB product lag this far behind.

OK, now that the rant is out of the way, what options do you have for achieving a fully replicated solution for your vSphere environment? It really boils down to two market-proven options, though you may come across some other fringe players:

 

1) SRM w/ vSphere Replication

  • Seamless disaster recovery failover and testing
  • Tightly integrated into vSphere and vCenter
  • Easy per-VM replication management within vCenter
  • Storage agnostic – no vendor lock-in with array replication

2) Veeam

  • Leverages backup snapshot functionality to also replicate to a remote Veeam server
  • Storage agnostic
  • Offers the ability to do a file-level restore from remote replicas
  • Included as part of the Veeam Backup & Replication product

 

Here’s a table I put together showing a comparison between the two options:

  Feature | Veeam Replication | SRM w/ vSphere Replication
  vSphere version required | 4.0 and higher | 5.0 (HW version 7 or higher required on VMs)
  Replication methodology | VM snapshots | vSCSI block tracking
  Realistic best-case RPO | 15 min | 15 min
  Includes backup | Yes | No
  Licensing | Per socket | Per VM
  VSS quiescing | Yes (custom VSS driver) | Yes (VM Tools VSS)
  Replicate powered-off VMs | Yes | No
  File-level restore from replica | Yes | No
  Orchestrated failover based on defined DR plan | No | Yes
  Easy non-disruptive DR testing capabilities | No | Yes
  Multiple restore points from replica | Yes | No
  Re-IP VM during failover | Yes | Yes

 

So, how do you choose between the two? Well, that's where the proverbial "it depends" answer comes in. When I'm speaking with SMB-market customers, I'll ask questions about their backup to get a sense of whether they could benefit from Veeam. If so, then it's certainly advantageous to knock out backup and replication with one product. However, that's not to say there can't be advantages to running Veeam for backup and SRM with vSphere Replication as well, if you truly need that extra level of automation that SRM offers.

 

UPDATE 10/2/2012

I was recently notified about an update to the original post on the EMC community forums: https://community.emc.com/thread/127434. An EMC representative has just confirmed that the target GA date is now Q1 2013, which marks another slip.

Also, the vSphere 5.1 announcement brought a few improvements to vSphere Replication with SRM. Most notably, SRM now supports automated failback with vSphere Replication, a function previously supported only with array-based replication.

Categories: EMC, Veeam, VMware

March 27, 2012 Leave a comment

Great article here for VNXe customers on getting more detailed performance stats.

Henriwithani

The latest Operating Environment upgrades have already brought some improvements to the statistics shown through the Unisphere GUI. The first VNXe OE that I worked with showed only CPU statistics. Then, along with update 2.1.0, Network Activity and Volume Activity statistics became available. I was still hoping for more; IOPS and latency graphs would have been nice additions. So I did some digging and found that there are actually lots of statistics parameters that the VNXe gathers, but those are just stored in a database, maybe for support purposes.

Where is the data stored?

When logging in to the VNXe via SSH using the service account and listing the contents of the folder /EMC/backend/perf_stats, you will see that there are several db files in that folder.

Opening one of the files with Notepad makes it quite clear what kind of databases they are:

How to read…

View original post 551 more words
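If those db files turn out to be SQLite (an assumption on my part – the excerpt above doesn't say, so check the file header first), a quick way to poke around after copying one off the array would be something like this:

    import sqlite3

    # Hypothetical file name – substitute whatever you actually find
    # under /EMC/backend/perf_stats on your VNXe.
    db = sqlite3.connect("perf_stats_copy.db")
    rows = db.execute("SELECT name FROM sqlite_master WHERE type='table'")
    for (name,) in rows:
        print(name)  # list the tables to see which stats are captured
    db.close()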

Categories: Uncategorized

A look at Block Compression and De-duplication with Veeam and EMC VNX

March 26, 2012 4 comments

Before I proceed any further, I want to state clearly that the testing I performed was not meant to pit one alternative against another. Rather, I was curious to see what kind of Block LUN Compression rates I could get for backup data written to a CX4/VNX, including previously de-duped data. At the same time, I had a need to do some quick testing in the lab comparing Veeam VSS vs. VMware Tools VSS snapshot quiescing. Since Veeam de-duplicates data, I ended up using the backup data that Veeam wrote to disk for my Block LUN Compression tests.

Lab Environment

My lab consists of a VNX5300, a Veeam v6 server, and vSphere 5 running on Cisco UCS. The VMs I backed up with Veeam included a mix of app, file, and database VMs; app/file constituted about 50% of the data and DB the other 50%. By no means will I declare this a scientific test, but these were fairly typical VMs that you might find in a small customer environment, and I didn't modify the data sets in any way to try to enhance the results.

Veeam VSS Provider Results

For those not aware, most VADP backup products will quiesce the VM by leveraging Microsoft VSS. Some backup applications provide their own VSS provider (including Veeam), while others, like vDR, rely on the VMware VSS provider that gets installed along with VMware Tools. With Veeam, it's possible to configure a job that quiesces the VM with or without its own provider. My results showed the Veeam VSS provider was much faster than VMware's native VSS: on average, Veeam created the backup snapshot in 3 seconds with its provider and 20 seconds without it. I also ran continuous ping tests to the VMs while this process was occurring, and a third of the time I noticed a dropped ping or two when the snapshot was being created with VMware's VSS provider. A dropped ping is not necessarily a huge issue in itself, but certainly the longer the quiescing and snapshot process takes, the bigger your window for a "hiccup" that may be noticed by the applications running on that server.

De-dupe and Compression Results

I ran two tests leveraging Veeam and a 200GB Thin LUN on the VNX5300.

Test 1

The settings used for this test were:

  • Veeam De-dupe = ON
  • Veeam In-line compression = ON
  • EMC Block LUN Compression = OFF

  Backup Job | Size
  Backup Job 1 | 6GB
  Backup Job 2 | 1.2GB
  Backup Job 3 | 12.3GB

 

The final space usage on the LUN was 42GB.   I then turned on Block LUN Compression and no additional savings were obtained, which was to be expected since the data had already been compressed.

Test 2

The settings used for this test were:

  • Veeam De-dupe = ON
  • Veeam In-line compression = OFF
  • EMC Block LUN Compression = ON (enabled after the backup jobs completed)

  Backup Job | Size
  Backup Job 1 | 13.6GB
  Backup Job 2 | 3.4GB
  Backup Job 3 | 51.3GB

 

The final space usage on the LUN was 135GB. I then turned on VNX Block LUN Compression and the consumed space was reduced to 60GB – a 2.3:1 compression ratio, or a 56% space savings. Not too shabby for compression. More details on how EMC's Block LUN Compression works are available at this link: http://www.emc.com/collateral/hardware/white-papers/h8045-data-compression-wp.pdf

In short, it looks at 64KB segments of data and tries to compress the data within each segment.
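As a rough illustration of the segment-by-segment approach (a conceptual sketch using zlib – this is not EMC's actual algorithm, and real backup data will compress far less than this synthetic sample):

    import zlib

    SEGMENT = 64 * 1024  # 64KB segments, per the white paper

    def compress_by_segment(data: bytes) -> int:
        stored = 0
        for off in range(0, len(data), SEGMENT):
            chunk = data[off:off + SEGMENT]
            packed = zlib.compress(chunk)
            # Keep the compressed form only if it actually saves space
            stored += min(len(packed), len(chunk))
        return stored

    data = b"backup data with repeating patterns " * 50000
    stored = compress_by_segment(data)
    print("%.1f:1 ratio, %.0f%% savings"
          % (len(data) / stored, (1 - stored / len(data)) * 100))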

Again, this post isn't about comparing de-dupe or compression rates between Veeam's software approach within the backup job and letting the storage hardware do the work. There are going to be pros and cons to both methods, and for longer retentions (30 days and beyond) I tend to recommend a purpose-built backup appliance (PBBA) that does variable-length block de-duplication. Rather, these tests were meant to confirm two things:

a)      Does Block LUN Compression work well for backup data (whether it has been de-duped or not)? The conclusion here is that Block LUN Compression worked quite well. I really didn't know what to expect, so the results were a pleasant surprise. In hindsight, it does make sense that the data could still compress fairly well: although de-dupe has eliminated redundant patterns of blocks, if the remaining post-dedupe blocks still contain compressible data, you should be able to squeeze more out of them. This could come in handy where B2D is leveraged and your backup software doesn't offer compression, or for shorter retentions that don't warrant a PBBA that does variable-length block de-duplication.

 

b)      The latest version of Veeam is quite impressive; they've done some nice things to enhance the architecture so it can scale out the way larger enterprise backup software does. The level of de-dupe and compression achieved within the software was impressive as well. I can certainly understand why a large number of mid-market customers I speak with have little interest in using vDR for VM image backups, as Veeam is still light-years ahead. If you're looking at these two products and you have highly transactional systems in your environment, such as busy SQL or Exchange boxes, you'll be better off with Veeam and its enhanced VSS capabilities.

Categories: Backup, De-dupe, EMC, Veeam, VMware

March 18, 2012 Leave a comment

Great article here on the Aussie Storage Blog on why NL-SAS > SATA.

Aussie Storage Blog

Here are two common statements I often hear from clients:

  1. I don’t just want SAS drives, I also want SATA drives.  SATA drives are cheaper than SAS drives.
  2. Nearline SAS drives are just SATA drives with some sort of converter on them.

So is this right?  Is this the actual situation?

First up, if your storage uses a SAS-based controller with a SAS backplane, then normally you can plug SAS drives or SATA drives into that enclosure. This is great because when you plug SATA drives into a SAS backplane, you can actually send SCSI commands to the drive plus native SATA commands too (which is handy when you are writing software for RAID array drivers).

But (and this is a big but) what we do know is that equivalent (size and RPM) SAS drives perform better than SATA drives…

View original post 725 more words

Categories: Uncategorized

A new crop of storage start-ups has arrived

March 5, 2012 1 comment

About two years ago I was working at EMC and the company had just completed the acquisition of Data Domain, which was one of the last "hot" storage-related start-ups around. There were certainly other storage start-ups, but nobody else really had a story that screamed "come here, get some shares, and get rich when we get bought." A prime example is Xiotech (now X-IO), whose value prop and future are quite fuzzy from my perspective, though somehow they keep hanging in there. At the time, everybody wondered who the next hot startup would be, or even whether there would be another. Compellent was the closest thing one could find, and they were soon snatched up by Dell.

Fortunately for technology, innovation is constant. New ideas are always being generated, particularly within the realm of data storage. Anyone who analyzes the balance sheets of EMC, NetApp, and others realizes that data storage is a profitable business, much more so than servers. I believe this partly explains why we see so many startups in the data storage arena (venture capitalists see the $$$), and why large conglomerates accustomed to skinny margins, like Dell, are beefing up their storage and services portfolios.

If you follow social media, then you're already well aware of Tintri, Whiptail, Pure Storage, Violin, Atlantis, Oxygen, Nirvanix, and more. Today, I'll give my thoughts on some of the most-discussed startups.

1)      Tintri – my thoughts on Tintri were already published in an earlier post here: https://hoosierstorage.wordpress.com/2011/04/19/tintri-whats-the-big-deal/. I heard from a handful of Tintri folks after posting that, none too happy with my post; some of them are now gone from Tintri. Ultimately, my thoughts are largely still the same. It's my understanding Tintri now has HA controllers, which is a big plus, but I still question the entire market of dedicated VMware storage appliances. EMC, the parent of VMware and the largest storage vendor, is as focused as I've ever seen it on increasing integration with Microsoft technologies, particularly Hyper-V. Joe Tucci knows he can't tie his cart to just one horse, just as he knew he had to let VMware remain independent back in 2006. Similarly, Veeam has been putting tons of effort into increasing its functionality with Hyper-V. These companies are both leaders in their respective market segments; do you think they would be doing this if they expected no value from it? Most people buy storage arrays with the intent of using them for at least 5 years, and 5 years is a lifetime in the technology world. There's no guarantee that VMware will be the dominant hypervisor in 2-3 years. I certainly hope it is, and if VMware continues to out-innovate everyone else it should be. However, if you buy a dedicated storage appliance for VMware, and in 2 years the world is moving to Hyper-V 4.0, what then? Microsoft is unlikely to make Hyper-V work with NFS anytime soon. Would you buy a dedicated storage device for SharePoint and nothing else? There are still use cases for physical servers, including physical servers that need SAN storage, and a dedicated VMware storage box can't help there. Why run two storage devices when you can do it all with one?

2)      Whiptail and other dedicated flash arrays: dedicated flash arrays seem to be generating quite a lot of buzz these days. They all share a lot of similarities; in most cases the claim is that by leveraging cheaper consumer-grade MLC flash drives and adding some fancy magic on top, they can get a much bigger bang for the buck from those drives and make them "enterprise-class." They also make crazy claims like "200,000 IOPS" – a number you simply won't see in the real world. A realistic number for an enterprise-class SLC flash drive is about 3,500 IOPS. Anybody who tells you more than that is just blowing smoke.

I know of at least one customer who tested one of these all-flash appliances. It was nothing more than a rack-mountable server stuffed with Intel consumer-grade MLC drives (he took a pic and showed me). He saw a 25% increase in DB performance compared to the 50 15K drives that the DB is currently spread across. I'm sorry, but I'm not impressed. These devices also tend to be single points of failure unless you buy a second box and connect the two together to form a cluster. I have said it before and I'll say it again: never buy a SPOF storage solution unless your data is disposable!

As with VMware-dedicated storage appliances, I really have to question the value of the all-flash appliances except for very niche use cases. Flash storage for an existing array isn't that expensive. The real value in flash is leveraging a small amount of it to increase performance where it's needed, then fulfilling capacity requirements with cheaper high-capacity SATA or NL-SAS drives. This works, it's in use today in many environments, and it's really not that expensive. Why buy two devices when you can do it all with one?

3)      Oxygen: now we're getting into the start-ups that I see having good value propositions. I first became aware of Oxygen about 6-9 months ago and have been testing the technology out personally. I also have at least one client testing it out that was looking for a secure, Dropbox-like technology for their enterprise. I posted some previous thoughts on Oxygen here: https://hoosierstorage.wordpress.com/?s=oxygen

As technologies like Oxygen become more robust, I truly do see this being the next-generation file server within the enterprise. There is no doubt that we are witnessing the consumerization of IT, with tablets, smartphones, etc. Users need to access their business files on these devices, and if you don't provide them with the technology to do it, they will find a way using consumer technologies that you don't want them to be using. Oxygen in particular offers a great alternative, providing sync-and-share capabilities between your PC and mobile devices while retaining the safety and security of keeping data inside the corporate firewall.

4)      Atlantis: when I first saw the Atlantis ILIO appliance in use, I couldn't help but be impressed. Storage performance with VDI is a problem many shops encounter, and when a company can cut that down by 90%, it definitely turns heads. Plus, unlike the dedicated physical appliances I mentioned above, Atlantis runs as a vApp and can leverage your existing SAN environment (or local storage in some cases). Rather than me doing the talking, I recommend taking a look at this article for a deep dive on Atlantis: http://myvirtualcloud.net/?p=2604. I'm currently evaluating Atlantis in my employer's demo lab – so far, so good. I'm also working on a model to see just how (or whether) it ends up being more cost-effective than a traditional SAN leveraging some SSDs.

That’s it for now.  Other technologies I hope to be discussing soon include Actifio and Nirvanix.

Categories: SAN, VMware

Is the end of the File Server finally in sight?

December 28, 2011 Leave a comment

A year ago I wrote an article detailing my thoughts on how greatly exaggerated the predictions of the file server's imminent death truly were. A few years back, many thought the file server would be gone by now, replaced by SharePoint or other similar content portals. Today, file servers (herein referred to as NAS) are alive and well, storing more unstructured content than ever before. You can read the original article here: http://bit.ly/t573Ry

In summary, the main reasons why NAS has not disappeared are:

  • Much of the content stored on NAS is simply not suitable for a database, and middleware technologies that let the data stay on NAS while being presented as if it were in the database add complexity.
  • Legacy environments are often too big to accommodate a cost-effective migration of all user and department shared files into a new repository.
  • Legacy environments often have legacy apps that were hard-coded to use UNC paths or mapped drive letters.
  • Many businesses in various industries have instruments or machinery that write data to a network share using the commonly accepted CIFS and NFS protocols.
  • The bulk of file growth today is in larger rich-media formats, which are not well-suited to SharePoint.
  • NAS is a great option for VMware datastores using NFS.

The other day I found myself in a presentation where the "file server is dead" claim was made once again, and after seeing examples of some impressive technology hitting the street, the very thought crossed my mind as well. What's driving the new claims? Not just cloud storage (internal or external), but more specifically cloud storage with CIFS/NFS gateways and sync-and-share capabilities for mobile devices.

EMC's Atmos is certainly one technology playing in this space; another is Nirvanix. I've also had some exposure to Oxygen Cloud and am really impressed with their corporate-IT-friendly, Dropbox-like offering. So how do these solutions replace NAS? Most would agree that the consumerization of corporate IT is a trend going on in the workplace right now. Many companies are considering "bring your own device" deployments instead of supplying desktops and laptops to everyone. Many users (such as doctors) are adopting tablet technology on their own to make themselves more productive at work. Additionally, many users are using consumer-oriented websites like Dropbox to collaborate at work. The cloud storage solutions augment or replace the file server by providing functionality similar to these public cloud services, but with the data residing inside the corporate firewall. Instead of a home drive or department share, a user gets a "space" with a private folder and shared folders. New technologies allow that shared space to be accessed by traditional NFS or CIFS protocols, as a local drive letter, via mobile devices, or via a web-based interface. Users can also generate links that expire within X hours or days, allowing an external user to access one of their files without the need to email a copy of the document or put it out on Dropbox, FTP, etc.
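The expiring-link mechanics are worth a quick illustration. Implementations vary by vendor, but a common pattern is an HMAC-signed URL with an embedded expiry time – here's a generic sketch (the host name and parameter names are made up, not any particular product's API):

    import hashlib, hmac, time

    SECRET = b"server-side-secret"  # never leaves the corporate firewall

    def make_share_link(path: str, ttl_hours: int) -> str:
        expires = int(time.time()) + ttl_hours * 3600
        msg = ("%s|%d" % (path, expires)).encode()
        sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return "https://files.example.com%s?expires=%d&sig=%s" % (path, expires, sig)

    def verify(path: str, expires: int, sig: str) -> bool:
        if time.time() > expires:
            return False  # link has lapsed
        msg = ("%s|%d" % (path, expires)).encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sig)

    print(make_share_link("/depts/legal/contract.docx", ttl_hours=48))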

The one challenge I see is that no single solution does everything yet, meaning CIFS/NFS, web-based access, and mobile sync and share. Atmos can do CIFS/NFS, but mobile device access requires something like Oxygen. Nirvanix also does just CIFS/NFS. Oxygen by itself isn't really set up to be an internal CIFS/NFS gateway; it's primarily intended for web/mobile sync-and-share use cases. Panzura, Nasuni, and others offer CIFS/NFS or iSCSI gateway access to the cloud, but they don't offer sync and share to mobile devices. You could certainly cobble together something that does everything by putting appliances in front of gateways that sit in front of a storage platform, but then it starts to become difficult to justify the effort, and you'd also need to plan on re-architecting within 12-18 months when more streamlined solutions are available.

Either way, file sharing is still an exciting place to be, with lots of change occurring in the industry. I can definitely see the possibility of home drives and department/workgroup shares moving into a private cloud offering, but the concept of file sharing is certainly still alive and well, and CIFS/NFS isn't going anywhere anytime soon. I don't like to make predictions, but at this point my best guess is that the winner in this race will be the technology that best integrates legacy NFS/CIFS not just with "cloud storage" but with the web-friendly and mobile device access that is accelerating the consumerization trend.

Categories: Cloud, EMC, NAS, SAN

Implemented virtual desktops? Please contribute to this survey!

November 18, 2011 Leave a comment

Storage IOPS are one of the most important considerations when sizing a virtual desktop environment. Most folks do not have a good handle on the breakdown of writes vs. reads in their Windows desktop environment, which is incredibly important to know when implementing virtual desktops, since it has a huge impact on disk array performance when using RAID5. Think a "heavy" I/O user generates about 20 IOPS during a steady-state workload? Not if you're using RAID5 disk! It can actually be as high as 70 back-end IOPS.

Originally, many VDI vendors published numbers indicating that the mix of writes vs. reads in a desktop environment was about 50/50 (which is still way more writes than the average server workload, which usually follows the 80/20 rule in favor of reads). I've also seen 60/40 and 70/30 thrown about more recently. Andre Leibovici (http://myvirtualcloud.net), a well-known authority on VDI, is regularly seeing steady-state workloads approaching 80% writes. In that scenario, your 20-IOPS heavy user ends up generating 68 IOPS on the RAID5 disk array, thanks to the write overhead of RAID5: ((20 x 80%) x 4) + (20 x 20%) = 68.
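A quick way to see how much the assumed write ratio matters (using the same RAID5 write penalty of 4 from the formula above):

    def backend_iops(frontend, write_ratio, penalty=4):
        # Each RAID5 write costs 4 disk IOs; reads pass through 1:1
        return frontend * write_ratio * penalty + frontend * (1 - write_ratio)

    for w in (0.50, 0.60, 0.70, 0.80):
        print("%d%% writes -> %.0f back-end IOPS per 20-IOPS heavy user"
              % (w * 100, backend_iops(20, w)))
    # 50% -> 50, 60% -> 56, 70% -> 62, 80% -> 68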

The best way to get more clarity on this topic is to gather more data from real-world customers. This is where your help is much needed if you have implemented virtual desktops, even if it's just a POC so far. Please click the link below and follow the instructions on how to contribute to the survey.

The VDI Read/Write Ratio Challenge

http://myvirtualcloud.net/?p=2352

 

 

Categories: Uncategorized