Archive

Archive for the ‘SAN’ Category

Running List of “N” Storage Companies

January 25, 2013 2 comments

One of my work colleagues (Mike Ellis @v2mike) and I have been keeping track lately of any and all storage companies that have a name starting with the letter N. Whether it is purely coincidence or something else, I have yet to figure out. But, it is quite interesting how many companies have picked this letter to start their name. Any conspiracy theories out there?

NetApp
Nimble
Nexenta
Nutanix
Nirvanix
Nexsan
Netgear (a bit of a stretch)
Nasuni
Nexgen

ADDED 1/27/13
Nimbus

If you come up with any others, please comment and I will get it added!

Categories: SAN

A new crop of storage start-ups has arrived

March 5, 2012 1 comment

About two years ago I was working at EMC, and the company had just completed the acquisition of DataDomain, which was one of the last “hot” storage-related start-ups around. There were certainly other storage start-ups, but nobody really had a story that screamed “come here, get some shares, and get rich when we get bought”. A prime example is Xiotech (now Xio). Xio’s value prop and future are quite fuzzy from my perspective, but somehow they keep hanging in there. At the time, everybody wondered who the next hot startup would be, or even if there would be another one. Compellent was the closest thing one could find, and they were soon snatched up by Dell.

Fortunately for technology, innovation is constant.  New ideas are always being generated, particularly within the realm of data storage.  Anyone who analyzes the balance sheets of EMC, NetApp, and others realizes that data storage is a profitable business, much more so than servers.  I believe this partly explains why we see so many startups in the data storage arena (venture capitalists see the $$$), and why large conglomerates accustomed to skinny margins, like Dell, are beefing up their storage and services portfolios.

If you follow social media, then you’re already well-aware of Tintri, Whiptail, PureStorage, Violin, Atlantis, Oxygen, Nirvanix, and more.   Today, I’ll give my thoughts on some of the most-discussed startups.

1) Tintri – my thoughts on Tintri were already published in an earlier post here: https://hoosierstorage.wordpress.com/2011/04/19/tintri-whats-the-big-deal/.  I heard from a handful of Tintri folks after posting that, and they were none too happy with my post.  Some of them are now gone from Tintri.  Ultimately, my thoughts are largely still the same.  It’s my understanding Tintri now has HA controllers, which is a big plus, but I still question the entire market of dedicated VMware storage appliances.  EMC, the parent of VMware and the largest storage vendor, is as focused as I’ve ever seen them on increasing their integration with Microsoft technologies, particularly Hyper-V.  Joe Tucci knows he can’t tie his cart to just one horse, just like he knew he had to let VMware remain independent back in 2006.  Similarly, Veeam has been putting tons of effort into increasing their functionality with Hyper-V.  These companies are both leaders in their respective market segments; do you think they would be doing this if they expected no value from it?  Most people buy storage arrays with the intent of using them for at least 5 years, and 5 years is a lifetime in the technology world.  There’s no guarantee that VMware will be the dominant hypervisor in 2-3 years.  I certainly hope that they are, and if they continue to out-innovate everyone else they should be.  However, if you buy a dedicated storage appliance for VMware, and in 2 years the world is moving to Hyper-V 4.0, what then?  Microsoft is unlikely to make Hyper-V work with NFS anytime soon.  Would you buy a dedicated storage device for SharePoint and nothing else?  There are still use cases for physical servers, and some of those physical servers need SAN storage; a dedicated VMware storage box can’t help there.  Why run two storage devices when you can do it all with one?

2) Whiptail and other dedicated flash arrays: Dedicated flash arrays seem to be generating quite a lot of buzz these days.  They all share a lot of similarities; in most cases the claim is made that by leveraging cheaper consumer-grade MLC flash drives and adding some fancy magic on top, they can get a much bigger bang for the buck from those drives and make them “enterprise-class”.  They also make crazy claims like “200,000 IOPS”, a number you simply won’t see in the real world.  Real-world numbers for enterprise-class SLC flash drives are around 3,500 IOPS per drive.  Anybody who tells you more than that is just blowing smoke.

I know of at least one customer who tested one of these all-flash appliances.  It was nothing more than a rack-mountable server stuffed with Intel consumer-grade MLC drives (he took a pic and showed me).  He saw a 25% increase in DB performance compared to the 50 15K drives the DB is currently spread across.  I’m sorry, but… I’m not impressed.  These devices also tend to be single points of failure, unless you buy a second box and connect them together to form a cluster.  I have said it before and I’ll say it again: never buy a SPOF storage solution unless your data is disposable!

As with VMware-dedicated storage appliances, I really have to question the value of all-flash appliances outside of very niche use cases.  Flash storage for an existing array isn’t that expensive.  The real value in flash comes from leveraging small amounts of it to boost performance where it’s needed, then fulfilling capacity requirements with cheaper high-capacity SATA or NL-SAS drives.  This works, it’s in use today in many environments, and it’s really not that expensive.  Why buy two devices when you can do it all with one?
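
To put some rough numbers behind that, here’s a quick back-of-envelope sketch in Python.  Every drive spec and price in it is an assumption I made up for illustration, not a vendor quote, so treat it as a template rather than a sizing tool:

# Back-of-envelope tiering comparison. All drive specs and prices below are
# illustrative assumptions, not vendor-quoted figures.

DRIVES = {
    # name: (usable_iops_per_drive, capacity_tb, rough_unit_cost_usd)
    "slc_flash": (3500, 0.2, 4000),
    "nl_sas_7k": (75, 2.0, 500),
}

def config_summary(name, counts):
    iops = sum(DRIVES[d][0] * n for d, n in counts.items())
    tb = sum(DRIVES[d][1] * n for d, n in counts.items())
    cost = sum(DRIVES[d][2] * n for d, n in counts.items())
    print(f"{name:<24} {iops:>8,} IOPS  {tb:>6.1f} TB raw  ~${cost:>9,}")

# Workload target (assumed): 20,000 IOPS and 20 TB of capacity.
config_summary("All-flash shelf", {"slc_flash": 100})                    # capacity-bound
config_summary("Hybrid (flash + NL-SAS)", {"slc_flash": 6, "nl_sas_7k": 10})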

3) Oxygen: Now we’re getting into some start-ups that I see having good value propositions.  I first became aware of Oxygen about 6-9 months ago and have been testing the technology out personally.  I also have at least one client, looking for a secure, Dropbox-like technology for their enterprise, that is testing it out.  I posted some previous thoughts on Oxygen here:  https://hoosierstorage.wordpress.com/?s=oxygen

As technologies like Oxygen become more robust, I truly do see this being the next-generation file server within the enterprise.  There is no doubt that we are witnessing the consumerization of IT, with tablets, smartphones, etc.  Users need to access their business files on these devices, and if you don’t provide them with the technology to do it, they will find a way using consumer technologies that you don’t want them to be using.  Oxygen in particular offers a great alternative, providing sync-and-share capabilities between your PC and mobile devices while retaining the safety and security of keeping data inside the corporate firewall.

4) Atlantis: When I first saw the Atlantis ILIO appliance in use, I couldn’t help but be impressed.  Storage performance with VDI is a problem many shops encounter, and when a company can cut that I/O load down by 90%, it definitely turns heads.  Plus, unlike the dedicated physical appliances I mentioned above, Atlantis runs as a vApp and can leverage your existing SAN environment (or local storage in some cases).  Rather than me do the talking, I would recommend taking a look at this article for a deep dive on Atlantis: http://myvirtualcloud.net/?p=2604.  I’m currently evaluating Atlantis in my employer’s demo lab – so far so good.  I’m also working on a model to see just how (or if) it ends up being more cost-effective than a traditional SAN leveraging some SSDs.
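
For the curious, the model is nothing fancy; conceptually it looks something like the sketch below, where every input (desktop count, IOPS per desktop, license and drive pricing) is a placeholder assumption you’d swap for your own measurements and quotes:

# Rough VDI storage cost model: does an IO-reducing vApp beat just adding SSDs
# to the existing SAN? Every number here is a placeholder assumption.

desktops = 500
iops_per_desktop = 12            # steady-state, write-heavy VDI IOPS (assumed)
io_reduction = 0.90              # vendor-claimed reduction from the vApp (assumed)

raw_iops = desktops * iops_per_desktop
remaining_iops = raw_iops * (1 - io_reduction)

# Option A: vApp licensing plus a modest disk config for the remaining IO.
vapp_license_per_desktop = 50    # assumed
sas_15k_iops, sas_15k_cost = 180, 600
option_a = (desktops * vapp_license_per_desktop
            + (remaining_iops / sas_15k_iops) * sas_15k_cost)

# Option B: no vApp, absorb the full IOPS load with SSDs on the array.
ssd_iops, ssd_cost = 3500, 4000  # assumed per-drive figures
option_b = (raw_iops / ssd_iops) * ssd_cost

print(f"Raw IOPS: {raw_iops:,}  after reduction: {remaining_iops:,.0f}")
print(f"Option A (vApp + 15K SAS): ~${option_a:,.0f}")
print(f"Option B (SSD on existing SAN): ~${option_b:,.0f}")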

That’s it for now.  Other technologies I hope to be discussing soon include Actifio and Nirvanix.

Categories: SAN, VMware

Is the end of the File Server finally in sight?

December 28, 2011 Leave a comment

A year ago I wrote an article detailing my thoughts on how greatly exaggerated predictions of the imminent death of the file server truly were. A few years back many thought the file server would be gone by now, replaced by SharePoint or other similar content portals. Today, file servers (herein referenced as NAS) are alive and well, storing more unstructured content than ever before. You can read the original article here: http://bit.ly/t573Ry

In summary, the main reasons why NAS has not disappeared are:

  • Much of the content stored on NAS is simply not suitable for being stored in a database, and middleware technologies that allow the data to stay on NAS but be presented as if it were in the database add complexity.
  • Legacy environments are often too big to accommodate a migration of all user and department shared files into a new repository in a cost-effective manner.
  • Legacy environments often have legacy apps that were hard-coded to use UNC paths or mapped drive letters.
  • Many businesses have instruments or machinery that write data to a network share using the commonly accepted CIFS and NFS protocols.
  • The bulk of file growth today is in larger rich media formats, which are not well-suited for SharePoint.
  • NAS is a great option for VMware using NFS.

The other day I found myself in a presentation where the “file server is dead” claim was made once again, and after seeing some examples of impressive technology hitting the street, the very thought crossed my mind as well. What’s driving the new claims? Not just cloud storage (internal or external), but more specifically cloud storage with CIFS/NFS gateways and sync-and-share capabilities with mobile devices.

EMC’s Atmos is certainly one technology playing in this space; another is Nirvanix. I’ve also had some exposure to Oxygen Cloud and am really impressed with their corporate-IT-friendly, DropBox-like offering. So how do these solutions replace NAS? Most would agree that the consumerization of corporate IT is a trend going on in the workplace right now. Many companies are considering “bring your own device” deployments instead of supplying desktops and laptops to everyone. Many users (such as doctors) are adopting tablet technology on their own to make themselves more productive at work. Additionally, many users are using consumer-oriented websites like DropBox to collaborate at work. The cloud storage solutions augment or replace the file server by providing functionality similar to these public cloud services, but the data resides inside the corporate firewall. Instead of a home drive or department share, a user gets a “space” with a private folder and shared folders. New technologies allow that shared space to be accessed via traditional NFS or CIFS protocols, as a local drive letter, via mobile devices, or via a web-based interface. Users can also generate links that expire within X number of hours or days, allowing an external user to access one of their files without needing to email a copy of the document or put it out on DropBox, FTP, etc.
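
None of these vendors publish their link-signing internals, but conceptually an expiring link is usually just a signed URL: the server signs the file path plus an expiry timestamp with a secret it keeps, and refuses the request once the clock passes the expiry. A generic sketch (not any particular product’s API; the host name, parameters, and secret are all made up):

# Generic illustration of an expiring share link using an HMAC-signed URL.
# This is not any vendor's actual API; the paths, parameters, and secret are
# all made up for the example.
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"server-side-secret"      # never leaves the corporate firewall

def make_share_link(path: str, ttl_hours: int) -> str:
    expires = int(time.time()) + ttl_hours * 3600
    msg = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return "https://files.example.com/share?" + urlencode(
        {"path": path, "expires": expires, "sig": sig})

def verify_share_link(path: str, expires: int, sig: str) -> bool:
    if time.time() > expires:
        return False                 # link has expired
    msg = f"{path}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(make_share_link("/dept-share/q4-forecast.xlsx", ttl_hours=48))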

The one challenge I see is that no single solution does everything yet, meaning CIFS/NFS, web-based access, and mobile sync-and-share. Atmos can do CIFS/NFS, but mobile device access requires something like Oxygen. Nirvanix also does just CIFS/NFS. Oxygen by itself isn’t really set up to be an internal CIFS/NFS access gateway; it’s primarily intended for web/mobile sync-and-share use cases. Panzura, Nasuni, etc. offer CIFS/NFS or iSCSI gateway access to the cloud, but they don’t offer sync-and-share to mobile devices. You could certainly cobble together something that does everything by putting appliances in front of gateways that sit in front of a storage platform, but then it starts to become difficult to justify the effort. You’d also have to consider the fact that you’ll need to re-architect within 12-18 months when more streamlined solutions are available. Either way, file sharing is still an exciting place to be, with lots of change occurring in the industry. I can definitely see the possibility of home drives and department/workgroup shares moving into a private cloud offering, but the concept of file sharing is certainly still alive and well, and CIFS/NFS isn’t going anywhere anytime soon. I don’t like to make predictions, but at this point my best guess is that the winner in this race will be the technology that does the best job of integrating legacy NFS/CIFS not just with “cloud storage”, but with the web-friendly and mobile device access that is accelerating the consumerization trend.

Categories: Cloud, EMC, NAS, SAN

A new true Unified storage contender in the market

October 13, 2011 1 comment

Most folks have heard of Unified storage by now and are well aware of the basic capabilities, namely NAS and SAN in a single box.  NetApp and EMC have been the primary players in this market for some time, and to date have been the only vendors to offer a true Unified solution in the enterprise arena.  In using the term “true Unified”, I’m looking at the technology and determining whether it leverages a purpose-built storage OS to handle SAN and NAS data delivery to hosts.  There are other vendors out there claiming they have Unified capabilities because it is a compelling feature for customers, but by my definition taking a SAN and throwing a Windows Storage Server on top to do CIFS does not count as a true Unified solution.  I’m less concerned about the semantics of whether there are truly two code bases in the box, one serving SAN and the other serving NAS, as long as they operate from a common storage pool and have a single point of management.

I figured the next vendor with a true Unified solution would be Dell, as multiple signs have been pointing to them integrating some NAS technology they acquired into their existing SAN platforms (Compellent and EqualLogic), but surprisingly, the announcement yesterday came from IBM.  IBM took the V7000 array they released last year based on SVC technology and added Unified functionality to it by leveraging their SONAS (Scale-out NAS) product.  I consider this to be a pretty major announcement, as NetApp and EMC can no longer claim superiority as the only Unified storage vendors with native solutions.  IBM could sell OEM’d NetApp arrays (N-Series) in the past if the situation warranted, and it will be interesting to see if this announcement is the beginning of the end for the IBM-NetApp OEM relationship.

In the case of the V7000, IBM has integrated the SONAS code into the solution and made one GUI to manage it.  Because the V7000 runs SVC-based code and the NAS is handled by SONAS components, it does not appear to be a unified code base like NetApp’s, but rather two code bases tied together with a single GUI, like the VNX.  From a picture I saw on Tony Pearson’s blog, they are including two IBM servers in the stack (called “File Modules”, akin to data movers or filers) that run active-active in front of the V7000 controllers.

I had some exposure to SONAS when I worked at a large pharma and saw its development first-hand for a project we undertook, though we never bought it.  IBM hired the guy who created SAMBA (Andrew Tridgell) to architect an active-active clustered SAMBA architecture to run on top of IBM’s Global Parallel File System (GPFS).  It was a very interesting system, and Andrew Tridgell still ranks as one of the smartest people I have ever met, but back in 2007-2008 it was just a little too new.  Fast forward three years and I’m sure the system is much more robust and fully baked, though I’m not 100% sold on using SAMBA for CIFS access in the enterprise.

Because SONAS/GPFS is a scale-out system, the NAS functionality in the V7000 does have an advantage over EMC and NetApp in that the same file system can be served out of the two File Modules simultaneously.  However, from what I can see, the V7000 appears to be limited to just two File Modules, unlike a full SONAS/GPFS solution or something like Isilon.

Only time will tell if the V7000 Unified will be successful and whether IBM will keep development of the product a hot priority.  Some folks would point to the legacy DS boxes as an example of a technology that was good when it was first released, but then sat for years without any major updates while the industry continued to evolve.  At least for the immediate future, though, the V7000 is certainly worthy competition in the Unified space and an example of how competition is good for the industry overall, as it forces the big gorillas to stay on their toes and continue to find new ways to innovate.

Further reading:

http://searchstorage.techtarget.com/news/2240100771/IBM-adds-Storwize-V7000-Unified-array-for-multiprotocol-storage

https://www.ibm.com/developerworks/mydeveloperworks/blogs/storagevirtualization/entry/announcing_ibm_storwize_v7000_unified_and_v6_3_software26?lang=en

Categories: IBM, NAS, SAN

The Importance of Storage Performance Benchmarks

September 15, 2011 2 comments

As someone who scans the daily flood of storage news, I’ve started to notice an uptick over the past year in the number of articles and press releases highlighting various vendors who have “blown away” a benchmark score of some sort and claim ultimate superiority in the storage world. Two weeks later, another vendor is trumpeting that they’ve beaten the score that was just posted. With the numbers we’re seeing touted, I’m sure 1 bazillion IOPS must be right around the corner.

Most vendors who utilize benchmarks tend to be storage startups looking to get some publicity for themselves, and there’s nothing wrong with that. You gotta get your name out there somehow. For the longest time, the dominant player in the storage world, EMC, refused to participate in benchmarks, saying they were not representative of real-world performance. I don’t disagree with that; in many cases benchmarks are not indicative of real-world performance. Nevertheless, now even EMC has jumped into the fray. Perhaps they decided that not participating costs more in negative press than participating does.

What does it all mean for you? Here are a couple things to consider:

  1. Most benchmark tests are not indicative of real-world results. If you want to use a benchmark stat to get a better sense of what the max system limits are, that’s fine. But don’t forget what your requirements truly are, and measure each system against that. In most cases, customers use a storage array for mixed workloads from a variety of business apps and use cases (database, email, file, VMware, etc.).  These different applications all have different I/O patterns, and the benchmark tests don’t simulate this “real-world” mixed I/O pattern.  Benchmarks are heavily tilted in favor of niche use cases with very specific workloads; I’m sure there are cases out there where the benchmarks do matter, but for 95% of storage buyers, they don’t. The bottom line: be sure the system has enough bandwidth and spindles to handle your real MB/sec and IOPS requirements (a rough sizing sketch follows this list). Designing that properly will be much more beneficial to you than getting an array that recently did 1 million IOPS in a benchmark test.
  2. Every vendor will reference a benchmark that works in their favor. Pretty much every vendor can pull a benchmark stat out of their hat that favors their systems above all others.
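
To show what I mean by designing against your real requirements, here’s a minimal sizing sketch using the commonly quoted per-drive IOPS figures and RAID write penalties. These are rules of thumb I’m assuming for illustration; your vendor’s numbers will differ:

# Rough spindle-count sizing from measured front-end IOPS and read/write mix.
# Per-drive IOPS figures and RAID write penalties are common rules of thumb,
# not guarantees from any particular vendor.

DRIVE_IOPS = {"15k_fc": 180, "10k_sas": 140, "7.2k_sata": 75}
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def spindles_needed(front_end_iops, write_pct, drive, raid):
    writes = front_end_iops * write_pct
    reads = front_end_iops - writes
    back_end_iops = reads + writes * RAID_WRITE_PENALTY[raid]
    return back_end_iops, -(-back_end_iops // DRIVE_IOPS[drive])  # ceiling division

# Example: 8,000 measured IOPS, 30% writes, 15K drives in RAID5.
back_end, drives = spindles_needed(8000, 0.30, "15k_fc", "raid5")
print(f"Back-end IOPS: {back_end:,.0f}  -> roughly {drives:.0f} x 15K drives")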

Here’s an example I saw last year when working with a customer who evaluated 3 different vendors (IBM, NetApp, and EMC), and how I helped the customer get clear on what was real. In this case, both non-EMC vendors were referencing SPC numbers that showed how badly an EMC CX3-40 performed relative to their platforms. A couple alarms went off for me immediately:

  1. The CX3-40 was a generation old relative to the other platforms. The CX4 was the current platform on the market (now replaced by VNX). In other words, not an apples-to-apples comparison.
  2. At the time the CX3-40 existed, EMC did not participate in SAN benchmarks for its mid-range or high-end arrays.

I took a look at the V7000 SPC-1 benchmark and came to some interesting conclusions.  Here is a chart showing how the V7000 performed on the benchmark alongside other competitors:


The V7000 scored 56,500.  Interesting to note: since the box only supported 120 drives at the time, IBM had to utilize the SVC code in it to add on a DS5020, which allowed them to bring the configuration up to 200 drives total.  They put 80 15K RPM drives in the DS5020 – higher-speed drives the V7000 didn’t support natively at the time.  What’s important to note about the CX3-40 results in the SPC-1 listings is that this was a CX3-40 that NetApp purchased, ran the test on, and submitted the results for without EMC’s permission.  I don’t care what your vendor affiliation is, that’s not a fair fight.  EMC had no input into how the array was configured and tuned.  Even though the array could hold 240 drives, it was only configured with 155.  The CX3-40 scored 25,000.  Let’s make a realistic assumption that if EMC had configured and tuned the array for the benchmark as other vendors did, it could have done at least 25% better.  This would give it a score of roughly 31,000.  The CX3-40 was a predecessor to the CX4-240, and they both hold 240 drives.  Performance and spec limits pretty much doubled across the board from CX3 to CX4, because EMC implemented a 64-bit architecture with the CX4’s release in 2008.  So, again making a realistic assumption, let’s take the 31,000 result of the CX3-40 and double it to create a theoretical score for the CX4-240 of around 62,000.
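
Laying that extrapolation out explicitly (and to be clear, the 25% tuning uplift and the 2x generational factor are my own assumptions, not published results):

# My rough back-of-envelope adjustment of the published CX3-40 SPC-1 result.
# The 25% tuning uplift and the 2x generational factor are assumptions.
cx3_40_published = 25_000          # SPC-1 IOPS, submitted by a competitor
tuning_uplift = 1.25               # assume a properly tuned config does 25% better
cx3_40_adjusted = cx3_40_published * tuning_uplift        # roughly 31,000
cx4_240_theoretical = cx3_40_adjusted * 2                 # CX3 -> CX4 roughly doubled
v7000_published = 56_500

print(f"CX3-40 adjusted:      {cx3_40_adjusted:,.0f}")
print(f"CX4-240 theoretical:  {cx4_240_theoretical:,.0f}")
print(f"V7000 published:      {v7000_published:,}")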

If I look at other arrays in the results list that are comparable to the CX4-240, such as the DS5300 or FAS3000 series, this theoretical score is right in the ballpark.  I would hope most would agree that this shows all the arrays in scope were within striking distance of each other.  What exactly do these numbers mean relative to your business? Not much. You can’t design a system for your business needs using these numbers. When most customers analyze their performance requirements, they have figures for IOPS, throughput, and latency that they need to meet to ensure good business application performance, not a theoretical benchmark score target.

Benchmarks can certainly be interesting, and I admit sometimes I think it’s cool to see a system that did X GB per second of throughput or X million IOPS, but my recommendation is don’t get too spun up on them in your search for a storage platform.  Every vendor has a benchmark that makes them look the best.  Instead, use your own metrics or work with a trusted business partner who can help you gather the data specific to your environment and evaluate each technology against how well it meets your business needs.

Categories: EMC, NAS, NetApp, SAN

Update on FCoE: The Current State of Real World Deployments

May 27, 2011 Leave a comment

FCoE has been out in the marketplace for approximately two years now, and I thought it’d be good to discuss what we’re seeing in the real world regarding deployments.

Background

For those not familiar with Fibre Channel over Ethernet (FCoE), it is being hailed as a key new technology that is a first step towards consolidation of the Fibre Channel storage networks and Ethernet data networks. This has several benefits including simplified network management, elimination of redundant cabling, switches, etc., as well as reduced power and heat requirements. Performance over the Ethernet network is similar to a traditional Fibre Channel network, because the 10Gb connection is “lossless”. Essentially, FCoE encapsulates FC frames in Ethernet packets and uses Ethernet instead of Fibre Channel links. Underneath it all, it is still Fibre Channel. Storage management is done in a very similar manner to traditional FC interfaces.
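
As a rough illustration of why the performance claim holds up, here’s a back-of-envelope comparison of usable bandwidth per link. The line rates and encoding efficiencies are the standard published figures; the small FCoE framing overhead is my own approximation:

# Rough usable-bandwidth comparison of 8Gb FC vs FCoE over 10GbE.
# Line rates and encodings are the standard published figures; the FCoE
# framing overhead is an approximation on my part.

def usable_mb_per_sec(line_rate_gbaud, encoding_efficiency, framing_overhead=0.0):
    data_gbps = line_rate_gbaud * encoding_efficiency * (1 - framing_overhead)
    return data_gbps * 1000 / 8    # Gbit/s -> MB/s

fc_8g    = usable_mb_per_sec(8.5, 8 / 10)              # 8b/10b encoding
fcoe_10g = usable_mb_per_sec(10.3125, 64 / 66, 0.03)   # 64b/66b + ~3% FCoE framing

print(f"8Gb FC:        ~{fc_8g:.0f} MB/s per link")
print(f"FCoE on 10GbE: ~{fcoe_10g:.0f} MB/s per link")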

Adoption

Across the RoundTower customer base in the Ohio Valley, adoption is still relatively low. I would attribute this to the fact that many customers in the Ohio Valley have found that traditional 1GbE iSCSI bandwidth will suffice for their environment. They never had a need to implement Fibre Channel, hence there is little need to move to an FCoE environment. The most common FCoE switch is the Nexus 5000. Although some customers may not implement FCoE, we are seeing significant adoption of the Nexus line, with the 5000 often being used as a straight 10GbE switch. Even for medium-sized businesses that haven’t seen a need to adopt 10GbE, the drive to virtualize more will require greater aggregate network bandwidth at the ESX server, making 10GbE a legitimate play. In this case, the customer can simply continue to run iSCSI or NFS over the 10GbE connection without implementing FCoE.

NFS and iSCSI are great, but there’s no getting away from the fact that they depend on TCP retransmission mechanics. This is a problem in larger environments, which is why Fibre Channel has continued to remain a very viable technology. The higher you go in the network protocol stack, the longer the latencies involved in various operations. Detecting a loss of connection and recovering state can take seconds, and often many tens of seconds; EMC, NetApp, and VMware recommend that timeouts for NFS and iSCSI datastores be set to at least 60 seconds. FCoE expects most transmission-loss handling to be done at the Ethernet layer, with lossless congestion handling and the usual CRC mechanisms for line errors. This means link-state sensitivity is in the millisecond or even microsecond range. This is an important difference, and it is ultimately why iSCSI didn’t displace Fibre Channel in larger environments.

Until recently, storage arrays did not support native FCoE connectivity. NetApp was first to market with FCoE support, though there were some caveats and the technology was “Gen 1”, which most folks prefer to avoid in production environments. Native FCoE attach also did not support a multi-hop environment. FCoE has since been ratified as a standard, some of the minor “gotchas” have been taken care of with firmware updates, and EMC has also released UltraFlex modules for the CX/NS line that allow you to natively attach your array to an FCoE-enabled switch. These capabilities will most certainly accelerate the deployment of FCoE.

At the host level, early versions of the Converged Network Adapter (CNA) were actually two separate chipsets included on a single PCI card. This was a duct-tape-and-baling-wire way to get host support for FCoE out to the market quickly. Now, Gen 2 CNAs based on a single chipset are hitting the market. FCoE on the motherboard is also coming in the not-too-distant future, and these developments will also contribute to accelerated adoption of FCoE.

Recommendations

The best use case for FCoE is still customers who are building a completely new data center or refreshing their entire data center network. I would go so far as to say it is a no-brainer to deploy 10GbE infrastructure in these situations. For customers with bandwidth requirements exceeding 60MB/sec, it will most certainly make sense to leverage FCoE functionality. With a 10GbE infrastructure already in place, the uplift to implement FCoE should be relatively minimal. One important caveat to consider before implementing a converged infrastructure is to have organizational discussions about management responsibility for the switch infrastructure. This particularly applies to environments where the network team is separate from the storage team. Policies and procedures will have to be put in place for one group to manage the device, or ACLs and a rights-delegation structure will need to be created that allow the LAN team to manage LAN traffic and the storage team to manage SAN traffic over the same wire.

The above option is a great use case, but it still involves a fair number of pieces and parts, despite being streamlined compared to an environment where LAN and SAN are completely separate. Another use case for implementing FCoE today that is incredibly simple and streamlined is to make it part of a server refresh. The Cisco UCS B-series blade chassis offers some impressive advantages over other blade options, and FCoE is built right in. This allows the management and cabling setup of the Cisco UCS to be much cleaner compared to other blade chassis options. With FCoE already part of the UCS chassis right out of the box, there are relatively few infrastructure changes required in the environment, management is handled from the same GUI as the blade chassis, and there is no need to do any cabling other than perhaps adding an FC uplink to an existing FC SAN environment if one exists.

Categories: Cisco, SAN

Tintri – What’s the big deal?

April 19, 2011 2 comments

You may have seen several news articles a couple weeks back about the hottest new thing in VMware storage – Tintri. Their marketing department created quite a buzz with most major IT news outlets picking up the story and proclaiming that the Tintri appliance was the future of VMware storage.

Instead of re-hashing what’s been said already, here’s a brief description from CNET:

Tintri VMstore is a hardware appliance that is purpose-built for VMs. It uses virtual machine abstractions–VMs and virtual disks–in place of conventional storage abstractions such as volumes, LUNs, or files. By operating at the virtual machine and disk level, administrators get the same level of insight, control, and automation of CPU, memory, and networking resources as general-purpose shared-storage solutions.

A few more technical details from The Register:

The VMstore T440 is a 4U, rackmount, multi-cored, multi-processor, X86 server with gigabit or 10gigE ports to a VMware server host. It appears as a single datastore instance in the VMware vSphere Client – connecting to vCenter Server. Multiple appliances – nodes – can be connected to one vCenter Server to enable sharing by ESX hosts. Virtual machines (VMs) can be copied or moved between nodes using storage vMotion.

The T440 is a hybrid storage facility with 15 directly-attached 3.5-inch, 7,200rpm, 1TB, SATA disk drives, and 9 x 160GB SATA, 2-bit, multi-level cell (MLC) solid state drives (SSD), delivering 8.5TB of usable capacity across the two storage tiers. There is a RAID6 redundancy scheme with hot spares for both the flash and disk drives.

I was a bit skeptical as to how this could be much different from other storage options on the market today. Tintri claims that you don’t manage the storage; everything is managed by VM. The only logical way I could see this happening is if you’re managing files (with every VM being a file) instead of LUNs. How do you accomplish this? Use a native file system as your datastore instead of creating a VMFS file system on top of a block-based datastore. In other words, NFS.
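
To illustrate what “every VM being a file” buys you operationally, here’s a hypothetical sketch (the mount path and layout are assumptions, and this is obviously not Tintri’s software) that reports space per VM simply by walking an NFS datastore, something you can’t do against an opaque LUN:

# Hypothetical example: on an NFS datastore every VM is just a directory of
# files, so per-VM reporting is a simple directory walk. The mount point and
# layout are assumptions for illustration, not Tintri's implementation.
import os

DATASTORE = "/mnt/nfs-datastore"   # assumed NFS mount point, for illustration only

def dir_size_gb(path):
    total = 0
    for dirpath, _, filenames in os.walk(path):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass                # file disappeared mid-walk (snapshots, vMotion)
    return total / 1024**3

# Each top-level directory on the datastore is one VM's folder of .vmx/.vmdk files.
for vm in sorted(os.listdir(DATASTORE)):
    vm_path = os.path.join(DATASTORE, vm)
    if os.path.isdir(vm_path):
        print(f"{vm:<30} {dir_size_gb(vm_path):8.1f} GB")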

So, after doing a little research, it appears this box isn’t much more than a simple NAS with a slick GUI, doing some neat things under the covers with auto-tiering (akin to Compellent’s Data Progression or EMC’s sub-LUN FAST) and de-duplication. Instead of adding drives to a tray to expand, you expand by adding nodes. This makes for a nice story in that you scale performance as you scale capacity, but in the SMB market where this product is focused, I typically find the performance offered in a multi-core base unit is 10X more than the typical SMB customer needs. In that scenario, scaling by nodes starts to become expensive: you are re-buying the processors each time instead of just buying disks, it takes up more space in the rack, and it increases power/cooling costs compared to simply adding drives.
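
To make the scaling-economics point concrete, here’s a trivial sketch comparing the cost of adding capacity by whole nodes versus adding a drive shelf to an existing array. The unit costs are invented purely for illustration (the node capacity mirrors the 8.5TB usable figure quoted above):

# Illustrative-only comparison of scaling by whole nodes vs. adding drive
# shelves to an existing array. All prices are invented for the example.

node_cost = 40_000        # controllers + SSD + SATA drives in one appliance (assumed)
node_usable_tb = 8.5      # usable capacity per node, per the quoted specs

shelf_cost = 9_000        # 15-drive expansion shelf of 1TB SATA for an array (assumed)
shelf_usable_tb = 10.0

def cost_to_add(extra_tb, unit_cost, unit_tb):
    units = -(-extra_tb // unit_tb)     # ceiling: you buy whole units
    return units, units * unit_cost

for extra in (10, 30):
    n_units, n_cost = cost_to_add(extra, node_cost, node_usable_tb)
    s_units, s_cost = cost_to_add(extra, shelf_cost, shelf_usable_tb)
    print(f"+{extra} TB: {n_units:.0f} node(s) ~${n_cost:,.0f}  "
          f"vs {s_units:.0f} shelf/shelves ~${s_cost:,.0f}")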

Today, it appears the box does not offer dual controllers, replication, or iSCSI. iSCSI is something most SMB folks can probably go without, relying solely on NFS, which performs very similarly to iSCSI at comparable Ethernet speeds and can offer additional functionality. Replication is probably something most SMBs can also go without. I don’t see too many SMBs going down the VMware SRM path; most either don’t need that level of DR, or a solution like Veeam Backup and Replication fits their needs well (host-based SRM is also rumored to be coming later this year from VMware). The dual-controller issue is one I believe no customer should ever compromise on for production data, even SMB customers. I’ve seen enough situations over the years where storage processors, switches, or HBAs just die or go into a spontaneous reboot, and that’s with products that have been established in the marketplace for some time and are known to be reliable. With a single-controller system on Gen 1 equipment, you’re risking too much. With consolidated storage you’re putting all your eggs in one basket, and when you do that, it had better be a pretty darn good basket. The Register reported that a future release of the product will support dual controllers, which I would make a priority if I were running Tintri.

Tintri managed to create quite a splash, but of course only time will tell how successful this box is going to be. Evostor launched a similar VMware-centric storage product at VMworld a couple years ago, and now their official domain name has expired. Tintri will certainly have an uphill battle to fight. When I look at the competition Tintri is going to face, many of their claimed advantages have already been released in recent product refreshes by competitors. The VNXe is probably the box that competes best. The VNXe GUI is incredibly easy to use and makes no mention of LUNs or RAID groups, just like Tintri. It’s extremely cheap, and EMC has deep pockets, which will be tough for Tintri to compete with. The VNXe is built on proven, mature technology, while Tintri is Gen 1. It supports NFS with advanced functionality like de-dupe. Tintri has a small advantage here in that EMC’s de-dupe for VMs is post-process, while Tintri claims to support inline de-dupe (but only for the portion of VM data that resides on SSDs). This is probably using some of the intellectual property that the ex-Data Domain employees at Tintri provided. The VNXe also supports iSCSI and will support FCoE. The NetApp FAS2020 is also a competitor in this space, supporting many of the same things the VNXe does, although the GUI is nowhere near as simple. Tintri’s big advantages are that it supports SSD today and does sub-LUN auto-tiering, two things EMC put in the VNX but left out of the VNXe. It’s been stated the VNXe was supposed to get Flash drive support later this year, but there’s been no mention of auto-tiering support. Competition is good for end users, and my hope is that with competitors putting sub-LUN tiering in their products at the low end, it will force EMC’s hand to include FAST in the VNXe, because I think it will ultimately need it within 12-18 months to remain competitive. Whether or not the typical SMB even needs auto-tiering with Flash drives is another story, but once the feature is there and customers start getting hyped about it, it’ll need to be there.

Further reading:

http://www.theregister.co.uk/2011/03/24/tintri_vmware_storage_appliance/

http://www.tintri.com/

http://news.cnet.com/8301-13846_3-20045989-62.html

http://www.yellow-bricks.com/2011/03/24/tintri-virtual-machine-aware-storage/

Categories: NAS, SAN, Virtualization, VMware

Storage Industry Update: The Consolidation Trend (Best-of-breed vs. Single-stack)

March 22, 2011 1 comment

The storage industry has seen considerable consolidation lately. It started a couple years ago with HP acquiring LeftHand and Dell acquiring EqualLogic. More recently, EMC acquired DataDomain and Isilon, HP acquired 3PAR, and as of this week Dell has acquired Compellent. This latest move had been rumored for some time after Dell failed in its attempt to acquire 3PAR.

Dell originally put in a bid for 3PAR, obviously looking for a large-enterprise storage solution that it could offer in-house. Dell for years had re-branded EMC Clariion storage arrays under its own moniker, but that agreement never expanded into the large-enterprise array space, which includes the ubiquitous EMC Symmetrix. Symmetrix has long been known as the market leader in the enterprise space and, with the introduction of the VMAX line, now has native scale-out functionality. In the past, enterprise arrays were primarily scale-up oriented. A tremendous amount of focus has come upon scale-out models within the past 12-18 months thanks to the proliferation of cloud strategies. Due to the enormous scale that cloud infrastructures must be able to grow into, traditional scale-up boxes based on proprietary architectures were simply too costly. Using commodity Intel components with a scale-out architecture allows customers and/or service providers to achieve significant scale at a lower cost.

The recent behavior by multiple manufacturers shows that they are feeling the pressure to boost their product portfolios with regard to scale-out storage. It’s also clear that many manufacturers are trying to create a product suite so they can try to “own” the complete computing stack. IBM has been in that position for quite some time. Oracle acquired Sun to enter the hardware business. HP decided to outbid Dell for 3PAR because it needed scale-out storage. HP’s only true in-house storage product was the EVA, and LeftHand is an SMB solution that can’t scale to meet enterprise needs. In the enterprise space, HP had been OEM’ing an array from Hitachi called the USP, a monolithic scale-up array that doesn’t offer scale-out capabilities. Hence, HP needed 3PAR to create a scale-out enterprise storage array, which most likely will lead to the termination of their OEM agreement with Hitachi.

The HP-3PAR acquisition left Dell as the odd man out amongst the major players. With 3PAR off the market, Compellent was the most logical choice left. The interesting thing here is that most folks would not recognize Compellent as a large-enterprise class of array; today, it is software that runs on whitebox servers. Dell must see something in the Compellent code that leads them to believe it can be reconstructed in a scale-out fashion. This is not going to be a trivial task.

What does all this mean for you, the end user? Personally, I feel that this consolidation is ultimately bad for innovation. The theory is pretty simple: when you try to be a jack of all trades, you end up being a master of none. We see this in practice already. IBM has historically had product offerings in all major infrastructure areas except networking, but few are recognized as being truly market leading. Servers have been one area where IBM does shine; their storage arrays are typically generations behind the competition. HP has also been known to manufacture really great servers, and now they are getting some serious consideration in the networking space. However, HP storage has been in disarray for quite some time. There has been a serious lack of product focus, the EVA in particular is very outdated and uncompetitive, and there is no in-house intellectual property in the high-end storage space. Dell has been known to make great servers as well, but didn’t really have any other offerings of its own for enterprise data centers. In the end, all of these conglomerates tend to do really well in one area while being mediocre at the rest, storage being one of the mediocre areas. This is borne out in all the recent market share reports showing these companies losing storage market share to companies like EMC and NetApp.

So why are EMC and NetApp so successful right now? I believe it’s because of their singular focus on storage, which helps them deliver the most innovative products on the market with the highest quality and reliability as well. EMC’s strategy is a bit more holistic around the realm of information infrastructure than NetApp’s, but it is still highly focused compared to an HP or IBM. Without a doubt, this is why they continue to lead with best-of-breed products year after year, and continue to retain their market-leader status. This also bodes well for the VMware-Cisco-EMC (VCE) and VMware-Cisco-NetApp (VCN) strategies. Rather than one company trying to be a jack of all trades, you have market leaders in each specific category coming together to create strategic partnerships with each other. The best-of-breed products can be combined into a stack, with the strategic partnerships allowing for options like one phone call for support and extensive integration testing between components. It provides the benefits of a single-source stack together with the benefits of a best-of-breed approach, essentially giving you the best of both worlds!

Categories: EMC, NetApp, SAN